Dataset schema (per-record fields, in order): Id (string, 1-6 chars); PostTypeId (string, 7 classes); AcceptedAnswerId (string, 1-6 chars); ParentId (string, 1-6 chars); Score (string, 1-4 chars); ViewCount (string, 1-7 chars); Body (string, 0-38.7k chars); Title (string, 15-150 chars); ContentLicense (string, 3 classes); FavoriteCount (string, 3 classes); CreationDate (string, 23 chars); LastActivityDate (string, 23 chars); LastEditDate (string, 23 chars); LastEditorUserId (string, 1-6 chars); OwnerUserId (string, 1-6 chars); Tags (list)
615399
1
null
null
0
5
I have a SEM model with three factors. One factor is measured with 5 items; the other two with 35 and 10 items respectively. Since the two factors with the many items did not produce a good model under the CFA of the two-step procedure (first test the CFA of the measurement models, then the SEM), I reduced the items for both factors to 3 parcels. The factor with the 5 items met the model fit after a modification (allowing a cross-loading between items 1 and 2). The factor loadings look something like this: ITEM1 0.35, ITEM2 0.48, ITEM3 0.81, ITEM4 0.64, ITEM5 0.72. Some literature suggests that factor loadings should be .4 or higher. I could certainly achieve such factor loadings by parceling. Moreover, I have read in Bandalos (2002) that parceling even brings the estimated parameters closer to the true values in the population. What is more important: to include the questionnaire as it is (since the model fit is good), or to achieve higher factor loadings and maybe more accurate parameter estimates? I would also appreciate a citation of a source if answered. Thanks! PS: The authors report unidimensionality of the questionnaire.
Is an achieved model fit of a measurement model (SEM/CFA) "more important" than a minimum level of factor loadings (e.g.: > .4)?
CC BY-SA 4.0
null
2023-05-10T06:32:56.550
2023-05-10T06:32:56.550
null
null
380073
[ "structural-equation-modeling", "confirmatory-factor" ]
615400
2
null
615397
9
null
Whether you divide the threshold by a number or multiply the $p$ values by the same number is mathematically completely equivalent: $$ p\leq \frac{\alpha}{N} \quad\iff\quad Np\leq\alpha. $$ So there is no cause for concern if you get values $Np\geq 1$ in the inequality on the right. It also makes no difference whether your adjusted $p$ value is exactly $1$ or larger, since you won't reject the null hypothesis in either case. In practice, I would simply report the original $p$ values and note which ones remain significant after correction.
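For anyone who wants to check the equivalence numerically, here is a minimal R sketch (the vector of raw p-values is made up for illustration):

```
# Comparing p <= alpha/N with N*p <= alpha on some made-up p-values
p <- c(0.001, 0.012, 0.030, 0.200)
N <- length(p)
alpha <- 0.05

p <= alpha / N                      # divide the threshold
pmin(N * p, 1) <= alpha             # or multiply the p-values (capped at 1)
p.adjust(p, method = "bonferroni")  # base R does the capping for you
```

Both logical vectors flag the same hypotheses, which is the point of the answer.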
null
CC BY-SA 4.0
null
2023-05-10T06:55:03.830
2023-05-10T06:55:03.830
null
null
1352
null
615402
2
null
615343
1
null
Below is a solution that appears to work. The most important change from OP code is in the `MASS::mvrnorm()` function where the value for `mu` is now a vector comprising `fit2$coef` and `fit2$scale`:

```
### lognormal distribution ###
fit2 <- survreg(Surv(time, status) ~ 1, data = lung, dist = "lognormal")
mu <- fit2$coef
sigma <- fit2$scale
var_cov <- vcov(fit2)

# plot raw data as censored
plot(survfit(Surv(time, status) ~ 1, data = lung),
     xlim = c(0, 1000), ylim = c(0, 1), bty = "n",
     xlab = "Time", ylab = "Fraction surviving")

# overlay lognormal fit
x = seq(from = 1, to = 1000, by = 1)
curve(plnorm(x, meanlog = mu, sdlog = sigma, lower.tail = FALSE),
      from = 0, to = 1000, add = TRUE, col = "red", lwd = 2)

# repeat the following to add randomized predictions for periods >= 500
sim_param <- MASS::mvrnorm(1, mu = c(mu, sigma), Sigma = var_cov) # critical change
curve(1 - plnorm(x, meanlog = sim_param[1], sdlog = sim_param[2]),
      from = 500, to = 1000, add = TRUE, col = "blue", lty = 2)
```

Plot from running simulation 5 times, per last indicated section of code for repeating: [](https://i.stack.imgur.com/JAdkO.png)
null
CC BY-SA 4.0
null
2023-05-10T06:59:21.767
2023-05-10T06:59:21.767
null
null
378347
null
615403
2
null
615395
1
null
[JASP](https://jasp-stats.org/) is actively developed by employees and students of the University of Amsterdam and the open-source community. It uses R and Stan as its workhorse. On its page, JASP lists its advisory board consisting of lecturers from the University of Amsterdam and other universities and provides some references describing it. So I’d send your professor the link to the JASP page.
null
CC BY-SA 4.0
null
2023-05-10T07:41:57.127
2023-05-10T07:41:57.127
null
null
35989
null
615404
1
null
null
0
20
I am currently studying the relationship between academic freedom (independent variable) and university rankings (dependent variable) using OLS. Each individual is a university, and my variables (except for rank) are at the country level. I clustered standard errors by country. Here are my results: [](https://i.stack.imgur.com/JAvUx.png) With the base model, with just Rank and Academic freedom, I observe a negative significant coefficient (a negative coefficient means that greater academic freedom gives you better rankings, since the lower the ranking you have, the closer you are to 1st place). When I introduce a measure of the Rule of Law in my model (to tackle the problem of omitted variable bias), the coefficient of academic freedom becomes positive and significant. What exactly can I deduce about the correlation between Academic freedom and university ranking in this example? Can I make any interpretation about what the "true" correlation between my variables is (positive or negative), or would I need more information? What explains the change of coefficient here? Is it because I corrected for omitted variable bias, or is it due to something else entirely?
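As a generic illustration of the mechanism (not using the question's data, and with made-up variable names), a small R simulation shows how omitting a confounder that affects both variables can flip the sign of the coefficient of interest:

```
set.seed(1)
n    <- 5000
rol  <- rnorm(n)                            # confounder, e.g. rule of law
free <- rol + rnorm(n)                      # academic freedom, partly driven by the confounder
rank <- -2 * rol + 0.5 * free + rnorm(n)    # true conditional effect of freedom is +0.5

coef(lm(rank ~ free))          # marginal slope is negative (about -0.5)
coef(lm(rank ~ free + rol))    # conditional slope recovers roughly +0.5
```

Which sign is the relevant one depends on the question being asked of the data; the simulation only illustrates why the two can differ.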
The introduction of a new variable made my coefficient of interest flip signs: what can I say about the correlation?
CC BY-SA 4.0
null
2023-05-10T07:42:20.813
2023-05-10T08:12:15.100
2023-05-10T08:12:15.100
382870
382870
[ "correlation", "least-squares" ]
615405
2
null
320083
1
null
As discussed in the comments, nothing in Clarinetist's demonstration states that the correlation has to be positive. Nevertheless, it cannot be negative; otherwise, the covariance matrix would not be positive semi-definite. You can suspect that an assumption is incorrect because, when $B$ goes to infinity, $\operatorname{Var}(\bar{X}_B)$ becomes negative if $\rho$ is negative as well.
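For reference, a sketch of the underlying calculation, assuming the usual equicorrelated setup ($B$ variables with common variance $\sigma^2$ and common pairwise correlation $\rho$) that the referenced demonstration appears to use:

$$
\operatorname{Var}(\bar{X}_B) \;=\; \frac{\sigma^2}{B}\bigl(1 + (B-1)\rho\bigr) \;\longrightarrow\; \rho\,\sigma^2 \quad\text{as } B\to\infty,
$$

so requiring $\operatorname{Var}(\bar{X}_B)\ge 0$ for every $B$ forces $\rho \ge -\tfrac{1}{B-1}$, and in particular the limiting value $\rho\sigma^2$ cannot be negative.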
null
CC BY-SA 4.0
null
2023-05-10T08:06:12.270
2023-05-10T08:06:12.270
null
null
387632
null
615407
2
null
614631
0
null
There may be no paper dealing with PCs in vector fitting over ordination. However, there is abundant literature about using PCs to replace a high number of correlated observed variables with a couple of surrogate variables. Opinions diverge, but I think the dominant modern view is: don't do this! Naturally, there are many ways of implementing this, and some ways have a better justification, but just plugging in PCs from an exploratory analysis is something I wouldn't do. Ask yourself these simple questions: How do I interpret those fitted factors? Can I do it? Can I explain to myself (as a starter) what they mean? However, it should not be difficult to find someone who disagrees with this message. A side note: ordination plots should always have an equal aspect ratio. In your plot, one unit of axis is more than two times longer on dimension 2 than on dimension 1. This is particularly important in NMDS, where all you try to find is the configuration, or the distances among points, and these are distorted if you stretch dimension 2. If you use ggplot2 you should add `+ coord_fixed(ratio = 1)` to your plotting call (in a vegan plot this is taken care of automatically).
null
CC BY-SA 4.0
null
2023-05-10T08:16:48.320
2023-05-10T08:16:48.320
null
null
340028
null
615409
1
null
null
0
24
I am trying to understand the "score residual" from a Cox PH model in R. There is one reference site I have used for it: [https://www.mayo.edu/research/documents/biostat-58pdf/doc-10027288](https://www.mayo.edu/research/documents/biostat-58pdf/doc-10027288) Can someone help me compute the score residual manually (I mean, using the formula) in R? Many thanks in advance. R code:
```
library(survival)
cph1 <- coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
residual <- residuals(cph1, type = "score")
```
How to calculate score residual for cox ph model?
CC BY-SA 4.0
null
2023-05-10T08:21:39.780
2023-05-11T13:18:00.817
null
null
290408
[ "survival", "residuals", "cox-model", "schoenfeld-residuals" ]
615410
1
null
null
0
8
Sorry if the example is vague, but I hope it is minimally reproducible. Imagine that I have a sample of points from a population, $\{x_1,\ldots,x_n\} \in X$, and I'm interested in calculating a parameter from it, $f(X)$. From the same population, I have a second, proxy parameter $g(X)$. $g(X)$ can be estimated more accurately, in the sense that $MSE(g(x),g(X)) < MSE(f(x),f(X))$, and it has a behavior that is sufficiently close to $f(x)$ for our needs (for example, it increases monotonically with $f(x)$). My intuition is that both arguments above are enough to justify measuring $g(x)$ over $f(x)$: it "does the job", and it is less influenced by estimation errors. However, one can argue that choosing a proxy because of lower MSE is not a fair argument, especially if $g(x)$ and $f(x)$ do not have the same unit, for example if $f(x) = g(x)^2$. A measure may have more or less MSE simply because it is measured on a different unit. Is there a measure of error that can be used to compare estimators with different units but the same underlying motivation? Should one even take estimation errors into consideration, or always measure the variable of interest for a more conceptually sound measure?
Choosing between estimating the variable of interest or an easier-to-estimate proxy
CC BY-SA 4.0
null
2023-05-10T08:42:29.360
2023-05-10T08:42:29.360
null
null
132297
[ "estimators" ]
615411
1
null
null
0
7
I'm trying to construct a good estimator $\hat{x}_n$ for a real time series $x_n$ from a set of

- noisy measurements $a_n = x_n + m_n$ (here $m_n$ is a high-level noise), and
- accurate measurements of its derivative $b_n = (x_n - x_{n-1}) \delta t + p_n$ (where $p_n$ is a low-level noise).

Is there any standard (and simple) way of performing this task, and of getting an estimate of the error $\sigma_n^2 = \mathbf{E}[(\hat{x}_n - x_n)^2]$? Thanks a lot! Cheers, Jb
Estimating a quantity from noisy value and accurate measurements of its derivative
CC BY-SA 4.0
null
2023-05-10T09:01:56.077
2023-05-11T10:20:17.833
2023-05-11T10:20:17.833
387634
387634
[ "estimation" ]
615412
1
615530
null
2
62
The code below simulates the uncertainty in the lognormal distribution parameters, using the `MASS::mvrnorm()` function and the `lung` dataset from the `survival` package. Although the parametric distribution providing the best fit is Weibull, for illustrative purposes I'm using the lognormal distribution. When running the code, per the image at the bottom of this post, the solid green line shows the Kaplan-Meier curve (probabilities) of the `lung` data, the dashed-green lines the confidence interval surrounding the K-M probabilities, the red line the fitted survival curve for `lung` data using the lognormal distribution, and the dashed-blue lines show 5 simulation runs. My question is, how could I introduce the inherent uncertainty in fitting the original data when running the simulation? In addition to the uncertainty currently simulated of the lognormal parameters. Note in the image the width of the 95% confidence intervals around the K-M curve. It seems that the simulation runs (dashed blue lines) should at least be as wide around the fitted lognormal survival curve (red line) as the 95% CI lines around the K-M curve. Code: ``` library(MASS) library(survival) fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "lognormal") time <- seq(0, 1000, by = 1) meanlog <- fit$coef # mean on the log scale sdlog <- fit$scale # standard deviation on the log scale var_cov <- vcov(fit) # extract the variance-covariance matrix # Compute the lognormal survival function survival <- 1 - plnorm(time, meanlog = meanlog, sdlog = sdlog) num_simulations <- 5 # Generate random lognormal parameter estimates for simulations sim_params <- MASS::mvrnorm(num_simulations, mu = c(meanlog, sdlog), Sigma = var_cov) # Compute the survival curves for each simulation sim_curves <- sapply(1:num_simulations, function(i) 1 - plnorm(time, meanlog = sim_params[i, 1], sdlog = sim_params[i, 2])) # Compute the Kaplan-Meier survival curve for the lung dataset lung_surv <- survfit(Surv(time, status) ~ 1, data = lung) # Plot the lognormal survival curve, simulation lines, and Kaplan-Meier plot plot(time, survival, type = "l", xlab = "Time", ylab = "Survival Probability", main = "Lognormal Survival Curve of Lung Dataset", col = "red", lwd = 2) lapply(1:num_simulations, function(i) lines(time, sim_curves[, i], col = "blue", lty = "dashed")) lines(lung_surv, col = "green") # Store the coordinates of the simulation lines sim_lines <- lapply(1:num_simulations, function(i) { curve <- sim_curves[, i] lines(time, curve, col = "blue", lty = "dashed") return(data.frame(time = time, survival = curve)) }) ``` Output of the above code: [](https://i.stack.imgur.com/4nXYO.png)
How to introduce uncertainty in fitting the original data when simulating survival curves?
CC BY-SA 4.0
null
2023-05-10T09:10:55.373
2023-05-31T05:29:28.070
null
null
378347
[ "r", "survival", "simulation", "standard-error" ]
615413
2
null
18058
0
null
Covariance is a statistical measure that describes the relationship between two variables. If two variables have a positive covariance, they tend to increase or decrease together. If they have a negative covariance, they tend to move in opposite directions. If they have a covariance of zero, there is no linear relationship between them (note that this does not by itself mean they are independent). To explain covariance to someone who understands only the mean, you could start by explaining that the mean is a measure of the central tendency of a distribution: it tells you the average value of a set of numbers. Covariance, on the other hand, measures how two variables vary together. It tells you whether they tend to increase or decrease together, or whether they move in opposite directions. For example, suppose you have two sets of numbers, X and Y. The mean of X tells you the average value of X, and the mean of Y tells you the average value of Y. If the covariance between X and Y is positive, it means that when X is above its mean, Y tends to be above its mean as well, and when X is below its mean, Y tends to be below its mean as well. If the covariance is negative, it means that when X is above its mean, Y tends to be below its mean, and vice versa. If the covariance is zero, there is no linear relationship between X and Y. So, in summary, covariance measures the tendency of two variables to vary together, and can be positive, negative, or zero.
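A tiny R illustration with made-up numbers may help make the sign interpretation concrete:

```
# The sign of the covariance reflects whether the two variables move together
# or in opposite directions (all values here are invented for illustration).
x      <- c(1, 2, 3, 4, 5)
y_up   <- c(2, 4, 5, 4, 6)   # tends to increase with x
y_down <- c(6, 5, 3, 4, 1)   # tends to decrease as x increases

cov(x, y_up)    # positive
cov(x, y_down)  # negative
```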
null
CC BY-SA 4.0
null
2023-05-10T09:22:16.230
2023-05-10T09:22:16.230
null
null
216371
null
615414
1
null
null
1
15
I am interested in the density of the Wishart distribution under the constraint that the determinant of the outcome is 1. It suffices to divide the density of the Wishart by the marginal density of the determinant; however, I don't see how to compute this marginal density. Does it have an explicit expression (maybe in small dimension)? Or are there techniques to numerically estimate it? Thank you for your help.
Wishart distribution conditioned on the determinant
CC BY-SA 4.0
null
2023-05-10T09:23:34.787
2023-05-10T09:23:34.787
null
null
310225
[ "conditioning", "wishart-distribution", "determinant" ]
615415
1
null
null
0
20
I'm simulating data where I have four factors and one response that I want to examine. I have two different DoE designs, one 2⁴ factorial design and one Latin hypercube design. The ultimate goal of the research is to see if I can detect any difference between the two models, i.e. do they give different significant factors for my response depending on the DoE model used. I understand how to plot and analyze the results from the factorial designs by aggregating the results and then performing ANOVA tests or fitting linear models by:
```
doe.model <- lm(response ~ A + B + C + A * B + A * C + B * C + A * B * C,
                data = ss.data.doe1)
summary(doe.model)
```
However, I cannot seem to find any way of performing a similar test on my Latin hypercube design; the 4 factors are assumed continuous and the design has 16 runs, i.e. every simulation has different values for the parameters. Is there any way of performing a similar analysis for my LHC design? Am I missing something?
Finding significant factors in a Latin Hypercube Design
CC BY-SA 4.0
null
2023-05-10T09:42:42.160
2023-05-10T09:49:22.680
2023-05-10T09:49:22.680
362671
387637
[ "r", "anova", "latin-hypercube" ]
615416
1
null
null
1
10
I have four treatment groups (A, B, C, D) and measure three dependent variables (X, Y, Z). My hypotheses are that each dependent variable is strongest in one of the treatment groups, compared to all others. I plan to test this using planned comparisons. So, for example, the contrast weights could be 3 -1 -1 -1 for X, -1 3 -1 -1 for Y and -1 -1 3 -1 for Z. I found one article which did a similar analysis, [https://doi.org/10.1177/0146167207309193](https://doi.org/10.1177/0146167207309193). They go about it as follows: - They defined focal contrasts for each dependent variable as I described above. > For each of the three theoretical predictions, a contrast was created that described the hypothesized rank order of means regarding one group-based emotion (A > B = C = D). This is represented in the focal contrast with the coefficients 3-1-1-1. - To check whether there is systematic variance other than that predicted, orthogonal contrasts were computed in addition to the focal contrast. > Orthogonal contrasts are important because they reveal whether there is residual variance that is not explained by the focal contrast. If the hypothesis represented in the focal contrast is correct, the focal contrast should be significant, and ideally, the orthogonal contrast should not be significant. Given that there were four exper imental conditions, we had 2 df to compute two orthogo nal contrasts (001-1 and 0-211). - For each contrast, they added the other two dependent variables as covariates in the contrast analyses. > To account for the intercorrelations between the emotions, and thus general emotionality, we included the nonfocal emotions as covariates in the contrast analyses This leaves a few questions open. - These comparisons are, of course, not orthogonal, so is this analytic approach at all appropriate? - If it is, is alpha-level correction necessary? Would simple Bonferroni-correction be sufficient? - Is it appropriate to include the nonfocal dependent variables as covariates? - How would you do an a-priori power analysis for this type of analysis? Bonus question: Do you have any recommendations for the appropriate way/library to do this analysis in R? I am grateful for any help or literature recommendations on this!
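As a sketch of what the focal and orthogonal contrasts from the cited article could look like in R (using the emmeans package; the data frame `dat` and the columns `X`, `Y`, `Z`, and `group` are placeholders standing for one dependent variable, the two nonfocal dependent variables used as covariates, and the four-level treatment factor):

```
library(emmeans)

# Hypothetical model for dependent variable X, with the nonfocal DVs as covariates
fit <- lm(X ~ group + Y + Z, data = dat)
emm <- emmeans(fit, ~ group)

# Focal contrast 3 -1 -1 -1 plus the two orthogonal contrasts quoted from the article
contrast(emm, method = list(
  focal       = c(3, -1, -1, -1),   # A > B = C = D
  orthogonal1 = c(0,  0,  1, -1),
  orthogonal2 = c(0, -2,  1,  1)
), adjust = "bonferroni")
```

This only illustrates the mechanics of specifying custom contrast weights; whether Bonferroni (or any) adjustment is appropriate here is exactly the question being asked.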
How to perform non-orthogonal planned contrasts (and a-priori power-analysis for this)?
CC BY-SA 4.0
null
2023-05-10T09:50:59.430
2023-05-10T09:50:59.430
null
null
387638
[ "statistical-power", "contrasts", "planned-comparisons-test" ]
615417
2
null
318780
1
null
TP, TN, FP, FN in a 3x3 matrix can be defined per class. In the above example, for the M class:

- TP: real M predicted as M (64)
- TN: real F predicted as F and real I predicted as I (237+165)
- FP: real F and real I predicted as M (12+52)
- FN: real M predicted as F or I (46+139)

Then you can calculate the precision and recall metrics (per class).
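A short R sketch of this computation (rows = actual class, columns = predicted class; the cells quoted above are used where given, while the two F/I off-diagonal counts are placeholders since they are not stated):

```
cm <- matrix(c(237,  10,  12,
                15, 165,  52,
                46, 139,  64),
             nrow = 3, byrow = TRUE,
             dimnames = list(actual    = c("F", "I", "M"),
                             predicted = c("F", "I", "M")))

precision <- diag(cm) / colSums(cm)   # TP / (TP + FP), per predicted class
recall    <- diag(cm) / rowSums(cm)   # TP / (TP + FN), per actual class
rbind(precision, recall)
```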
null
CC BY-SA 4.0
null
2023-05-10T09:53:14.387
2023-05-10T09:53:14.387
null
null
387639
null
615418
2
null
252162
0
null
I know this is an old one but I keep encountering the same question over and over. Apparently, there is no consensus as to the definition of the standard error of the weighted mean; even different statistical software packages use different definitions. However, the most coherent answer that I keep seeing is this one for an unbiased estimation of the standard error of a weighted mean: $$ se= \frac{\sigma_w}{\sqrt{\sum_i^n w_i}} $$ where $\sigma_w$ is the unbiased estimator of the standard deviation of your random variable $X$ and $\sum_i^n w_i$ is the sum of the individual weights that contribute to your unbiased estimation of $X$. The unbiased estimator of the standard deviation of your random variable with degrees of freedom $=1$ is the following (note the squared deviations from the weighted mean $\bar{x}_w$): $$ \sigma_w = \sqrt{\frac{\sum_i^n w_i (x_i - \bar{x}_w)^2}{\sum_i^n w_i - 1}} $$ Here's a [link](https://www.analyticalgroup.com/download/weighted_mean.pdf) to a note that compares how it is computed in SPSS vs WinCross. Python's `statsmodels` implements a class that computes all sorts of weighted statistics, including the standard deviation and standard error (method under the name `std_mean`; see their [source](https://www.statsmodels.org/dev/_modules/statsmodels/stats/weightstats.html#DescrStatsW) code). As we can see from their implementation, they use either a biased estimation of the standard error if the degrees of freedom is equal to $0$, like so: $$ se= \frac{\sigma_w}{\sqrt{\sum_i^n w_i-1}} $$ or an unbiased estimator of the standard error (which is your case) if the degrees of freedom parameter is given, which activates a condition that applies a degrees-of-freedom correction to the standard deviation first, like so: $$ \sigma_w \leftarrow \sigma_w \times \sqrt{\frac{\sum_i^n w_i- ddof}{\sum_i^n w_i}} $$ For $ddof=1$, if you plug the corrected value of $\sigma_w$ into the biased estimation of the standard error, you get the formula for the unbiased estimation of the standard error of the weighted mean, $se = \sigma_w/\sqrt{\sum_i^n w_i}$. Here's how to numerically verify your estimators using the manual definitions vs `statsmodels`'s implementation if you use Python:
```
# make sure you install statsmodels using pip install statsmodels
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

# define the x measurements and their weights
x = np.array([10, 12, 15.2, 12.5, 11])
w = np.array([100, 120, 108, 80, 98])

# calculate the unbiased estimators of avg, std and se (with ddof=1)
sum_w = np.sum(w)
avg_w = np.sum(w * x) / sum_w
std_w = np.sqrt(np.sum(w * (x - avg_w)**2) / (sum_w - 1))
se_w = std_w / np.sqrt(sum_w)

# calculate the weighted stats using statsmodels' implementation (with ddof=1)
weighted_stats = DescrStatsW(x, weights=w, ddof=1)

print('manual weighted avg = %0.5f' % avg_w)
print('manual weighted std = %0.5f' % std_w)
print('manual weighted se = %0.5f' % se_w)
print('statsmodels weighted avg = %0.5f' % weighted_stats.mean)
print('statsmodels weighted std = %0.5f' % weighted_stats.std)
print('statsmodels weighted se = %0.5f' % weighted_stats.std_mean)

>>> OUTPUT:
manual weighted avg = 12.17312
manual weighted std = 1.78484
manual weighted se = 0.07935
statsmodels weighted avg = 12.17312
statsmodels weighted std = 1.78484
statsmodels weighted se = 0.07935
```
null
CC BY-SA 4.0
null
2023-05-10T09:59:46.163
2023-05-10T09:59:46.163
null
null
346672
null
615419
2
null
615412
1
null
In the code below is a solution that follows the characterization of lognormal survival $logT ∼ α + σW$ per resource [https://grodri.github.io/survival/ParametricSurvival.pdf](https://grodri.github.io/survival/ParametricSurvival.pdf). Also see the plot beneath which illustrates 1000 simulations. Post [How to simulate variability (errors) in fitting a gamma model to survival data by using a generalized minimum extreme value distribution in R?](https://stats.stackexchange.com/questions/616872/how-to-simulate-variability-errors-in-fitting-a-gamma-model-to-survival-data-b) also has a discussion on randomizing values for $W$, $α$, and $σ$. Code: ``` library(MASS) library(survival) time <- seq(0, 1000, by = 1) fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "lognormal") # Compute the lognormal survival function using the fitted model meanlog <- fit$coef # mean on the log scale sdlog <- fit$scale # standard deviation on the log scale # Compute lognormal survival function for the base fitted model survival <- 1 - plnorm(time, meanlog = meanlog, sdlog = sdlog) # Generate random values for simulations where survival form for lognormal is logT ∼ α + σW simFX <- function(){ W <- rnorm(165) # randomize W for model error newCoef <- MASS::mvrnorm(1, mu = c(meanlog, sdlog), Sigma = vcov(fit)) # randomize α and σ newTimes <- exp(newCoef[1] + newCoef[2] * W) # apply survival form for lognormal logT ∼ α+ σW newFit <- survreg(Surv(newTimes)~1,dist="lognormal") params <- c(newFit$coef,newFit$scale) return(1 - plnorm(time, meanlog = params[1], sdlog = params[2])) } plot(time,survival,type="n",xlab="Time",ylab="Survival Probability",main="Lung Survival (Lognormal)") replicate(1000,lines(simFX(), col = "blue", lty = 2)) # run this line to add simulations to plot lines(survival, type = "l", col = "yellow", lwd = 3) # plot base fitted survival curve ``` [](https://i.stack.imgur.com/6YJC8.png)
null
CC BY-SA 4.0
null
2023-05-10T10:11:40.223
2023-05-31T05:29:28.070
2023-05-31T05:29:28.070
378347
378347
null
615420
1
615427
null
2
45
I built a GAM with two categorical variables and two smooth terms, following this structure: ``` model <- ik ~ population_id_cat + s_status + s(n_locs, bs = "re") + s(animal_id, bs = "re") ``` The model summary for the parametric coefficients is: ``` Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.90509 0.07798 24.432 < 2e-16 *** population_id_catB -1.08790 0.13199 -8.242 2.27e-16 *** population_id_catC 0.06718 0.08880 0.757 0.449368 population_id_catD -0.27599 0.07689 -3.589 0.000336 *** population_id_catE -0.33914 0.13555 -2.502 0.012391 * population_id_catF -0.71104 0.07586 -9.373 < 2e-16 *** population_id_catG -0.39243 0.07276 -5.393 7.32e-08 *** population_id_catH 0.31115 0.18431 1.688 0.091449 . population_id_catI 0.06530 0.10203 0.640 0.522174 s_status_2a_m -0.16331 0.05263 -3.103 0.001930 ** s_status_2fam -0.36656 0.05200 -7.050 2.10e-12 *** ``` When I plot these coefficients with the `mgcViz::pterm()` function, the values do not seem to match the std. error (but their significance does match with the p-value provided). For example, the plot for the parametric coefficients mentioned above: ``` gviz_model <- getViz(model) gviz_model_1 <- pterm(gviz_model, 1) plot(gviz_model_1) + l_ciBar(colour = "blue") + l_fitPoints(colour = "red") + l_rug(alpha = 0.3) ``` [](https://i.stack.imgur.com/SOiUw.png) Question 1: What values are actually plotted in this plot? They seem to match the "Estimate" value from the summary, but they do not match the "Std. Error" (e.g., look at population H, according to the std error, the o value would be significant and therefore would be above 0). If I plot the `Estimate+-Std. Error` in a `ggplot` (which is my goal, because I want to plot these parametric values for two models in the same plot), I get this instead: ``` gviz_model_pop <- termplot(gviz_model, se = TRUE, plot = FALSE)$population_id_cat ggplot(gviz_model_pop , aes(x=x, y=y)) + geom_errorbar(aes(ymin=y-se, ymax=y+se), width=.15, size=1, position=position_dodge(0.2)) + geom_point(position=position_dodge(0.2), size=3.5, shape=16) ``` [](https://i.stack.imgur.com/71h1Q.png) Similar trends, but the interval is definitely not the same (and influences the significance of each category). Question 2: If I want to plot these parametric coefficients for two models in the same plot, and I would need to use the output from pterm(), instead of extracting the estimates/std error as I did previously, how can I do this? Preferably with a ggplot output. Question 3: If I want to plot these parametric coefficients including the intercept (to make the plot more interpretable), as mentioned [here in "Transformed standard errors (2)"](https://noamross.github.io/gams-in-r-course/chapter2), how do I do it? Do I need separate calculations or is there a function for the parametric coefficients too? Preferably with a ggplot output. Thank you very much in advance for any help!
GAM - parametric coefficients - what is mgcViz::pterm() actually plotting?
CC BY-SA 4.0
null
2023-05-10T10:17:55.120
2023-05-10T12:07:41.650
null
null
117281
[ "data-visualization", "generalized-additive-model", "mgcv", "ggplot2", "parametric" ]
615421
1
615541
null
4
1137
I have standard data: where rows are observations, and columns are features. ``` target colum_1 colum_2 colum_10 colum_100110 colum_499999999 [1,] 1 -0.35 -1.58 1.26 1.08 0.30 [2,] 1 -1.21 2.05 -0.95 1.59 -0.59 [3,] 1 -0.15 -1.63 0.63 -0.74 0.60 [4,] 0 0.78 0.55 -1.31 0.24 -0.22 [5,] 0 0.68 0.36 0.25 -0.23 1.73 [6,] 1 -0.32 1.07 -0.13 -0.31 -1.26 [7,] 1 -0.37 0.47 1.11 -1.14 -0.43 [8,] 1 -0.85 0.96 -1.61 0.62 0.06 [9,] 1 0.19 0.62 -1.28 1.31 0.30 [10,] 1 0.16 1.35 -0.11 1.14 -2.03 ``` The problem is that there are so many features that I cannot run any learning algorithm. I'm familiar with dimensionality reduction algorithms, but for some reason I don't want to use them. If I convert my data to a more compact form and create a new feature as a column id `n_colum`, something like this: ``` target n_colum val 1 1 colum_1 -0.35 2 1 colum_2 -1.58 3 1 colum_10 1.26 4 1 colum_100110 1.08 5 1 colum_499999999 0.3 6 1 colum_1 -1.21 7 1 colum_2 2.05 8 1 colum_10 -0.95 9 1 colum_100110 1.59 10 1 colum_499999999 -0.59 11 1 colum_1 -0.15 12 1 colum_2 -1.63 13 1 colum_10 0.63 14 1 colum_100110 -0.74 15 1 colum_499999999 0.6 16 0 colum_1 0.78 17 0 colum_2 0.55 18 0 colum_10 -1.31 19 0 colum_100110 0.24 20 0 colum_499999999 -0.22 21 0 colum_1 0.68 22 0 colum_2 0.36 23 0 colum_10 0.25 24 0 colum_100110 -0.23 25 0 colum_499999999 1.73 26 1 colum_1 -0.32 27 1 colum_2 1.07 28 1 colum_10 -0.13 29 1 colum_100110 -0.31 30 1 colum_499999999 -1.26 31 1 colum_1 -0.37 32 1 colum_2 0.47 33 1 colum_10 1.11 34 1 colum_100110 -1.14 35 1 colum_499999999 -0.43 36 1 colum_1 -0.85 37 1 colum_2 0.96 38 1 colum_10 -1.61 39 1 colum_100110 0.62 40 1 colum_499999999 0.06 41 1 colum_1 0.19 42 1 colum_2 0.62 43 1 colum_10 -1.28 44 1 colum_100110 1.31 45 1 colum_499999999 0.3 46 1 colum_1 0.16 47 1 colum_2 1.35 48 1 colum_10 -0.11 49 1 colum_100110 1.14 50 1 colum_499999999 -2.03 ``` If I train an algorithm on compact data, will I lose performance or something, or is it equivalent to the first option? ================UPD=================== Wrote a small prototype of the idea.. sorry for the messy code, i haven't had coffee yet so I remade the dataset into a compact form ``` ir <- iris[ sample(1:150,150) , ] # make data dat <- as.data.frame(matrix(ncol = 3,nrow = 0)) for(i in 1:nrow(ir)){ nc <- ncol(ir)-1 tmp_dat <- cbind.data.frame( target = rep( ir$Species[i] , nc), n_colum = 1:nc, value = unlist(ir[i,-5]) ) dat <- rbind.data.frame(dat, tmp_dat) } row.names(dat) <- NULL head(ir) head(dat,20) ``` ... 
``` target n_colum value 1 setosa 1 4.8 2 setosa 2 3.4 3 setosa 3 1.6 4 setosa 4 0.2 5 setosa 1 5.7 6 setosa 2 4.4 7 setosa 3 1.5 8 setosa 4 0.4 9 versicolor 1 5.6 10 versicolor 2 3.0 11 versicolor 3 4.5 12 versicolor 4 1.5 13 virginica 1 6.9 14 virginica 2 3.1 15 virginica 3 5.1 16 virginica 4 2.3 ``` next I train the model ``` Y <- dat$target X <- dat[,-1] # train model tr <- 1:500 ts <- 501:nrow(X) table(Y[tr]) library(randomForest) rf <- randomForest(Y[tr]~., X[tr,]) ``` then I make a prediction, the prediction is made immediately on four rows, because the original data has four columns ``` # predict result <- as.data.frame(matrix(ncol = 2,nrow = 0)) for(i in ts){ if(X$n_colum[i]==4){ X_rows <- X[(i-3):i, ] # X_rows # n_colum value # 597 1 6.7 # 598 2 3.3 # 599 3 5.7 # 600 4 2.1 pr <- predict( rf , X_rows , t="prob") pr <- cbind.data.frame( predicted = as.factor(names(which.max(colMeans(pr)))), original = Y[i] ) result <- rbind(result , pr) } } print(result) predicted original 1 versicolor versicolor 2 versicolor versicolor 3 versicolor versicolor 4 setosa setosa 5 versicolor versicolor 6 setosa setosa 7 versicolor versicolor 8 virginica virginica 9 virginica virginica 10 virginica virginica 11 versicolor versicolor 12 setosa setosa 13 virginica virginica 14 virginica virginica 15 virginica virginica 16 versicolor versicolor 17 versicolor versicolor 18 virginica virginica 19 virginica virginica 20 setosa setosa 21 virginica virginica 22 setosa setosa 23 virginica virginica 24 virginica virginica 25 setosa setosa ``` If I didn’t make any mistakes, then it turns out that you can train the model in this way, and this is very good))
A way to train a model on data with a very large number of features
CC BY-SA 4.0
null
2023-05-10T10:38:41.837
2023-05-21T17:17:31.180
2023-05-11T08:11:30.977
303632
303632
[ "machine-learning", "data-transformation", "dimensionality-reduction", "high-dimensional" ]
615422
1
null
null
0
34
It is well known that for $X_t \sim ARMA(p,q)$ where $\phi(B)X_t = \theta(B)Z_t, Z_t\sim WN(0, \sigma^2)$, if $\phi(z)\neq0$ in the unit circle, then $\{X_t\}$ is stationary. Now assume $\{Y_t, t=0, \pm1, ...\}$ is a stationary time series and $\phi(B)X_t = \theta(B)Y_t$. If $\phi(z)\neq0$ in the unit circle, is $\{X_t\}$ still stationary? This question is Problem 3.4 from Brockwell. I think we should start from the definition of (weak) stationarity, but I found it hard to calculate the expectation and acvf of $\{X_t\}$. How could we prove it? --- By the comment from [@Zhanxiong](https://stats.stackexchange.com/users/20519/zhanxiong), the original question should be whether the equation has a stationary solution $\{X_t\}$. It's clear that $\mathbb{E}(X_t)$ can come from one of the roots of $\phi(z) = \theta(\mathbb{E}Y_t)$. But how can we define the acvf $\gamma_X(h)$ so that our solution for $X_t$ is stationary?
Stationarity of ARMA-like time series
CC BY-SA 4.0
null
2023-05-10T11:19:21.427
2023-05-11T10:36:27.253
2023-05-11T10:36:27.253
383159
383159
[ "time-series", "arima", "stationarity" ]
615423
1
null
null
0
27
I am comparing 2 groups, performing a total of 180 comparisons. When doing these 180 pair-wise comparisons using the Wilcoxon rank-sum test more than 40 show a significant (p<0.05) difference between the groups (in all cases one group has a higher median value compared to the other group). The group sizes are relatively small: 20 and 25. When using standard multiple comparison corrections (Bonferroni, Benjamini-Hochberg) all of the significant differences disappear. As the expected false discovery rate (assuming all null hypothesis are true) would be 180*0.05 = 9, I think these methods are very conservative. Therefore, I have been looking into alternatives that might be suitable for many comparisons and relatively small sample sizes, and have a few options below. - Not correct for multiple comparison, as the chance of having so many positive findings by chance is less than 5%, a justification made in (1), where they state: "A correction for multiple comparisons was not necessary, because the number of channels with P-values below 0.05 ranged from 13 and 38 and the likelihood of having this many channels out of 150 by chance is less than 2% (cf. binomial distribution)." - A tmax permutation test. For this I am not sure what statistic would be most suitable, but I have attempted to use t-statistic, Welch's t-statistic and also O'Brian's test statistics (2) (I have not managed to implement the adjusted test). Almost all of the differences do disappear in these cases too. I was wondering if there are any multiple comparison corrections that might be suitable for me that I have somehow missed? And any advice on how to proceed would be appreciated. (1) Montez, T., Poil, S. S., Jones, B. F., Manshanden, I., Verbunt, J. P., van Dijk, B. W., Brussaard, A. B., van Ooyen, A., Stam, C. J., Scheltens, P., & Linkenkaer-Hansen, K. (2009). Altered temporal correlations in parietal alpha and prefrontal theta oscillations in early-stage Alzheimer disease. Proceedings of the National Academy of Sciences of the United States of America, 106(5), 1614–1619. [https://doi.org/10.1073/pnas.0811699106](https://doi.org/10.1073/pnas.0811699106) (2) Huang, P., Tilley, B. C., Woolson, R. F., & Lipsitz, S. (2005). Adjusting O'Brien's test to control type I error for the generalized nonparametric Behrens-Fisher problem. Biometrics, 61(2), 532–539. [https://doi.org/10.1111/j.1541-0420.2005.00322.x](https://doi.org/10.1111/j.1541-0420.2005.00322.x)
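For what it's worth, the binomial reasoning cited in option 1 can be sketched in one line of R; note that it assumes the 180 tests are independent, which correlated outcome measures will typically violate:

```
# Probability of 40 or more "significant" tests out of 180 at alpha = 0.05
# if all null hypotheses were true and the tests were independent
pbinom(39, size = 180, prob = 0.05, lower.tail = FALSE)
```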
How to implement multiple comparison correction after the Wilcoxon rank-sum test with relatively small sample sizes and many comparisons
CC BY-SA 4.0
null
2023-05-10T11:40:26.233
2023-05-10T15:02:51.127
null
null
387295
[ "hypothesis-testing", "multiple-comparisons", "wilcoxon-mann-whitney-test", "permutation-test" ]
615424
1
null
null
0
11
I have two crosstabs that I'd like to compare for each cell for significance. Both crosstabs have the same variables and the same amount of total observations, but the frequency distribution is different across cells. This is a before/after manipulation task, so the same participants did the task twice. I need to see whether the manipulation caused any significant changes in the frequency distribution across cells. I compared the two crosstabs based on the standard residuals to see which cell has the strongest association, and compared whether the strongest cells are the same, but did this without any statistical test. So which test can I use in order to conclude that the manipulation made a difference (or not) in frequency distribution for each cell? Also, I tried using the goodness of fit test, task1 as expected and task2 as observed counts. My logic was that if the analysis is significant, it means the two distributions are different and I can see which ones are by looking at the residuals. I see that SPSS computes unstandardized residuals, but do you think if I compute them myself this would be a good way to achieve my goal or would I need another test that can perhaps compare two crosstabs?
Comparing cells from two crosstabs / chi-squared goodness of fit test?
CC BY-SA 4.0
null
2023-05-10T11:44:23.853
2023-05-10T11:44:23.853
null
null
380361
[ "distributions", "chi-squared-test", "goodness-of-fit", "frequency" ]
615425
1
615603
null
1
35
I have made three generalized linear models: one with a zero-inflated Poisson, the second with a negative binomial, and the third with a binomial conditional distribution. I am now trying to interpret the results, and have tried to back-transform the estimates by exponentiating them for the Poisson and negative binomial models; I know I should use the inverse logit function for the binomial model, but that is still on the to-do list. Is it correct to use the natural exponential? And is it possible to get negative values? All my transformed estimates are positive so far, but how can I tell whether a predictor has a negative impact if the sign is always positive?
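To make the back-transformation question concrete, a minimal sketch (the coefficient value is made up): exponentiated estimates are always positive, and a negative effect shows up as a value below 1.

```
beta <- -0.30   # hypothetical coefficient on the log (link) scale
exp(beta)       # about 0.74: each unit increase multiplies the expected
                # count by 0.74, i.e. roughly a 26% decrease
```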
Can glm(m) model estimates be negative after back transforming?
CC BY-SA 4.0
null
2023-05-10T11:50:39.903
2023-05-11T18:50:28.273
null
null
380763
[ "logistic", "generalized-linear-model", "glmm", "logarithm" ]
615426
2
null
594703
1
null
I am adding this here since I do not have the reputation to comment. The Stack Overflow question at [https://stackoverflow.com/questions/40325980/how-is-the-vader-compound-polarity-score-calculated-in-python-nltk?rq=2](https://stackoverflow.com/questions/40325980/how-is-the-vader-compound-polarity-score-calculated-in-python-nltk?rq=2) gives an in depth view of the normalization function used in VADER.
null
CC BY-SA 4.0
null
2023-05-10T11:51:26.983
2023-05-10T11:53:34.730
2023-05-10T11:53:34.730
307980
307980
null
615427
2
null
615420
3
null
### Q1 The blue bars are confidence intervals formed (likely) as +/- 2 * `Std. err` ### Q2 I think this would be easier with `parametric_effects()` from my {gratia} package; what `pterm()` seems to be doing is wrapping up the model object with some extra information, which doesn't seem to be the information computed for the plot. Using the example from `?pterm`, we have ``` library("gratia") set.seed(3) dat <- gamSim(1,n=1500,dist="normal",scale=20) dat$fac <- as.factor( sample(c("A1", "A2", "A3"), nrow(dat), replace = TRUE) ) dat$logi <- as.logical( sample(c(TRUE, FALSE), nrow(dat), replace = TRUE) ) bs <- "cr"; k <- 12 b <- gam(y ~ x0 + x1 + I(x1^2) + s(x2,bs=bs,k=k) + fac + x3:fac + I(x1*x2) + logi, data=dat) p <- parametric_effects(b) ``` Which gives ``` > p <- parametric_effects(b) Interaction terms are not currently supported. > p # A tibble: 6,005 × 6 term type value level partial se <chr> <chr> <I<dbl>> <fct> <dbl> <dbl> 1 x0 numeric 0.168 NA -0.310 0.309 2 x0 numeric 0.808 NA -1.49 1.48 3 x0 numeric 0.385 NA -0.711 0.708 4 x0 numeric 0.328 NA -0.605 0.602 5 x0 numeric 0.602 NA -1.11 1.11 6 x0 numeric 0.604 NA -1.12 1.11 7 x0 numeric 0.125 NA -0.230 0.229 8 x0 numeric 0.295 NA -0.544 0.541 9 x0 numeric 0.578 NA -1.07 1.06 10 x0 numeric 0.631 NA -1.17 1.16 # ℹ 5,995 more rows # ℹ Use `print(n = ...)` to see more rows ``` The confidence is easy to add: ``` p <- p |> dplyr::mutate(lower_ci = partial - (1.96 * se), upper_ci = partial + (1.96 * se)) ``` I would also append a model column to indicate which model this is: ``` p <- p |> dplyr::mutate(model = rep("Model 1", nrow(p)) ``` If you repeat that for the other model, to create `p2` say, you should be able to bind the two objects together with ``` p_both <- dplyr::bind_rows(p, p2) ``` You can also select which terms you want to extract the info for using the `terms` argument to `parametric_effects()`. ### Q3 The best way to do this is to just predict from the model including only the constant term and parametric term of you choice. In the example from `?pterm` that I used above, there is a factor variable `fac` which we'll use to illustrate the point. ``` # get a data slice where only `fac` varies # hold all other variables are representative values ds <- data_slice(b, fac = evenly(fac)) |> dplyr::mutate(logi = as.logical(logi)) # <-- work around bug # use fitted_values() to predict from the model using only the effects # of `fac` and the constant term fv <- fitted_values(b, data = ds, terms = c("(Intercept)", "fac")) fv ``` There's a bug in `data_slice()` currently; `data_slice()` seems to be converting logical variables to numeric. We work around that by coaxing them back to logical. And we only need this to keep `predict.gam()` happy; none of the other variables but `fac` will be used in the predictions we computed in `fv`. Having done the above `fv` is: ``` > fv # A tibble: 3 × 10 fac x0 x1 x2 logi x3 fitted se lower upper <fct> <dbl> <dbl> <dbl> <lgl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 A1 0.511 0.521 0.493 FALSE 0.476 5.75 2.55 0.753 10.8 2 A2 0.511 0.521 0.493 FALSE 0.476 4.60 2.54 -0.367 9.57 3 A3 0.511 0.521 0.493 FALSE 0.476 2.94 2.63 -2.21 8.09 ``` Here, as `fac` only had three levels, we get the three values needed for the plot. 
The `fitted` column now has the intercept added to each of the parametric effects; hence `A1`, being the reference level, has the same value as the intercept:
```
> coef(b)[1]
(Intercept) 
   5.753834 
```
while the other values in `fitted` are `(Intercept)` plus the respective coefficient for the other levels. The `lower` and `upper` columns contain the confidence interval as it would be plotted in the output from mgcViz. Note that `fitted_values()` returns, by default, predicted values on the response scale. If you have a non-Gaussian family, you can choose to keep the values on the link scale using `scale = "link"` in the call to `fitted_values()`.
null
CC BY-SA 4.0
null
2023-05-10T12:07:41.650
2023-05-10T12:07:41.650
null
null
1390
null
615428
2
null
615204
5
null
The question gives a number of possible advantages, so I will post possible disadvantages. It is then up to the scientist to evaluate the tradeoff between possible advantages and disadvantages. - The reference makes it sound like there are considerable computational advantages. Especially in deep learning settings, the computational issues really do have to be considered. While it is great to be able to prove, mathematically/statistically, that some method is superior to a “trick” in deep learning, if that method cannot be computed in a reasonable amount of time, it is not useful. However, I am not sold on the computational advantages. For instance, if the classifier makes a high-confidence prediction that an observation right near one of the bin boundaries is in the bin on the other side of the boundary (probably only a mild mistake), the entire loss could be dominated by that when you start taking logarithms of small numbers in the cross-entropy loss. This puts the model in a position to get hung up on fixing a mild mistake, perhaps at the expense of making improvements in other areas where the errors are more egregious. - The fact that a classifier returns the probability of being in a particular category is appealing. However, neural networks are known to be overconfident in their predictions of these probabilities, and calibrating a multi-class output is not straightforward. Further, techniques exist to estimate conditional distributions, such as quantile estimation. - There is not an especially high penalty for bad misses. If the prediction puts high probability on the next bin over from where the observed value is, that incurs the same penalty as putting that same probability in a much higher or lower bin. While this could be argued to give robustness similar to how minimizing absolute loss gives robustness in that large misses are not penalized as severely as they are for square loss (for better or for worse), at least absolute loss penalizes more for large misses than small misses. There is a limit to how much robustness is desired. (The first and third disadvantages can be combined to say that this approach risks giving large penalties to small misses and small penalties to large misses.) - Some of the appeal of this seems to come from classification accuracy being easier to interpret than regression metrics like (root) mean squared error. However, people goof up in interpreting accuracy all the time. I cited a paper on here a few weeks ago (Sundaram & Yermack (2007)) that seemed to be raving about achieving a classification accuracy of $97\%$, despite the majority class making up $97.71\%$ of the observations, meaning that a naïve model could achieve $97.71\%$ classification accuracy (better than their model achieves) by predicting the majority category every time. (This article was published in the top journal in its field (not “a” top journal, “the” top journal), so it is not just the fringe that makes mistakes in evaluating classification accuracy.) Even when the Sundaram & Yermack (2007) classification accuracy scores are above the scores achieved by predicting the majority category every time, the reductions in error rates, which is probably more informative (and is equivalent to Cohen's kappa), does not scream out, "This model gets an $\text{A}$," the way that a classification accuracy of $97\%$ might. 
Further, regression metrics like root mean squared error and mean absolute error are in the original units of your measured outcomes, which should have an interpretation by someone who knows the field. [This answer to "Why should binning be avoided at all costs?"](https://stats.stackexchange.com/a/390722/247274) is worth a read, even if it is not about the exact same topic. I especially like the last sentence, which I will quote below > My recommendation would be to learn the analytical methods that are applied to the underlying continuous data, and then you will be in a position to determine whether a crude approximation via binning is necessary in a given situation. REFERENCE Sundaram, Rangarajan K., and David L. Yermack. "Pay me later: Inside debt and its role in managerial compensation." The Journal of Finance 62.4 (2007): 1551-1588.
null
CC BY-SA 4.0
null
2023-05-10T12:20:23.867
2023-05-23T00:53:04.933
2023-05-23T00:53:04.933
247274
247274
null
615429
1
null
null
0
45
I would like to perform a Laplace approximation of a log-posterior. The evolution of a cancer cell at given time $t_j$, $j = 1,\cdots,n$ for an experiment $i$ follows the following Poisson distribution, $$ y_i(t_j) \sim \text{Pois}(\mu = \alpha_0 \exp(-\alpha_1 e^{-\alpha_2 t_j})), $$ with $\alpha_k > 0$ for $k = 0,1,2$ Let denote $y_i(t_j) \equiv y_{i,j}$. I computed the Likelihood $L(\vec{\alpha}|y_i)$ for a given experiment $i$, $$ L(\vec{\alpha}|y_i) = \prod_{j=1}^{n} p(y_{i,j}) = \prod_{j=1}^{n} \left( \mu^{y_{i,j}} \cdot e^{-\mu} \right) $$ Let $\theta_k = \log(\alpha_k)$ and considering large variance priors, I chose arbitrarily a Gamma distribution as prior for the $theta_k$ with parameters $a = 1/2$ and $b = 0.001$ in order for the variance of this distribution to be large. This gives me the following log-posterior, $$ p(\vec{\theta}|y_i) \propto \sum_{j=1}^{n} y_{i, j} \cdot \ln(\mu^{y_{i,j}} \cdot e^{-\mu}) + \ln(f(\theta_0, a, b)) + \ln(f(\theta_1, a, b)) + \ln(f(\theta_2, a, b)) $$ where $f(\theta_k, a, b)$ is the PDF of the gamma distribution for the parameter $\theta_k$, $k = 0,1,2$. This results in the following R code: ``` log.mu <- function(alpha, t_j) { log(alpha[1]) - alpha[2] * exp(- alpha[3] * t_j) } log.likelihood <- function(theta, t, experiment) { n <- length(experiment) alpha <- exp(theta) result <- sum(sapply(1:n, function(j) { dpois(x = t[j], lambda = exp(log.mu(alpha, t[j])), log = TRUE) })) return(result) } log.prior <- function(theta) { alpha.prior <- 0.5 beta.prior <- 0.001 dgamma(theta, shape = alpha.prior, rate = beta.prior, log = TRUE) } log.posterior <- function(theta, t, experiment) { log.likelihood(theta, t, experiment) + log.prior(theta[1]) + log.prior(theta[2]) + log.prior(theta[3]) } ``` Then I tried to perform a Laplace approximation of the log-posterior distribution $(\vec{\theta}|\text{experiment 1} \equiv D_1)$. That is the following R code: ``` laplace.approximation <- function(log.posterior, inits, n_samples, ...) { fit <- optim( par = inits, fn = log.posterior, control = list(fnscale = -1), hessian = TRUE, ... ) mean <- fit$par var_cov_matrix <- solve(-fit$hessian) samples <- rmvnorm(20000, mean, var_cov_matrix) return(list( mean = mean, var_cov_matrix = var_cov_matrix, samples = samples )) } inits <- c(theta0 = 0.001, theta1 = 0.001, theta2 = 0.001) lapprox <- laplace.approximation(log.posterior, inits, 10000, t = df$day, experiment = df$Exp1) lapprox$mean lapprox$var_cov_matrix ``` This gives me the following mean vector and variance-covariance matrix for $\theta_k$. [](https://i.stack.imgur.com/AgJ0t.png) [](https://i.stack.imgur.com/wxrPG.png) I'm wondering if that makes sense that the variance and covariance are so tiny. Or is there a problem in my algorithm or computation of the log posterior ?
Laplace approximation from a log-posterior in R
CC BY-SA 4.0
null
2023-05-10T12:25:15.763
2023-05-16T00:48:28.153
2023-05-16T00:48:28.153
11887
355645
[ "r", "bayesian", "prior", "posterior", "laplace-approximation" ]
615430
1
null
null
0
19
In brms (which is heavily based on mgcv) there is a possibility to define non-linear formulas (meaning not linear in the parameters). However, for different reasons I need to use mgcv. E.g. the model `bf(y ~ b1 * exp(b2 * x), b1 ~ 1, b2 ~ 1, nl = TRUE)` is not linear in its parameters. Although for some easy examples there might be a possibility to rearrange the formulas such that they become linear in the parameters, this obviously does not apply to more complex ones. Is there something similar in mgcv such that the user can specify non-linear formulas? Thank you very much!
Non-linear formulas in mgcv
CC BY-SA 4.0
null
2023-05-10T12:36:46.107
2023-05-10T12:42:34.880
null
null
368665
[ "nonlinear-regression", "mgcv", "brms" ]
615431
2
null
615381
2
null
Note that you have the plots in the wrong order in the included figure; panel 2 should be panel 3 and vice versa. The first plot is the result from evaluating the smooth function $f_j(\mathtt{dur}_i)$ at 100 evenly spaced values over the range of `dur`. But as it also include the confidence interval, which is very wide for large values of `dur`, the y axis range is much larger than the values taken by the smooth function itself. That this axis includes 0 is simply due to the sum-to-zero constraints imposed on the smooths. The 0 on the axis is the expected value of the response on the link scale; in other words, 0 represents the model constant (intercept) term, and values above 0 on the plot are larger than the expected value, etc. We need to do things like this so we can have an intercept in the model. The third plot, the result of the call `predict(g, type="terms")` is showing you essentially the same thing as the first plot; the different axis range is simply the result of you not showing the uncertainty in the estimated partial effects of $f_j(\mathtt{dur}_i)$ via the confidence band. As the default is to return these term-wise contributions on the link scale, and for the same reason as above, the 0 on the axis is the result of the sum-to-zero constraint applied to the smooth. The second plot, the result of the `predict()` call while trying to exclude certain terms is not what you think it is. {mgcv} is a little bit naughty here; the variables you tried to exclude from the prediction aren't being excluded because there are no terms in the model with the names you supplied. Here you'd need to provide the names of the smooths as they are shown in `summary(g)`. So, what your plot showed is simply the fitted values for `wesdr` data set observations, but on the link scale. As the link scale is the logit scale, it is perfectly fine for the values to be negative (technically they could range from minus- to plus-infinity, but because of the way the logit works, they are most likely to be in the range +/-4). If you were to apply the inverse of the link function to the predicted values, you'd get values between 0 and 1. The reason the values are all over the place (and not on a nice smooth line) is because you didn't actually exclude the effects of $f_j(\mathtt{gly}_i)$ and $f_j(\mathtt{bmi}_i)$ from the predictions. What you needed to do is to call: ``` p.s <- predict(g, exclude = c("s(gly)","s(bmi)"), type="link") ``` Note that both your second and third plots are not going to result in something like plot 1 because you are predicting for the observed values of `dur`, which are not evenly spaced or in ascending order in the observed data frame. Hence if you were to plot the fitted values using a line, you'd get a horrible mess too. 
If you really want to do this stuff by hand, you want to create a data slice along `dur` holding `gly` and `bmi` at fixed representative values, and then use `predict(..., type = "terms")` (and then apply the inverse of the link function to map them to the 0,1 scale if that's what you want) or `predict(g, exclude = c("s(gly)","s(bmi)"), type="response")` I find this easier with my gratia package: ``` ds <- data_slice(g, dur = evenly(dur)) fv <- fitted_values(g, data = ds, exclude = c("s(gly)","s(bmi)"), scale = "response") fv ``` This results in ``` > fv # A tibble: 100 × 7 dur gly bmi fitted se lower upper <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 1.2 12.2 22.9 0.262 0.285 0.169 0.383 2 1.75 12.2 22.9 0.278 0.253 0.190 0.387 3 2.29 12.2 22.9 0.295 0.222 0.213 0.393 4 2.84 12.2 22.9 0.313 0.196 0.237 0.400 5 3.38 12.2 22.9 0.330 0.173 0.260 0.409 6 3.93 12.2 22.9 0.348 0.156 0.282 0.421 7 4.47 12.2 22.9 0.366 0.144 0.303 0.434 8 5.02 12.2 22.9 0.383 0.136 0.323 0.448 9 5.56 12.2 22.9 0.400 0.131 0.340 0.463 10 6.11 12.2 22.9 0.416 0.128 0.356 0.478 # ℹ 90 more rows # ℹ Use `print(n = ...)` to see more rows ``` and note that `fitted-values()` did the right thing and computed the fitted values and their confidence band on the link scale and then backtransformed the fitted values and the upper and lower bounds of the band back to the response scale. Now you can plot this: ``` library("ggplot2") fv |> ggplot(aes(x = dur, y = fitted)) + geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.2) + geom_line() ``` which gives [](https://i.stack.imgur.com/bagvK.png) In general, you were on the right track but just missed a couple of important details: - the plot produced by plot(g) (or gratia::draw(g)) for the $f_j(\mathtt{dur}_i)$ term uses 100 evenly spaced values over the range of dur and it is drawn on the link scale, and as the smooth is subjected to the sum-to-zero identifiability constraint 0 values are perfectly OK, and - you didn't provide the correct term labels to exclude the effects of the other smooths when you predicted.
null
CC BY-SA 4.0
null
2023-05-10T12:38:33.723
2023-05-10T12:38:33.723
null
null
1390
null
615432
1
null
null
1
41
The area under the receiver operating characteristic curve has an interpretation of how well the predictions for two categories are separated. [This](https://stats.stackexchange.com/a/306085/247274) post gives the area under the precision-recall curve as the average precision across all thresholds. This is unsatisfying. Are there other interpretations?
Interpretation of area under the precision-recall curve
CC BY-SA 4.0
null
2023-05-10T12:38:36.150
2023-05-10T12:38:36.150
null
null
247274
[ "machine-learning", "roc", "auc", "precision-recall" ]
615434
2
null
615430
1
null
No, there isn't; {mgcv} is not intended for fitting non-linear models of the form you show. > (which is heavily based on mgcv) {brms} isn't actually heavily based on {mgcv}; in fact, most of the package uses code that has nothing to do with {mgcv}. The only thing {brms} uses from {mgcv} is the code needed to create the basis expansions for smooths, and the code used to convert these smooths into their respective random effect form. The `nls()` function in base R and `nlme()` in the package of the same name can fit non-linear models.
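For instance, a simple non-linear mean function can be fit with `nls()` along these lines (simulated data and an exponential curve chosen purely for illustration):

```
set.seed(1)
x <- runif(100, 0, 5)
y <- 2 * exp(0.7 * x) + rnorm(100, sd = 0.5)

# fit y = a * exp(b * x) by non-linear least squares; starting values are required
fit <- nls(y ~ a * exp(b * x), start = list(a = 1, b = 0.5))
summary(fit)
```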
null
CC BY-SA 4.0
null
2023-05-10T12:42:34.880
2023-05-10T12:42:34.880
null
null
1390
null
615435
2
null
615421
5
null
This is an interesting idea. However, I think you are doing less data reduction than you think (or perhaps none at all). For instance, by making a categorical feature with each original variable as a category, you now have data with size $(50,000,000\times 250,000)\times 3$, equal to $12.5$-trillion rows. That is with three columns, so you have $37.5$-trillion values to handle in the new data set, as opposed to just $50,000,000\times 250,000=12.5$-trillion in the original data. Okay, but your goal is to pass subsets of the data into an algorithm and train in batches, either explicitly with a neural network or at least in a manner that is evocative of using batches to optimize neural network parameters. Then it would seem that you can pass batches into your training, roughly $12{,}500$ rounds of passing data of size $1$-billion$\times 3$. However... ...you have to do something with your column that has $50$-million categories. A common way to handle categorical data is to make indicator variables where the indicator takes $1$ if that category is represented and $0$ otherwise, meaning that you have $50$-million new variables. Thus, while you think your data set is $12.5$-trillion$\times 3$, your data set is actually more like $12.5$-trillion$\times 50$-million, and you have made the problem $250,000$-times worse.
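A quick back-of-the-envelope check of those numbers (plain R arithmetic, nothing model-specific):

```
rows_orig   <- 50e6;  cols_orig <- 250e3
values_orig <- rows_orig * cols_orig   # 1.25e13 = 12.5 trillion values / long-format rows
values_long <- values_orig * 3         # 3.75e13 values once the index and value columns are counted
batches     <- values_orig / 1e9       # 12,500 batches of 1 billion rows each
c(values_orig = values_orig, values_long = values_long, batches = batches)
```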
null
CC BY-SA 4.0
null
2023-05-10T13:09:09.130
2023-05-10T13:09:09.130
null
null
247274
null
615436
2
null
615205
1
null
Sometimes for positive data it can make sense to report the mean and standard deviation of the log of the data rather than the data itself. This is arguably the best summary you can give if the data seems to follow an approximately [log-normal](https://en.wikipedia.org/wiki/Log-normal_distribution) distribution. The answer to [this question](https://stats.stackexchange.com/questions/241187/calculating-standard-deviation-after-log-transformation) probably gives a better discussion of this option than I can.
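For instance (simulated, roughly log-normal data just to show the mechanics):

```
set.seed(42)
x <- rlnorm(200, meanlog = 1, sdlog = 0.5)        # positive, roughly log-normal data
c(mean_log = mean(log(x)), sd_log = sd(log(x)))   # report these instead of mean(x), sd(x)
```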
null
CC BY-SA 4.0
null
2023-05-10T13:14:01.603
2023-05-10T13:14:01.603
null
null
387647
null
615437
2
null
571956
0
null
The degrees of freedom should be the number of parameter estimates. As stated in this book: "To be sure, we use a Ljung-Box test, being careful to set the degrees of freedom to match the number of parameters in the model." (see section 9.9 here [https://otexts.com/fpp3/seasonal-arima.html](https://otexts.com/fpp3/seasonal-arima.html)). So, for a non-seasonal ARIMA model the dof will be p+q. For a seasonal ARIMA model, the dof should be p+q+P+Q. Hope this helps.
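For instance, in R the `fitdf` argument of `Box.test()` is where this adjustment goes (a minimal sketch; the series and orders are only illustrative):

```
# Ljung-Box test on ARIMA residuals, with df reduced by the p + q fitted parameters
fit <- arima(lh, order = c(1, 0, 1))   # p = 1, q = 1 on a built-in series
Box.test(residuals(fit), lag = 10, type = "Ljung-Box", fitdf = 1 + 1)
```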
null
CC BY-SA 4.0
null
2023-05-10T13:24:36.997
2023-05-10T13:24:36.997
null
null
387649
null
615438
2
null
615421
13
null
Reshaping the data doesn't solve the problem because at the end of it, you have the same amount of data, plus an additional "index" column. If loading 1 row is expensive, then loading 1 row plus some more data must also be expensive. Moreover, using the index column to "decode" the new format isn't free, so any comparison would need to account for the costs (memory, compute) of how you decode this format. If you're not making any effort to decode the new indexing column, and are just treating it as data, then obviously that's not going to work because (1) the index data will change based on the arbitrary choice of how the columns are ordered and (2) the original data are treated as a single feature, so you can't model the outcome as arising from interactions among different features.

However, there are some extremely simple steps that you can take to reduce the number of features you have.

- Remove any columns that are comprised entirely of the same value. I know this is obvious, but it bears mentioning because it's easy & simple.
- Screen out perfect-correlation features. There are ways to compute correlation coefficients one observation at a time, such as in "Online update of Pearson coefficient", and then screen out the values with perfect correlation (a small sketch of such a streaming update is given at the end of this answer). The reasoning here is that perfectly-correlated features add no new information to the model, so removing one of them will save on memory and compute. You could relax "perfect correlation" to also exclude highly-correlated features, but then you'd have to make a choice about which half of each highly-correlated pair to keep, which could be important for inference or explanatory modeling. Moreover, whereas one feature in a pair of perfectly correlated features provides no new information compared to the other feature, this may not be the case for highly-but-not-perfectly-correlated features.

I've used these two extremely simple feature screening methods to dramatically cut down the number of features in real data sets. (As an aside, some people use univariate feature screens for association between the target variable and a single feature. I don't recommend using these because they can leave out features that are predictive only in conjunction with other features. You've stated that this is the case for your data, so I don't think they would be useful for solving your problem.)

However, these two steps are mostly preludes to help economize a more advanced analysis. Because you have 200 times as many rows as features, it makes sense to re-express the data as a full-rank matrix (in your case, at most $250~000 \times 250~000 $). One way to do this is using [pca](/questions/tagged/pca), for which there are [online-algorithms](/questions/tagged/online-algorithms). This is a review article I found as the first hit on Google. "[Online Principal Component Analysis in High Dimension: Which Algorithm to Choose?](https://arxiv.org/abs/1511.03688)" by Hervé Cardot and David Degras > In the current context of data explosion, online techniques that do not require storing all data in memory are indispensable to routinely perform tasks like principal component analysis (PCA). Recursive algorithms that update the PCA with each new observation have been studied in various fields of research and found wide applications in industrial monitoring, computer vision, astronomy, and latent semantic indexing, among others. This work provides guidance for selecting an online PCA algorithm in practice. 
We present the main approaches to online PCA, namely, perturbation techniques, incremental methods, and stochastic optimization, and compare their statistical accuracy, computation time, and memory requirements using artificial and real data. Extensions to missing data and to functional data are discussed. All studied algorithms are available in the R package onlinePCA on CRAN. Of course PCA is just one example of a dimension-reduction algorithm that can be conducted in an online fashion. There are many more. Another example would be incremental [svd](/questions/tagged/svd) (the gold-standard rank-revealing algorithm) or QR decomposition. Both have incremental/online algorithms. Depending on the qualities of the data (are some columns sparse or binary?), you may wish to use methods tailored to those qualities. If you don't want to do dimensionality reduction, then you can use machine learning models that are trained with online algorithms. As an example, models that are trained using [stochastic-gradient-descent](/questions/tagged/stochastic-gradient-descent) (such as neural networks or regression) can work with only loading 1 example into memory at a time. Finally, a literature review will be extremely enlightening! From one of OP's comments, I infer that these are data about gene sequences. If this surmise is correct, then there is a wealth of published academic literature where researchers have encountered this exact problem and devised different approaches to solving it. Even if my guess is not correct, and the problem is not exactly the same as gene sequencing, a literature review will still reveal numerous examples of researchers having large datasets & reducing them to smaller data sets.
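To make the streaming-correlation idea concrete, here is a minimal sketch of a one-pass (Welford-style) update of the quantities needed for a Pearson correlation; it is illustrative only, processing one observation at a time without storing the data, and is not optimized:

```
online_cor <- function(x, y) {
  n <- 0; mx <- 0; my <- 0; Cxy <- 0; Cxx <- 0; Cyy <- 0
  for (i in seq_along(x)) {
    n  <- n + 1
    dx <- x[i] - mx          # deviations from the *old* means
    dy <- y[i] - my
    mx <- mx + dx / n        # update the running means
    my <- my + dy / n
    Cxy <- Cxy + dx * (y[i] - my)   # Welford-style co-moment updates
    Cxx <- Cxx + dx * (x[i] - mx)
    Cyy <- Cyy + dy * (y[i] - my)
  }
  Cxy / sqrt(Cxx * Cyy)
}

set.seed(1)
a <- rnorm(1000); b <- 0.5 * a + rnorm(1000)
c(online = online_cor(a, b), base = cor(a, b))  # the two should agree
```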
null
CC BY-SA 4.0
null
2023-05-10T13:24:55.123
2023-05-21T17:17:31.180
2023-05-21T17:17:31.180
22311
22311
null
615442
1
null
null
0
10
I am building a classification model on an imbalanced dataset (fraud). The way I preprocess the imbalanced dataset follows [this link](https://stats.stackexchange.com/questions/164693/adding-weights-to-logistic-regression-for-imbalanced-data) - basically I don't down-sample/over-sample, but apply a `weight` of 1 to all fraud observations (rare) and 10 to all good observations. After applying the `weight` to the full data, I split the full dataset into a training dataset and a validation dataset; obviously the validation set is also imbalanced and carries the `weight` column. My questions are:

- Should I use the weight column for the validation data when computing metrics (Gains/Lift, AUC, confusion matrices, logloss, etc.), ideally consistently between training and testing? I am trying to use `sklearn.metrics.classification_report` to evaluate the performance and noticed there is a `sample_weight` parameter. However, checking this post, it answers that:

> The sample-weight parameter is only used during training.

- I am trying to evaluate whether there is data drift and whether the model needs to be retrained. For the incoming new test dataset, which is directly sampled from the last month of data (let's call it `new_data`), I do not have a weight column. It is possible that the distribution of the minority class changes significantly over time. In this regard, how should I measure data drift?

Thanks!
how to apply `weight` for imbalanced validation data when to evaluate model performance?
CC BY-SA 4.0
null
2023-05-10T14:23:58.907
2023-05-10T14:23:58.907
null
null
181290
[ "classification", "predictive-models", "boosting", "model-evaluation", "unbalanced-classes" ]
615443
1
null
null
0
16
Let us say we want to predict a one-dimensional, real-valued $Y$ from $X$. In gradient boosting, the final model $f$ is built by adding the predictions of several trees, essentially $f_1 + \dots + f_M$. Each tree $f_m$ is built using the following recursion: $$f_m(x) = f_{m-1}(x) + \nu \cdot t(x),$$ where $t$ is a new tree and $\nu$ is the learning rate. Now, it makes sense to fit $t$ to the residuals, since we want to predict $y$ as $\tilde{y} = f_{m-1}(x) + f_{m}(x)$, and then $f_{m}(x) = y - f_{m-1}(x)$. This means, in principle, that the new tree being built should predict $y-f_{m-1}(x)$ for each $(x,y)$ pair in the dataset. The learning rate reduces the influence of each newly added tree, yielding a more robust method. As far as I know, this is the standard boosting method. However, I am having trouble understanding why using the gradient is helpful. Why would we want to fit the tree on the gradient of the loss function?
Why use the "gradient" in gradient boosting?
CC BY-SA 4.0
null
2023-05-10T14:26:08.513
2023-05-10T14:26:08.513
null
null
138788
[ "machine-learning", "boosting", "gradient-descent" ]
615444
2
null
338081
0
null
Without having the packages at the top of the code, I cannot determine what exactly these functions do. However, it seems like the culprit is that you are predicting values on a continuum and then trying to determine the confusion matrix. However, a confusion matrix requires discrete categories. When you have predictions on a continuum, probably every single prediction is at least a little bit incorrect. There are also likely to be more distinct predictions than categories, which is completely consistent with the error message that you have more levels (distinct values) in the predictions than in the original data. A way to solve this is to apply a threshold to the continuous output to bin the continuous predictions into discrete categories. There are [issues](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp) with doing this, and I encourage all readers to go through that link if they are unfamiliar with this material. However, that will lead to predicted categories for which a confusion matrix can be calculated.
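As a sketch of that last step (the object names `pred_cont` and `truth` here are placeholders for whatever your code actually produces):

```
threshold  <- 0.5                                      # an arbitrary cutoff, for illustration only
pred_class <- ifelse(pred_cont > threshold, "yes", "no")

# force both vectors to share the same levels so the table is 2 x 2
table(Predicted = factor(pred_class, levels = c("no", "yes")),
      Actual    = factor(truth,      levels = c("no", "yes")))
```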
null
CC BY-SA 4.0
null
2023-05-10T14:29:41.377
2023-05-10T14:29:41.377
null
null
247274
null
615445
1
null
null
0
30
I'm trying to compare multiple learners on my dataset (called "data") in order to predict a target called "lesionResponse", with custom resampling. Since the mlr3 package doesn't allow grouping and stratifying at the same time, I used a customized resampling in order to avoid leakage of data between sets. There is also a ratio of 3 between the 0/1 modalities of my target. Here is a short example of my dataset:

```
structure(list(PatientID = c("P1", "P1", "P1", "P1", "P1", "P1", "P2", "P2", "P3", "P4", "P5", "P5", "P5", "P5", "P5", "P6", "P6", "P6"),
LesionResponse = structure(c(2L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L), .Label = c("0", "1"), class = "factor"),
F1 = c(1.25, 1.25, 1.25, 1.25, 1.25, 1.25, 0.625, 0.625, 0.625, 0.625, 0.625, 0.625, 1.25, 0.625, 0.625, 1.25, 1.25, 1.25),
F2 = c(1, 5, 3, 2, 1, 1, 6, 9, 0, 5, 0, 4, 4, 4, 5, 2, 1, 1),
F3 = c(0, 4, 3, 1, 1, 0, 3, 8, 4, 5, 0, 4, 4, 3, 5, 2, 0, 0),
F4 = c(0, 9, 0, 7, 4, 0, 3, 8, 4, 5, 9, 1, 1, 3, 5, 3, 9, 0)),
row.names = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L), class = "data.frame")
```

I manually split the data into train, validation, and test sets, with inner and outer resampling for the analysis:

```
data
data$weights = ifelse(data$LesionResponse == "1", 3, 1)

task = as_task_classif(data, target = "LesionResponse")
task$set_col_roles("weights", roles = "weight")

# Creation of the OUTER resampling via customization
resampling_outer = rsmp("custom")
resampling_outer$instantiate(task, train = list(train_rows_outer), test = list(test_rows))

# Creation of the INNER resampling via customization
resampling_inner = rsmp("custom")
resampling_inner$instantiate(task, train = list(train_rows), test = list(valid_rows))
```

Explanation:

- data = my dataset
- outer resampling = I divide my entire dataset into train_rows_outer and test_rows for the final prediction and benchmark.
- inner resampling = inside train_rows_outer, I created a train and a validation set to tune the parameters of my models. 
Here are my models:

- ranger
- rpart
- svm
- knn
- xgboost

I'll just post two of them to show the code:

```
# Auto tuning Ranger
learner_ranger = lrn("classif.ranger",
  predict_type = "prob",
  num.trees = to_tune(1, 2000),
  mtry.ratio = to_tune(0, 1),
  sample.fraction = to_tune(1e-1, 1))

rr_ranger = tune_nested(
  tuner = tnr("grid_search", resolution = 5),
  task = task,
  learner = learner_ranger,
  inner_resampling = resampling_inner,
  outer_resampling = resampling_outer,
  measure = msr("classif.ce"),
  term_evals = 20,
  store_models = TRUE,
  terminator = trm("none")
)

rr_ranger$score()[, .(iteration, task_id, learner_id, resampling_id, classif.ce)]

predictions = rr_ranger$prediction()
predictions$confusion

        truth
response   0   1
       0 163  65
       1   0   0
```

```
# Auto tuning svm
learner_svm = lrn("classif.svm",
  type = "C-classification",
  cost = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE)),
  gamma = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE)),
  kernel = to_tune(c("polynomial", "radial")),
  degree = to_tune(1, 4))

rr_svm = tune_nested(
  tuner = tnr("grid_search", resolution = 10),
  task = task,
  learner = learner_svm,
  inner_resampling = resampling_inner,
  outer_resampling = resampling_outer,
  measure = msr("classif.ce"),
  term_evals = 20,
  store_models = TRUE,
  terminator = trm("none")
)

extract_inner_tuning_results(rr_svm)[, .SD, .SDcols = !c("learner_param_vals", "x_domain")]
extract_inner_tuning_archives(rr_svm, unnest = NULL, exclude_columns = c("resample_result", "uhash", "x_domain", "timestamp"))

rr_svm$score()[, .(iteration, task_id, learner_id, resampling_id, classif.ce)]

predictions = rr_svm$prediction()
predictions$confusion

        truth
response   0   1
       0 163  65
       1   0   0
```

Even with the weights added, the normalization, and the feature selection, all my models predict the majority class of my set. Could the problem be in my code or in my choice of classifier? Would another metric be better?
Overfitting models in mlr3
CC BY-SA 4.0
null
2023-05-10T14:34:18.157
2023-05-10T14:34:18.157
null
null
378883
[ "r", "machine-learning", "predictive-models", "confusion-matrix", "mlr" ]
615446
2
null
615368
3
null
The matching order is controlled by the `m.order` argument, as explained in the [documentation](https://kosukeimai.github.io/MatchIt/reference/matchit.html) for `matchit()`. There are four options currently available: - "largest" - matches treated units with the highest propensity score first (in theory, these will be the hardest to match); this is the default for propensity score matching - "smallest" - matches the treated units with the lowest propensity score first (in theory, these will be the easiest to match) - "data" - matches treated units in the order they appear in the dataset; this is the default for Mahalanobis distance matching or when matching on a distance matrix - "random" - matches treated units in a random order All of these are deterministic except `"random"`, meaning you will get the same results each time you run them. With `m.order = "random"`, changing the seed should yield different results each time and you must set a seed to be able to replicate the match. Only `"data"` is affected by the order of the data, so reordering your data will not change matches using the other options. Another option is to use `method = "optimal"`, which performs optimal matching. Optimal matching is also a deterministic algorithm that minimizes the overall within-pair distances in the full sample, not just in one pair at a time. In this way, it avoids the arbitrariness of greedy matching. The literature has also described another method that can work well but isn't implemented in `MatchIt`, which is to match treated units with the closest control unit first. This algorithm is slower and often performs similarly to `"smallest"` anyway.
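For instance, a call along these lines (using the `lalonde` data that ships with `MatchIt` and an arbitrary covariate set, purely for illustration) requests random-order matching, where setting a seed makes the match reproducible:

```
library(MatchIt)

set.seed(1234)  # needed to reproduce the match when m.order = "random"
m_rand <- matchit(treat ~ age + educ + married + re74, data = lalonde,
                  method = "nearest", distance = "glm", m.order = "random")
# m.order = "largest" (the default for propensity score matching) would be deterministic instead
summary(m_rand)
```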
null
CC BY-SA 4.0
null
2023-05-10T14:37:46.043
2023-05-15T18:55:11.850
2023-05-15T18:55:11.850
116195
116195
null
615447
2
null
188719
1
null
This makes some sense. If the model is (over)fitting to coincidences in the data instead of the true pattern, then it is plausible that it will (over)fit to coincidences when you permute the labels. However, if you are willing to be so heavy on computing and fit your model over and over, alternatives come to mind. A notable one is to bootstrap the data and redo the entire modeling process on the bootstrap sample before evaluating on the entire sample. I discuss this process [here](https://stats.stackexchange.com/a/563402/247274). Another possibility is to use some kind of cross-validation where you fit a model to a subset of the data and then test on the remaining data. Then you repeat this for many different sets of training data with holdout data for testing. An advantage of the bootstrap is that you use the entire data set for developing your model, rather than decreasing the sample size by sacrificing precious data to the holdout set. Neither the bootstrap nor cross-validation seems inherently more computationally demanding than your proposed ideas to permute the labels.
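For instance, a bare-bones version of the bootstrap approach might look like this (simulated data; the logistic regression and AUC here are stand-ins for whatever modeling process and performance metric you actually use):

```
set.seed(1)
dat <- data.frame(x = rnorm(100))
dat$y <- rbinom(100, 1, plogis(dat$x))

B <- 200
auc_boot <- numeric(B)
for (b in seq_len(B)) {
  boot <- dat[sample(nrow(dat), replace = TRUE), ]
  fit  <- glm(y ~ x, family = binomial, data = boot)       # redo the modeling on the bootstrap sample
  p    <- predict(fit, newdata = dat, type = "response")
  auc_boot[b] <- as.numeric(pROC::auc(dat$y, p))            # evaluate on the entire original sample
}
summary(auc_boot)
```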
null
CC BY-SA 4.0
null
2023-05-10T14:40:47.807
2023-05-10T14:40:47.807
null
null
247274
null
615448
1
null
null
0
7
Context:

- dataset with n x p = 200 x 80
- no categorical variables.

My goal is to estimate the performance of machine learning methods which include feature selection on my samples, in order to use the best one for further predictions. I perform my analysis in R with the package caret, with supervised ML methods (LASSO, stepwise, etc...). However, I don't understand the whole CV performance assessment in caret: with the `caret::resamples()` function, I get the average of MSE, R2 and MAE. These parameters could each be calculated separately for the training set or for the test set, if I wanted to. What interests me is how the ML method performs when generalized to unseen data.

---

My question is therefore: at each turn of the CV process, which part (training or test) of the dataset is used to calculate these criteria?

---

The most obvious thing would be that the test set is used to calculate the criteria previously mentioned, but I can't find anything that guarantees it. I have tried to run glmnet with cross-validation separately, both with the original R package called "glmnet" (`glmnet::cv.glmnet()`) and with what should be the same method included in caret (same hyperparameters, same number of folds). However, I get different R2, MSE, and MAE from the two ways of running glmnet. This is what explains the origin of my question. I would rather compare each ML method in caret, in order to ensure the reliability of my comparisons (each ML method running on the same folds), than have to load each package separately and run the ML methods individually.
Is model performance estimation through Cross Validation in Caret done on the training set, test set, or the whole dataset after fitting of model?
CC BY-SA 4.0
null
2023-05-10T14:54:03.703
2023-05-10T14:54:03.703
null
null
386070
[ "regression", "cross-validation", "feature-selection", "model", "caret" ]
615449
2
null
195144
4
null
When you perform a regression on the ten variables, you are using all of them together to predict the outcome. Thus, each variable can, on its own, be a rather poor predictor that has fairly low correlation with the outcome. For instance, in the simulation below, I get an $R^2$ close to your value of $0.98$, yet the correlations between individual variables and the outcome are all around $0.3$.

```
set.seed(2023)
N <- 158
p <- 10
X <- matrix(rnorm(N * p), N, p)
B <- rep(1, p)
Ey <- X %*% B
e <- rnorm(N, 0, 0.5)
y <- Ey + e
L <- lm(y ~ X)
cors <- rep(NA, p)
for (i in 1:p){
  cors[i] <- cor(X[, i], y)
  print(cors[i])
}
summary(L)$r.squared # I get almost 0.98
summary(cors)        # Range from 0.236 to 0.38
```

While $R^2=0.98$ is suspiciously high for some fields, the goal of predictive modeling is to make accurate predictions. If you validate the performance (some kind of out-of-sample testing or [bootstrap](https://stats.stackexchange.com/questions/563390/is-it-valid-to-do-roc-analysis-without-using-test-data/563402#563402)), perhaps a congratulations is in order for getting a model to reliably make such accurate predictions!
null
CC BY-SA 4.0
null
2023-05-10T14:57:59.330
2023-05-10T15:46:08.413
2023-05-10T15:46:08.413
247274
247274
null
615450
2
null
615423
0
null
You are allowed to not make any p-value correction. You just need to understand and acknowledge that the chance of at least one of your reported significant results being a type-I error is relatively high. It may be helpful to remember that the p-value threshold of 0.05 isn't magic. This kind of approach makes sense, for example, in a screening study with many treatments, to identify which treatments may warrant further study, essentially because you are more concerned with type-II errors than type-I errors.
null
CC BY-SA 4.0
null
2023-05-10T15:02:51.127
2023-05-10T15:02:51.127
null
null
166526
null
615451
2
null
587451
1
null
Yes. The `pROC` package in `R` even writes the x-axis this way.

[](https://i.stack.imgur.com/PTzJ9.png)

```
library(pROC)
set.seed(2023)
n <- 1000
p <- rbeta(n, 1, 1)
y <- rbinom(n, 1, p)
r <- pROC::roc(y, p)
plot(r)
```

[Frank Harrell in the comments has an interesting blog post about how sensitivity and specificity might be less useful than one might hope.](https://www.fharrell.com/post/backwards-probs/) Briefly, they condition on the unknown to predict the known.
null
CC BY-SA 4.0
null
2023-05-10T15:08:44.490
2023-05-10T15:08:44.490
null
null
247274
null
615452
1
null
null
0
25
To build a confidence interval, I have my pivotal quantity, say $T$, and the confidence interval is built on $\mathbb{P}(q_{\alpha/2}\leq T\leq q_{1-\alpha/2})=1-\alpha$, where $q_{\alpha/2}$ and $q_{1-\alpha/2}$ are the quantiles of order $\alpha/2$ and $1-\alpha/2$ of my pivotal quantity $T$. Suppose $X \sim \mathcal{N}(\mu, \sigma^2)$, I obtain $\{X_1,\ldots,X_n\}$, and I want to build a confidence interval for the mean $\mu$, with $\sigma^2$ known. I use the empirical mean $\bar{X}$ as my estimator for $\mu$. How come the confidence interval is $\mathbb{P}\left( \overline{X}-z_{1-\alpha/2}\frac{\sigma}{\sqrt{n}} \leq \mu \leq \overline{X}+z_{1-\alpha/2}\frac{\sigma}{\sqrt{n}}\right) =1-\alpha$? Is it because of the symmetry of the Gaussian that my $z_{\alpha/2}$ becomes $z_{1-\alpha/2}$?
Confidence interval for a standard normal distribution
CC BY-SA 4.0
null
2023-05-10T15:14:33.967
2023-05-10T15:14:33.967
null
null
387653
[ "normal-distribution", "confidence-interval" ]
615453
2
null
609753
0
null
In some sense, this seems like a pretty standard supervised learning problem. You have features that predict the shape of the plot, and five characteristics that determine that shape. Use the features to predict those five characteristics in a multivariate regression. Now, multivariate regression with a five-variable outcome makes for complicated statistics (for instance, how the five values are related to one another), but that seems like the core of your problem and a possible starting point. It seems, however, like your main goal is to predict the value of the `data` at a particular `time`. If that is the case, it would seem that you could use `time` as an additional feature, perhaps interacted with the original features, to predict the `data` value instead of the five plot parameters. [functional-data-analysis](/questions/tagged/functional-data-analysis) might prove useful for this.
null
CC BY-SA 4.0
null
2023-05-10T15:16:02.343
2023-05-10T15:16:02.343
null
null
247274
null
615454
2
null
408171
0
null
To complement Xi'an's answer, one simple approach to solve numerically such equations would be by using a Taylor series approximation. If we denote $m,s^2$ the target mean and variance of the truncated distributrion, and $\mu,\sigma^2$ the mean and variance of the underlying Gaussian distribution then: \begin{align} m &= \mu+z(\mu/\sigma)\,\sigma\\ s^2 &= \left[1-z(\mu/\sigma)\,\mu/\sigma-z(\mu/\sigma)^2 \right]\sigma^2\\ z(\mu/\sigma) &= \phi(\mu/\sigma)/\Phi(\mu/\sigma) \end{align} The function $z(\mu/\sigma)$ can be approximated with a Taylor's series expansion around $m/s$. Denote $\hat{z}=z(m/s)$, then: \begin{align} z(\mu/\sigma) &\approx \hat{z}+(\mu/\sigma-m/s)\frac{\phi'(m/s)\Phi(m/s)-\phi(m/s)^2}{\Phi(m/s)^2}\\ &= \hat{z}+(\mu/\sigma-m/s)\,(-\hat{z} m/s-\hat{z}^2)\\ &= [1+(m/s)^2+\hat{z} m/s]\hat{z}-(\mu/\sigma)(m/s+\hat{z})\hat{z}\\ &= A - B\mu/\sigma \end{align} Where $A=[1+(m/s)^2+\hat{z} m/s]\hat{z}>0$ and $B=(m/s+\hat{z})\hat{z}>0$. Using this approximation into the first equation we get: \begin{align} m &\approx \mu + (A-B\mu/\sigma)\sigma =(1-B)\mu+A\sigma\\ \Rightarrow \mu &\approx (1-B)^{-1}(m-A\sigma) \end{align} Replacing both approximations into the second equation we get: \begin{align} s^2 &\approx [1-(A-B\mu/\sigma)\mu/\sigma-(A-B\mu/\sigma)^2]\sigma^2\\ &= (1-A^2)\sigma^2-A(1-2B)\mu\sigma+B(1-B)\mu^2\\ &= (1-A^2)\sigma^2-\frac{A(1-2B)}{1-B}(m-A\sigma)\sigma+\frac{B}{1-B}(m-A\sigma)^2\\ \Rightarrow (1-B)s^2 &\approx (1-B)(1-A^2) \sigma^2 -A(1-2B)(m-A\sigma)\sigma +B(m-A\sigma)^2\\ 0 &\approx (1-B)\sigma^2-Am\sigma + Bm^2-(1-B)s^2 \end{align} Such equation has as an approximate solution for $\sigma^2$: \begin{equation} \hat{\sigma}^2 =\frac{2BD+C^2-2D+C\sqrt{C^2-4D(1-B)}}{2(1-B)^2} \end{equation} With $C=Am$, $D=Bm^2-(1-B)s^2$, and provided that $C^2-4D(1-B)>0$ and $\hat{\sigma}^2>0$. The approximate solution for $\mu$ is then: $$ \hat{\mu} = \frac{m-A\hat{\sigma}}{1-B} $$ Thus, given $(m,s)$, one approximate solution $(\hat{\mu},\hat{\sigma}^2)$ for $({\mu},{\sigma}^2)$ can be found following these steps: - $\hat{z}=\phi(m/s)/\Phi(m/s)$ - $A=[1+(m/s)^2+\hat{z} m/s]\hat{z}$; $B=(m/s+\hat{z})\hat{z}$ - $C=Am$; $D=Bm^2-(1-B)s^2$ - $\hat{\sigma}^2 =\frac{2BD+C^2-2D+C\sqrt{C^2-4D(1-B)}}{2(1-B)^2}$ (provided $C^2-4D(1-B)>0$ and $\hat{\sigma}^2>0$) - $\hat{\mu}=\frac{m-A\hat{\sigma}}{1-B}$
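If it helps, the five steps translate directly into code; here is a small R sketch (my own transcription of the formulas above, returning NA when the stated conditions fail; the inputs at the end are arbitrary example values):

```
approx_trunc_params <- function(m, s) {
  zhat <- dnorm(m / s) / pnorm(m / s)                     # step 1
  A <- (1 + (m / s)^2 + zhat * m / s) * zhat              # step 2
  B <- (m / s + zhat) * zhat
  C <- A * m                                              # step 3
  D <- B * m^2 - (1 - B) * s^2
  disc <- C^2 - 4 * D * (1 - B)
  if (disc <= 0) return(c(mu = NA, sigma2 = NA))
  sigma2 <- (2 * B * D + C^2 - 2 * D + C * sqrt(disc)) / (2 * (1 - B)^2)  # step 4
  if (sigma2 <= 0) return(c(mu = NA, sigma2 = NA))
  mu <- (m - A * sqrt(sigma2)) / (1 - B)                  # step 5
  c(mu = mu, sigma2 = sigma2)
}

approx_trunc_params(m = 1, s = 0.8)
```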
null
CC BY-SA 4.0
null
2023-05-10T15:28:50.723
2023-05-10T17:30:44.980
2023-05-10T17:30:44.980
387635
387635
null
615455
1
null
null
1
12
I have a general question (data can be provided if needed). If I ran a nested ANOVA and there is no significant effect of the nested factor on the model, is it reasonable to remove that variable and just run a one-way ANOVA?
Pooling data after a nested anova
CC BY-SA 4.0
null
2023-05-10T15:43:52.940
2023-05-10T17:51:39.280
null
null
384860
[ "anova", "nested-models" ]
615456
1
null
null
0
4
I have a device that can estimate 2 continuous, independent quantities A and B of its users. There is an accepted criterion for "accuracy" for each quantity A and B. Let's say I want to prove that my device has a sufficiently good accuracy ratio for A and B. I know I have sufficiently good existing data for that. Hence, I'll want to conduct a retrospective study. I still want to show that I control my type 2 risk, and perform a sample size computation using a separate calibration study that I have. I want a statistical power of 0.8. Should I compute the sample size for A and B separately since this is all retrospective, or should I use a conjunctive test ? On the one hand, using a conjunctive test seems more logical since I know I'll use the same dataset for both measures. On the other hand, if we do that, why not include in the conjunction any other metric that my device outputs, in particular a metric C that the dataset was built for ?
Conjunctive testing or not in retrospective study?
CC BY-SA 4.0
null
2023-05-10T15:45:59.043
2023-05-10T15:45:59.043
null
null
176476
[ "hypothesis-testing", "observational-study" ]
615457
1
null
null
0
16
I am currently working on a univariate time series that I am trying to model (following the Box-Jenkins methodology: I try to identify the model before I estimate it, using correlograms, in order to later validate it by running PP, ADF tests and so on, and finally to produce forecasts). Yet I am having trouble modelling it within a stationary framework. Here is my code:

```
auto.arima(log_prod_periode_1) # ARIMA(2,1,2)(2,1,1)[12]

library(lmtest)

# Fit the ARIMA model
spe <- arima(log_prod_periode_1, order=c(2,1,2), seasonal=list(order=c(2,1,1), period=12))
coeftest(spe) # AR(2), MA(2) and SMA(1) have significant coefficients

acf(spe$residuals, lag.max=36)
```

I get the following residual ACF [](https://i.stack.imgur.com/y4r2T.png), which indicates that the fitted model isn't sufficient to render my series stationary. The series is in logarithms (I had already transformed it, as the AIC and BIC were lower when I fitted auto.arima to the log of the series than when I fitted it to the initial series). Could anyone indicate how I could solve this problem? Here is the structure of my dataset:
How to stationarise a univariate time series in R when auto.arima gives a model to fit that doesn't stationarise it?
CC BY-SA 4.0
0
2023-05-10T15:50:23.583
2023-05-10T15:50:23.583
null
null
364061
[ "r", "time-series", "arima", "identifiability", "univariate" ]
615458
1
615463
null
1
20
Consider a regression analysis with a continuous outcome, two dichotomous predictors, and their interaction. We can use R's built-in `CO2` dataset, in which plants are Quebec-type or Mississippi-type and the treatment is chilled vs not chilled. ``` summary(lm(uptake ~ Treatment * Type, data=CO2)) # Estimate Std. Error t value Pr(>|t|) # (Intercept) 35.333 1.747 20.225 < 2e-16 *** # Treatmentchilled -3.581 2.471 -1.449 0.151141 # TypeMississippi -9.381 2.471 -3.797 0.000284 *** # Treatmentchilled:TypeMississippi -6.557 3.494 -1.877 0.064213 . ``` It seems to me that this is essentially estimating two separate models for the two plant types (or, equivalently, for the two treatment types). For a Quebec-type plant (the reference class), the intercept is `Intercept` and the effect of chilled treatment is `Treatmentchilled`. For a Mississippi-type plant, the intercept is `(Intercept + TypeMississippi)` and the effect of chilled treatment is `(Treatmentchilled + Treatmentchilled:TypeMississippi)`. If we run a regression just for plants of the reference class: ``` summary(lm(uptake ~ Treatment, data=CO2 %>% filter(Type=="Quebec"))) # Estimate Std. Error t value Pr(>|t|) # (Intercept) 35.333 2.099 16.830 <2e-16 *** # Treatmentchilled -3.581 2.969 -1.206 0.235 # F=1.455 (1,40), p=.23 ``` The coefficient estimates are indeed the same, but the standard errors are larger. Why is this? Where is the additional information coming from in the multiple regression model? It's hard for me to understand how Mississippi-type plants improve the precision of parameter estimates for Quebec-type plants, while the Mississippi-type plants are also being used to estimate parameters that apply only to their own type.
Multiple regression with every predictor interacted with a dichotomous variable, vs separate models
CC BY-SA 4.0
null
2023-05-10T15:58:49.480
2023-05-10T16:46:13.177
2023-05-10T16:21:51.323
9816
9816
[ "regression", "interaction", "standard-error" ]
615459
1
null
null
0
15
I know that the process is AR(1) and that some of the value from earlier periods is still reflected in the current period. But can we just raise $0.879$ to the power of $5$?
Is the following true: For the process $y_t =0.879y_{t-1} + e_t$, less than 60% of the value of five periods ago is still reflected in current period?
CC BY-SA 4.0
null
2023-05-10T16:07:48.447
2023-05-10T16:37:29.250
2023-05-10T16:37:29.250
5176
347300
[ "autoregressive" ]
615461
1
null
null
1
38
How can I compare the variability in a dependent variable for individuals across groups? I don't really have the statistics vocabulary to ask this question, so I'll paint a picture instead. Let's say I have two groups of individuals: group A and group B. Group A took a medication (or has some gene) that unpredictably alters reaction time: sometimes the people in Group A respond more slowly, but other times they respond really fast. Individuals in Group B tend to vary less in their reaction times. The mean reaction times for individuals in Group A and Group B are about the same: only the variances differ. If I have multiple measurements of reaction time for each individual, how would I show that individuals in group A react more variably than those in group B? What statistical tests or tools would I use?
Comparing within-subject variability across two groups
CC BY-SA 4.0
null
2023-05-10T16:23:07.920
2023-05-10T16:23:07.920
null
null
387659
[ "variance", "psychology", "variability" ]
615462
1
null
null
0
23
I'm trying to predict the amount of personal votes a political candidate gets based on how attractive they look. I have data from a general election and a local election. The dependent variable is highly right-skewed, so I have log-transformed it. In order to make the personal vote data comparable across elections, I have also z-transformed the log(personal votes). My question is: what is the substantive interpretation of the coefficient for the independent variable when the dependent variable has been first log-transformed and then z-transformed?

```
# Example data
set.seed(123)
attractiveness <- rnorm(1000, mean = 5, sd = 1)
age <- rnorm(1000, mean = 35, sd = 10)
gender <- sample(c(0, 1), 1000, replace = TRUE)

# personal votes - general election
personal_votes_ge <- exp(rnorm(500, mean = 100, sd = 1))
personal_votes_ge_transformed <- (log(personal_votes_ge) - mean(log(personal_votes_ge)))/sd(log(personal_votes_ge))

# personal votes - local election
personal_votes_le <- exp(rnorm(500, mean = 50, sd = 1))
personal_votes_le_transformed <- (log(personal_votes_le) - mean(log(personal_votes_le)))/sd(log(personal_votes_le))

personal_votes <- c(personal_votes_ge, personal_votes_le)
personal_votes_transformed <- c(personal_votes_ge_transformed, personal_votes_le_transformed)
election <- rep(1:2, each = 500)

# assemble the variables into a data frame (needed for the call below)
data <- data.frame(personal_votes_transformed, attractiveness, gender, age, election)

# The model
model <- lm(personal_votes_transformed ~ attractiveness + gender + age, data = data)
```
Interpreting coefficient from linear model when the dependent variable has been first log transformed then z-transformed
CC BY-SA 4.0
null
2023-05-10T16:39:09.963
2023-05-10T16:39:09.963
null
null
387656
[ "regression", "interpretation", "regression-coefficients", "logarithm", "z-score" ]
615463
2
null
615458
2
null
Recall that the expression for the variance of the regression coefficients is
$$ \hat\Sigma = {\hat\sigma}^2(X'X)^{-1} $$
The estimate of the residual variance $\hat\sigma^2$ will be different in the full model, where it is assumed to be equal among the groups, from the subgroup models, in which it is allowed to vary. If the assumption of homoscedasticity is true and the residual variance is the same in each group, then the full model will yield a more accurate estimate of the residual variance and therefore of the standard errors. If not, though, then you will see substantial differences between the residual variance of the full model and those of the subgroups, leading to different standard errors across the parameterizations. A solution to this is to use robust standard errors, which are consistent even in the case of unequal residual variance across groups. See below (some output abridged):

```
library(estimatr)

summary(lm_robust(uptake ~ Treatment * Type, data=CO2))
#> Standard error type: HC2 
#> 
#> Coefficients:
#>                                  Estimate Std. Error t value  Pr(>|t|) CI Lower
#> (Intercept)                        35.333      2.094  16.873 4.445e-28   31.166
#> Treatmentchilled                   -3.581      2.969  -1.206 2.313e-01   -9.489
#> TypeMississippi                    -9.381      2.645  -3.547 6.548e-04  -14.644
#> Treatmentchilled:TypeMississippi   -6.557      3.494  -1.877 6.421e-02  -13.511

summary(lm_robust(uptake ~ Treatment, data=CO2 |> subset(Type=="Quebec")))
#> Standard error type: HC2 
#> 
#> Coefficients:
#>                  Estimate Std. Error t value  Pr(>|t|) CI Lower CI Upper DF
#> (Intercept)        35.333      2.094  16.873 8.651e-20   31.101    39.57 40
#> Treatmentchilled   -3.581      2.969  -1.206 2.349e-01   -9.582     2.42 40
```

When using a robust standard error, the standard errors are the same. Note that the p-values will still differ because the tests use different degrees of freedom; in the full model, the df are larger.
null
CC BY-SA 4.0
null
2023-05-10T16:46:13.177
2023-05-10T16:46:13.177
null
null
116195
null
615464
1
null
null
0
24
This might seem like a duplicate of the following link, but I think that one is asking how to create a completely new dataset with specific distributions, rather than how to model an existing dataset distribution. [How do you create a multivariate distribution with both continuous and discrete data?](https://stats.stackexchange.com/questions/43520/how-do-you-create-a-multivariate-distribution-with-both-continuous-and-discrete) Problem I have a dataset containing a mixture of continuous and discrete columns, which are all dependent on one another, so it's a multivariate distribution with both continuous and discrete variables. I want to model the distribution so that new/synthetic data points can be sampled from it. Ideally this would be done using existing Python libraries, but if this is not possible I'll give it a go in Python myself. Solution that I can't use There is a solution to this, and it involves using GANs to train a model that understands the distribution of the dataset. It's possible to do this in Python with `Synthetic Data Vault`. Unfortunately, I don't have GPU capabilities yet, and have too much data to train these GANs without a GPU. Hence the following... Continuous data If I had a multivariate distribution of continuous data, I would typically use something like sklearn's `KernelDensity` class to model the distribution and easily sample from it. Discrete data With discrete data, you could set up a probability mass function and sample from it with something like `random.choices()` in Python. Combining continuous and discrete However, I'm a bit stuck with datasets that are a combination of the two. I don't think `KernelDensity` is strictly valid for discrete variables, and I'm not sure how probability mass functions can be sampled from for multivariate data. I'm hoping there is a solution that allows for some kind of Bayesian modelling of the relationships between different columns across rows which leads to the ability to set up a conditional distribution and sample from it. But I'm not sure, can't find any online, and have ended up here for help.
How can I model the multivariate probability distribution of a dataset with both continuous and discrete variables for sampling?
CC BY-SA 4.0
null
2023-05-10T16:49:06.687
2023-05-10T16:49:06.687
null
null
197566
[ "distributions", "python", "sampling", "conditional-probability", "synthetic-data" ]
615467
1
null
null
0
59
I want to calculate the sample size for a survival study of patients with acute myeloid leukemia, with two cohorts, one receiving chemotherapy and the other not. I want an event-free survival (EFS) at 1 year of 40% with a margin of 5%. The difference between the two cohorts has to be 5-10%, with a power of 0.8 and alpha of 0.05. I have considered a non-inferiority design, as the new treatment may not be as efficient but has lower toxicity, and I want to prove that the new treatment is not significantly worse than the reference treatment, which has already been tested. I want a maximum difference in efficiency of 10%. Does this make sense? Would it be better to do a superiority design? I have calculated the sample size for each cohort, which must be the same, using the epi.ssninfb function from the epiR package, but I have seen that there are many R packages (that take into account different types of distribution, constant or non-constant hazard ratios, among other assumptions) and many programs to perform the calculation, and I don't know which is the best method. The hypotheses planned are: Ho: prob. standard treatment - prob.new treatment >= 0.1 H1: prob. standard treatment - prob.new treatment < 0.1 And the R code: ``` install.packages("epiR") library(epiR) epi.ssninfb(treat = 0.4, control = 0.4, delta = 0.10, n = NA, r = 1, power = 0.8, nfractional = TRUE, alpha = 0.05) $n.total [1] 593.5255 $n.treat [1] 296.7627 $n.control [1] 296.7627 $delta [1] 0.1 $power [1] 0.8 ``` Has the sample size been correctly determined? I have also found other R functions to perform the calculation, such as power.t.test (survival package), sample.size.NI (dani package), power.prop.test (stats package), nSurvival (gsDesign package, which requires accrual and follow-up times that I don't have), etc. I'm very confused. Can anyone help me? Thank you very much! :) EI_Stats
Survival sample size calculation
CC BY-SA 4.0
null
2023-05-10T17:20:55.960
2023-05-12T12:23:07.853
2023-05-12T12:23:07.853
387468
387468
[ "r", "survival", "statistical-power", "epidemiology", "case-cohort" ]
615469
2
null
263657
0
null
Although I come from a different community, the sensitivity of a DNN's output to its input is also of high interest in earth science and other applied physics fields, because it is more or less a representation of the interpretation of the neural network: it tells us how the output can be influenced by the input. Please find my publication about it, in the context of data assimilation, here: [https://www.sciencedirect.com/science/article/pii/S0034425722002309](https://www.sciencedirect.com/science/article/pii/S0034425722002309). We tried to make the DNN "physically plausible" by calculating the sensitivity (or Jacobian) and checking whether the sensitivity makes physical sense. We also found some problems with this and are addressing them; feel free to follow our work.
null
CC BY-SA 4.0
null
2023-05-10T17:38:14.680
2023-05-10T17:38:14.680
null
null
303835
null
615470
1
615478
null
0
20
Suppose I have a dataset with a continuous outcome, a dichotomous variable representing treatment, and another dichotomous variable representing membership in some group. As an example we can use R's built-in `CO2` dataset, taking `uptake` as the outcome, predicted by `Treatment` (chilled vs not chilled) and plant `Type` (Quebec vs. Mississippi). My research questions are: - Q1: Does treatment affect uptake for Quebec-type plants? - Q2: Does treatment affect uptake for Mississippi-type plants? I could run two separate regressions estimating the effect of treatment within each subset of the data. But suppose that I expect the two types to have similar untreated outcomes, and similar responses to other covariates that might be in the model (not interacting with treatment or type, just soaking up additional variance). Therefore, I'd expect to have higher power to detect the two treatment effects if I could use all my data in a single model. I could do a multiple regression interacting treatment with type: ``` summary(lm(uptake ~ Treatment * Type, data=CO2)) # Estimate Std. Error t value Pr(>|t|) # (Intercept) 35.333 1.747 20.225 < 2e-16 *** # Treatmentchilled -3.581 2.471 -1.449 0.151141 # TypeMississippi -9.381 2.471 -3.797 0.000284 *** # Treatmentchilled:TypeMississippi -6.557 3.494 -1.877 0.064213 . # F (3,80) = 23.82, p=4.1e-11 ``` The coefficients and t-tests answer a different set of questions: - A1: Does treatment affect uptake for Quebec-type plants? (same as my question 1) - A2: Does treatment affect uptake for Mississippi-type plants any more or less than treatment affects Quebec-type plants? (not the same as my question 2) Let's say I don't care about A2 at all and I only want to know if the effect on Mississippi-type plants is significantly different from zero. I could run the same model but using Mississippi as the reference class: ``` summary(lm(uptake ~ Treatment * (Type=="Quebec"), data=CO2)) # (Intercept) 25.952 1.747 14.855 < 2e-16 *** # Treatmentchilled -10.138 2.471 -4.103 9.74e-05 *** # Type == "Quebec"TRUE 9.381 2.471 3.797 0.000284 *** # Treatmentchilled:Type == "Quebec"TRUE 6.557 3.494 1.877 0.064213 . ``` The `Treatmentchilled` parameter is precisely what you'd have calculated for Mississippi-type plants using the previous model's `Treatmentchilled + Treatmentchilled:TypeMississippi` parameters, but now it has a simple t-test telling me whether it's statistically significant. Is there anything wrong with running a model twice with different reference classes in order to test the treatment effect within each class? Specifically: - Does it introduce any multiple-comparison or interpretation issues that are different from what I'd have to deal with if I ran only the first model and were interested in the treatment and interaction effects? - Is there a more elegant way to accomplish what I want?
Testing treatment effect within different groups by re-running a regression with the reference class changed
CC BY-SA 4.0
null
2023-05-10T17:40:32.867
2023-05-10T18:45:03.913
null
null
9816
[ "regression", "interaction" ]
615471
1
616161
null
0
32
I am studying chapter 5 "The many variables & the spurious waffles" of the book Statistical Rethinking and trying to answer the following question:

> How is a biased sample like conditioning on a collider?

I do understand that conditioning on a collider will open the path between two or more variables that causally influence that variable. This may result in bias when we estimate the causal association. But I don't understand how a biased sample could be translated into conditioning on a collider. I assume it may be related to the sample being selected in a way that is causally influenced by the variables involved, so that selection itself acts like a collider.
Biased sample same like conditioning on a collider?
CC BY-SA 4.0
null
2023-05-10T17:40:41.327
2023-05-17T18:35:29.827
2023-05-12T05:41:01.780
11887
323003
[ "self-study", "causality", "bayesian-network", "dag", "collider" ]
615472
2
null
615455
1
null
The trouble with testing a variable for significance and removing it if it leads to a p-value above a threshold (such as $0.05$) and doing the rest of the analysis without that variable is that the downstream analysis, unless you are quite careful (most people will not be), proceeds as if you had not done the pre-test. This means that all of your downstream inferences, such as p-values, confidence intervals, and variance estimates, are at least a bit dubious. There are ways to be careful with such an approach in order to obtain valid results, but this is a form of stepwise regression, and the issues I have mentioned here are related to issues 1, 2, 3, 4, and 7 [here](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/). (While the link is for a Stata website, the content is 100% about statistics and is independent of the software used to run your model.)
null
CC BY-SA 4.0
null
2023-05-10T17:51:39.280
2023-05-10T17:51:39.280
null
null
247274
null
615474
1
null
null
0
11
I am new to the concept of Assumed Density Filtering (ADF), but I think that it is very similar to Variational Inference (VI). In VI, we estimate an intractable posterior distribution (p) by finding an approximate distribution (q) that is closest to the distribution of interest, i.e. minimizing the KL divergence KL(q || p). Isn't this the same as what we have in ADF? Please enlighten me if I seem to be missing the nuances in the two methods. Are there any pros and cons of the two methods? References would be helpful.
Differences between Assumed Density Filtering vs Variational Inference
CC BY-SA 4.0
null
2023-05-10T17:57:46.277
2023-05-10T17:57:46.277
null
null
232693
[ "variational-bayes", "variational-inference" ]
615475
1
null
null
0
22
The prediction intervals are much wider on my "VAR in differences" model than in my "VAR in levels" model. Any ideas of why this might be the case? I know there is a strong cointegration relationship between two variables. Can that explain the effect?
Prediction intervals - "VAR in levels" vs "VAR in differences"
CC BY-SA 4.0
null
2023-05-10T18:07:11.133
2023-05-11T07:28:20.780
2023-05-11T07:28:20.780
53690
377742
[ "vector-autoregression", "cointegration", "prediction-interval", "differencing" ]
615476
1
615481
null
3
54
Suppose that we have a probability density function $\pi(x_1, \ldots, x_n)$ which is the density of a vector-valued random variable $X$ in $\mathbb{R}^n$. Here the density is proper, i.e., $\int_{\mathbb{R}^n} \pi(x_1, \ldots, x_n) dx_1 \ldots dx_n = 1 < \infty$. Without any additional restrictions on the joint density, does this imply that any coordinate-wise marginal density of the form $$ \pi(x_i) = \int \pi(x_1, \ldots, x_n) dx_1 \ldots dx_{i-1} dx_{i+1} \ldots dx_n $$ is also proper, i.e., that $\int_{\mathbb{R}} \pi(x_i) dx_i = 1 < \infty$ for any $1 \leq i \leq n$? The motivation for this question is that I am considering a hierarchical Bayesian model involving an improper prior, and although I know that the posterior density over all model parameters is proper, I have seemingly found a marginal that is not proper (although this may be an error in my analytic computation). Intuitively, I would expect that any marginal of a proper joint should also be proper?
Is every coordinate-wise marginal of a proper joint distribution also proper?
CC BY-SA 4.0
null
2023-05-10T18:27:10.567
2023-05-10T18:57:57.283
null
null
369476
[ "marginal-distribution" ]
615478
2
null
615470
1
null
Question 1: The model is fundamentally the same regardless of your choice of reference level. Although the coefficient values reported by `summary()` will differ between the two choices of reference, predictions made for any particular scenario would be identical. There are no additional multiple-comparison issues. Question 2: There are better ways to proceed. Trying to work solely from the model `summary()` is likely to lead to confusion when there are interactions. Use post-modeling tools instead. For example, the `Anova()` function in the R [car package](https://cran.r-project.org/package=car) can evaluate overall statistical significance of predictors in a way that incorporates all interaction terms appropriately, even if the design is imbalanced. The [emmeans package](https://cran.r-project.org/package=emmeans) provides tools for comparing scenarios of interest from many types of models, with appropriate corrections for multiple comparisons.
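For example, with the model from the question, that workflow might look like the following (assuming the car and emmeans packages are installed):

```
library(car)
library(emmeans)

fit <- lm(uptake ~ Treatment * Type, data = CO2)
Anova(fit)                               # overall tests, including the interaction
emm <- emmeans(fit, ~ Treatment | Type)  # estimated treatment means within each plant type
pairs(emm)                               # chilled vs nonchilled comparison within each type
```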
null
CC BY-SA 4.0
null
2023-05-10T18:45:03.913
2023-05-10T18:45:03.913
null
null
28500
null
615479
1
null
null
0
12
I am using both IQR and Z score > +/- 3SD for outlier detection. It seems like Z score > +/- 3SD is more strict and yields fewer outliers than IQR, which is better for my purposes (Regression, Airbnb Price Prediction) However, there are still lots of outliers being detected with the Z score method. Is there a more sophisticated/systematic way to go about how to remove these outliers? Some have suggested just eyeballing with boxplot/histogram/scatterplot, but I am unsatisfied by the nonquantitative nature of it. Hoping there is a more systematic approach that I can follow and use as a first "screening" step in most problems similar to this. Below is my dataframe of how many outliers are found in each feature - I can obviously remove them all but that is also unsatisfying... ``` {'Column': {0: 'price', 1: 'minimum_nights', 2: 'date_of_review_year', 3: 'last_review_year', 4: 'number_of_reviews', 5: 'reviews_per_month', 6: 'number_of_reviews_ltm', 7: 'listing_id', 8: 'id', 9: 'calculated_host_listings_count', 10: 'host_id'}, 'Outlier Count': {0: 575, 1: 1053, 2: 1107, 3: 1515, 4: 2236, 5: 4075, 6: 4799, 7: 8052, 8: 8573, 9: 8925, 10: 10844}, 'Percentage': {0: 0.2152816258068381, 1: 0.39424617734713135, 2: 0.41446393003159965, 3: 0.5672202836475821, 4: 0.8371647222679826, 5: 1.5256915220223743, 6: 1.7967591691252454, 7: 3.0146915669507135, 8: 3.2097554400730837, 9: 3.3415452353496176, 10: 4.060024261303222}, 'Max Allowable Value': {0: 1903.2474464485751, 1: 81.1471012097081, 2: 2028.4407611958543, 3: 2029.3504386277002, 4: 793.1969405905453, 5: 13.758606675183554, 6: 193.4008759750987, 7: 3.518988602860466e+17, 8: 3.6651180886284006e+17, 9: 35.72229351455516, 10: 382071191.0582348}, 'Min Allowable Value': {0: -1517.6718245654786, 1: -66.25626209809113, 2: 2009.3519993510733, 3: 2015.490896942066, 4: -326.6803844975137, 5: -6.8684181831946525, 6: -113.21529946962494, 7: -3.129647879837069e+17, 8: -3.248157618682771e+17, 9: -26.539592422796517, 10: -248407483.67581975}} ```
Systematic/Repeatable Framework for Outlier Removal?
CC BY-SA 4.0
null
2023-05-10T18:46:06.290
2023-05-10T18:46:06.290
null
null
361781
[ "standard-deviation", "outliers", "z-score", "pandas", "interquartile" ]
615480
2
null
603791
0
null
It is essentially an alternative way of binning, and I don't see anything wrong with it mathematically. You also want to make sure your leaves are not too small, and then maybe do some bagging to check that your final bins are stable.
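As a rough sketch of what that tree-based binning could look like in practice (simulated data and arbitrary settings; `minbucket` is the knob that keeps leaves from getting too small):

```
library(rpart)

set.seed(1)
d <- data.frame(x = runif(1000))
d$y <- rbinom(1000, 1, plogis(4 * (d$x - 0.5)))   # toy binary target

tree <- rpart(y ~ x, data = d, method = "class",
              control = rpart.control(minbucket = 100, cp = 0.001))

d$bin <- factor(tree$where)   # leaf membership defines the bins
table(d$bin)                  # check that no bin is too small
tapply(d$y, d$bin, mean)      # event rate per bin
```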
null
CC BY-SA 4.0
null
2023-05-10T18:52:22.827
2023-05-10T18:52:22.827
null
null
29851
null
615481
2
null
615476
5
null
The marginal density of a probability density is again a probability density (a.s.). The same notations should not be used for different entities, so let me define $$\pi_i(x_i) = \int_{\mathbb R^{n-1}} \pi(x_1, \ldots, x_n) dx_1 \ldots dx_{i-1} dx_{i+1} \ldots dx_n$$ as the $i$-th marginal. Then $$\int_{\mathbb{R}} \pi_i(x_i) dx_i = \int_{\mathbb{R}}\int_{\mathbb R^{n-1}} \pi(x_1, \ldots, x_n) dx_1 \ldots dx_{i-1} dx_{i+1} \ldots dx_n dx_i\\ =\int_{\mathbb R^{n}} \pi(x_1, \ldots, x_n) dx_1 \ldots dx_{i-1} dx_i dx_{i+1} \ldots dx_n = 1$$ by [Fubini's Theorem](https://en.wikipedia.org/wiki/Fubini%27s_theorem). This applies to the Bayesian setting in the question: if the joint posterior is proper, then any posterior marginal or conditional is equally proper (a.s.).
null
CC BY-SA 4.0
null
2023-05-10T18:57:57.283
2023-05-10T18:57:57.283
null
null
7224
null
615482
1
null
null
0
30
I am trying to estimate the distance walked by an animal from several predictor variables using a GAM, and, despite having read similar questions raised on this platform, I am still struggling to find an appropriate family distribution. I have tried fitting my model with log-normal, gamma, and Tweedie(p=1.3, link="log") distributions, also considering that the marginal distribution of my response variable is exponential. Evaluating the goodness of fit of these models is confusing for me because Q-Q plots and AIC favor the Tweedie one, but the residuals are widely spread around the model. In contrast, with the log-normal distribution the residuals are not spread that much, but the goodness of fit suffers, as far as I understand it. It is stated that the choice of the distribution should be based on the conditional rather than the marginal distribution. In that case, my question is: if the model contains several predictor variables that display different behavior, which of them should the conditional distribution be assessed against? I would highly appreciate your guidance in choosing the correct distribution for this case. I am not a statistician, so apologies in advance if any of the statements are incorrect.

This is the model I am trying to fit, where ToD is the time of day; alt is altitude; roaddist, aspect, and slope describe occurrence at different distances to roads, aspects, and slopes; exp is experience after release; forest is tree species; and binary.snow is TRUE/FALSE:

```
gam(dist.km ~ ToD + s(alt, k=40, by=ToD, bs='ad') + s(roaddist, k=4) + s(exp, k=55) + forest +
      s(aspect) + s(slope) + binary.snow,
    data = N_hour_comb,
    method = "REML",
    family = gaussian(link="log"))
```

Here are the outputs of `gam.check()` (the first block being for the log-normal model, and the last one for the Tweedie).

[](https://i.stack.imgur.com/9OLcI.jpg)
What family distribution to choose for distance walked response variable using GAM in R?
CC BY-SA 4.0
null
2023-05-10T19:05:02.953
2023-05-10T20:47:43.340
2023-05-10T20:47:43.340
387665
387665
[ "distributions", "nonlinear-regression" ]
615483
1
null
null
0
14
Suppose I have a 1D random walk, where the step size is not a constant but a random variable that is a function of a set of independent continuous random variables $x^i$ with given probability distributions $\phi^i(x^i)$. Each step the computation of the step size is done independently of all previous steps. Is it correct to conclude that the probability distribution of the distance from the origin converges to gaussian, regardless of the forms of $\phi^i$ or the step size function?
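A small simulation sketch (in R, with an arbitrary illustrative step-size function; the key assumption is that the resulting step distribution has finite variance) showing the distribution of the signed end position after many steps; the distance from the origin would then be its absolute value:

```r
set.seed(1)
n_steps <- 1000    # steps per walk
n_walks <- 20000   # independent walks

# illustrative step size built from two independent inputs, i.i.d. across steps
step_fun <- function(m) {
  x1 <- runif(m, -1, 1)
  x2 <- rexp(m)
  x1 * x2
}

endpoints <- replicate(n_walks, sum(step_fun(n_steps)))
z <- (endpoints - mean(endpoints)) / sd(endpoints)

hist(z, breaks = 100, probability = TRUE,
     main = "Standardised end position after 1000 steps")
curve(dnorm(x), add = TRUE, col = "red", lwd = 2)
```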
Do 1D random walk probability distributions generally converge to gaussian functions?
CC BY-SA 4.0
null
2023-05-10T19:26:47.210
2023-05-10T19:26:47.210
null
null
303969
[ "probability", "random-walk" ]
615484
1
null
null
0
25
I know that covariance stationary processes can always be represented as an $MA(\infty)$ process, i.e. $$ Y_t = \alpha + \sum_{s=0}^{\infty} \beta_{s} \varepsilon_{t-s}, $$ with uncorrelated innovations $\varepsilon_t \sim (0,\sigma^2)$ and conditions on $\{\beta_s\}_{s=0}^{\infty}$. Is there an analogous theorem showing that we can represent any stationary process as an $AR(\infty)$ process of the following form? $$ Y_t = \delta + \sum_{s=1}^{\infty} \gamma_s Y_{t-s} + \varepsilon_t.$$ If so, could you provide a source or the name of a theorem?
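A sketch of the usual sufficient condition, stated informally (assuming $\beta_0 = 1$; this is an extra assumption on the MA coefficients, not something implied by stationarity alone): write the MA form with the lag operator as $$ Y_t = \alpha + \beta(L)\varepsilon_t, \qquad \beta(L) = \sum_{s=0}^{\infty} \beta_s L^s. $$ If $\beta(z) \neq 0$ for all $|z| \le 1$ (invertibility), then $\pi(L) = \beta(L)^{-1}$ exists with absolutely summable coefficients, and applying it to both sides gives $$ \pi(L)(Y_t - \alpha) = \varepsilon_t \quad\Longleftrightarrow\quad Y_t = \delta + \sum_{s=1}^{\infty} \gamma_s Y_{t-s} + \varepsilon_t, $$ with $\gamma_s = -\pi_s$ and $\delta = \alpha\,\pi(1)$.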
Stationarity and AR(p) representation
CC BY-SA 4.0
null
2023-05-10T19:30:38.440
2023-05-10T19:59:17.337
2023-05-10T19:59:17.337
261146
261146
[ "time-series", "stationarity" ]
615485
1
615493
null
2
78
How can you draw a random sample from a random variable whose density is given by $\exp(x - \exp(x))$? I am trying to work through the blog at [https://staffblogs.le.ac.uk/bayeswithstata/2015/03/27/poisson-regression-with-two-random-effects-mcmc-by-data-augmentation/](https://staffblogs.le.ac.uk/bayeswithstata/2015/03/27/poisson-regression-with-two-random-effects-mcmc-by-data-augmentation/) The section Simulating the unobserved data describes a way to sample from an exponential variable (using -log(runif)/mu) but then, as far as I can understand, seems to jump into slightly different Stata code for sampling from $\exp(x - \exp(x))$ ( I have no access to Stata and so cannot step through it unfortunately which may have helped my understanding). These are my attempts to sample, in R, which appear successful but I am unsure. In the first method below, the Stata code from the link uses two calls to a random uniform generator and does a few extra calculations. ``` # set parameters set.seed(1) n = 1000000 mu = 10 # Trying to translate the code from web u = runif(n) v = -log(u)/mu hist(log(v) + log(mu), breaks=100, xlim=c(-10, 10), probability = TRUE, main="Method 1") plot(function(x) exp(x - exp(x)), from=-10, to=10, add=TRUE, col="red", lwd=2) ``` I can also get samples that appear a good fit using the inbuilt random exponential sampler using the same approach as above. ``` # using in built functions v = rexp(n, rate=mu) hist(log(v) + log(mu), breaks=100, xlim=c(-10, 10), probability = TRUE, main="Method 2") plot(function(x) exp(x - exp(x)), from=-10, to=10, add=TRUE, col="red", lwd=2) ``` Are these correct? Why? [](https://i.stack.imgur.com/XUMlD.png)
How can you draw a random sample from an RV with density $ \exp(x - \exp(x))$?
CC BY-SA 4.0
null
2023-05-10T19:40:30.277
2023-05-12T00:36:13.097
2023-05-10T21:37:00.220
919
43842
[ "sampling" ]
615486
1
null
null
1
8
I have trouble understanding how I should interpret my results from an ARDL regression. Does anyone know how the following table should be interpreted? Here d indicates a first difference and ln a log transform (hence, realinterest is not transformed). [](https://i.stack.imgur.com/T2FgO.png) Thank you
Question regarding interpretation of ARDL results - logged & first differenced variables
CC BY-SA 4.0
null
2023-05-10T20:00:18.130
2023-05-10T20:00:18.130
null
null
387672
[ "interpretation", "ardl" ]
615487
1
null
null
0
24
We have that $P(X \leq x, Y \leq y) = \int \int_{s \leq x, t \leq y} f_{X,Y}(s,t)\,ds\,dt$. But how would, for example, $P(X \leq x, Y \geq y)$ be defined? Would it just be $P(X \leq x, Y \geq y) = \int \int_{s \leq x, t \geq y} f_{X,Y}(s,t)\,ds\,dt$? I am not 100% sure I remember correctly, but I recall seeing a formula like the following somewhere: would $P ( Y \geq y) = P(X \geq x) + P(X \leq x, Y \geq y) $ hold in general? Or should there be specific conditions on the random variables $X$ and $Y$?
Question about joint cdf
CC BY-SA 4.0
null
2023-05-10T20:39:04.457
2023-05-10T20:57:46.413
2023-05-10T20:57:46.413
20519
121420
[ "probability", "distributions", "random-variable", "joint-distribution", "cumulative-distribution-function" ]
615491
1
null
null
2
40
I am reading the [article](https://www.imo.universite-paris-saclay.fr/%7Ethanh-mai.pham-ngoc/HoangPhamRivoirardTran.pdf). I am getting stuck on the first part of the proof of Proposition 4 on page 32. To be specific, I understand why they obtain $F(x) \le \frac{2K}{1-\frac{2R\epsilon}{\alpha}}+\frac{2CR}{\alpha -2R\epsilon}\int_{x}^{\epsilon}F(y) dy$. They then choose $\epsilon$ such that $0 <\epsilon < \frac{\alpha}{2R}$ and say that "Applying Gronwall's inequality to this inequality, we obtain $F(x) \le \frac{K}{1-\frac{2R\epsilon}{\alpha}} \exp\left(\frac{2CR\epsilon}{\alpha - 2R\epsilon}\right)$", where $ x \in [0;\epsilon].$ I don't know which version of Gronwall's inequality they used. Thank you for reading my question. Any help is appreciated.
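For reference, one common integral form of Gronwall's inequality (I am assuming this, read "backwards" in $x$, is the version being applied; the constants below are generic): if $u \ge 0$ is continuous on $[a, b]$ and $$ u(t) \le K + C \int_a^t u(s)\, ds \quad \text{for all } t \in [a, b], \qquad K, C \ge 0, $$ then $u(t) \le K e^{C(t - a)}$. Applying it to $\tilde u(x) = F(\epsilon - x)$ turns the backward integral $\int_x^{\epsilon} F(y)\, dy$ into a forward one on $[0, \epsilon]$, which is presumably how a bound of the form $F(x) \le K' e^{C'\epsilon}$ with $K' = \frac{2K}{1 - 2R\epsilon/\alpha}$ and $C' = \frac{2CR}{\alpha - 2R\epsilon}$ arises.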
Gronwall's inequality
CC BY-SA 4.0
null
2023-05-10T21:50:09.087
2023-05-14T13:51:54.227
null
null
387684
[ "mathematical-statistics", "nonparametric", "kernel-smoothing" ]
615492
1
615497
null
1
39
> Let $X$ and $Y$ be two random variables with joint probability density function $f(x, y) = 1$ if $-y < x < y,\ 0 < y < 1$ and $0$ elsewhere. Find the regression equation of $Y$ on $X$ and that of $X$ on $Y$. So we're asked to find $E(Y|X)$ and $E(X|Y)$. I found the marginal density of $X$ to be $f_X(x)=1-|x|,-y<x<y$ and $f_Y(y)=2y, 0<y<1$. So then we have $f_{Y|X}(y)=\frac{1}{1-|x|}, f_{X|Y}(x)=\frac{1}{2y}$. Shouldn't they be functions of $y$ and $x$ instead of $x$ and $y$, respectively? I'm getting confused. How do I compute the conditional expectations?
Finding regression equations from joint distribution
CC BY-SA 4.0
null
2023-05-10T21:52:17.757
2023-05-10T22:31:02.563
2023-05-10T22:05:30.440
20519
339153
[ "regression", "self-study", "conditional-expectation", "joint-distribution", "marginal-distribution" ]
615493
2
null
615485
4
null
Just to formalise [whuber](https://stats.stackexchange.com/users/919/)'s excellent answer in comments, taking $y = e^x$ gives $dy/dx = e^x$ so that: $$\begin{align} f_X(x) &= \exp(x - e^x) \\[16pt] &= \exp(-e^x) \cdot e^x \\[12pt] &= e^{-y} \cdot \bigg| \frac{dy}{dx} \bigg| \\[6pt] &= \text{Exp}(y|1) \cdot \bigg| \frac{dy}{dx} \bigg|. \\[6pt] \end{align}$$ Consequently, using the rules for monotonic transformations of random variables, we can generate the random variable $X$ using the transformation: $$X = \log Y \quad \quad \quad \quad \quad Y \sim \text{Exp}(1).$$ You can also use the [relationship between the exponential and uniform distributions](https://en.wikipedia.org/wiki/Exponential_distribution#Random_variate_generation) to generate $X$ from a continuous uniform random variable on the unit interval (which is the foundational unit of a [standard PRNG](https://stats.stackexchange.com/questions/465711#465711)): $$X = \log Y \quad \quad \quad \quad \quad Y = -\log(U) \quad \quad \quad \quad \quad U \sim \text{U}(0,1).$$ The code you present appears to include a rate parameter, which is not part of the density function you initially describe. Your code appears to implicitly generate $Y$ from an exponential distribution with rate $\lambda = 10$ but then remove this latter part from the result when presented in the histogram --- this part is unecessary and can be removed entirely. (If you want to build the parameter into the density then you will need to generalise the form of your density function appropriately.) To implement random generation from this distribution in `R` you can use the following function: ``` rdist <- function(n) { log(-log(runif(n))) } ``` We can now confirm graphically that this method simulates from the stipulated density: ``` #Generate random variables set.seed(1) x <- rdist(10^6) #Plot histogram with overlaid density hist(x, breaks = 100, xlim = c(-10, 10), probability = TRUE, main = 'Histogram of randomly generated values') plot(function(x) exp(x - exp(x)), from = -10, to = 10, add = TRUE, col = 'red', lwd = 2) ``` [](https://i.stack.imgur.com/bf8u3.jpg)
null
CC BY-SA 4.0
null
2023-05-10T21:55:02.247
2023-05-12T00:36:13.097
2023-05-12T00:36:13.097
173082
173082
null
615495
1
null
null
0
9
I am trying to compare the initial roof life of two different data sets of homes. I want to know if the data can be said to show a statistical difference in roof life. I have two populations. One has 143 roofs and the other has 477. The number and age at which roofs were replaced by year of age is shown below. ``` N1 and Lipizzan combined Rest of SR Age (Yrs) Repl Total % Repl Repl Total % Repl 16 7 7 100% 1 35 3% 15 5 20 25% 10 57 18% 14 11 21 52% 10 38 26% 13 8 15 53% 6 47 13% 12 0 5 0% 1 45 2% 11 0 7 0% 4 40 10% 10 1 21 5% 2 38 5% 9 0 47 0% 3 177 2% Tot 32 143 22% 37 477 8% ``` The percent replaced seems different, but the smaller sample size of the first data set may add uncertainty. So I scaled the larger data set by each year to match the smaller data set, keeping the failure rate percent constant. I also looked at the surviving roofs rather than the failures, hoping that the weighted mean may be different. That results in the data set below. ``` Number of Roofs **Not Replaced** Age of Roof Rest of SR N1/LIP Sample Size 9 46.2 47 47 10 19.9 20 21 11 6.3 7 7 12 4.9 5 5 13 13.1 7 15 14 15.5 10 21 15 16.5 15 20 16 6.8 0 7 Total 129.1 111 143 ``` Our hypotheses is that the "N1/LIP" population has been replacing roofs earlier than necessary compared to the control group of "Rest of SR". The weighted mean (age x number of roofs) of the Rest of SR is 10.4 and the N1/Lip is 8.5, so the data would suggest a 1.9 year difference? I would like to be able to state with 95% confidence that they are statistically different. I am afraid I can't come up with the correct statistical test. Any help would be appreciated.
Comparing roof life of two different population of homes
CC BY-SA 4.0
null
2023-05-10T22:18:30.077
2023-05-11T14:56:09.120
2023-05-11T14:56:09.120
919
387685
[ "failure-rate" ]
615496
1
null
null
1
19
I am currently using linear mixed models for the first time in a small repeated measures experiment, and I feel like I am missing out on some basics in diagnostic plots. I want to model a labeling skill using dprime as a DV. The data I collected is slightly right skewed - from what I see in relevant papers, this is mainly due to my sample size being too small, in bigger studies on the same topic this isn't an issue. This was my original model, with there being a slight trend in the residual plot and the Q-Q plot having a pretty nasty tail. ``` lmm.main <- lmer(dprime ~ strength + treatment + stance + congruence + gender + age + (1|id)) ``` [](https://i.stack.imgur.com/6myJn.png) I found a potential interaction effect between stance and congruence, which turned out to be very relevant as a predictor. However, I'm not sure if my model assumptions are now even "more violated" (is that a thing?) than before. My residual plot shows a clearly upward sloping trend that I can not "log-transform away". What is the right way to think about residual plots and trends in an exploratory analysis with few participants, and how can I potentially improve the normality of my residuals? Would you consider the second or first model "better" when it comes to fulfilling the assumptions for LMMs - or would you consider both a no-go? ``` lmm.int <- lmer(dprime ~ strength + treatment + stance * congruence + stance + congruence + gender + age + (1|id)) ``` [](https://i.stack.imgur.com/lYBfE.png)
How do I interpret this residual plot (lmer) and the changes introduced by including an interaction effect?
CC BY-SA 4.0
null
2023-05-10T22:19:29.020
2023-05-10T22:19:29.020
null
null
387679
[ "lme4-nlme", "residuals" ]
615497
2
null
615492
2
null
You are not entirely wrong on conditional densities, but need to be more careful on domains. For example, the reason you felt that $f_{X|Y}(x|y) = \frac{1}{2y}$ is only a function of $y$ is because you did not write out the domain of $x$ explicitly (see my enhancement below). Also, in a conditional density expression like this, $y$ should be viewed as a fixed number in its domain instead of an argument of the function. The marginal density of $X$ should be \begin{align} f_X(x) = \begin{cases} 1 - |x| & |x| < 1, \\[1em] 0 & |x| \geq 1. \end{cases} \end{align} The marginal density of $Y$ found by you is correct. The conditional density of $X$ given $Y = y$ ($0 < y < 1$) is: \begin{align*} f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)} = \frac{1}{2y}I_{(-y, y)}(x). \tag{1} \end{align*} The conditional density of $Y$ given $X = x$ ($-1 < x < 1$) is: \begin{align*} f_{Y|X}(y|x) = \frac{f(x, y)}{f_X(x)} = \frac{1}{1 - |x|}I_{(|x|, 1)}(y). \tag{2} \end{align*} In terms of $(1)$ and $(2)$, the conditional densities have their respective expected arguments $x$ and $y$, and their supports depend on the conditioning value. Now it is easy to find the regression functions based on $(1)$ and $(2)$: \begin{align} & E[X|Y = y] = \int_{-y}^y xf_{X|Y}(x|y)dx = 0, \quad 0 < y < 1; \\ & E[Y|X = x] = \int_{|x|}^1 yf_{Y|X}(y|x)dy = \frac{1 - x^2}{2(1 - |x|)} = \frac{1 + |x|}{2}, \quad -1 < x < 1. \end{align} Again, it is important to include domains such as "$0 < y < 1$" and "$-1 < x < 1$" above in your calculations, because by definition, regression functions are functions, and a complete specification of a function should include its domain.
null
CC BY-SA 4.0
null
2023-05-10T22:31:02.563
2023-05-10T22:31:02.563
null
null
20519
null
615498
1
null
null
0
22
I have a U-Net segmentation model that outputs 5 classes, and I would like to find the optimal threshold value for each class using the precision-recall curve: ``` import numpy as np from sklearn.metrics import precision_recall_curve def optimal_threshold_precision_recall_curve(Y_orig, y_pred): for c in range(classes): precision, recall, thresholds = precision_recall_curve(Y_orig[:, c].ravel(), y_pred[:, c].ravel()) #optimal thresholds - using Youden's J Statistic: optimal_thresholds = sorted(list(zip(np.abs(precision - recall), thresholds)), key=lambda i: i[0], reverse=False)[0][1] print("Youden - Ideal threshold is: ", optimal_thresholds) ``` When I pass one prediction (and its corresponding ground-truth mask), I get the desired optimal thresholds for each class. However, when I pass another image I get different threshold values, and when I pass multiple predictions (my validation set) I get something like averaged threshold values, which are not the real optimal thresholds. How can I calculate the optimal thresholds for a validation set?
find the optimal threshold for each class in model.predict (multiclass segmentation)
CC BY-SA 4.0
null
2023-05-10T22:44:49.010
2023-05-10T22:44:49.010
null
null
387683
[ "predictive-models", "tensorflow", "multi-class", "threshold", "image-segmentation" ]
615500
1
null
null
2
12
There are two regression models for the same data. Both were known to be very precise before, but they are now generating conflicting predictions. How can I handle this? Should I perform stepwise/forward/backward model selection, or do Bayesian/frequentist model averaging?
what to do if I have two different models for same data?
CC BY-SA 4.0
null
2023-05-10T22:57:37.380
2023-05-10T22:57:37.380
null
null
339581
[ "regression", "multiple-regression", "model" ]
615501
1
null
null
0
11
Let's say that I have a data set comprised of age data for a large number of individuals, as well as a unique identifier for each individual. For terminology sake let's call this my base population. The ages are broken out into categories. Let's say that the distribution of ages in my data set are: - Age 0-29: 10% - Age 30-49: 30% - Age 50+: 60% Now I am given a new population of data comprised of only unique identifiers, and I want to merge this new population with my base population so that I can estimate the distribution of age categories across the new population. Let's say that I merge the new population with the base population, and I get a match rate of ~10% (meaning that ~90% of the new population's unique identifiers were not found in the base population.) However, my population sizes are very large, so that 10% match rate contains well over a million individuals. Among the 10% of individuals that were matched, I can now see that the distribution of age categories within that 10% is as follows: - Age 0-29: 25% - Age 30-49: 40% - Age 50+: 35% Here's my question: How can I use this information to estimate the distribution of age categories of the new population from this 10% sample of matched individuals? The challenge I am running into is the fact that the 50+ age group is so heavily over-represented in the base data, making them much more likely to be merged to the new population. So even though ~35% of the matched sample is in the 50+ age group, it's likely that this proportion is highly inflated because this group is so heavily over-represented in the base data. I am struggling with how to estimate the true proportions of age groups in the new population given the skew present in the base data.
Population Estimation And Conditional Probability
CC BY-SA 4.0
null
2023-05-10T23:08:35.150
2023-05-10T23:08:35.150
null
null
387688
[ "sampling", "conditional-probability", "population", "density-estimation", "selection-bias" ]
615502
1
null
null
0
11
In the setup for a moving hidden Markov chain study of some birds, I have to carry out a simulation study to investigate possible consequences of assuming that the different variables are conditionally independent of each other, given the states. Specifically, the task is to simulate data that are similar to the original data but where the different variables exhibit different degrees of correlation (e.g., no correlation, some correlation, and high correlation). I'm not sure what is meant and what to show. Do I have to run 3 studies in which all the variables (longitude, latitude, altitude) are, respectively, highly correlated, somewhat correlated, and uncorrelated? Or do I have to make one variable (for example longitude) highly correlated, another (for example latitude) somewhat correlated, and altitude uncorrelated? And what would you expect the "possible consequences" to be?
Simulations for moving Hidden Markov chain
CC BY-SA 4.0
null
2023-05-10T23:08:44.720
2023-05-10T23:08:44.720
null
null
326360
[ "r", "correlation", "simulation", "hidden-markov-model", "moving-average-model" ]
615503
1
615611
null
2
125
I'm currently writing a thesis in which I try to dissect the ECB monetary press releases and their impact on the European stock market, using an event study methodology. Computing daily excess returns on indices and testing them on event days is going fine. Now I have reached the point in my analysis where I focus on volatility and whether it increases on the day of these releases. I would also like to analyse whether it increases on the day before and the day after. The problem is that I do not know exactly how to do this. I have seen people suggest GARCH(1,1) models for event studies on volatility, but I do not know exactly how to set this up. My hypothesis is that volatility increases around these events, but I do not know how to test this. So my questions are: - Which model should I use to test this hypothesis (Hypothesis: volatility in the European stock market increases on ECB monetary policy announcement days), and if I use this model, how do I test the hypothesis? - Is it possible for me to add variables other than the dummy variables for my event days (I would also like to test whether shocks in monetary policy or changes in interest rates have an effect)? - Perhaps I can also run simple OLS regressions with a measure of volatility as my $y$ variable and the event day as my dummy variable? Note: I primarily use Stata for my analysis.
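For orientation, one common way such a specification is written (this is a generic textbook form of an event-dummy GARCH, not taken from a specific reference): with $D_t = 1$ on announcement days and $D_t = 0$ otherwise, $$ r_t = \mu + \gamma_1 D_t + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2 + \gamma_2 D_t, $$ so the volatility hypothesis corresponds to testing $H_0: \gamma_2 = 0$ against $H_1: \gamma_2 > 0$; dummies for the day before and after, or additional regressors such as policy surprises or interest-rate changes, can be added to the variance equation in the same way.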
Testing if volatility increases during ECB-Monetary Press Releases
CC BY-SA 4.0
null
2023-05-10T23:31:27.297
2023-05-11T19:50:00.560
2023-05-11T19:50:00.560
53690
387690
[ "time-series", "self-study", "garch", "intervention-analysis", "volatility" ]
615504
1
null
null
0
35
I'm curious why the odds ratio would be useful. For example, suppose I designed an experiment and fit a logistic regression model to success-rate data: $p(success|treatment) = sigmoid(treatment*B_1 + B_0)$, where treatment is binary. If I want to know the lift attributable to treatment, I can compute it as $sigmoid(B_1 + B_0) - sigmoid(B_0)$. However, I've read that the odds ratio, $exp(B_1)$, is actually more useful. Could someone explain why this might be the case?
Why would odds ratio be more useful than difference of probabilities?
CC BY-SA 4.0
null
2023-05-10T23:35:34.257
2023-05-11T04:48:05.710
null
null
288172
[ "regression", "logistic", "odds-ratio" ]
615505
2
null
223983
2
null
For a 2×2 confusion matrix, the determinant is the numerator of the Matthews Correlation Coefficient (MCC, also called the phi coefficient). [https://en.m.wikipedia.org/wiki/Phi_coefficient](https://en.m.wikipedia.org/wiki/Phi_coefficient) So basically, the determinant of the confusion matrix is an unscaled measure of correlation. It is not expected to be as useful as the MCC because it does not account for the marginal totals or the overall sample size.
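For a $2 \times 2$ confusion matrix the relationship is explicit: $$ \mathrm{MCC} = \phi = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}, $$ so the determinant $TP \cdot TN - FP \cdot FN$ is exactly the numerator, and the denominator supplies the normalisation by the row and column totals that the raw determinant lacks.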
null
CC BY-SA 4.0
null
2023-05-10T23:36:28.633
2023-05-10T23:36:58.840
2023-05-10T23:36:58.840
387691
387691
null
615506
1
null
null
1
26
The Python code used, printed are the eigenvector coefficients and the relative eigenvalues, the data is the same for both SPSS and Python: ``` X = dat[cols] pca = PCA(n_components=2,svd_solver = 'full').fit(X) print(pca.components_.T) print(pca.explained_variance_ratio_) ``` Python Output: ``` [[-0.61573707 -0.12879642] [-0.40476782 -0.3582253 ] [-0.26925575 0.67811576] [-0.37436419 0.53892622] [-0.49435177 -0.32373394]] [0.31112415 0.25145249] ``` SPSS Output: ``` [[0.793 -0.027] [0.608 -0.510] [0.276 0.812] [0.478 0.621] [0.603 -0.314]] [0.33332266 0.28082707] ``` I tried applying different normalization methods: ``` scaler = preprocessing.MinMaxScaler().fit(X) X_scaled = scaler.transform(X) pca = PCA(n_components=2,svd_solver = 'full').fit(X_scaled) [[-0.82663883 -0.23475242] [-0.31202742 -0.28986165] [-0.27729367 0.74274876] [-0.25778984 0.5187487 ] [-0.27560797 -0.20023884]] [0.34864558 0.24545525] scaler = preprocessing.MaxAbsScaler().fit(X) X_scaled = scaler.transform(X) pca = PCA(n_components=2,svd_solver = 'full').fit(X_scaled) [[ 0.55296252 -0.70429099] [ 0.11899483 -0.31431751] [ 0.63331106 0.50543344] [ 0.51086644 0.27929861] [ 0.13418372 -0.26778354]] [0.34906865 0.26269499] scaler = preprocessing.StandardScaler().fit(X) X_scaled = scaler.transform(X) pca = PCA(n_components=2,svd_solver = 'full').fit(X_scaled) [[-0.60705794 -0.03705608] [-0.51335244 -0.37088481] [-0.20018055 0.67711926] [-0.29857834 0.59942764] [-0.48859866 -0.20800795]] [0.3126145 0.24884053] scaler = preprocessing.RobustScaler().fit(X) X_scaled = scaler.transform(X) pca = PCA(n_components=2,svd_solver = 'full').fit(X_scaled) [[-0.28720655 -0.51737357] [-0.93496554 0.31763895] [ 0.00257742 -0.15641986] [-0.00360587 -0.32249965] [-0.20816385 -0.70919453]] [0.49561027 0.18208105] ``` I get very different results. Given this I don't feel confident using any of these coefficients. Any ideas? Assuming the coefficients obtained by SPSS are the most correct how could I replicate them? and are they?
SPSS and Scikit-learn giving different PCA eigenvectors coefficients
CC BY-SA 4.0
null
2023-05-11T00:23:55.800
2023-05-11T00:23:55.800
null
null
352032
[ "python", "pca", "spss", "scikit-learn" ]
615507
2
null
410110
0
null
Actually it is a delta function. We're basically finding the pdf of $f(\mathbf{x}_*, \mathbf{w})$ given the derived posterior pdf for $\mathbf{w}$. For example see [here](https://stats.stackexchange.com/questions/16509/pdf-for-a-function-of-random-variables). So $$ p\left(f_{*} | \mathbf{x}_{*}, \mathbf{w}\right) = \delta(f_{*} - \mathbf{x}_*^T \mathbf{w}) $$ Our integral becomes $$ \begin{align} p\left(f_{*} | \mathbf{x}_{*}, X, \mathbf{y}\right) &= \int p\left(f_{*} | \mathbf{x}_{*}, \mathbf{w}\right) p(\mathbf{w} | X, \mathbf{y}) d \mathbf{w} \\ &= \int \delta(f_{*} - \mathbf{x}_*^T \mathbf{w}) \mathcal{N}(\mathbf{w} \mid \mu, \Sigma) d \mathbf{w} \\ &= \ldots \\ &= \mathcal{N}(f_{*} \mid \mathbf{x}_{*}^T \mu, \mathbf{x}_{*}^T \Sigma \mathbf{x}_{*}) \\ \end{align} $$ The integral is easy to take in the 1D case as a sanity check. The multivariate case is a bit more annoying but basically the delta function constrains one dimension of $\mathbf{w}$, and the rest is doable by completing the square to get something of the form $$ p\left(f_{*} | \mathbf{x}_{*}, X, \mathbf{y}\right) = \mathcal{N}(f_{*}) \int d\mathbf{u} \mathcal{N}(\mathbf{u}) $$ where $\mathbf{u}$ is $n-1$ dimensional and the integral manifestly just goes to 1.
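As a supplement, the 1D sanity check mentioned above works out as follows (scalar $x_*$ and $w$, assuming $x_* \neq 0$): using $\delta(f_* - x_* w) = \frac{1}{|x_*|}\,\delta\!\left(w - \frac{f_*}{x_*}\right)$, $$ \int \delta(f_* - x_* w)\, \mathcal{N}(w \mid \mu, \sigma^2)\, dw = \frac{1}{|x_*|}\, \mathcal{N}\!\left(\frac{f_*}{x_*} \;\middle|\; \mu, \sigma^2\right) = \mathcal{N}\!\left(f_* \mid x_* \mu,\; x_*^2 \sigma^2\right), $$ which is the univariate case of the stated result.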
null
CC BY-SA 4.0
null
2023-05-11T00:36:09.090
2023-05-12T07:04:56.100
2023-05-12T07:04:56.100
387694
387694
null
615508
2
null
614570
4
null
With great respect, what you are proposing is a really awful way to model the relationship of income to life expectancy. Before you even get to the choice of models/distributions, you are missing one of the most important statistical phenomena at work in this type of problem: cumulative income is likely to be strongly positively related to total lifetime primarily because it is accumulated steadily over that lifetime --- a person who has twice as much lifetime as an adult will have roughly twice as long to earn money, so they will tend to have a higher cumulative income over their lifetime. This is likely to give a strong statistical effect which will dwarf the effect of the positive statistical relationship between income and health. If you want to model the relationship between health and income, you should be looking at various standard forms of [survival models](https://en.wikipedia.org/wiki/Survival_analysis) and you should be aiming to get longitudinal data on the income of people at each year (or other relevant interval) over their lifetime. You can then build a survival model where the conditional probability of survival in each interval is based on a regression of various income variables up to that point (e.g., present income, average or cumulative past income, etc.). If you can collect longitudinal data on health for the same people (or other relevant covariates) then you can also incorporate these variables into your survival regression. In order to get a feel for this field of analysis, I recommend you read some introductory material on survival analysis. I have never seen a truncated normal distribution used within this field and it has some glaring problems that would make it a poor choice for almost any purpose (e.g., imposing a hard cut-off on the maximum possible age, having a strange shape with some continuity and symmetry but then hard cut-offs, etc.). There are applications for truncated normal distributions in other fields, but this is not an area where they would be fruitful. Proper survival models deal with the death of elderly people by having hazard rates that increase rapidly in old age, making it highly unlikely (but not impossible) for an elderly person to live to a much older age. If you would like a primer on how to do survival modelling that incorporates regression effects based on longitudinal variables, you can have a look at [Allison (2014)](https://www.amazon.com.au/Event-History-Survival-Analysis-Longitudinal-ebook/dp/B00JXZ2XVC).
null
CC BY-SA 4.0
null
2023-05-11T00:36:57.603
2023-05-11T00:36:57.603
null
null
173082
null
615509
1
null
null
0
34
I have a random variable X that has a PDF defined by the following piecewise function: $$f(x)= \begin{cases} x + 1 & \text{if } x \in \left[-1, 0\right]\\ -x + 1 & \text{if } x \in \left[0, 1\right]\\ 0 & \text{otherwise} \end{cases} $$ Basically, this PDF creates a triangle with a base of length 2 (from -1 to 1), and a height of 1 (such that the area under the curve is 1, by the rules of PDFs) I've plotted it with matplotlib here: [](https://i.stack.imgur.com/IsnZq.png) I am trying to better understand how modifications to a random variable with this sort of function defining its PDF work. For example, if we defined $Y = 2X - 1$, I understand that the area under the curve for Y will still need to be 1 to be a valid PDF. I also understand that the domain of Y will be $[-3, 1]$. However, how does this affect the new height of the RV Y? More generally, if you had an arbitrary PDF $f(x)$, and you were to apply some linear transformation to it, how does that affect the domain and range, generally? Thank you!
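For reference, the standard change-of-variables result for a linear transformation $Y = aX + b$ with $a \neq 0$ is $$ f_Y(y) = \frac{1}{|a|}\, f_X\!\left(\frac{y - b}{a}\right), $$ so for $Y = 2X - 1$ this gives $f_Y(y) = \tfrac{1}{2} f_X\!\left(\tfrac{y+1}{2}\right)$ on $[-3, 1]$: the support is stretched by $|a| = 2$ and shifted by $b = -1$, and the height is scaled by $1/|a| = 1/2$ (peak height $1/2$ at $y = -1$), which keeps the total area equal to 1.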
How to calculate the PDF of a modified random variable?
CC BY-SA 4.0
null
2023-05-11T01:12:57.570
2023-05-11T01:12:57.570
null
null
387697
[ "mathematical-statistics", "density-function" ]
615511
1
null
null
1
12
I am currently conducting an experiment and I am struggling to find a statistical test to test the relationship between my independent and dependent variables. My research is to find the relationship between [IV] increasing concentrations of metal ions and the [DV] chlorophyll content in the leaves of a plant. My supervisor has told me to use a repeated measures ANOVA, but my understanding was that ANOVA tests were only used to find the statistical significance between datapoints and not the relationship between the variables (although I may be wrong here). My graph is non-linear and looks something like the plot below:[](https://i.stack.imgur.com/pZGAo.png) Where the y-axis is the concentration of metal ions and the x-axis is the total chlorophyll content. I was thinking of drawing two lines (one from 0 to 5, the other from 5 to 25) and finding the Pearson's correlation coefficient of both lines or conducting a Spearman's Rank. Which test would be better? Additionally, I would like to find the significant differences between the datasets, so which test should I follow up with? ANOVA, or any other test? Thank you!
What statistical test should be used to test the relationship between the variables for a non-linear data set?
CC BY-SA 4.0
null
2023-05-11T01:44:31.533
2023-05-11T01:51:27.003
2023-05-11T01:51:27.003
387698
387698
[ "hypothesis-testing", "nonlinear-regression" ]
615512
1
null
null
0
15
Assuming we have two samples from two populations each containing 10 independent observations + standard deviation. How should I propagate the error of the measurements themselves into the sampling error? Example: Blood samples were taken from 20 male individuals, 10 from individuals randomly sampled from a population of anemic patients, and 10 from an independent age-matched healthy male population. For each individual blood sample, I measured a protein, which according to the analytical method used, has a standard deviation associated with it, e.g., 10 ng/mL ± 0.2 ng/mL. Now, for the whole data set, If I was to model the difference between the means in anemic vs. healthy patients, how would I propagate the errors? ``` df <- structure(list(value = c(10.9746643444716, 11.8215643507607, 10.0904398058947, 7.72063101668958, 11.132160205841, 9.90175385296537, 10.7130307588643, 9.14277089370172, 8.98963479613762, 10.7345886475362, 5.02005913129535, 3.94591286780404, 5.88269731881474, 4.9076856062703, 2.61720608462012, 2.84253207675305, 4.06644378220811, 4.60858489634185, 3.23740211847406, 2.23776928808866), sd = c(0.133358425507811, 0.114107348166208, 0.111754990370393, 0.135922789945039, 0.0323765045192955, 0.0848776334673558, 0.0801350324665742, 0.063359998506837, 0.0811735414300104, 0.150197358507943, 0.00992790714255545, 0.0434082629014837, 0.103808450495596, 0.0341798509894078, 0.06395272211175, 0.0991172347676062, 0.122222651013611, 0.172497320052591, 0.0383064081152008, 0.0615236705789717), group = c("healthy", "healthy", "healthy", "healthy", "healthy", "healthy", "healthy", "healthy", "healthy", "healthy", "anemic", "anemic", "anemic", "anemic", "anemic", "anemic", "anemic", "anemic", "anemic", "anemic" )), class = "data.frame", row.names = c(NA, -20L)) summary(lm(value ~ group, df)) Call: lm(formula = value ~ group, data = df) Residuals: Min 1Q Median 3Q Max -2.40149 -1.00804 0.06955 0.88217 1.94607 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.9366 0.3808 10.34 5.35e-09 *** grouphealthy 6.1855 0.5385 11.48 1.02e-09 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.204 on 18 degrees of freedom Multiple R-squared: 0.8799, Adjusted R-squared: 0.8733 F-statistic: 131.9 on 1 and 18 DF, p-value: 1.018e-09 ``` I wonder if a hierarchical model through linear mixed models can help. I've seen some people suggesting just weighting the individual values by the inverse of the variance of the error in the measurement.
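As a minimal sketch of the inverse-variance-weighting suggestion mentioned above (this only makes that suggestion explicit; it treats the analytical SDs as known and is not necessarily the best approach):

```r
# weight each observation by the inverse of its measurement variance
fit_w <- lm(value ~ group, data = df, weights = 1 / sd^2)
summary(fit_w)
```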
Error propagation for inferential statistics?
CC BY-SA 4.0
null
2023-05-11T02:01:03.103
2023-05-11T02:01:03.103
null
null
223432
[ "inference", "error-propagation" ]
615516
1
null
null
0
10
When testing for a carry-over effect (the period*treatment interaction) in a crossover design analysed with GEE, do I actually need to include the main effects of period and treatment in the GEE model? (1) Y = intercept + (period*treatment)X + residual (2) Y = intercept + (period)X1 + (treatment)X2 + (period*treatment)X3 + residual Which model is the appropriate one? Thank you for helping out.
carry over effect
CC BY-SA 4.0
null
2023-05-11T03:42:56.060
2023-05-11T03:42:56.060
null
null
387704
[ "model", "generalized-estimating-equations" ]
615517
2
null
615504
0
null
What is "useful" depends on your application. However, consider a more complication logistic regression e.g. regression of `success` against `treatment + age` where age is a numeric. If you use the odds, you can easily partition out the effect of `treatment` from the effect of `age`.
null
CC BY-SA 4.0
null
2023-05-11T04:48:05.710
2023-05-11T04:48:05.710
null
null
369002
null
615519
1
615583
null
1
37
In their [2005 paper](https://onlinelibrary.wiley.com/doi/10.1111/j.1541-0420.2005.00377.x) (also see the correction [here](https://onlinelibrary.wiley.com/doi/10.1111/j.1541-0420.2008.01025.x)) Bang and Robins describe a doubly robust estimator of the average treatment effect. In short, the procedure is: - Estimate inverse probability of treatment weights (IPTW). These are $1 / Pr[A=1|L]$ for those with $A = 1$ and $-1 / (1 - Pr[A=1|L])$ for those with $A=0$, for treatment $A$ and vector of confounders $L$. - Then, include the IPTW in the conditional mean outcome model as a 'clever covariate': $E[Y|A,L,R]$, where $R$ is the weights defined above. - Finally, standardise over the confounder distribution of the sample. In the words of the Bang and Robins correction: > ...see that we must add to the regression the inverse probability of treatment weighted (IPTW) covariate, which is the (estimated) inverse of the PS for treated subjects(Δ= 1)and the inverse of the negative of “1 minus the PS” for untreated subjects(Δ= 0). Other choices can result in inconsistent estimation of the average treatment effect. I have tried to simulate simple data and test out this estimator under correct specification of the treatment and outcome models. The results I'm getting are confusing. Here is the simulation code: ``` library(tidyverse) dr_sim <- function(SEED){ ## Simulate data set.seed(SEED) L <- rnorm(1000) A <- rbinom(1000,1,plogis(-0.5 + 0.25*L)) Y <- 0.5*L + rnorm(1000) # No causal effect of A, confounding by L d <- tibble(L,A,Y) ## fit IPTW MSM ip_mod <- glm(A ~ L, family = binomial, data = d) ipt_weight <- ifelse(d$A == 1, 1 / predict(ip_mod, type = "response"), 1 / (1 - predict(ip_mod, type = "response"))) msm <- lm(Y ~ A, data = d, weights = ipt_weight) msm_est <- coef(msm)["A"] # correct ## parametric g-formula gmod <- lm(Y ~ A + L, data = d) d1 <- d0 <- d d0$A <- 0 E_0 <- mean(predict(gmod, newdata = d0)) d1$A <- 1 E_1 <- mean(predict(gmod, newdata = d1)) gform_est <- E_1 - E_0 # correct ## Bang & Robins doubly robust estimator dr_weight <- ifelse(d$A == 1, 1 / predict(ip_mod, type = "response"), 1 / -(1 - predict(ip_mod, type = "response"))) dr_mod <- lm(Y ~ A + L + dr_weight, data = d) d1 <- d0 <- d d0$A <- 0 E_0 <- mean(predict(dr_mod, newdata = d0)) d1$A <- 1 E_1 <- mean(predict(dr_mod, newdata = d1)) dr_est <- E_1 - E_0 out <- tibble(msm_est, gform_est, dr_est) out } ## Simulate results <- tibble(seed = 1:1000) |> mutate(out = map(seed, dr_sim)) |> unnest(out) ## results tibble( estimator = c("IPTW","g-formula","doubly-robust"), mean_estimate = c(mean(results$msm_est), mean(results$gform_est), mean(results$dr_est)), sd_estimate = c(sd(results$msm_est), sd(results$gform_est), sd(results$dr_est))) ``` And here are the results: ``` # A tibble: 3 × 3 estimator mean_estimate sd_estimate <chr> <dbl> <dbl> 1 IPTW 0.000271 0.0628 2 g-formula 0.000285 0.0626 3 doubly-robust 0.0593 1.19 ``` The IPTW marginal structural model and parametric g-formula estimators are unbiased. The doubly robust robust estimator is not. The magnitude of bias and the extent of the variance seems to depend on the strength of the confounder-treatment relationship. When the confounder strongly affects the treatment, the doubly-robust estimator is unbiased and has low variance (lower than the IPTW estimator), but when it is weak, as above, the variance and bias can be very large. Can anyone shed any light on this? Am I incorrectly implementing the estimator? Or is there something else I'm missing?
Bang and Robins doubly-robust estimator biased and with large variance?
CC BY-SA 4.0
null
2023-05-11T05:28:12.853
2023-05-11T22:58:44.253
2023-05-11T22:58:44.253
228747
228747
[ "regression", "causality", "weights", "doubly-robust-estimator" ]
615520
1
null
null
0
14
I have time series data. The table contains date, no of orders, no of products sold, no of returns, no of walk in customers, etc. I want to create a model which takes 12 months of data as input and flags any anomaly in the next timestamps.One simple way which I can think of is to train an ARIMA model and use its confidence intervals as the upper and lower control limits. Hence if any future data point lies outside these intervals then it can be flagged as an anomaly.Note: I am using only one signal(column) at a time ie I am doing univariate time series assuming that the columns are un-correlated. This is the scenario, type of data and a simple approach that I have. The problem statement is to create a model that does Data Quality Checks based on historical time series data. Are there any better solution for the above problem? Note I only want to do univariate analysis ie use only one signal at at time to flag anomalies in that particular signal. Please share any ideas that you might have. I am not sure if the above method is the best way to proceed. Any links to related research work would also be helpful.
Data Quality check in time series data
CC BY-SA 4.0
null
2023-05-11T05:32:29.447
2023-05-11T05:32:29.447
null
null
382761
[ "time-series", "anomaly-detection", "quality-control" ]
615523
1
null
null
3
58
I am modeling the rates of chemical reactions with an ML model (specifically a neural network with a lowish number of parameters). However, the rates (technically the formation rates), i.e. the predictions, span multiple orders of magnitude in absolute value, with differing signs and including zero. Each prediction consists of a vector of rates, so one sample from a batch of typical training data might look like $$\begin{bmatrix} 1.3\mathrm{e}{-12} & -3.9\mathrm{e}{-8} & 7.6\mathrm{e}{-5} & 0.00 \end{bmatrix}.$$ That means that within one target vector the values differ by orders of magnitude, while the corresponding values between two targets are (mostly) of the same order of magnitude. My big problem is to accurately and robustly measure the error of a prediction across these multiple orders of magnitude. My attempts so far include - absolute error, e.g. the L1/L2 norm: this fails for obvious reasons; the largest term in the prediction dictates the error, and predictions for small values can be wrong by orders of magnitude. - relative error, e.g. MRE: this fails because of the zeros in my targets, so it is not applicable as is and has to be extended with a threshold, for example. - thresholded relative error, e.g. $|\frac{\hat{y}_i-y_i}{\max(|y_i|,t_i)}|$: this fixes the zero issue, but the choice of threshold can be arbitrary and does not guarantee good performance. - logarithmic scaling for the error calculation: same problem as with relative errors, except worse, because predictions might be negative. So far I have not been able to find a good compromise, and I see the consequences: good approximations for the dominant rates and very poor results for the smaller ones. Are there any more reliable/meaningful error measures I am missing?
Which loss function to use if entries in a prediction span multiple orders of magnitude?
CC BY-SA 4.0
null
2023-05-11T06:31:56.553
2023-05-12T18:44:26.683
null
null
387714
[ "machine-learning", "loss-functions", "error" ]