Columns: Id (stringlengths 1–6), PostTypeId (stringclasses, 7 values), AcceptedAnswerId (stringlengths 1–6), ParentId (stringlengths 1–6), Score (stringlengths 1–4), ViewCount (stringlengths 1–7), Body (stringlengths 0–38.7k), Title (stringlengths 15–150), ContentLicense (stringclasses, 3 values), FavoriteCount (stringclasses, 3 values), CreationDate (stringlengths 23), LastActivityDate (stringlengths 23), LastEditDate (stringlengths 23), LastEditorUserId (stringlengths 1–6), OwnerUserId (stringlengths 1–6), Tags (list)
615044
1
null
null
1
7
I'm fine-tuning a large language model to predict binary sentiment, where a false negative is far more costly for my use case than a false positive. I've used weighted cross-entropy to account for this, and when I evaluated my model on the test set, a few cases were unacceptably misclassified. I then re-trained the model after manually adding those edge cases to my training data, and it no longer made that mistake. However, I know it is poor form to "train on the test set", which is to an extent what I'm doing. How else am I supposed to ensure that this specific kind of edge case is not misclassified?
Accounting for edge cases without training on the test set
CC BY-SA 4.0
null
2023-05-05T19:55:03.473
2023-05-05T19:55:03.473
null
null
386642
[ "classification", "binary-data", "natural-language", "perplexity" ]
615045
1
615066
null
1
18
Can I use VECM, GARCH and HAR-RV for forecasting of carbon price? I'm not sure the assumptions of the models don't contradict each other.
Can I use VECM, GARCH and HAR-RV for forecasting of carbon price?
CC BY-SA 4.0
null
2023-05-05T20:21:04.983
2023-05-06T07:56:55.177
null
null
361080
[ "time-series", "forecasting", "garch", "vector-error-correction-model" ]
615046
1
615103
null
3
46
I'm relatively new to the field of biostatistics and was reading up on non-inferiority trials and margins when I came across this explanation, which is tripping me up a bit: "For example, the oral Tebipenem HBr study cited above set a Non-Inferiority Margin of -12.5% for the primary endpoint of overall response rate. This means in a head-to-head trial, as long as the 95% Confidence Interval for the response rate with tebipenem was not lower than 87.5% of what was seen with intravenous ertapenem, the oral drug would have been deemed non-inferior." What is this 87.5% number? I'm assuming he took 100 - 12.5 = 87.5%, but what does that have to do with the non-inferiority margin? Here is the link to the actual study: [https://www.nejm.org/doi/10.1056/NEJMoa2105462](https://www.nejm.org/doi/10.1056/NEJMoa2105462) Say, for example, the intravenous ertapenem response rate is 50%. Then 87.5% of 50 is 43.75. Does that mean that, to pass non-inferiority, the new drug's response rate 95% confidence interval has to be at the very least above 43.75%? In that case the non-inferiority margin would be 43.75 - 50 = -6.25% and not -12.5%? Alternatively, say IV ertapenem has a 25% response rate. A -12.5% non-inferiority margin would mean the new drug can only go as low as 12.5%. But 12.5% is 50% of 25%, not 87.5% of it. The only way I can understand it is that he's saying that the absolute risk difference from the 95% CI should never be lower than -12.5% (but that doesn't necessarily mean it's 87.5% of the IV ertapenem response rate; look at my 2nd example above).
Is this explanation of a Non-Inferiority Margin Correct?
CC BY-SA 4.0
null
2023-05-05T20:23:41.843
2023-05-06T22:30:56.443
2023-05-06T18:11:08.240
387320
387320
[ "confidence-interval", "binomial-distribution", "biostatistics", "clinical-trials", "non-inferiority" ]
615047
2
null
614982
4
null
For distributions with a scale parameter, such as the gamma and the Weibull, the likelihood can be maximised numerically with respect to the remaining parameters (the shape in the case of the gamma and the Weibull), since for any given value of those parameters (the shape), there will be only a single value of the scale parameter for which the 95% quantile of the distribution matches the target value. The following R code implements this method:

```
x <- c(28.744,385.714,20.595,99.350,31.864,77.713,
       264.408,21.204,31.937,0.900,18.762,173.276,23.707)

constrained_mle <- function(x, p, q) {
  lnL <- function(shape) {
    # Solving q = qgamma(p, shape)*scale for the necessary scale
    scale <- q/qgamma(p, shape)
    sum(dgamma(x, shape, scale=scale, log=TRUE))
  }
  res <- optimise(lnL, lower=0, upper=1e+3, maximum=TRUE, tol=1e-8)
  scale <- q/qgamma(p, res$maximum)
  c(scale=scale, shape=res$maximum)
}

par <- constrained_mle(x, .95, 500.912)
par
#> scale       shape 
#> 231.0561574   0.6038756 

# checking that the solution is correct
qgamma(.95, scale=par[1], shape=par[2])
#> [1] 500.912
```

The constrained MLEs are evidently quite different from the estimates produced by matching the first moment and the target quantile (shape=0.1470374 and scale=616.5818235) in the answer by Matt F.
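The same constrained-profiling idea carries over to the Weibull mentioned at the start. Here is a minimal sketch of that case (not part of the original answer), reusing `x`, `p` and `q` from the gamma example above and solving `q = qweibull(p, shape, 1) * scale` for the scale:

```
# Sketch: constrained MLE for a Weibull with its 95% quantile fixed at q
constrained_mle_weibull <- function(x, p, q) {
  lnL <- function(shape) {
    # For any shape, only one scale gives the target quantile:
    # qweibull(p, shape, scale) = qweibull(p, shape, 1) * scale = q
    scale <- q / qweibull(p, shape, 1)
    sum(dweibull(x, shape, scale = scale, log = TRUE))
  }
  res <- optimise(lnL, lower = 1e-6, upper = 1e+3, maximum = TRUE, tol = 1e-8)
  scale <- q / qweibull(p, res$maximum, 1)
  c(scale = scale, shape = res$maximum)
}

constrained_mle_weibull(x, 0.95, 500.912)
```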
null
CC BY-SA 4.0
null
2023-05-05T20:23:53.003
2023-05-05T20:23:53.003
null
null
77222
null
615048
2
null
614790
0
null
It looks good to me, but for future reference I'll clarify some points that could create ambiguity. Let's suppose that you have a training dataset $\mathcal{D}$, a validation dataset $\mathcal{V}$, and a model $g = g_w(\lambda,\mathcal{D})$ with parameters $w$ and hyperparameters $\lambda$ in a hypothesis space $\Lambda$. To evaluate the model performance after training, we use a metric $M$ on the validation set.

> Objective function: this is the function that we want to minimize or maximize (like loss function for example)

The objective function (also called response function) $$f(\lambda) = M(g(\lambda,\mathcal{D}),\mathcal{V})$$ is the value of a metric obtained by a learned model $g$ with (fixed) hyperparameters $\lambda$ on a dataset $\mathcal{D}$.

- In cases like regression you can use MSE as the loss function and also as the metric $M$, but in most cases $M$ is a different function.
- In gradient-based learning the loss function is (almost everywhere) differentiable, while $f(\lambda)$ can be non-differentiable or not optimizable through gradient descent (think about accuracy).
- Evaluating $f(\lambda)$ is expensive, since it requires training each model to convergence and then evaluating the metric on the validation set $\mathcal{V}$. Especially for deep models, this process can take several days, so it's not feasible to evaluate all combinations of hyperparameters $\lambda_1 \dots \lambda_n \in \Lambda$ (grid search), and hyperparameters can also be continuous.

This is why we want to find a lightweight predictor $\hat{f}(\lambda)$ that estimates the value $f(\lambda)$ before the expensive training. To train this predictor, you initially sample a random combination of hyperparameters $\lambda_i \in \Lambda$, then you compare the estimated value $\hat{f}(\lambda_i)$ with the metric $M$ obtained after training and evaluating the model with hyperparameters $\lambda_i$. The update is done with Bayes' rule.

The selection function (also called acquisition function) tells us which regions of the hyperparameter space are more promising, i.e. the regions where the model is predicted to perform well and/or the regions where there is more uncertainty. To guarantee a good search, we want to find a compromise between exploration and exploitation. [](https://i.stack.imgur.com/c1f2N.png)

The selection function can be modeled as

- the probability that a set of hyperparameters $\lambda_j \in \Lambda$ improves upon the current best value (probability of improvement). An issue with this is that it does not estimate by how much the new set of hyperparameters will improve;
- the expected improvement $$\mathbb{E}[I(\lambda)] = \mathbb{E}[\max(0, f_{\text{min}} - \hat{f}(\lambda))]$$

You want to try the hyperparameter values that maximize the expected improvement under some budget, and you also need to account for hyperparameter combinations that are more expensive to evaluate than others, so we can define the expected improvement per second.

## References

[[1](https://link.springer.com/chapter/10.1007/978-3-642-25566-3_40)] Hutter, Frank, Holger H. Hoos, and Kevin Leyton-Brown. "Sequential model-based optimization for general algorithm configuration." Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. Selected Papers 5. Springer Berlin Heidelberg, 2011.

[[2](https://link.springer.com/article/10.1023/A:1008306431147#citeas)] Jones, D.R., Schonlau, M. & Welch, W.J. Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization 13, 455–492 (1998).
[https://doi.org/10.1023/A:1008306431147](https://doi.org/10.1023/A:1008306431147)
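To make the expected-improvement acquisition above concrete, here is a small sketch (an illustration added here, not taken from the cited papers), assuming a minimisation problem where the surrogate predicts $f(\lambda) \sim N(\mu, \sigma^2)$, so that $\mathbb{E}[I(\lambda)] = (f_{\text{min}}-\mu)\Phi(z) + \sigma\phi(z)$ with $z = (f_{\text{min}}-\mu)/\sigma$:

```
# Sketch: closed-form expected improvement under a Gaussian surrogate prediction
expected_improvement <- function(mu, sd, f_min) {
  # assumes sd > 0; mu and sd are the surrogate's predictive mean and std. dev.
  z <- (f_min - mu) / sd
  (f_min - mu) * pnorm(z) + sd * dnorm(z)
}

# Two candidate hyperparameter settings with the same predicted loss:
expected_improvement(mu = c(0.30, 0.30), sd = c(0.01, 0.10), f_min = 0.28)
# the more uncertain candidate gets a much larger EI (the exploration bonus)
```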
null
CC BY-SA 4.0
null
2023-05-05T20:25:42.613
2023-05-05T20:25:42.613
null
null
377435
null
615049
1
null
null
1
18
A background: I am currently working with the 'elasticnet' package ([elasticnet v.1.3](https://www.rdocumentation.org/packages/elasticnet/versions/1.3)) maintained by Hui Zou. This package was developed to accompany Hui Zou and Trevor Hastie's Journal of the Royal Statistical Society, Series B article ["Regularization and variable selection via the elastic net"](https://hastie.su.domains/Papers/B67.2%20(2005)%20301-320%20Zou%20&%20Hastie.pdf), in which they propose the elastic net method, a method that weights and counterweights the $L_1$ and $L_2$ norms via the selection of a particular parameter $\lambda$, defined as the "quadratic penalty parameter" in the package documentation. When $\lambda = 0$, the `enet` algorithm effectively performs the LASSO fit, and when $\lambda = 1$ I think the algorithm performs a Ridge Regression fit. This is where my question/confusion begins: I am comparing this `enet` method of variable selection to the `lars` LASSO and `MASS lm.ridge()` methods by changing values of the quadratic penalty parameter $\lambda$. I have confirmed that when $\lambda = 0$ I get the LASSO fit as expected. However, when I take $\lambda = 1$ I get `enet` results that are wildly different from the `lm.ridge` results, and much more inaccurate. As a side note, I am always using `predict(..., naive = FALSE, ...)` when performing my calculations. A snippet of my code is below:

```
# Elastic Net with Parameter 1
lm_EN3 <- enet(x = data.matrix(curr_train_BH0[,2:17]), y = curr_train_BH0$Y, lambda = 1.0)
step_index = as.numeric(names(which.min(lm_EN3$Cp)))
EN3_pred = predict(lm_EN3, newx = data.matrix(curr_test_BH0[,2:17]), s = step_index, type = 'fit', mode = 'step')$fit
EN3_pred_MSE <- MSE(EN3_pred, curr_test_BH0$Y)
pred_MSE_vec <- append(pred_MSE_vec, EN3_pred_MSE)
#print("EN3 Success!")

# Ridge regression model with 16 predictors and select best lambda
myLambda = seq(0.0001,10.0,0.00001)
lm_Ridge <- lm.ridge(Y~., data = curr_train_BH0, lambda = myLambda)
index = which.min(lm_Ridge$GCV)
bestLambda2 <- myLambda[index]
lm_Ridge_best <- lm.ridge(Y~., curr_train_BH0, lambda = bestLambda2)
predicted <- cbind(1,as.matrix(curr_test_BH0[,2:17]))%*%coef(lm_Ridge_best)
Ridge_pred_MSE <- MSE(predicted, curr_test_BH0$Y)
pred_MSE_vec <- append(pred_MSE_vec, Ridge_pred_MSE)
#print("Ridge Success!")
```

Question: Am I incorrect in thinking that $\lambda = 1$ would correspond to a Ridge regression approach in variable selection? If so, what does a quadratic penalty parameter of $1$ produce when using `enet` compared to a `lm.ridge()` approach?
Differences in Performance Between in MASS Package lm.ridge() and enet in elasticnet Package
CC BY-SA 4.0
null
2023-05-05T20:29:55.750
2023-05-05T20:29:55.750
null
null
373223
[ "r", "lasso", "ridge-regression", "elastic-net" ]
615050
2
null
185253
1
null
You seem to confuse $\displaystyle \frac1{n-1} \sum_{i=1}^n \left( X_i - \overline X \right)^2$ with $\displaystyle \frac1{n-1} \sum_{i=1}^n \left( X_i - 10 \right)^2.$ If you take a new sample of $n$ observations independent of the first $n$, then $\overline X$ will change but $10$ will not change. They're two different things. And it is far easier to prove that $\displaystyle \frac1n \sum_{i=1}^n \left( X_i - 10 \right)^2$ is unbiased than that $\displaystyle \frac1{n-1} \sum_{i=1}^n \left( X_i - \overline X \right)^2$ is unbiased (but both are unbiased).
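A quick simulation sketch (not part of the original answer) illustrating the point, assuming as above that the known population mean is $10$:

```
# Both estimators are unbiased; the known-mean version may even divide by n
set.seed(1)
n <- 5
sims <- replicate(1e5, {
  x <- rnorm(n, mean = 10, sd = 2)              # true variance = 4
  c(known_mean = mean((x - 10)^2),              # (1/n) * sum((X_i - 10)^2)
    sample_var = sum((x - mean(x))^2)/(n - 1))  # (1/(n-1)) * sum((X_i - Xbar)^2)
})
rowMeans(sims)   # both averages are close to the true variance of 4
```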
null
CC BY-SA 4.0
null
2023-05-05T20:34:37.883
2023-05-05T20:34:37.883
null
null
5176
null
615051
2
null
543366
0
null
It is extremely common to talk about how many standard deviations a point is from the mean. Without a reference point (such as the mean), a phrase like "within the range of 0.5*sigma" lacks meaning, but because of how common it is for the mean to be this reference, that seems like a safe assumption here. Thus, it seems like what you want to do is determine the interval $\mu\pm0.5\sigma$. You know how to calculate means. You know how to calculate standard deviations. You know how to multiply by $0.5$. You know how to add and subtract. That's all there is to it! For example, if the mean is $9$ and standard deviation is $6$, you are looking at an interval of $9\pm3$, so $(6, 12)$. (There is a technical point that you are, most likely, working with estimates, so it may be more appropriate to write $\bar x$ as an estimate of $\mu$ and $s$ as an estimate of $\sigma$.)
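A tiny sketch of the computation (illustrative data, not from the question):

```
# Compute xbar +/- 0.5 * s; this toy sample has mean 9 and sd 6,
# matching the example above, so the interval is (6, 12)
x <- c(3, 9, 15)
c(lower = mean(x) - 0.5 * sd(x), upper = mean(x) + 0.5 * sd(x))
#> lower upper 
#>     6    12
```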
null
CC BY-SA 4.0
null
2023-05-05T20:40:45.763
2023-05-05T21:25:12.903
2023-05-05T21:25:12.903
247274
247274
null
615052
2
null
615040
5
null
Assuming that the discrete variables are ordinal or binary, a simple assumption is that there is an underlying latent multivariate normal distribution, and that the discrete variables have been obtained by binning the corresponding normal distributions. Dependence can then be modelled by the correlations of the underlying normal. See R-package polycor and the literature listed on its help pages (particularly function hetcor).
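A minimal sketch of what that looks like in practice (assuming the polycor package is installed; the data below are simulated purely for illustration):

```
# Heterogeneous correlations for a mix of continuous, binary and ordinal columns
# install.packages("polycor")
library(polycor)

set.seed(1)
z <- MASS::mvrnorm(500, mu = c(0, 0, 0),
                   Sigma = matrix(c(1, .5, .3,
                                    .5, 1, .4,
                                    .3, .4, 1), 3))
d <- data.frame(
  x   = z[, 1],                          # continuous
  bin = factor(z[, 2] > 0),              # binary, obtained by binning a normal
  ord = cut(z[, 3], c(-Inf, -0.5, 0.5, Inf), ordered_result = TRUE)  # ordinal
)

hetcor(d)  # Pearson / polyserial / polychoric correlations, as appropriate
```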
null
CC BY-SA 4.0
null
2023-05-05T20:50:34.757
2023-05-05T20:50:34.757
null
null
247165
null
615053
1
null
null
1
19
I have collected percentage data for Rocky shore and summarised it per algae type like the example below:

```
        Low   Mid   Mid-High   High
Brown   23%   13%   3%         40%
Green   0%    8%    14%        18%
Red     12%   6%    12%        0%
```

How can I transform this percentage coverage data to species abundance data? Thanks in advance
How to transform percentage cover data into species abundance data in algae
CC BY-SA 4.0
null
2023-05-05T20:51:19.343
2023-05-05T23:52:44.650
2023-05-05T23:52:44.650
805
387323
[ "categorical-data", "data-transformation", "dataset", "percentage", "coverage-probability" ]
615055
2
null
615040
4
null
Parametric distributions for mixed discrete/continuous random variables can easily be built up as mixtures of standard parametric families of discrete distributions and continuous distributions. There are huge numbers of choices of mixtures you could make, depending on how many distributions you want to mix, and which ones you choose. In general, if you have any parametric discrete distribution with mass function $p_\theta$ and any parametric continuous distribution with density $f_\phi$ then you can create the corresponding parametric mixture distribution with CDF: $$F(x|\theta, \phi, \lambda) = \lambda \sum_{r \leqslant x} p_\theta(r) + (1-\lambda) \int \limits_{-\infty}^{x} f_\phi(r) \ dr.$$ While there are infinite ways you can do this, there are a few common cases of mixed random variables that arise in statistical analysis. One extremely common form of model for this is zero-inflation of a continuous random variable, where the continuous distribution is mixed with a point-mass distribution on zero. Another common form occurs when there is censorship of a continuous random variable over some of its range, which yields a mixture case.
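As a small illustration of the zero-inflation case, here is a sketch (not part of the original answer) of the mixture CDF with a point mass at zero and a gamma continuous component:

```
# CDF of a zero-inflated gamma: point mass at 0 with weight lambda,
# gamma component with weight (1 - lambda)
pzigamma <- function(x, lambda, shape, rate) {
  lambda * (x >= 0) + (1 - lambda) * pgamma(x, shape, rate)
}

pzigamma(0,   lambda = 0.3, shape = 2, rate = 1)  # 0.3: the jump from the point mass
pzigamma(2.5, lambda = 0.3, shape = 2, rate = 1)  # mixture CDF further along the support
```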
null
CC BY-SA 4.0
null
2023-05-05T22:32:59.647
2023-05-05T22:32:59.647
null
null
173082
null
615056
2
null
615034
0
null
If I understand your problem correctly (which I may not have), you want to compute the following quantity in a numerically stable way: $$F_s(\boldsymbol{\kappa}, b) \equiv \frac{\exp( \kappa_s - b )}{\sum_{s} \exp( \kappa_s - b)}.$$ To do this you need to do computation in log-space, which requires you to manipulate your function into a form that does not involve any intermediate computation outside of log-space. For the type of function you are dealing with (which is a form of the [softmax function](https://en.wikipedia.org/wiki/Softmax_function)) we can write: $$\begin{align} \log F_s(\boldsymbol{\kappa}, b) &= \log \bigg(\frac{\exp( \kappa_s - b )}{\sum_{s} \exp( \kappa_s - b)} \bigg) \\[6pt] &= \log \Big( \exp( \kappa_s - b ) \Big) - \log \Big( \sum_{s} \exp( \kappa_s - b) \Big) \\[8pt] &= \kappa_s - b - \text{logsumexp} (\boldsymbol{\kappa} - b). \\[6pt] \end{align}$$ (In the last line we take $\boldsymbol{\kappa} - b$ to refer to the vector of values $\kappa_s - b$ taken over all indices $s$.) This form puts things in terms of the [logsumexp function](https://en.wikipedia.org/wiki/LogSumExp), which can itself be computed in log-space in a numerically stable way (see [this related question](https://stats.stackexchange.com/questions/381936/)). Consequently, we can compute the original function of interest in log-space (converting back to regular space at the end) as: $$F_s(b, \boldsymbol{\kappa}) = \exp(\kappa_s - b - \text{logsumexp} (\boldsymbol{\kappa} - b)).$$ --- Coding the function: The softmax function is already coded in all relevant mathematical and statistical software, so you should be able to find it if you look around in the software you are using. If you want to code it from scratch in `R` you can do it like this: ``` #Create softmax function softmax <- function(x, b = 0, log = FALSE) { #Check inputs if (!is.vector(x)) stop('Error: Input x should be a vector') if (!is.numeric(x)) stop('Error: Input x should be a numeric vector') if (!is.vector(b)) stop('Error: Input b should be a vector') if (!is.numeric(b)) stop('Error: Input b should be a numeric vector') if (length(b) != 1) stop('Error: Input b should be a single numeric value') if (!is.vector(log)) stop('Error: Input log should be a vector') if (!is.logical(log)) stop('Error: Input log should be a logical vector') if (length(log) != 1) stop('Error: Input log should be a single logical value') LOGS <- (x-b) - matrixStats::logSumExp(x-b) if (log) { LOGS } else { exp(LOGS) } } ```
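A short usage sketch for the function defined above (assuming the matrixStats package is installed), showing why the log-space route matters:

```
# Large inputs overflow the naive formula but not the log-space version
x <- c(1000, 1001, 1002)
exp(x) / sum(exp(x))    # NaN: exp() overflows to Inf
softmax(x)              # stable: approximately 0.090, 0.245, 0.665
softmax(x, b = 1000)    # shifting by b leaves the result unchanged
```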
null
CC BY-SA 4.0
null
2023-05-05T22:54:12.637
2023-05-05T22:54:12.637
null
null
173082
null
615059
1
null
null
1
20
I have recently been working with the `MASS`, `lars`, and `glmnet` packages to study variable selection and prediction via LASSO, Ridge Regression, and the Elastic Net. To compare these methods against one another, I have been using some Boston Housing Market data (`BH0`), but I am not entirely sure that this dataset captures a good comparison amongst the methods since there are 505 observations and only 17 predictors for each response. In general, I know that the LASSO method follows $L_1$-norm regularization while Ridge Regression follows $L_2$-norm regularization, and Elastic Net is a combination of the two methods that hopefully captures the advantages of each. Further, I know that Ridge Regression is often the preferred method when the number of predictors $p$ is large compared to the number of observations $n$, or when there are concerns of multicollinearity among the predictors. My question: Are there any particular packages or datasets that would be good for comparing the LASSO, Ridge Regression, and Elastic Net methods against one another, or any other recommendations for suitable datasets?
Best Datasets and Packages for Comparing LASSO, Elastic Net, and Ridge
CC BY-SA 4.0
null
2023-05-06T03:09:54.423
2023-05-06T07:46:35.713
2023-05-06T07:46:35.713
53690
373223
[ "r", "dataset", "lasso", "ridge-regression", "elastic-net" ]
615060
1
null
null
0
13
How would I go about examining which items on one questionnaire are most associated with the overall score of another questionnaire and each sub scale of this second questionnaire?
How to examine the relationship between items on one questionnaire with scores on another?
CC BY-SA 4.0
null
2023-05-06T03:18:42.210
2023-05-06T03:18:42.210
null
null
387331
[ "correlation", "survey" ]
615063
1
null
null
0
29
I would like to compare whether 4 continuous variables have a difference among 3 groups (disease states A, B, C). There are 48 cases in the sample, however, the continuous variables are quite skewed. I have run Kruskal-Wallis with post-hoc Dunn's tests for pairwise comparisons. I would like to however run another analysis while adjusting for age, sex, and post-mortem delay (continuous variable). I have gone ahead and used separate median regressions for this, each time using one of the 4 continuous variables as the dependent variable, and adding disease state, age, sex, and post-mortem delay as predictors. This is working fine, except I'm getting p-values in reference to one of the disease state groups and therefore missing the comparison between the other two groups (i.e. am only getting A-C and B-C, and not A-B). I am wondering two things: - Is this an acceptable approach, or is there a better approach. I did not want to use GLM given the nature of the sample. - Would it be reasonable to run an additional median regression defining one of the other groups as a reference in order to get a comparison between A and B. Or again is there a better approach for a pairwise comparison which adjusts for age, sex, and post-mortem delay?
Median regression comparing continuous variables between three groups, adjusting for demographic variables
CC BY-SA 4.0
null
2023-05-06T05:12:38.817
2023-05-06T06:51:32.003
2023-05-06T06:51:32.003
387335
387335
[ "regression", "quantile-regression" ]
615064
1
null
null
0
28
I have read a paper [(https://doi.org/10.1038/s41591-023-02296-6)](https://doi.org/10.1038/s41591-023-02296-6). In Fig. 5 of this paper, the authors point out: > between-group differences (disease versus health, t-statistics, two-sided) in body (n = 78, top) and brain (n = 2,309, bottom) features (x axis) My questions are: - It seems to be rare to use the t-statistic to measure the difference between groups. Compared with other similar statistics, such as the Z-score, does it have any advantages? - The authors mention the t-statistic and note that it is two-sided; how is this different from the two-sided t-statistic threshold used for significance testing?
Use t-statistics to measure between group differences
CC BY-SA 4.0
null
2023-05-06T06:14:00.297
2023-05-06T06:32:13.747
2023-05-06T06:32:13.747
56940
387339
[ "hypothesis-testing", "correlation", "statistical-significance", "standard-deviation", "group-differences" ]
615065
1
615080
null
0
32
In the code below I test several parametric distributions against the `lung` dataset from the `survival` package for best fit. My main question with respect to the below code is my use of `fitdist(lung$time, dist)`. Is this the correct usage of the `fitdist()` function from the `fitdistrplus` package, where the only data input into the model is the "time" column? During my research as I drafted this code I came across examples where only the "time" values are input into the `fitdist()` function, and there are further notes that "...the `fitdistrplus::fitdist()` function can handle censored data without explicitly specifying the censoring status. The function assumes that any value greater than the largest observed event time is right-censored..." which doesn't make sense to me. The `lung` dataset has a "status" column where 1=censored, 2=dead. Why wouldn't this extra information be used? I reviewed the package documentation for the `fitdist()` function and the example dataset used is `groundbeef$serving`, which doesn't have the time and status elements of the `lung` dataset. Alternatively, if `fitdist()` ignores censoring status, should I instead be using another parametric distribution fitting function from another package? Code:

```
library(fitdistrplus)
library(survival)

# Create vector of parametric distributions to test
distList <- c("weibull", "exp", "gamma", "lnorm")

# Function fits each distribution and extracts AIC, BIC, log-likelihood values
fit_dist <- function(dist) {
  tmp <- fitdist(lung$time, dist)
  c(tmp$aic, tmp$bic, tmp$loglik)
}

# Apply above function to each distribution in the distList
results_list <- lapply(distList, fit_dist)

# Convert the above list of results to a data frame
results_df <- data.frame(t(matrix(unlist(results_list), nrow = 3)))
colnames(results_df) <- c("aic", "bic", "logLik")
rownames(results_df) <- distList

# Find the distribution with the lowest AIC/BIC and highest log-likelihood
bestFitAIC <- rownames(results_df)[which.min(results_df$aic)]
bestFitBIC <- rownames(results_df)[which.min(results_df$bic)]
bestFitLogLik <- rownames(results_df)[which.max(results_df$logLik)]

# Print the results data frame and best fitting distributions
results_df
cat("\nBest fitting distribution using AIC:", bestFitAIC, "\n")
cat("\nBest fitting distribution using BIC:", bestFitBIC, "\n")
cat("\nBest fitting distribution using Log-Likelihood:", bestFitLogLik, "\n")
```
How to correctly select parametric distribution that best fits survival data?
CC BY-SA 4.0
null
2023-05-06T07:40:04.717
2023-05-07T07:19:49.657
null
null
378347
[ "r", "distributions", "survival", "goodness-of-fit" ]
615066
2
null
615045
1
null
You can certainly try to use these models or their pairwise combinations. The only pairwise combination that would not make sense is GARCH + HAR-RV, as both model the conditional variance. Having two models for the same underlying quantity would indeed be contradictory. On the other hand, VECM + constant* variance, VECM + GARCH, VECM + HAR-RV, constant* mean + GARCH and constant* mean + HAR-RV would not be contradictory. Once you formulate and estimate one or more of these models, you may benefit from assessing their statistical adequacy and their forecasting performance. They may turn out to be poor models of the underlying time series; there is no ex ante guarantee they would be good. *It could be constant or some other model.
null
CC BY-SA 4.0
null
2023-05-06T07:56:55.177
2023-05-06T07:56:55.177
null
null
53690
null
615067
2
null
585489
2
null
I'm not sure if that's legit, but you can still use GEE. If I were you I would define an autoregressive working dependence structure (use the PACF and confirm which lags, if any, are significantly different from zero), set the covariance type to robust, and treat all observations as one cluster. Here's Python code for that:

```
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Define the GEE model formula
formula = 'target ~ X1 + X2 + X3'

# Define the correlation structure - note that this statsmodels class allows only for a
# first-order autoregressive working dependence structure. Grid will become the default
# argument in a future release, so to suppress warnings I set this argument to True.
cov_struct = sm.cov_struct.Autoregressive(grid=True)

# Define the GEE model with binary target variable and logistic link function
model = smf.gee(
    formula=formula,
    data=df,
    groups=np.ones(len(df)),
    family=sm.families.Binomial(),
    cov_struct=cov_struct,
)

# Fit the GEE model
result = model.fit(
    maxiter=100,
    cov_type='robust',
)

# Print the summary of the model
print(result.summary())
```
null
CC BY-SA 4.0
null
2023-05-06T08:15:03.300
2023-05-06T09:43:22.550
2023-05-06T09:43:22.550
364710
364710
null
615068
1
615089
null
5
167
I have been trying to define a Gaussian Process by hand by simply writing down the required formulas and applying them to a toy example. Here is what my (noise-free) observed data looks like:

```
import numpy as np
import matplotlib.pyplot as plt

x_observed = np.arange(-5, 5, 0.2).reshape(-1, 1)
y_observed = np.sin(x_observed)

fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(x_observed, y_observed)
```

[](https://i.stack.imgur.com/AJsKim.png)

Moreover, I used the exponential quadratic kernel function:

```
def kernel_func(X1, X2, l=1.0, sigma=1.0):
    sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
    return sigma**2 * np.exp(-0.5 / l**2 * sqdist)
```

and defined the Gaussian process:

```
def gp(x_observed, y_observed, x_new):
    k = kernel_func(x_observed, x_observed)
    K_star_star = kernel_func(x_new, x_new)
    K_star = kernel_func(x_observed, x_new)

    K_inv = np.linalg.inv(k)

    mu = K_star.T @ K_inv @ y_observed
    sigma = K_star_star - K_star.T @ K_inv @ K_star

    return mu, sigma
```

Creating new data, which I want a prediction for:

```
X_new = np.array([-3.5, -3, -2, -1, 1]).reshape(-1, 1)
```

I can get the mean value for each $x_i$ in X_new by calling the Gaussian process function:

```
mu_s, cov_s = gp(x_observed, y_observed, X_new)
```

Now, I did expect that the point prediction, that is the mean, for some of the $x_i$ in `X_new` would be equal to some values in `y_observed`, since we already know the target value, i.e. the values were already observed. However, looking at the observed y values and `mu_s`:

```
for i, x in enumerate(X_new):
    print(f'target function {np.sin(x)}, prediction {mu_s[i]}')

target function [0.35078323], prediction [-6.27205575]
target function [-0.14112001], prediction [-1.03358472]
target function [-0.90929743], prediction [2.2573257]
target function [-0.84147098], prediction [3.20067827]
target function [0.84147098], prediction [-7.19285408]
```

I either have a misunderstanding of Gaussian Processes or an error in my code, but I could not find any bugs or similar issues.
Error in Gaussian Process Implementation
CC BY-SA 4.0
null
2023-05-06T08:33:42.317
2023-05-08T10:35:25.840
2023-05-06T10:32:48.230
198044
198044
[ "python", "gaussian-process" ]
615069
2
null
505801
2
null
I often see that people spend too little time planning the experiment and too much time evaluating a corrupted dataset. Therefore, I get suspicious and ask: why have you chosen this percentile? Have you thought about the frequency of corrupted data points, and their origin, before evaluating the dataset? Do the values 1% and 99% just enhance the "argument" you are trying to make, or are you being conservative? You should ask yourself these questions and test whether the answers are satisfying. To the question: state what you are using to evaluate the data. Do not say that you are evaluating the min and max values if you are in fact using the 1% and 99% percentiles. It's also good practice to run different evaluations using different values and test that the result is robust against the subjective choice (1%, 99%). Other than this, I do not take issue with the analysis. Here is sample R code.

```
## generate fake data
nDays = 200   # we take data for 200 days
nData = 24*2  # we take one data point every 30min => 48 data points per day
data = rnorm(nDays*nData)              # fake data
day = factor(rep(1:nDays, each=nData))

# store the data in a data frame:
df = data.frame(data, day)

## calculate quantiles for each day:
library(dplyr)
q01 = df %>% group_by( day ) %>% reframe(q = quantile(data, c(1e-2)))
q99 = df %>% group_by( day ) %>% reframe(q = quantile(data, c(99e-2)))

## plot them
dfq = data.frame(
  data = c(q01$q, q99$q),
  grp = factor(c(rep('1%', nDays), rep('99%', nDays)))
)
boxplot(data ~ grp, dfq)
```
null
CC BY-SA 4.0
null
2023-05-06T09:03:10.627
2023-05-06T09:24:18.727
2023-05-06T09:24:18.727
22047
163054
null
615070
1
null
null
1
21
I've created this regression table, but I'm unsure if this is the correct way to present my results. The dependent variable is the proportion of each frame, and the independent variable is time (from 2012-2022), with the outcomes listed for all 11 frames analysed. These outcomes were all derived from 11 separate simple linear regressions. I have R-squared statistics for each of the 11 regressions, but I'm unsure how to calculate the R-squared figure for this table, whether each result needs its own R-squared figure given that they are all derived from separate regressions, or whether this is the wrong table for presenting these results. Thanks! [](https://i.stack.imgur.com/KKT4h.png)
Presenting separate linear regression models in one regression table?
CC BY-SA 4.0
null
2023-05-06T09:20:47.613
2023-05-06T09:20:47.613
null
null
386624
[ "r", "regression", "linear-model", "r-squared" ]
615071
1
null
null
1
30
I have a time series with 322 observations. My dataset contains financial data. My endogenous variable, "target", is a binary variable. My exogenous variables are two continuous variables, "market_return" and "retail_reduction_rate", and one binary variable, "target_lag1", which is the lagged dependent variable. The reason for that is that I found out that my dependent variable is autocorrelated up to the first 7 lags using the Ljung-Box test. Here's the output from my function for performing the Ljung-Box test: [](https://i.stack.imgur.com/nhEPx.png) First I created a simple binary logistic regression model. Here's the output: [](https://i.stack.imgur.com/WcRD3.png) Surprisingly, the deviance residuals aren't correlated at all. Here's the output from my function for performing the Ljung-Box test: [](https://i.stack.imgur.com/0yW8N.png) I'm not sure if that is evidence that the observations are independent (this is one of the critical assumptions in logistic regression). I know that GEE models address serial correlation in the data, so I tried using one. I defined a first-order autoregressive working dependence structure (I used the PACF and confirmed that only the first lag is significantly different from zero), set the covariance type to robust, and treated all observations as one cluster. Here's the output: [](https://i.stack.imgur.com/NGz0z.png) It's very different from the logistic regression results. The standard errors are much smaller! Bonus: Here's the output for the 'naive' covariance type: [](https://i.stack.imgur.com/8SHo7.png) Which approach is correct?
Binary logistic regression vs generalized estimating equation (GEE) for time series
CC BY-SA 4.0
null
2023-05-06T09:39:50.193
2023-05-06T11:54:04.440
2023-05-06T11:54:04.440
364710
364710
[ "time-series", "logistic", "autocorrelation", "generalized-estimating-equations" ]
615073
2
null
612843
1
null
- Pooling data is only allowed if you can reasonably make the assumption of equal distributions. For instance, not only should the null hypothesis of equal medians be correct, but other distribution parameters, like the variance, should also be the same. By pooling the groups you will get a more precise estimate of the distribution of the statistic, because you are using a more precise estimate of the empirical distribution of the data (an estimate that improves when we have more datapoints).
- Approach 2, without pooling the data, also works if the two groups have different distributions. With this method you do have to think about the interpretation of the distribution.

Example with two beta distributions shifted such that their medians are 0: I have chosen the parameters to create a difficult situation on purpose. Here the sampling distribution of the experiment has some skewness and the right tail is stretched out further than the left tail. I also chose a random seed such that the outcome is far in the left tail. This situation shows that the bootstrap does mimic the skewness of the distribution, but as a hypothesis test one should consider shifting the bootstrapped distribution to be centered around zero, instead of around the observed median. The probability that the bootstrapped sample has median zero or larger is different from the probability that the sampling distribution has the observed value or smaller. Example code:

```
set.seed(2)
n = 31

### create some data from distributions with zero median
alpha=0.25
beta=2
x = rbeta(n,alpha,beta)-qbeta(0.5,alpha,beta)
y = rbeta(n,beta,alpha)-qbeta(0.5,beta,alpha)

### order the datapoints
x = x[order(x)]
y = y[order(y)]

### bootstrapping based probability distribution of sampled medians
k = 1:n
m = (n-1)/2
p = (1/n)*(k/n)^m*((n-k)/n)^m*factorial(n)/factorial(m)^2

### create tables for convolution
mS = outer(x,y,"-")   # domain
mP = outer(p,p,"*")   # probabilities

### compute an estimate for density of median(x)-median(y)
f = density(mS, weights=mP, n=2/0.005, bw = 0.005, kernel = "rectangular",
            from = -1, to = 1)
brks = seq(-1,1,0.005)

#### creating sampling distribution estimates
#### based on repeating the experiment
experiment = function() {
  x = rbeta(n,alpha,beta)-qbeta(0.5,alpha,beta)
  y = rbeta(n,beta,alpha)-qbeta(0.5,beta,alpha)
  return(median(x)-median(y))
}
m_sample = replicate(10^5, experiment())

### plot histogram
hist(m_sample, breaks = brks, xlim = c(-0.1,0.25), freq = 0,
     main = "estimate for density of median(x)-median(y) \n density curve based on bootstrap \n histogram based on re-sampling true distribution",
     ylim = c(0,25))
lines(f)

### plotting other stuff
lines(c(1,1)*(median(x)-median(y)),c(0,25),lty=2,col =2)
text((median(x)-median(y)),15,"observed value",col =2,srt=90,pos =4)
```
null
CC BY-SA 4.0
null
2023-05-06T10:11:42.033
2023-05-06T10:11:42.033
null
null
164061
null
615074
1
null
null
5
125
From the [slides of variational inference](https://www.cs.princeton.edu/courses/archive/fall11/cos597C/lectures/variational-inference-i.pdf), it shows the evidence lower bound ($L$) and the derivative over a variational distribution $q(z_k)$, quoted as follows $$ L_k = \int q(z_k) E_{-k} \bigg[ \log p(z_k|z_{-k},x) \bigg] dz_k - \int q(z_k) \log q(z_k) dz_k \tag{22} $$ equation (22) focuses on the $k$th latent variable $z_k$, and assumes other latent variables $z_{-k}$ as constants, so that from (22), the derivative of $z_k$ can be derived $$ \frac{dL}{dq(z_k)} = E_{-k} \bigg[ \log p(z_k|z_{-k},x) \bigg] - \log q(z_k) - 1 = 0 \tag{23} $$ and then by a simple reformulation, we can now have the coordinate ascent update rule for $q(z_k)$ $$ q^*(z_k) \propto \exp \bigg\{ E_{-k} \bigg[ \log p(z_k|z_{-k},x) \bigg] \bigg\} \tag{24} $$ Question, what I don't understand is how (23) can be like that? Because according to my understanding of (22), the integral is over $z_k$ which is $\int_{z_k}$, not over $q(z_k)$ which is $\int_{q(z_k)}$. If so, instead of deriving $\frac{dL}{dz_k}$, how to derive $\frac{dL}{dq(z_k)}$?
derivation of coordinate ascent variational inference
CC BY-SA 4.0
null
2023-05-06T10:22:44.143
2023-06-02T15:46:32.807
null
null
30540
[ "calculus", "variational-inference" ]
615075
2
null
611723
1
null
If I understand the proposed algorithm correctly, we can prove that this doesn't generally sample from the target distribution by way of a counterexample. And while this algorithm will be a bit pathological, I believe it does do a good job of illustrating what can easily go wrong with this method. Consider the case that the target distribution is a N(0, 1) distribution. For $Q_1$, we select a N(0, 0.5) distribution. For $Q_2$, we select a N(0, 0.001) if $x_1$ was positive and a N(0, 0.5) if $x_1$ was negative. In the target distribution, 50% of the values should be positive. However, in our algorithm, if the current state becomes positive, it gets stuck in the positive region for a very long time due to the small step size. If the current state is negative, it moves quickly due to the large step size. Thus, the samples from our algorithm will be disproportionately positive, even after sufficient burn-in. Therefore, we never approach the target distribution.
null
CC BY-SA 4.0
null
2023-05-06T11:01:30.867
2023-05-06T11:01:30.867
null
null
76981
null
615078
1
null
null
0
27
I am trying to create an ARIMA model which allows me to extrapolate a voltage curve of a sensor battery into the future. Is this a possible application scenario at all? I have four different voltage waveforms from two different sensors (hourly measured values) that I want to use to train the model (I have inflated each waveform to 1000 data points so that all waveforms contain the same number of data points): [](https://i.stack.imgur.com/p5lZ2.png) Augmented Dickey-Fuller test over the training set:

```
1. ADF : -2.3325211837089252
2. P-Value : 0.16169362186896774
3. Num Of Lags : 0
4. Num Of Observations Used For ADF Regression and Critical Values Calculation : 3999
5. Critical Values :
   1% : -3.431986284700283
   5% : -2.862263021072379
   10% : -2.5671548718243575
```

A p-value of 0.16 => the data set is not stationary.

```
decomposition = seasonal_decompose(df_train['rm_data_BatV'])
decomposition.plot()
```

[](https://i.stack.imgur.com/W2sTE.png) How should we assess the trend here? From my point of view, the individual curves show a downward trend, but the complete data set over time does not show a trend? The seasonal component between 0.001 and -0.001 is negligible? Or would the plot have to be set with period=1000, since each voltage curve includes 1000 data points?

```
decomposition = seasonal_decompose(df_train['rm_data_BatV'], period=1000)
decomposition.plot()
```

[](https://i.stack.imgur.com/kTmsV.png) Does the data set now have a trend component and a seasonal component? Can these be eliminated?

```
plot_acf(df_train['rm_data_BatV'])
plot_pacf(df_train['rm_data_BatV'])
```

[](https://i.stack.imgur.com/8SHo7.png) Is it because of this high autocorrelation that the ARIMA model does not work?

```
pmas = pma.auto_arima(df_train['rm_data_BatV'])
pmas.summary()
```

AutoARIMA gives me (0,1,0).

```
sarimax = SARIMAX(df_train['rm_data_BatV'], order=(0, 1, 0))
result = sarimax.fit()
result.summary()
predict_train = result.predict(start=df_train.index[0])
```

Result on the training data set: [](https://i.stack.imgur.com/7eCan.png)

```
model = SARIMAX(df_test[5:]['rm_data_BatV'], order=(0, 1, 0))
result_new = model.filter(result.params)
predict_test = result_new.predict(start=df_test.index[6])
predict_test_future = result_new.forecast(steps=24 * 7)
```

Result on the test data: [](https://i.stack.imgur.com/WddXK.png) I hope someone can tell me where the problem is and why it is not working. Is it the data; are they prepared incorrectly? Is the ARIMA model wrong here? Is ARIMA not meant for this kind of use case, and why? Can the ARIMA model only be trained on, and map, single curves? I am grateful for any feedback.
Does my ARIMA Model Fail because of the non stationary data?
CC BY-SA 4.0
null
2023-05-06T11:50:54.530
2023-05-06T11:50:54.530
null
null
379800
[ "time-series", "forecasting", "arima", "seasonality", "trend" ]
615079
2
null
614990
0
null
To calculate a (good) sample size you need the following inputs:
- RQL
- $\alpha$ risk of the test at the given RQL
- AQL
- power of the test at the given AQL

If you have these as inputs, you can calculate the optimal sample size for testing \begin{align} H_0: \hspace{1em} \mu &\ge \mu_{RQL}\\ H_1: \hspace{1em} \mu &< \mu_{RQL} \end{align} with the given $\alpha$ risk and power. In this context the reliability is given by $R=1-RQL$, and the confidence is given by $\gamma=1-\alpha$. Thus, there is no simple/direct relationship between reliability and confidence -- this is why statements such as "we are 95% confident that the reliability is at least 99%" are meaningful. If these two terms had a simple/direct relationship, such a statement would just repeat its own information. On the one hand, we are testing against the $RQL$ value. Nevertheless, it's important to choose the AQL level correctly, because this affects the number of false negatives. E.g. suppose the customer sets $RQL=1\%$, and the producer is overconfident and states that $AQL = 0.01\%$. Now, if we calculate a sampling plan using $\alpha=5\%$ and $power= 99\%$, we obtain a sample size of $N=299$. However, if the true quality level is only $AQL=0.2\%$, we are going to reject not only $5\%$ of the LOTs, but many more -- much to the dislike of the producer and the customer. This is why the operating characteristic curve is important: [](https://i.stack.imgur.com/nIKz6.png) In the example the OC curve shows that we are going to reject approx. 50% of the LOTs if $AQL=0.2\%$ is the true quality value. The proper sampling plan for $AQL=0.2\%$ uses $N=773$ samples and rejects the LOT only if we obtain four or more failures. An alternative to the hypothesis test described above is the use of tolerance intervals according to Wilks' method -- which is often done in the medical industry.
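A small sketch of the OC-curve arithmetic behind these numbers (my illustration, assuming attribute pass/fail plans with acceptance numbers $c=0$ and $c=3$; the answer does not state $c$ for the first plan explicitly):

```
# Probability of accepting a lot: at most c failures in N sampled units
p_accept <- function(p, N, c) pbinom(c, N, p)

# Plan sized against RQL = 1% (assumed c = 0):
p_accept(0.01,  N = 299, c = 0)   # ~0.05 -> ~5% acceptance probability at the RQL
p_accept(0.002, N = 299, c = 0)   # ~0.55 -> roughly half the lots rejected at AQL = 0.2%

# Larger plan quoted for AQL = 0.2% (reject on four or more failures, i.e. c = 3):
p_accept(0.01,  N = 773, c = 3)   # ~0.05 at the RQL
p_accept(0.002, N = 773, c = 3)   # ~0.93 -> far fewer false rejections at the AQL
```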
null
CC BY-SA 4.0
null
2023-05-06T13:05:47.640
2023-05-06T13:58:25.757
2023-05-06T13:58:25.757
163054
163054
null
615080
2
null
615065
2
null
Fitting a distribution to censored data is somewhat different from fitting to uncensored data. The likelihood calculation with a censored observation involves the survival function of the distribution, not the probability of the observed value. See [this page](https://stats.stackexchange.com/q/145164/28500) for an introduction to the likelihood contributions. In the package to which you refer, uncensored data are handled by the `fitdist()` function while censored data require the `fitdistcens()` function. Although those functions might help with simple data sets, as I understand it they won't help much with finding the best underlying distribution for a regression model involving covariates and censored outcomes. For a wide class of such models you can evaluate whether the distribution of (censored) residuals matches the associated error distribution. [These course notes](https://grodri.github.io/survival/ParametricSurvival.pdf) show error distributions associated with several types of parametric survival models. [This page](https://stats.stackexchange.com/a/505453/28500) provides a brief introduction to calculating residuals in parametric models. Chapters 18 and 19 of Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/parsurv.html) discuss parametric modeling of survival data in detail, with an extensively worked-through example.
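For the specific `lung` example in the question, a minimal sketch of the censored-data route (written under the stated `status` coding, 1 = censored and 2 = dead; check the `fitdistrplus` documentation for the exact input format):

```
library(fitdistrplus)
library(survival)

# fitdistcens() expects a data frame with 'left' and 'right' columns;
# for a right-censored observation the 'right' entry is NA
cens_df <- data.frame(
  left  = lung$time,
  right = ifelse(lung$status == 2, lung$time, NA)
)

fit_w <- fitdistcens(cens_df, "weibull")
summary(fit_w)   # the log-likelihood (and AIC/BIC) now reflect the censored likelihood
```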
null
CC BY-SA 4.0
null
2023-05-06T13:06:54.587
2023-05-06T13:06:54.587
null
null
28500
null
615081
1
null
null
3
28
I'm working with patient-level healthcare data in a pre vs. post difference-in-differences analysis where the goal is to estimate the average treatment effect for nursing care. Patients in the treatment group received home visits from a nurse while patients in the control group did not receive visits. Given that: 1) nurses often treated multiple patients, 2) outcomes of interest are correlated within nurse ID, and 3) coding fixed effects for each Nurse ID would overfit, a random intercept linked to the Nurse ID for each patient seems appropriate. One problem: How should a random effect be coded for patients in the control group, who received no nursing care? Should all control patients receive the same proxy "nurse ID"?
Counterfactual modeling where variable is present in treatment but not control group?
CC BY-SA 4.0
null
2023-05-06T13:39:31.550
2023-05-06T13:39:31.550
null
null
13634
[ "mixed-model", "panel-data", "missing-data", "difference-in-difference", "case-control-study" ]
615082
2
null
614988
1
null
As @whuber notes in a comment on another answer, there is no reason to restrict hazard functions to distributions having only non-negative support. Although a [Wikipedia page](https://en.wikipedia.org/wiki/Survival_function) suggests that restriction, a survival function $S(x)$ can be taken in general to be the complement of a corresponding cumulative distribution $F(x)$, with $S(x)=1-F(x)$. For any value of $x$ for which $f(x)$ is defined and $S(x)$ isn't 0, there is a defined value of the hazard $h(x)=f(x)/S(x)$. A survival function defined over the entire real line can be useful in evaluating a parametric survival model fit, as explained for example in Chapters 18 and 19 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/parsurv.html). A Kaplan-Meier survival plot of censored, standardized residuals over the real line can help evaluate whether the distribution of residuals matches that expected for a particular choice of parametric family (e.g., standard minimum extreme value, defined over the entire real line, for a Weibull model). Furthermore, survival analysis with left-censored survival times can be thought of as allowing for negative survival times. In R, a left-censored observation at time $x$ can be coded as `Surv(-Inf, x, event)`. Survival functions defined over the entire real line thus can be useful. The question, however, is how useful a hazard function is for distributions with support over the entire real line. The hazard function is perhaps most useful as describing the probability of an event at time $x$ given that there hasn't yet been an event. In practice for analysis of event times, it's useful to define a time origin such that $f(x)=0$ (and thus $F(x)=0$) for $x \le 0$. In that case, $h(x)=0$ for $x \le 0$. I suppose there might be circumstances in which a different time origin might make sense, but it's hard to think of one.
null
CC BY-SA 4.0
null
2023-05-06T13:59:44.487
2023-05-06T13:59:44.487
null
null
28500
null
615083
1
null
null
0
28
#### EDITED

I spotted an error in my code, and the results are no longer contradictory. But I keep the question for the benefit of others and myself in understanding meta-analysis and correctly interpreting the null hypothesis and the results. Please see below:

---

Also looking at:
- Why is my combined p-value, obtained using the Fisher's method, so low?
- P-Values in Wilcoxon test

##### Goal

I would like to test if samples of sequences are drawn from the same population. As a controlled experiment, I am pooling samples from the same population. I am getting contradictory results.

###### Data type

I am dealing with this data: I have sequences of animal sounds; the sequences express how frequent an utterance is in a repertoire (e.g. think of encoding a phrase with how often its syllables are known to occur).
- The population is sequences of an animal's vocalizations, for observed contexts.
- The null hypothesis is: the samples come from a similar distribution.
- In the controlled experiment, the samples actually (or I believe so, based on the annotated dataset I am using) come from the same distribution, i.e. samples are pooled from the population of vocalizations belonging to the very same context.

When I apply the Wilcoxon rank-sum test on pairs of sequences pooled from the controlled experiment, I actually find a majority of p-values > 0.05. I run a simulation and repeat the test 1000 times. I apply Fisher's method to the group of 1000 p-values.

###### Interpreting results of trials vs. simulation: p-values of trials vs. combined p-value from Fisher's method

I interpret the result of the majority of trials as: "there are very few trials where extreme results of significant difference are found, and the samples likely do not come from different distributions". But then I apply Fisher's method, and find a `combined p-value` of almost 0. Shall I interpret that as the samples coming from significantly different distributions? That is contradictory. If correct, it is a result I fail to understand, since the control group is made of samples belonging to the very same type of vocalisations. I noticed that the only situation in which I get a Fisher p-value > 0.05 is when I run the Wilcoxon test on the whole population against itself (I mean, the flattened array of all sequences of a vocalisation type, against itself). In this case, the Fisher p-value is equal to the Wilcoxon p-value, 1., for all the trials (i.e. it is a uniform distribution).

##### Dataset

The dataset consists of a series of sequences; the sequences' items represent how often a syllable is observed in a repertoire. Example: `[170, 69]` is a sequence of two syllables, the first has been used 170 times, the second 69 times.

```
149     [170, 69]
254     [37]
255     [108]
256     [81, 46]
257     [20, 139, 69, 104, 43, 30, 46, 83, 71, 68, 92,...
...
4671    [116, 7, 31, 26, 97, 73]
4672    [71, 83]
4673    [79]
5859    [68, 68, 65, 33, 68, 33, 65, 43, 87, 61, 1]
5931    [66]
```

As a controlled experiment, I would like to test if sequences are drawn from the same distribution, as should be the case because all sequences are known to be associated with a given context, based on empirical observations. All those vocalizations are from the same animal, and expressed for a given context. I am using the Wilcoxon rank-sum test because:
- sequences are uttered in independent events
- the distribution of syllable frequencies is not normal

##### Results

Sampling 2 random pairs of sequences for 1000 trials, I got a Wilcoxon p-value list.
Only 18 trials out of 1000 have a p-value < 0.05:

```
len(np.argwhere(np.array(corr_wilcoxon_matrix_list).flatten() < 0.05))
#18

# first 10 values:
# array([0.14891467, 0.43857803, 0.35453948, 0.31731051, 0.0808556 ,
#        0.35453948, 0.31731051, 0.3358309 , 0.31731051, 0.24821308])
```

I compute Fisher's method on the p-value list: the result is `0.0810574489885437`. I noticed that the higher the number of trials, the more the combined p-value tends to get lower and lower and to fluctuate: it can be lower than 0.05 too. Example:
- for 2 trials, with 1 random sample for each distribution: rank_sum(sample_from_context, sample_from_context), I got a combined p-value of 1.0
- for 10 trials, combined p-value: 0.98
- for 100 trials, combined p-value: 0.02
- for 1000 trials, combined p-value: 0.36
- repeating 1000 trials, combined p-value: 0.08

I thought a possible explanation could be that the sizes of the sequences are different, and the longer the sequence, the more variability it has, so the distribution of syllables may actually be different. But I would like to have the point of view of people who are versed in statistics to interpret the meaning of this behaviour.

---

Basically, Fisher's value will always be < 0.05, except when comparing the whole flattened array of historical data against itself.
- Can you help me understand: is the null hypothesis in the meta-evaluation (Fisher) the same as the null hypothesis of the trials (Wilcoxon), or is it something else? I find it difficult to understand what a distribution of p-values is, and what the null hypothesis of Fisher's method represents.
- Why can't I use the proportion of p-values in independent trials as an argument for statistical confidence? E.g. "Since 982 trials had a p-value > 0.05, and only 18 gave significant evidence that the sequences were not from the same distribution, there is a 982/1000 probability of not meeting any extreme value, and therefore the simulation showed that the two samples come from the same distribution with > 95% confidence."
- Why is the Fisher value so low (0.08), if just 18 values are < 0.05?
- Why, if I repeat the experiment with a higher number of trials, does the combined p-value fluctuate and get smaller? Should it not converge, by the central limit theorem? (I am tossing n independent simulations over and over.)
- Should I conclude there is a flaw in the design of the experiment?
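For reference, a minimal sketch of how the combined p-value in Fisher's method is usually computed (an R sketch added here for illustration; the post itself uses Python):

```
# Fisher's method: -2 * sum(log(p)) is chi-squared with 2k degrees of freedom under H0
fisher_combine <- function(p) {
  pchisq(-2 * sum(log(p)), df = 2 * length(p), lower.tail = FALSE)
}

fisher_combine(runif(1000))      # with Uniform(0,1) p-values, the combined p is itself uniform across repetitions
fisher_combine(rep(0.25, 1000))  # many moderately small p-values drive the combined p towards 0
```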
Combined p-value from wilcoxon test result on control experiment - interpreting the null-hypothesis correctly
CC BY-SA 4.0
null
2023-05-06T14:24:28.280
2023-05-06T22:17:06.657
2023-05-06T22:17:06.657
107116
107116
[ "hypothesis-testing", "p-value", "experiment-design", "wilcoxon-mann-whitney-test", "combining-p-values" ]
615084
2
null
614972
3
null
Thinking in terms of percentage-point differences can lead to confusion in logistic regression. The coefficients are in units of log-odds differences associated with the predictor values. Those aren't linearly related to percentage values. The negative intercept in your model is related to the low overall probability of the `used` outcome in your data. If your data adequately represent the underlying population of interest, that's what you should focus on. If there were simply an under-representation of cases with `used` outcomes and the regression coefficients still represented the correct differences in log-odds associated with changing predictor values, then you could estimate corresponding percentage-point differences. With your current intercept of `-3.259`, the percentage estimates with all continuous predictors at 0 for the two levels of the binary `exp` predictor are about 3.7% and 1.1%; if the data had been better "balanced" so that intercept was `0` and the other coefficients were the same, those percentages would be about 50% and 22%, respectively. ``` plogis(-3.259) # [1] 0.03700483 plogis(-3.259-1.255) # [1] 0.01083585 plogis(0) # [1] 0.5 plogis(0-1.255) # [1] 0.2218358 ``` That said, it seems unlikely that a "better balanced" data set would only change the intercept of the model without affecting the other coefficient estimates.
null
CC BY-SA 4.0
null
2023-05-06T14:28:06.100
2023-05-06T14:28:06.100
null
null
28500
null
615085
2
null
615027
1
null
With a Cox model you can readily get point estimates and confidence intervals of (log-)hazard ratios associated with different combinations of predictor values. In your model with the interaction (which is preferable to the separate models), the value of the interaction coefficient `Inflammation:Frailty` is specifically the extra log-hazard associated with `Inflammation` when there is also `Frailty` (and vice-versa). That log-hazard (or the hazard ratio obtained by exponentiating it) is typically of most interest and is unaffected by the values that you choose for other predictors. For illustration you can choose any sets of predictor values that make sense in your population of interest. Yes, the absolute estimates of survival will change depending on your choices. The hazard ratios associated with `Inflammation`, `Frailty`, and their interaction won't change.
null
CC BY-SA 4.0
null
2023-05-06T14:37:13.737
2023-05-06T14:37:13.737
null
null
28500
null
615087
1
null
null
0
12
I read generic claims online and in papers that the Linear Discriminant Analysis (LDA) classifier is widely used for many applications including: face recognition, speech recognition, medical diagnosis, etc... Many research papers also propagate these claims with no mention of actual systems. But this suggests that these are just potential applications, while I need concrete examples. Does anybody know the names of actual systems which are based on LDA?
Names of actual software which use the LDA classifier
CC BY-SA 4.0
null
2023-05-06T15:23:11.703
2023-05-06T15:23:11.703
null
null
99491
[ "classification", "predictive-models", "software", "application" ]
615088
1
null
null
0
10
For the Rao Score Test, we usually reject for large values of the score when we are testing $H_0: \theta = \theta_0$ vs $H_1: \theta > \theta_0.$ I am confused why we reject for large values, instead of for small (negative) values of the score. There are two ways of looking at this, and I am getting confused about how to reconcile the two and whether there are some logical flaws in my argument. Suppose that $$\ell_\theta(X) = \frac{\partial}{\partial \theta} \text{log}(p_\theta(X))$$ is the derivative of the log-likelihood (the score contribution) evaluated at some data point $X$. So the score statistic is given by $$\sum_{i=1}^n \ell_\theta(X_i).$$ Now, the point is that the expectation of $$\frac{\partial}{\partial \theta} \ell_\theta(X)$$ is the negative of the Fisher information (which is positive), and hence the expectation of $\frac{\partial}{\partial \theta} \ell_\theta(X)$ should be negative. This implies that the score statistic is (in expectation) decreasing in $\theta.$ Therefore, it seems that we want to reject the null when the score is small, not large. This is because if the score is small, we expect that $\theta$ is large, since the score statistic is decreasing in $\theta$, and so it's more likely that the alternative is true. One could also look at it like this. We know that the likelihood ratio test, i.e. rejecting when the statistic $\frac{L_n(\theta_0+h)}{L_n(\theta_0)} > C$ for some constant $C$, is optimal for simple alternatives. Taking the logarithm of both sides, we reject when $$\text{log} \left(L_n(\theta_0+h) \right) - \text{log} \left( L_n(\theta_0) \right) > \text{log}(C).$$ Taylor expanding, then roughly speaking we reject exactly when $$\frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial \theta} \text{log}( p_{\theta_0}(X_i) ) > \text{log}(C)$$ (plus some error terms). This way of thinking about it seems to give the right idea. But then what is wrong with the previous logic, when considering that the score is a decreasing function of $\theta$?
Reconciling the Rejection Region for the Rao Score Test
CC BY-SA 4.0
null
2023-05-06T15:23:40.913
2023-05-06T15:23:40.913
null
null
348052
[ "hypothesis-testing" ]
615089
2
null
615068
7
null
The problem is the ill conditioning of your kernel matrix. These are the singular values of your kernel matrix k: [](https://i.stack.imgur.com/w3GHS.png) As you can see, many of them are numerically zero. This leads to nonsense when you compute np.linalg.inv. You have two options. The most common is to simply add a scaled identity matrix to your kernel matrix with some small scaling value, like $10^{-6}$: ``` K_inv = np.linalg.inv(k + 1e-6 * np.eye(k.shape[0])) ``` This leads to correct predictions: [](https://i.stack.imgur.com/v7nVG.png) Another option is to notice that each time you explicitly compute a matrix inverse, Householder rolls around in his grave. We can allow him to rest easier by avoiding explicit inverse computation, and instead computing linear system solves. Like so: ``` def gp(x_observed, y_observed, x_new): k = kernel_func(x_observed, x_observed) K_star_star = kernel_func(x_new, x_new) K_star = kernel_func(x_observed, x_new) mu = K_star.T @ np.linalg.solve(k, y_observed) sigma = K_star_star - K_star.T @ np.linalg.solve(k, K_star) return mu, sigma ``` This also leads to good predictions: [](https://i.stack.imgur.com/c7Nrd.png)
null
CC BY-SA 4.0
null
2023-05-06T15:25:42.543
2023-05-06T15:25:42.543
null
null
82893
null
615090
1
null
null
2
71
In a linear regression with multiple variables, $$ y = X\beta + \epsilon, $$ where $X$ and $y$ are the independent and dependent variables, we can estimate the OLS coefficients $\beta$ as follows: $$ \hat \beta = (X^T X)^{-1} X^T y. $$ I wonder if it is possible to design a neural network such that, given $X$ and $y$ (with a variable number of rows) as input, it will output $\hat \beta$.
Linear Regression with Neural Net with betas as output
CC BY-SA 4.0
null
2023-05-06T15:51:01.190
2023-05-06T17:07:01.420
2023-05-06T15:57:12.787
28942
28942
[ "neural-networks" ]
615091
2
null
613478
0
null
> One thing that bothers me is that I pool the data, but then use the original group size rather than setting the populations of each group to the sum of population_size_1 and population_size_2. - The bootstrapping is a way to simulate the effects of the error in the empirical distribution and what it does to a statistic derived from that empirical distribution, such as the sample mean or median. The effect of increasing the sample size $n^\prime>n$ for the bootstrapping procedure is that it reduces the variation in your bootstrapping samples. The reason is that your empirical distribution has a relatively smaller error when you are sampling from a finite distribution. (In the extreme case when $n=N$ you have zero error, and you are supposed to take an infinite sample, which has zero variation.) - The pooling has the effect of allowing an estimate of the empirical distribution that is more accurate. (However, this is only true if the two population distributions are the same.) --- These two do not interfere much. The bootstrapping simulates variations from sampling a distribution; ideally that is the real distribution, but you are using the empirical distribution instead. If you can obtain an improvement by using a pooled sample, then that is better. The adjustment of the sample size is made because the bootstrapping – whether it is based on the true distribution, an empirical distribution from pooled groups, or an empirical distribution from a single group – will overestimate the variation in the sampling distribution. This is because the real sampling occurs from a fixed population without replacement, while the bootstrap sampling occurs from a fixed population (that approximates the true population) with replacement. A reason to use a pooled sample is to improve the approximation of the true population, and it does not interfere with the idea of adjustments to the sample size. --- But note: for a test of equivalence of medians you might choose not to pool the data, since the distributions of the two groups can be considered different in other ways. Especially for small population sizes, one may wonder whether the distributions are the same and whether pooling improves the estimates of the true population. And when the sample sizes are close to the population sizes, the empirical distributions will already be good estimates of the true distributions, so pooling may not be much of an improvement.
null
CC BY-SA 4.0
null
2023-05-06T15:55:15.020
2023-05-06T16:12:07.637
2023-05-06T16:12:07.637
164061
164061
null
615092
1
null
null
0
13
I was thinking about how the best split is decided in a decision tree, and I wonder why we really need information gain (IG). Can't we directly use the entropy of the split and decide the best split based on that? Consider a node at which you are looking for the best split. Since the entropy of that parent node is the same regardless of which feature we split on, shouldn't the entropy of the split suffice? Are we calculating the IG only to know when to stop splitting the decision tree further?
Why do we really need information gain to decide the best split in decision tree?
CC BY-SA 4.0
null
2023-05-06T16:44:49.803
2023-05-06T16:44:49.803
null
null
292848
[ "classification", "cart" ]
615094
2
null
614813
2
null
The two comments were so insightful and instructive that I thought it was worth developing them further into an answer. This way the question has an answer more than just saying the answer key in the book is wrong. ## Sketching The suggestion from whuber was > Sketch the domain and notice it's very thin when $x<1/2$. Then notice that $6xy$ must be smaller in that thin region than elsewhere, because $x$ and $y$ are both small there. Consequently, $Pr(X<1/2)$ must be substantially less than $1/2$. On that basis alone your answer is plausible and that in the answer key is not. Using Python we can obtain the following figure. [](https://i.stack.imgur.com/G5Ja0.png) All the shaded area (both light and dark) is $R_{XY} = \{ (x,y) \in \mathbb{R}^2 | 0 \leq x \leq 1, 0 \leq y \leq \sqrt{x} \}$. The darker grey region is the integration region. As the comment suggested, we can observe here the integration region is small (less than half the total region). Not only this, but in this dark shaded region $x$ and $y$ are small - so $6xy$ will be smaller over this region than the other. The heatmap below indicates the value of $6xy$, with darker red for values closer to $6$ and paler white closer to $0$. [](https://i.stack.imgur.com/9Nbb8.png) Given that this is a narrow region and that this is a region where $6xy$ will be smallest, we should be expecting an answer $<0.5$ and so we can rule out the answer from the book which is larger than this, as a mistake. ## Monte Carlo Integration The suggestion from jbowman was > Generate, say, 1 million $U(0,1)$ variates for $X$, the same for $Y$. Discard all pairs for which $y>\sqrt{x}$. Calculate $f(x,y)$ for the remainder. Sum the probabilities for $f(x,y)$ when $x<0.5$ and for $(x,y)$ overall; divide, and you'll get a number very close to your $0.125$. So, first we generate random uniform pairs of points $(x,y) \in [0,1] \times [0,1]$. We plot these points below. [](https://i.stack.imgur.com/HOB4K.png) Then we discard all the points for which $y > \sqrt{x}$. The plot is below. [](https://i.stack.imgur.com/ZyAyv.png) For each of these points $(x,y)$, we calculate the value of $6xy$ and sum. In my case I got $\approx 9936$. Then we calculate the value of $6xy$ only for the points with $x< 0.5$. These are the points in orange below. [](https://i.stack.imgur.com/4aJUc.png) Summing these I get $\approx 1244$. Finally I calculate $1244/9936 \approx 0.125$. If you are interested in this, you might enjoy these articles. [Estimating an integral by using Monte Carlo simulation.](https://blogs.sas.com/content/iml/2021/03/31/estimate-integral-monte-carlo.html) and [Introductory Examples of Monte Carlo simulations in SAS](https://blogs.sas.com/content/iml/2022/08/01/examples-monte-carlo-simulation.html) I think you can still enjoy the articles without programming in SAS and should be able to adapt any example to Python or R.
### Python Code For the sketch and heatmap ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0,1,1000) f, ax = plt.subplots(1) y= [np.sqrt(xi) for xi in x] ax.plot(x,y) plt.axvline(0.5, c="black", linestyle='--') fill_x=np.linspace(0,0.5,1000) fill_y=[np.sqrt(xi) for xi in fill_x] ax.set_ylim(ymin=0, ymax=1) ax.set_xlim(xmin=0, xmax=1) ax.set_xticks([0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]) ax.set_yticks([0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]) plt.fill_between(x, y, alpha=0.1, color="grey") plt.fill_between(fill_x, fill_y, alpha=1, color="grey") plt.grid() ``` For the heatmap ``` import numpy as np import matplotlib.pyplot as plt x1 = np.linspace(0,1,1000) f, ax = plt.subplots(1) y1= [np.sqrt(xi) for xi in x1] ax.plot(x1,y1) plt.axvline(0.5, c="black", linestyle='--') ax.set_ylim(ymin=0, ymax=1) ax.set_xlim(xmin=0, xmax=1) ax.set_xticks([0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]) ax.set_yticks([0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]) x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100)) z = 6*x*y z_min, z_max = -np.abs(z).max(), np.abs(z).max() c = ax.pcolormesh(x, y, z, vmin=0, vmax=6, cmap = 'Reds') ax.set_title('Heatmap for 6xy') f.colorbar(c, ax=ax) plt.fill_between(x1, y1, np.max(y1), color='white', alpha=1) plt.show() ``` For the simulation in the second part ``` import numpy as np random_points = np.random.uniform(low=0, high=1, size=(10000,2)) keep = random_points[random_points[:,1]<=np.sqrt(random_points[:,0])] sum_of_all_kept_points = np.sum([6*p[0]*p[1] for p in keep]) sum_when_x_less_than_half = np.sum([6*p[0]*p[1] if p[0]<0.5 else 0 for p in keep]) print(sum_when_x_less_than_half/sum_of_all_kept_points) ```
null
CC BY-SA 4.0
null
2023-05-06T17:12:25.180
2023-05-06T17:12:25.180
null
null
358991
null
615095
2
null
15872
1
null
As other answers here correctly point out, it doesn't usually matter. If the parameter space is continuous, and the method of finding the CI is "continuous" (e.g. sample mean $\pm$ margin of error; what I have in mind are CI methods where the endpoints are not 2 of the sampled observations), then indeed it doesn't matter whether it's open or closed. However, I'd like to make a case for why the convention should be "closed." - If the parameter space is discrete, it does matter. Consider an integer-valued parameter, such as the unknown size of a finite population in a capture-recapture problem[1]. In that case, it feels much more natural and less confusing to say "We are 95% confident that the population size is between 93 and 106, including the endpoints" rather than "...between 92 and 107, excluding the endpoints." In other words, the closed CI [93, 106] makes more sense than an open CI such as (92, 107) or (92.9, 106.1) or whatever. - Even with a continuous parameter space, some CI calculation methods just choose two observations to be the CI endpoints. Consider a bootstrap percentile CI. Conventionally, we include those endpoints as part of the CI to ensure our estimated coverage is at least nominal: if we're trying to get a 95% CI, we include the endpoints so that at least 95% of the bootstrap statistics are in the CI. (I know bootstrap percentile CIs are not guaranteed to have the right coverage! But this is what we usually hope they will do, even if they don't necessarily succeed.) - More generally, statisticians tend to conventionally prefer theoretical guarantees that are slightly conservative: most of us would rather have slight over-coverage than under-coverage in our CIs. In this spirit, closed intervals are slightly more appropriate. --- [1] Edited to replace initial example (Poisson median, which may be questionable as per @whuber's comment below) with finite population size. Another example could be the unknown count within a finite population. If there are $N$ students in my class and I want to know $\theta$ = the number who would say "Yes" to a sensitive question, I could use randomized response to ask it in a privacy-protecting way. Then I may want a confidence interval for $\theta$ which has the discrete parameter space $\{0,1,\ldots,N\}$.
null
CC BY-SA 4.0
null
2023-05-06T17:14:46.397
2023-05-10T18:58:18.347
2023-05-10T18:58:18.347
17414
17414
null
615096
2
null
15872
1
null
A confidence set with confidence $1-\alpha$ for a parameter $\theta$ is a set $\mathcal{S}$ for which $P(\theta\in\mathcal{S}) = 1-\alpha$. This set could be an open interval, a closed interval, or it might not be an interval at all. I think it makes sense to call any confidence set which takes the form of an open, closed, or half-open interval a "confidence interval".
null
CC BY-SA 4.0
null
2023-05-06T17:22:53.230
2023-05-06T17:22:53.230
null
null
82893
null
615097
1
615100
null
7
812
For any given covariance matrix, will the sum of the diagonal elements always be bigger than the sum of the off-diagonal elements? Let $\sigma_i$ be the standard deviation of the $i^\text{th}$ term of a $n\times n$ covariance matrix and $\rho_{ij}$ the correlation between the $i^\text{th}$ and $j^\text{th}$ terms. Is the following statement always true? $$ \sum_{i=1}^n\sigma_i^2 \ge 2\sum_{i<j}\rho_{ij}\sigma_i\sigma_j $$
Is the sum of the diagonal elements of a covariance matrix always equal or larger than the sum of its off-diagonal elements?
CC BY-SA 4.0
null
2023-05-06T17:38:45.073
2023-05-07T12:45:04.373
2023-05-07T10:53:25.387
53690
115679
[ "covariance-matrix", "linear-algebra" ]
615098
2
null
615097
7
null
No. Highly-correlated variables will violate this rule: ``` x <- seq(0, 1, len = 100) X <- data.frame(x = x, x2 = x^2, x3 = x^3) X_cor <- cor(X) sum(X_cor[col(X_cor) != row(X_cor)]) # 5.73822 sum(diag(X_cor)) # 3 ```
null
CC BY-SA 4.0
null
2023-05-06T17:59:57.703
2023-05-06T17:59:57.703
null
null
30351
null
615099
1
null
null
0
41
Using R, I built 2 logistic regression models (with outcome variable being depression status - present or absent) and used leave one out cross validation to obtain predicted values for the dataset. I then used the `predict` function to extract the linear probability for each observation in the data set. I've also calculated the "error" or residual for each row of data, that is, the absolute difference between the observed value and each model's prediction. I'd like to compare these two error values, to test which model did a better job at predicting the ground truth. For what I am testing, log probability are more appropriate than raw probability, because they "allow" for greater error when the probabilities are further from the ground truth. See, for example, the last 2 rows in the example data. As another example, consider two scenarios, in both of which the ground truth is 1 (depression is actually present). In the first scenario, model A's probability of picking the ground truth was .9 and model B's was .7. In the second scenario, model A's probability of picking the ground truth was .3 and model B's was .1. In both cases, the difference between the models is .2, but this difference matters less in scenario 2 (because both models were so far from the truth -- for a real world prediction, both 30% and 10% are bad if depression is actually present) and matters more in scenario 1 (because a result that says "70% likelihood depression is present" is much worse than "90% likelihood depression is present"). I want to use a statistical test to compare these log errors. Can I run a t-test to compare these columns (Log Model A and Log Model B)? Or would this be inappropriate? Example data: |Observed |Predict. A |Predict. B |Error A |Error B |Log Error A |Log Error B | |--------|----------|----------|-------|-------|-----------|-----------| |0 |.55 |.66 |.55 |.66 |-.59 |-.41 | |1 |.59 |.68 |.41 |.32 |-.89 |-1.14 | |0 |.85 |.79 |.85 |.79 |-.17 |-.24 | |0 |.04 |.02 |.04 |.02 |-3.23 |-3.72 |
Statistical comparison of two (log) probabilities
CC BY-SA 4.0
null
2023-05-06T18:01:00.653
2023-05-06T19:35:48.433
2023-05-06T19:35:48.433
213342
213342
[ "regression", "probability", "logistic", "t-test", "logarithm" ]
615100
2
null
615097
15
null
Consider the general equi-correlation covariance matrix: \begin{align} \Sigma = \begin{bmatrix} 1 & \rho & \cdots & \rho \\ \rho & 1 & \cdots & \rho \\ \vdots & \vdots & \ddots & \vdots \\ \rho & \rho & \cdots & 1 \end{bmatrix} \in \mathbb{R}^{n \times n}. \tag{1} \end{align} The sum of all the diagonal elements is $S_1 = n$, while the sum of all the off-diagonal elements is $S_2 = \rho \times (n^2 - n)$. If you analyze the limiting behavior, for fixed $\rho \in (0, 1]$, the opposite inequality $S_2 > S_1$ always holds for sufficiently large $n$. --- Note that $\Sigma$ in $(1)$ is positive semi-definite (PSD) for $\rho \in (0, 1]$. A classical proof of this goes as follows. It is straightforward to verify that $\Sigma$ can be rewritten as $\Sigma = \rho ee' + (1 - \rho)I_{(n)}$ with $e$ an $n$-long column vector of all ones. As all the eigenvalues of the rank-$1$ matrix $ee'$ are $\{n, 0, \ldots, 0\}$, all the eigenvalues of $\rho ee' + (1 - \rho)I_{(n)}$ are \begin{align} n\rho + (1 - \rho) = 1 + (n - 1)\rho, 1 - \rho, \ldots, 1 - \rho, \end{align} which are all nonnegative provided $\rho \in [-(n - 1)^{-1}, 1]$. This shows that $\Sigma$ is PSD for $\rho \in (0, 1]$, hence a valid covariance matrix.
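A quick numerical check of this in R (a small sketch; `n` and `rho` are arbitrary illustrative values):

```
n <- 10; rho <- 0.5
Sigma <- rho * matrix(1, n, n) + (1 - rho) * diag(n)  # equi-correlation matrix
sum(diag(Sigma))               # S1 = n = 10
sum(Sigma) - sum(diag(Sigma))  # S2 = rho * (n^2 - n) = 45 > S1
min(eigen(Sigma)$values)       # 1 - rho = 0.5 >= 0, so Sigma is PSD
```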
null
CC BY-SA 4.0
null
2023-05-06T18:06:52.070
2023-05-07T12:45:04.373
2023-05-07T12:45:04.373
20519
20519
null
615101
1
null
null
0
21
In our statistics course we learned about MA(q) models: $\epsilon_{t} = X_{t} + \theta_{0}\epsilon_{t-1} + ... + \theta_{q}\epsilon_{t-q}$ But if we think about regression, more like the AR(p) process, if the estimated parameters are larger, then this implies that a covariate has a large effect on the response. My question is: how can we say that the current error is dependent on the past errors if we assume a white-noise Gaussian process such that $\epsilon_{t} \sim IN(0, \sigma_e^{2}) $ for all time? If we had to create a correlation matrix for all the errors it would be diagonal under this independence assumption, so how can we use the past errors, independent of the current one, to model the current error? I am not sure when the errors are random and when they are fixed, as my course notes don't make a distinction between the cases. Any explanation will be appreciated.
Independent Gaussian Assumption in MA(q) process
CC BY-SA 4.0
null
2023-05-06T18:32:41.577
2023-05-07T06:16:26.253
2023-05-07T06:16:26.253
380377
380377
[ "regression", "time-series", "gaussian-process", "moving-average-model" ]
615103
2
null
615046
0
null
This is one reason why I dislike percentages: it’s often not clear whether a “percent difference” is a difference in percentage points between treatment $t$ and control $c$ ($100*(p_t - p_c)$, where $p$ is a probability) or a percentage difference from the control success rate ($100*(p_t-p_c)/ p_c$). As I read the original NEJM paper, the authors (in consultation with the FDA) apparently adjusted the design for a noninferiority margin of 12.5 percentage points, which is not how the quote in the second paragraph of your question interpreted it. That’s a big difference in interpretation when the overall response rate to the control drug is only about 60%, as in the NEJM report. Supporting my interpretation, the [FDA guidance](https://www.fda.gov/media/78504/download) notes on page 9 that “an absolute difference in effect” is typically the choice of margin for antibiotic trials like that reported in the NEJM paper. For learning about issues in noninferiority trials, that FDA guidance isn’t a bad place to start. With respect to different types of “percent differences,” for example, there’s a simple explanation starting on page 24. Figure 3 nicely illustrates the interpretation of results from noninferiority trials. Much of the document focuses on the difficulties in choosing a noninferiority margin for study design.
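As a small numeric illustration of why the distinction matters, using the roughly 60% control response rate mentioned above and the trial's 12.5 margin (a sketch only; the exact trial rate differs slightly):

```
p_control <- 0.60
margin    <- 0.125

# Absolute (percentage-point) margin: the lower CI bound for the
# difference p_test - p_control must stay above -0.125, i.e. p_test above:
p_control - margin          # 0.475

# Relative margin: p_test must stay above 87.5% of p_control, i.e. above:
p_control * (1 - margin)    # 0.525
```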
null
CC BY-SA 4.0
null
2023-05-06T20:43:02.677
2023-05-06T20:43:02.677
null
null
28500
null
615104
1
null
null
0
21
I know how to use these packages (VIP etc.) with `tidymodels` to evaluate individual feature contribution/importance for a model such as a random forest. But I'd like to know how to evaluate a combined or joint importance for two or more features. Presumably I cannot simply add the importance scores of these features together - or can I? For example, we have features x1, x2, ... xP to classify samples s1, s2, s3, ... sN into two groups (Good or Bad). If we use a random forest, we eventually have ranked importance scores for these features indicating their contribution to classifying the samples into the two groups; we may have a feature ranking such as x5, x3, x1, x2, ..., which indicates that x5 has the highest contribution to the model/classification. My question is: if we want to know the joint contribution of, say, x5+x1, how should we do it? The R package VIP permits evaluation of single-feature importance, but I'd like to know how to calculate a joint contribution. Thank you. -Xiaokuan
how to evaluate joint importance of two features in a model (random forest) using R package such as VIP or DALEXtra?
CC BY-SA 4.0
null
2023-05-06T20:48:09.543
2023-05-10T13:45:28.700
2023-05-08T14:07:12.007
382349
382349
[ "r", "predictive-models", "random-forest", "explanatory-models" ]
615105
1
null
null
1
14
I just started learning about differential privacy (DP), and I've been trying to figure out how DP is affected when 1 or more column is added to the data. Specifically, suppose: $$Pr(F(X) \in S) \leq \exp(\epsilon) Pr(F(X') \in S)$$ where $X$ and $X'$ differ by only one data point. If I add one column to $X$ to create $X^+$, then by the definition of group privacy, I think the following is true: $$Pr(F(X) \in S) \leq \exp(n \epsilon) Pr(F(X^+) \in S)$$ since each of the $n$ data points has been changed. However, is it tight enough? More generally, if I add $k$ columns, then is the following claim tight: $$Pr(F(X) \in S) \leq \exp(kn \epsilon) Pr(F(X^+) \in S)$$ or should it just be $$Pr(F(X) \in S) \leq \exp(n \epsilon) Pr(F(X^+) \in S)$$ The former results from repeatedly applying group privacy when each column is added. But since each data point is essentially changed once when $k$ columns are added, I'm not sure if it should be just $n$ or $kn$. Thanks!
How to quantify privacy changes for differential privacy with extra columns?
CC BY-SA 4.0
null
2023-05-06T21:15:08.370
2023-05-07T15:38:17.753
2023-05-07T15:38:17.753
387371
387371
[ "probability", "differential-privacy" ]
615106
2
null
559009
1
null
A less trivial explanation can be that converting gray-scale to RGB is effectively adding a layer of ReLU neurons with fixed parameters. For example converting an image to RGB using the viridis colour-map is using something similar to three piecewise linear functions that can be composed out of ReLU functions. [](https://i.stack.imgur.com/5x06G.png) This addition has the effect of increasing the depth (extra layer) and width (potential extra neurons in subsequent layers) of the neural network. Both effects can potentially improve the performance of the model (if it's current depth and/or width was not sufficient). --- ### Width A simple example is converting a single grayscale channel to three rgb channels by simply copying the image three times. This can be effectively like performing some [ensemble learning](https://en.wikipedia.org/wiki/Ensemble_learning). Your neural network or decision tree may converge to different patterns on the different channels which can be later on merged in an average with a final layer or classification boundary. You could also see it alternatively as effectively making several of the hidden layers three times wider (but not fully connecting them, and adding only three times more connections). This can create some potential for different training and convergence which is potentially better. ### Depth The additional color mapping layer may allow to create patterns that are not possible with less connections. The flexibility is increased. The simplest example is an image of a single pixel that passes through a layer with a single neuron with a step function (so this is an example where even the number of neurons remain the same and the width of the subsequent network is not changed). - For BW, this is a two parameter function (weight $w_1$ and bias $b$) that effectively makes a classification based on whether or not the input is above or below some level. - For RGB, then we get two additional parameters, $w_2$ and $w_3$, for the extra channels, and this makes it possible to create more patterns. For example we can make a classification when the grayscale pixel has either a high or either low value. Obviously one can achieve the same when not converting to rgb, and instead add more neurons or an additional layer. - But possibly the cases where the rgb performed better did not test this out. - Also the conversion to rgb, using some useful scale, is making a hardcoded seperation into shadows, middle tones and highlights, which a NN needs training and extra neurons for. (So in a way it is adding an extra layer which is regularised. And also it is adding pre-trained information because the human decision to choose a particular colour map instead of another; ie the human chooses the trigger points of the ReLU layer and the conversion to rgb is additional information). Anyway, this simple example is a case where it is possible to prove that rgb can perform better (if we compare with a limited model, like only a fixed number of neurons and layers). [](https://i.stack.imgur.com/pf6uhm.png)
null
CC BY-SA 4.0
null
2023-05-06T21:20:14.423
2023-05-08T08:33:21.953
2023-05-08T08:33:21.953
164061
164061
null
615107
1
null
null
2
26
I am running a regression model and seeking to interpret the results of svyglm, for a continuous outcome variable and explanatory variables that are both categorical and continuous, when complex sampling design weights are accounted for. Are the results the same as an interpretation with linear regression analysis?
interpreting results of svyglm
CC BY-SA 4.0
null
2023-05-06T21:39:53.757
2023-05-08T00:29:39.693
null
null
387372
[ "r" ]
615108
1
null
null
0
11
Let's say we're going to train a classifier with the full data set. There's also a reject logic for ambiguous regions in the data. So, at the end, the final system outputs reject or 0 or 1. That is, reject data points in regions with high ambiguity, otherwise use the original model predictions. - If you were to see a scatterplot with a boxplot accompanying it, how would you interpret the data in this plot? - How would you reject ambiguous data points? [](https://i.stack.imgur.com/6NONE.png) [](https://i.stack.imgur.com/52FbF.png)
Implementing classification with rejection and interpreting 2D ambiguous data
CC BY-SA 4.0
null
2023-05-06T21:45:33.687
2023-05-06T21:45:33.687
null
null
137591
[ "classification", "interpretation", "scatterplot", "boxplot" ]
615109
2
null
505231
1
null
Suppose the estimator $\hat{\theta}$ is a function of random variable $X$ and denoted as $\hat{\theta} = g(X)$, then when $X=x$, $\hat{\theta} = g(x)$, or a constant value. The term $\mathbb{E}(\hat{\theta}|X)$ introduced in the bias-variance decomposition is a function of $X$ which takes value $\mathbb{E}(\hat{\theta}=g(x)|X=x)=g(x)$ when $X=x$. Because of this, the function $\mathbb{E}(\hat{\theta}|X)$ is the same function as $\hat{\theta} = g(X)$. Similarly, the term $Var(\hat{\theta}|X)$ as a function of $X$, takes value $Var(\hat{\theta} = g(x) | X=x)=0$ when $X=x$. So $Var(\hat{\theta}|X)$ has a constant value of 0. Using these information, introducing the term $\mathbb{E}(\hat{\theta}|X)$ seems to lead the expansion of $\mathbb{E}[(\hat{\theta}-\theta)^2|X]$ back to its original form. If we instead introduce the term $\mathbb{E}(\theta|X)$ such that $\mathbb{E}[(\hat{\theta}-\theta)^2|X] = \mathbb{E}[(\hat{\theta}-\mathbb{E}(\theta|X)+\mathbb{E}(\theta|X)-\theta)^2|X] = ...$ after expanding the terms, you should be able to get $\mathbb{E}[(\hat{\theta}-\theta)^2|X] = \mathbb{E}[(\theta-\mathbb{E}(\theta|X))^2|X] + \mathbb{E}[(\mathbb{E}(\theta|X)-\hat{\theta})^2|X]$. Note that the first term here is just the conditional variance of $\theta$. By setting the estimator $\hat{\theta}=\mathbb{E}(\theta|X)$, the second term becomes 0, and the mean squared error given $X$ is minimized. This answer [https://stats.stackexchange.com/q/164391](https://stats.stackexchange.com/q/164391) may provide some more interpretations of the second term in the result $\mathbb{E}[(\mathbb{E}(\theta|X)-\hat{\theta})^2|X]$ and how it relates to bias-variance decomposition. From what I've seen before, the bias-variance decomposition is usually done under the classical inference.
null
CC BY-SA 4.0
null
2023-05-06T22:03:31.480
2023-05-11T04:17:09.630
2023-05-11T04:17:09.630
259582
259582
null
615110
2
null
615046
1
null
If the text does not provide much information, I would check how its analysis was conducted, e.g. here: "The primary analysis was the comparison of the overall response at the test-of-cure visit in the microbiologic intention-to-treat population. The 95% confidence interval for the weighted difference between treatment groups was calculated with the use of the Cochran–Mantel–Haenszel weighted Miettinen and Nurminen method (stratified according to age at the time of informed consent and baseline diagnosis)." So the difference between two proportions was calculated, thus this margin is an absolute margin, i.e. if the lower confidence bound for the difference is > -0.125 then NI was demonstrated. If the analysis had been a risk ratio, it would be a relative margin, i.e. $p_{test}/p_{control}>0.875 $.
null
CC BY-SA 4.0
null
2023-05-06T22:28:53.300
2023-05-06T22:30:56.443
2023-05-06T22:30:56.443
387373
387373
null
615112
2
null
614956
2
null
If the negative binomial is for the number of failures before k successes, i.e. $Y \sim NB(p,k)$ , where p is the probability of success, k is the number of successes, then the dispersion parameter is $1/k$. Here dispersion takes the same definition as Proc GENMOD in SAS, some books may use the reciprocal, in that way it is k. Dispersion parameter is from the fact that the variance is inflated than the expectation: $$E(Y)=\mu$$ $$Var(Y)=\mu+\gamma\mu^2$$ From these two and the usual formula for the expectation and variance for the negative binomial then we can get the dispersion $$\gamma=1/k$$
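For completeness, a short derivation of that dispersion from the standard moments of this failures-before-$k$-successes parameterization (nothing here beyond the formulas already quoted): $$E(Y)=\mu=\frac{k(1-p)}{p},\qquad \operatorname{Var}(Y)=\frac{k(1-p)}{p^2}.$$ Then $$\mu+\frac{\mu^2}{k}=\frac{k(1-p)}{p}+\frac{k(1-p)^2}{p^2}=\frac{k(1-p)\big(p+(1-p)\big)}{p^2}=\frac{k(1-p)}{p^2}=\operatorname{Var}(Y),$$ so matching $\operatorname{Var}(Y)=\mu+\gamma\mu^2$ gives $\gamma=1/k$.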
null
CC BY-SA 4.0
null
2023-05-06T23:01:11.757
2023-05-06T23:10:05.590
2023-05-06T23:10:05.590
387373
387373
null
615114
1
null
null
3
73
I’m getting a bit confused about who to include in the control and treatment pools and would appreciate the help. I need to estimate the effect of treatment where 100% of the population was assigned a push notification that leads to a treatment, but not all of them saw it and are therefore not treated (or know they were assigned). The pool of control/treatment candidates can be broken down as: - People that don’t know they were sent a notification and therefore never receive treatment - People that see the notification, but then don’t choose it (treatment). It’s possible that knowledge of the notification has a causal effect on outcome. - People that see the notification and choose to take treatment by clicking it - People that see and select the notifications, but their selection into treatment is driven by unobserved variables that would likely also influence outcome. My approach was going to be using matching methods between treatment (Group 3) and control (Group 1), removing group 2 and 4 from the control/treatment samples entirely. I was going to condition on variables that would cause someone to self select into treatment group 3 that also influence outcome. Am I estimating the treatment effect in the right way here?
How should control candidates be decided for causal inference?
CC BY-SA 4.0
null
2023-05-06T23:37:59.597
2023-05-19T04:19:36.043
null
null
387375
[ "causality", "matching", "treatment-effect", "observational-study" ]
615116
1
null
null
0
9
I've read that it's important to report the overall fit of the model. I'm more interested in the individual relationships between IVs and DVs, but worried that the fact the overall model doesn't fit (pillai's trace/wilks' lambda non-significant) makes these relationships irrelevant, or something. But a lot, perhaps the majority, of multivariate multiple regression papers I've come across don't seem to bother reporting the overall model fit/significance. Here are two: [https://journals.sagepub.com/doi/epdf/10.1177/1069072708318905](https://journals.sagepub.com/doi/epdf/10.1177/1069072708318905) [https://www.emerald.com/insight/content/doi/10.1108/JWL-06-2015-0052/full/pdf](https://www.emerald.com/insight/content/doi/10.1108/JWL-06-2015-0052/full/pdf) To quote the entirety of the reporting of the regression in the first linked study: > A simultaneous multivariate multiple regression analysis was conducted to examine whether Mexican orientation, Anglo orientation, college self-efficacy, and college outcome expectations predicted Mexican American high school students’ educational aspirations and expectations. Anglo orientation was found to be significant, Wilks's Λ = .87, F(2, 83) = 6.08, p < .05, ηm = .13, where ηm represents the multivariate effect size. However, Mexican orientation, Wilks's Λ = .94, F(2, 83) = 2.51, p > .05, ηm = .05, college self-efficacy, Wilks's Λ = .98, F(2, 83) = .85, p > .05, ηm = .02, and college outcome expectations, Wilks's Λ = .98, F(2, 83) = .92, p > .05, ηm = .02, were not significant. Follow-up univariate analyses revealed that Anglo orientation had a significant positive relationship with educational aspirations, F(1, 88) = 10.06, p < .01, η = .11 (where η represents the univariate effect size), and educational expectations, F(1, 88) = 9.58, p < .01, η = .10, for Mexican American high school students No mention of overall model fit, how all IVs simultaneously explain the combined DVs. They just report the individual relationships. Weird, right? Maybe it's taken as a given that it won't fit, because some of their IVs were non-significant, but still, they could have mentioned it or performed another regression using the significant IVs to see if the overall model fit after removing them. Even when all IVs are significant in studies like this, they often don't seem to bother reporting the overall model fit. When would it be appropriate for me to ignore the fact my overall model is non-significant, and just report the individual relationships?
Is the overall model fit/significance always important? (multivariate multiple regression)
CC BY-SA 4.0
null
2023-05-07T01:40:43.167
2023-05-07T01:40:43.167
null
null
339056
[ "regression", "multivariate-analysis" ]
615117
2
null
484000
0
null
There are two issues with exploratory analysis on big data - any analysis that shows individual data points (eg a scatterplot) gets harder to understand - computations get slow With 150,000 you shouldn't really be having the second problem, but you will have the first problem. With 150,000,000 points you might have both problems. The second problem can be solved quite well by sampling. You can begin by taking a simple random sample. If you need to explore in more detail in a small subpopulation (left-handed avocado farmers, say) you might use the whole data set for that subpopulation, or a sample from just that subpopulation. For the first problem, you can either work with a random sample (often called "thinning") or with some way of aggregating points. Two useful general-purpose ways to aggregate points in a scatterplot are [hexagonal binning](https://datavizproject.com/data-type/hexagonal-binning/) and [overplotting with partial transparency](https://blogs.sas.com/content/iml/2011/03/04/how-to-use-transparency-to-overcome-overplotting.html). The former makes individual outliers more visible; the latter makes them less visible. There's a book [Graphics of Large Datasets: Visualising a Million](https://www.goodreads.com/book/show/425571.Graphics_of_Large_Datasets) that discusses approaches to data viz when data sets are too big to look at each point individually.
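A minimal R sketch of those aggregation and thinning options with ggplot2 (assuming a hypothetical data frame `d` with numeric columns `x` and `y`; `geom_hex()` additionally needs the hexbin package installed):

```
library(ggplot2)

# Thinning: plot a simple random sample of rows
d_sample <- d[sample(nrow(d), 10000), ]
ggplot(d_sample, aes(x, y)) + geom_point()

# Hexagonal binning: aggregate points into hexagonal cells
ggplot(d, aes(x, y)) + geom_hex(bins = 60)

# Overplotting with partial transparency
ggplot(d, aes(x, y)) + geom_point(alpha = 0.05)
```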
null
CC BY-SA 4.0
null
2023-05-07T01:47:49.990
2023-05-07T01:47:49.990
null
null
249135
null
615118
1
null
null
2
17
The data I collected was measuring plant shoot masses and these were extremely small, so I could only find the total mass of the shoots from 30 plants. Hence, I only have the mean and none of the individual values for the shoot masses in each of the five groups. Is there any statistical test that would work with this data set? I cannot seem to find a parametric or non-parametric one.
Is there a statistical test for five groups with averages ONLY?
CC BY-SA 4.0
null
2023-05-07T02:09:28.697
2023-05-07T02:09:28.697
null
null
387381
[ "statistical-significance", "nonparametric", "descriptive-statistics", "biostatistics", "parametric" ]
615119
1
615135
null
1
37
I was reading that the Kolmogorov-Smirnov 2-sample test is consistent, that is, the probability of rejection under $H_1$ goes to 1 as the sample size goes to infinity. Say we have 2 random variables X and Y. The K-S test checks whether $F=G$. The test statistic is: $\sup_z|F_n(z)-G_n(z)|$ That the test is consistent (at some level $\alpha$) means: $$\lim_{n\rightarrow \infty}P(\sup_z|F_n(z)-G_n(z)|>D_{n,\alpha})=1$$ where $G_n$ is the empirical cdf of Y, $G_n(y)=\sum_{i=1}^m\frac{\mathbb{1}_{Y_i<y}}{m}$, with m the number of samples of Y, and $F_n$ is the empirical cdf of X, $F_n(x)=\sum_{i=1}^n\frac{\mathbb{1}_{X_i<x}}{n}$, with n the number of samples of X. I cannot prove the consistency; can anyone help? Thanks in advance.
Kolmogorov Smirnov Test Consistency
CC BY-SA 4.0
null
2023-05-07T02:45:13.027
2023-05-07T10:56:09.220
2023-05-07T09:33:18.030
247165
338134
[ "nonparametric", "kolmogorov-smirnov-test", "consistency", "two-sample" ]
615120
2
null
615114
0
null
As Henry pointed out, are the four groups well defined, and can each individual be clearly identified as belonging to one of them? Assuming the groups are correctly defined, then from the analysis viewpoint, yes, I think the approach is on the right track. However, removing groups 2 and 4 in the matching step is okay but not necessary, as the data from groups 2 and 4 may also provide valuable information about the selection mechanism. After the matching, the weights arising from the matching should be considered in the analysis, no matter what analysis method is used in the next step.
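For instance, a minimal R sketch of the matching-plus-weights workflow described above, using MatchIt (the data frame `d`, treatment indicator `treat`, outcome `y`, and covariates `x1`, `x2` are all hypothetical placeholder names):

```
library(MatchIt)

# Nearest-neighbour matching on the propensity score
m.out <- matchit(treat ~ x1 + x2, data = d, method = "nearest")

# Matched data set; includes a `weights` column produced by the matching
md <- match.data(m.out)

# Outcome model that carries the matching weights forward
fit <- glm(y ~ treat, data = md, weights = weights)
summary(fit)
```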
null
CC BY-SA 4.0
null
2023-05-07T02:56:42.910
2023-05-07T02:56:42.910
null
null
387373
null
615121
2
null
540092
1
null
I agree with shimao. $y_i^{(j)}$ is $x_i^{(j)}$, and $\hat{y}_i^{(j)}$ is $\mu_i^{(j)}$ for the jth datapoint. But $\hat{y}_i^{(j)}$ (or $\mu_i^{(j)}$) should not be considered the model output (estimated label). The model obtains the output by sampling from the distribution $N(x|\hat{\mu}_i^{(j)}, \hat{\sigma}_i^{(j)})$. In VAE training, we don't need sampling in the decoder and there is no estimated label; we just maximize the reconstruction likelihood $p(x|z)$. As shimao said, this is equivalent to minimizing $\sum_i (x_i^{(j)} - \mu_i^{(j)})^2$, similar in form to the MSE.
null
CC BY-SA 4.0
null
2023-05-07T03:09:25.767
2023-05-07T03:11:12.957
2023-05-07T03:11:12.957
387329
387329
null
615122
1
null
null
1
6
[Biclustering](https://en.wikipedia.org/wiki/Biclustering) encompasses techniques for clustering a (usually) data matrix among its rows and columns. Some forms of data are multidimensional arrays rather than 2D arrays, and it would be beneficial to cluster across multiple modes (that is "axes") of a data array. Is there a literature on generalizations of biclustering to k-mode arrays (i.e. sometimes called "tensors", although I don't require it in the multilinear map sense of the term)?
Clustering across multiple modes of an array as an extension of biclustering?
CC BY-SA 4.0
null
2023-05-07T04:40:47.357
2023-05-07T04:40:47.357
null
null
69508
[ "clustering" ]
615123
1
null
null
0
15
I've come across some papers employing z-scores of permutation null distributions as a primary metric in neuroscience (for an example, see [here](https://www.nature.com/articles/s41467-018-06876-w)). The authors computed a coefficient of interest in a multiple linear regression and then permuted the order of the predictors to obtain a permutation null distribution of that coefficient. "The true coefficient for functional connectivity is compared to the distribution of null coefficients to obtain a z-score and P-value." The authors employed this permutation testing approach to avoid the need to model potentially complicated autocorrelations between the observations in their sample and then wanted a statistic that provided a measure of effect size rather than relying solely on p-values. Is there any meaningful interpretation of a z-score of a permutation null distribution under the alternative hypothesis? Is this a commonly used approach? This approach would not appear to find meaningfully normalized estimates of effect size given the variability of the permutation null distribution may not have anything to do with the variance of the statistic of interest under its own distribution. In this case, I'm not sure a z-score based on the permutation null provides much information beyond significance. The variability of the permutation null distribution will also be a function of the sample size in this case. Could we argue that permutation null distributions would in many cases (I'm thinking about simple differences in means rather than regression coefficients) tend to overestimate the variability of the true statistic given permutation tests are conservative compared to tests based on known distributions of the statistic of interest? This z-score approach would then tend to produce conservative effect sizes. I'm not finding references to this approach online beyond this R package: [getPermStat](https://rdrr.io/github/databio/PCRSA/man/getPermStat.html).
General interpretations of permutation null distributions under the alternative
CC BY-SA 4.0
null
2023-05-07T05:27:33.133
2023-05-07T05:41:33.683
2023-05-07T05:41:33.683
387390
387390
[ "hypothesis-testing", "regression-coefficients", "effect-size", "permutation-test", "permutation" ]
615124
1
null
null
0
24
When I plot a contour plot of a variable over two principal components I can see what appears to be hills and valleys. But I also know I am only looking at the contours over a projection. Here is a quick self-contained example. ``` import matplotlib.pyplot as plt from sklearn.datasets import load_diabetes from sklearn.decomposition import PCA X, y = load_diabetes(return_X_y=True) plt.tricontourf( *PCA(n_components=2).fit_transform(X).T, y, cmap='terrain', levels=100 ) plt.xlabel('PC1') plt.ylabel('PC2') plt.colorbar() plt.show() ``` [](https://i.stack.imgur.com/2glVv.png) If I see hills and valleys on the contour plot, does that entail there exists locally convex/concave regions over the original space? --- If it helps, here is a quick 3D surface for the same contour plot: [](https://i.stack.imgur.com/tRZDw.png) --- [plt.tricontourf](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.tricontourf.html) produces a colour-filled [contour plot](https://www.itl.nist.gov/div898/handbook/eda/section3/contour.htm) based on a triangular tesselation computed from the data. The boundaries between levels are interpolated [level sets](https://en.wikipedia.org/wiki/Level_set). A link to further documentation and references on the diabetes dataset can be found [here](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset). My question isn't special to this dataset, but rather is just an example of the plotting procedure. Many other data sets are suitable mutatis mutandis. --- The local minima shown in `import numpy as np; from sklearn.decomposition import PCA; np.random.seed(100); X = np.random.normal(size=100*3).reshape(100,3); Q = PCA(n_components=2).fit_transform(X); plt.tricontourf(*Q.T, X[:,2]);plt.show()` convinces me that the 2D projection can still be misleading about convexity. [](https://i.stack.imgur.com/CF3jm.png) Sampling error is sufficient for demotivating my intuition.
Do contour plots over first two principal components reveal local convexity/concavity?
CC BY-SA 4.0
null
2023-05-07T05:52:52.897
2023-05-07T19:22:41.810
2023-05-07T19:22:41.810
69508
69508
[ "pca", "convex", "non-convex" ]
615126
1
615167
null
2
80
Given the probability distributions of random variables $X$ and $Y$, and without any additional assumptions, are there any nice representations or properties of the combination $E[XY]-E[X^2]-E[Y^2]$? If not, is there any nice feature that arises with some additional assumptions? I tried to rewrite it in terms of covariance and variance, but it doesn't fall into a nice form
$E[XY]-E[X^2]-E[Y^2]$, is there any special property?
CC BY-SA 4.0
null
2023-05-07T06:19:49.230
2023-05-07T18:28:01.467
2023-05-07T06:27:32.913
204068
387393
[ "probability", "variance", "expected-value", "joint-distribution" ]
615128
2
null
615065
0
null
For now I will be using the `flexsurvreg()` function of the `flexsurv` package, for fitting parametric distributions to right-censored data such as the `lung` dataset from the `survival` package. Below is the current state of code for goodness-of-fit tests for several parametric distributions. I highly recommend studying Chapters 18 and 19 of Harrell's Regression Modeling Strategies recommended by EdM as well as [https://grodri.github.io/survival/ParametricSurvival.pdf](https://grodri.github.io/survival/ParametricSurvival.pdf)! Code: ``` library(flexsurv) library(survival) data(lung) # Create vector of parametric distributions to test distList <- c("weibull", "exp", "gamma", "lnorm","llogis","gompertz") # Function fits each distribution and extracts AIC, BIC, log-likelihood values fit_dist <- function(dist) { tmp <- flexsurvreg(Surv(time, status) ~ 1, data = lung, dist = dist) c(AIC(tmp), BIC(tmp), as.numeric(logLik(tmp))) } # Apply above function to each distribution in the distList results_list <- lapply(distList, fit_dist) # Convert the above list of results to a data frame results_df <- data.frame(t(matrix(unlist(results_list), nrow = 3))) colnames(results_df) <- c("aic", "bic", "logLik") rownames(results_df) <- distList # Find the distribution with the lowest AIC, BIC, and logLik values bestFitAIC <- rownames(results_df)[which.min(results_df$aic)] bestFitBIC <- rownames(results_df)[which.min(results_df$bic)] bestFitLogLik <- rownames(results_df)[which.max(results_df$logLik)] # Print the results data frame and best fitting distributions results_df cat("\nBest fitting distribution using AIC:", bestFitAIC, "\n") cat("\nBest fitting distribution using BIC:", bestFitBIC, "\n") cat("\nBest fitting distribution using Log-Likelihood:", bestFitLogLik, "\n") ```
null
CC BY-SA 4.0
null
2023-05-07T07:19:49.657
2023-05-07T07:19:49.657
null
null
378347
null
615130
1
null
null
2
87
Consider a linear regression model $\boldsymbol y=X\boldsymbol\beta+\boldsymbol\varepsilon$, where $\boldsymbol y$ is an $n\times 1$ response vector, $X$ is an $n\times p$ matrix of covariates (fixed), and the error vector $\boldsymbol\varepsilon$ is multivariate normal $N_n(\boldsymbol 0,\sigma^2I)$. The $i$th internally studentized residual is $$r_i=\frac{e_i}{\hat\sigma\sqrt{1-h_{ii}}}\,,$$ where $e_i=y_i-\hat y_i$ is the $i$th residual, $h_{ii}$ is the $i$th diagonal entry of the hat matrix $H=X(X^TX)^{-1}X^T$, and $\hat\sigma^2=\frac{\boldsymbol e^T\boldsymbol e}{n-p}$ is the usual unbiased estimator of $\sigma^2$. I am trying to find $\operatorname{Cov}(r_i,r_j)$ without explicitly finding the joint distribution of $(r_i,r_j)$. I can see that $\operatorname E(r_i)=0$, since $\frac{e_i}{\lVert \boldsymbol e\rVert}$ is symmetric about $0$. Now, for $\operatorname{Cov}(r_i,r_j)=\operatorname E(r_ir_j)$, I need $$\operatorname{Cov}\left(\frac{e_i}{\lVert \boldsymbol e\rVert},\frac{e_j}{\lVert \boldsymbol e\rVert}\right)=\operatorname E\left[\frac{e_ie_j}{\boldsymbol e^T\boldsymbol e}\right]$$ Any suggestion in finding this quantity is welcome. I know that $\boldsymbol e\sim N_n(\boldsymbol 0,\sigma^2(I-H))$ and $\frac{r_i^2}{n-p}\sim \text{Beta}\left(\frac12,\frac{n-p-1}{2}\right)$. --- The answer is supposed to be the same as the covariance if $\sigma$ was not estimated: $$\operatorname{Cov}(r_i,r_j)=\operatorname{Cov}\left(\frac{e_i}{\sigma\sqrt{1-h_{ii}}},\frac{e_j}{\sigma\sqrt{1-h_{jj}}}\right)=\frac{-h_{ij}}{\sqrt{1-h_{ii}}\sqrt{1-h_{jj}}}\,,$$ where $h_{ij}$ is the $(i,j)$th entry of $H$.
Finding covariance between internally studentized residuals
CC BY-SA 4.0
null
2023-05-07T08:22:59.310
2023-05-08T04:00:45.410
2023-05-08T00:02:41.090
20519
119261
[ "regression", "correlation", "residuals", "covariance", "multivariate-normal-distribution" ]
615131
1
null
null
0
8
There is a class of MCMC algorithms which are called "ensemble samplers", so-called because they use an ensemble of walkers whose positions depend on each other to sample from the posterior distribution. The one I'm most familiar with is Goodman and Weare's affine-invariant ensemble sampler, as I have used it with the emcee package which is popular in my field (astrophysics). Is there a similar word or name for the simpler type of MCMC algorithms in which a single chain is constructed by sampling from the posterior distribution one point at a time? In the "walker" language, it's a chain with a single walker. Multiple chains can be run in parallel in these methods, but they remain completely independent. If I wanted to write a sentence contrasting the two, what word would be used? i.e. "Ensemble sampling methods vs _____ sampling methods".
Is there a collective name for non-ensemble MCMC methods?
CC BY-SA 4.0
null
2023-05-07T09:03:04.093
2023-05-07T09:03:04.093
null
null
202969
[ "terminology", "markov-chain-montecarlo" ]
615132
1
null
null
2
81
This is a problem I have been curious about for some time now. Suppose: - There is a coin where if it lands head then the probability of the next flip being heads is 0.6 (and if tails then the next flip being tails is also 0.6) - There are 100 students in a class - Each student flips this coin a random number of times - The last flip of student_n does not influence the first flip of student_n+1 (i.e. when the next student flips the coin, the first flip has 0.5 probability of heads or tails, but the next flip for this student depends on the previous flip) Here is some R code to represent this problem: ``` library(tidyverse) set.seed(123) ids <- 1:100 student_id <- sort(sample(ids, 100000, replace = TRUE)) coin_result <- character(1000) coin_result[1] <- sample(c("H", "T"), 1) for (i in 2:length(coin_result)) { if (student_id[i] != student_id[i-1]) { coin_result[i] <- sample(c("H", "T"), 1) } else if (coin_result[i-1] == "H") { coin_result[i] <- sample(c("H", "T"), 1, prob = c(0.6, 0.4)) } else { coin_result[i] <- sample(c("H", "T"), 1, prob = c(0.4, 0.6)) } } #tidy up my_data <- data.frame(student_id, coin_result) my_data <- my_data[order(my_data$student_id),] final <- my_data %>% group_by(student_id) %>% mutate(flip_number = row_number()) ``` The data looks something like this: ``` # A tibble: 6 x 3 # Groups: student_id [1] student_id coin_result flip_number <int> <chr> <int> 1 1 H 1 2 1 H 2 3 1 H 3 4 1 H 4 5 1 T 5 6 1 H 6 ``` My Problem: In this scenario, let's say that I do not have any prior knowledge about this coin (i.e. I only have access to the data from the students) and I think its possible that the coin might have "correlated probabilities" - particularly, I think the result of the previous flip might influence the next flip. To test this hypothesis, I can count the number of sequences observed: ``` final %>% group_by(student_id) %>% summarize(Sequence = str_c(coin_result, lead(coin_result)), .groups = 'drop') %>% filter(!is.na(Sequence)) %>% count(Sequence) # A tibble: 4 x 2 Sequence n <chr> <int> 1 HH 253 2 HT 186 3 TH 182 4 TT 279 ``` From these results, it seems like the result of a coin flips appears to be influenced by the previous flip - however, I am interested in placing Confidence Intervals around these estimates. I know that in general, this type of problem is characterized by a Multinomial Distribution (e.g. if the coin had more than two sides) and that there are exact formulae for Confidence Intervals based on the Multinomial Distribution. However, I am interested in learning about how to use Bootstrapping in this problem. Right away, I can see that if the standard Bootstrapping approach is used, we will likely "interrupt" the sequence of flips - that is, we might get the $n^{\text{th}}$ flip for the $j^{\text{th}}$ student immediately followed by the $n^{\text{th}}$ flip for the $k^{\text{th}}$ student , thus invalidating the estimates produced from the Bootstrapping. The first idea that comes to mind is to modify the Bootstrapping procedure so that the sampling process is not interrupted. For example: - Approach 1: Randomly sample with replacement students until you have the same number of students as the original data. Using all available data for these students, calculate the probabilities and 95% Confidence Intervals. Repeat this process $k$ times - Approach 2: Randomly sample with replacement students until you have the same number of students as the original data. 
For each of these students selected, randomly choose a starting point $x$ and ending point $y$ (where $y > x$), and select all available data between $x$ and $y$ for a given student. Then, calculate the probabilities and 95% Confidence Intervals. Repeat this process $k$ times. Here is my attempt at Approach 1: ``` # Initialize results <- data.frame(iteration_number = numeric(0), h_given_h = numeric(0), h_given_t = numeric(0), t_given_h = numeric(0), t_given_t = numeric(0)) # Set the number of iterations n_iter <- 1000 # Loop for (i in 1:n_iter) { # Randomly sample 100 student ids with replacement sampled_ids <- sample(ids, 100, replace = TRUE) # Select rows for sampled students sampled_data <- my_data[my_data$student_id %in% sampled_ids, ] final <- sampled_data %>% group_by(student_id) %>% mutate(flip_number = row_number()) # Calculate conditional probabilities cond_prob <- final %>% group_by(student_id) %>% summarize(Sequence = str_c(coin_result, lead(coin_result)), .groups = 'drop') %>% filter(!is.na(Sequence)) %>% count(Sequence) %>% mutate(prob = n / sum(n)) # Extract probabilities p_HH <- cond_prob$prob[cond_prob$Sequence == "HH"] p_HT <- cond_prob$prob[cond_prob$Sequence == "HT"] p_TH <- cond_prob$prob[cond_prob$Sequence == "TH"] p_TT <- cond_prob$prob[cond_prob$Sequence == "TT"] # Create a vector with the probabilities prob_vector <- c(p_HH, p_HT, p_TH, p_TT) print(prob_vector) # Append results[i, ] <- c(i, prob_vector) } colnames(results) <- c("iteration_number", "h_given_h", "h_given_t", "t_given_h", "t_given_t") library(ggplot2) results_long <- tidyr::pivot_longer(results, cols = -iteration_number, names_to = "condition", values_to = "probability") # Plot ggplot(results_long, aes(x = iteration_number, y = probability, color = condition)) + geom_line() + labs(x = "Iteration", y = "Probability", color = "Condition") ``` [](https://i.stack.imgur.com/OypC7.png) Then, based on these results - I can calculate the 95% Confidence Intervals based on the Quantile function: ``` h_given_h_percentiles <- quantile(results$h_given_h, c(0.05, 0.95)) h_given_t_percentiles <- quantile(results$h_given_t, c(0.05, 0.95)) t_given_h_percentiles <- quantile(results$t_given_h, c(0.05, 0.95)) t_given_t_percentiles <- quantile(results$t_given_t, c(0.05, 0.95)) percentiles_results <- data.frame(condition = c("h_given_h", "h_given_t", "t_given_h", "t_given_t"), `5%` = c(h_given_h_percentiles[1], h_given_t_percentiles[1], t_given_h_percentiles[1], t_given_t_percentiles[1]), `95%` = c(h_given_h_percentiles[2], h_given_t_percentiles[2], t_given_h_percentiles[2], t_given_t_percentiles[2])) # Calculate the mean for each column h_given_h_mean <- mean(results$h_given_h) h_given_t_mean <- mean(results$h_given_t) t_given_h_mean <- mean(results$t_given_h) t_given_t_mean <- mean(results$t_given_t) percentiles_results$mean <- c(h_given_h_mean, h_given_t_mean, t_given_h_mean, t_given_t_mean) condition X5. X95. mean 1 h_given_h 0.2973799 0.2984679 0.2979292 2 h_given_t 0.1978462 0.1981885 0.1980179 3 t_given_h 0.1977906 0.1981054 0.1979481 4 t_given_t 0.3056818 0.3065346 0.3061047 ``` My Question: Can someone please tell if this approach to Bootstrapping Longitudinal/Repeated Measures data is statistically valid? Or is what I have done meaningless? Thanks! 
References: - https://pratheepaj.github.io/bootLong/articles/bootLong.html - https://link.springer.com/article/10.3758/s13428-019-01252-y - Bootstrapping with repeated measurements - https://link.springer.com/content/pdf/10.3758/BF03202577.pdf - https://arxiv.org/abs/1809.01832 Note: I am currently finalizing the R Code for Approach 2 - if someone is interested, I can also post it.
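In the meantime, here is a rough sketch of the kind of thing I have in mind for Approach 2 - treat it as a sketch rather than a final implementation (it reuses `my_data` and `ids` from above, and the random-window choice is arbitrary):

```
library(tidyverse)

# One bootstrap replicate for Approach 2: resample students with replacement,
# then keep a random contiguous window of flips for each resampled student.
one_replicate_window <- function() {
  sampled_ids <- sample(ids, length(ids), replace = TRUE)
  pieces <- lapply(seq_along(sampled_ids), function(j) {
    flips <- my_data[my_data$student_id == sampled_ids[j], ]
    xy <- sort(sample.int(nrow(flips), 2))   # random start x and end y with y > x
    out <- flips[xy[1]:xy[2], ]
    out$boot_id <- j                         # keep repeated students distinct
    out
  })
  bind_rows(pieces) %>%
    group_by(boot_id) %>%
    summarize(Sequence = str_c(coin_result, lead(coin_result)), .groups = "drop") %>%
    filter(!is.na(Sequence)) %>%
    count(Sequence) %>%
    mutate(prob = n / sum(n))
}

# As in Approach 1, this would be repeated k times and quantiles taken
# over the resulting probabilities.
```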
Bootstrap for Longitudinal/Repeated Measures Data?
CC BY-SA 4.0
null
2023-05-07T09:03:10.787
2023-05-14T21:20:57.497
2023-05-07T17:54:58.267
77179
77179
[ "r", "confidence-interval", "bootstrap" ]
615133
2
null
615097
3
null
You have been given two good answers. I thought it might be instructive to come at this from a different angle, and suggest how one might realise for oneself that the statement is false, by finding a counterexample. It can often be useful to run simulations (in, say, R or Python) to test our understanding of things or, in this case, to look for counterexamples. The Python code below took very little time to write (minutes), and gave me a counterexample almost immediately. ``` import numpy as np for i in range(1000): data = np.random.randint(-100,100,(3,3)) cov = np.cov(data) sum_diag = np.diag(cov).sum() sum_all_elements = cov.sum() sum_off_diag = sum_all_elements - sum_diag if sum_off_diag > sum_diag: print('Data:' ,data, "\nCovariance matrix: ", cov, "\nSum diag:", sum_diag, "\nSum off diag", sum_off_diag) break; ``` Having a counterexample (or several) means you can focus your attention in the right place; with a few counterexamples you may then have observed that highly correlated variables seem to violate this, as pointed out in Michael M's answer.
null
CC BY-SA 4.0
null
2023-05-07T09:12:51.213
2023-05-07T09:12:51.213
null
null
358991
null
615134
2
null
613558
0
null
The best solution depends on whether the values in each column are paired or not. If paired, a scatterplot with a 1:1 line would be best. Depending on the data distribution, you may want to use some transparency for the points or even consider using a hexbin plot instead. If unpaired, a split violin plot, or overlapping density plots with some transparency would be good options.
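For instance, a rough ggplot2 sketch (the column names `a` and `b`, the data frame `df`, and the long-format `df_long` are placeholders for your actual data):

```
library(ggplot2)

# Paired values: scatterplot with a 1:1 reference line
ggplot(df, aes(x = a, y = b)) +
  geom_point(alpha = 0.3) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed")

# Unpaired values: overlapping densities with transparency
ggplot(df_long, aes(x = value, fill = column)) +
  geom_density(alpha = 0.4)
```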
null
CC BY-SA 4.0
null
2023-05-07T09:29:34.117
2023-05-07T09:29:34.117
null
null
121522
null
615135
2
null
615119
0
null
Assume, under $H_1$, $F\not=G$. This means that for some $z$ and $\epsilon>0$, we have $|F(z)-G(z)|>\epsilon$. Glivenko-Cantelli gives $F_n\to F, G_n\to G$ uniformly in probability. In particular, with probability going to 1, $|F_n(z)-G_n(z)|>\epsilon>0$. You get the result from $D_{n,\alpha}\to 0$. This follows from the fact that if $F=G$, Glivenko-Cantelli implies that the supremum difference goes to zero in probability, and therefore all quantiles of the distribution of that supremum go to zero for fixed $\alpha$.
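As a quick empirical illustration (not part of the argument above), one can check in R that the rejection rate of the two-sample Kolmogorov-Smirnov test against a fixed alternative approaches 1 as $n$ grows:

```
set.seed(1)
# rejection rate of the two-sample KS test at level 0.05 for a mean-shift alternative
power_at_n <- function(n, reps = 500) {
  mean(replicate(reps, ks.test(rnorm(n), rnorm(n, mean = 0.5))$p.value < 0.05))
}
sapply(c(25, 100, 400, 1600), power_at_n)  # increases towards 1
```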
null
CC BY-SA 4.0
null
2023-05-07T09:32:52.793
2023-05-07T10:56:09.220
2023-05-07T10:56:09.220
247165
247165
null
615137
1
615174
null
1
40
When using propensity score matching or weighting, a column of weights is generated that is used to estimate the effect of interest. According to a [blog](https://notstatschat.rbind.io/2020/08/04/weights-in-statistics/) I read, there are three types of weights commonly used in statistics: - aweight: These weights describe the precision (1/variance) of observations. - fweight: Used in categorical data analysis, these weights describe cell sizes in a dataset. For example, a weight of 10 means that there are 10 identical observations in the dataset. - pweight: Sampling weights for survey data. An observation with a weight of 10 was sampled with probability 1/10. I am wondering which of these three types of weights is produced by propensity score weighting or matching (the point estimates obtained using each type of weight are the same, but their standard errors differ significantly), and what R functions should be used to analyze them.
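To make the last point concrete, this is the kind of comparison I mean (all names here are placeholders for my real data; `w` is the weight column produced by the matching/weighting step):

```
library(survey)

# treating w like analytic/frequency weights
fit_lm <- lm(y ~ treat, data = dat, weights = w)

# treating w like sampling (probability) weights, with design-based standard errors
des     <- svydesign(ids = ~1, weights = ~w, data = dat)
fit_svy <- svyglm(y ~ treat, design = des)

summary(fit_lm)
summary(fit_svy)
```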
how to understand the weights in PSM?
CC BY-SA 4.0
null
2023-05-07T10:17:09.770
2023-05-07T21:05:22.733
2023-05-07T10:24:15.060
341034
341034
[ "propensity-scores", "matching", "weights", "weighted-regression", "survey-weights" ]
615138
2
null
446996
1
null
The joint bivariate moment generating function is $$M(t,s) = E[e^{t X + s Y}]$$ and the cumulant generating function is it's logarithm $$K(t,s) = \log E[e^{t X + s Y}]$$ The Taylor series expressions of the moment generating function is $$M(t,s) = \sum_{k,l \in \mathbb{N_0}^2} \frac{\sigma_{k,l}}{k!l!} t^k s^l $$ where $\sigma_{k,l} = E[X^kY^l]$ are the raw moments of the variables. The logarithm can be expressed similarly as a power series for this we make use of $$\log(1+u) = \sum_{m=1}^\infty (-1)^{m-1}\frac{u^m}{m}$$ the $$\begin{array}{} K(t,s) &= &\log\left(M(t,s) \right)\\ &=& \log\left( \sum_{k,l \in \mathbb{N_0}^2} \frac{\sigma_{k,l}}{k!l!} t^k s^l \right) \\ &=& \log\left(1 + \left(-1 + \sum_{k,l \in \mathbb{N_0}^2} \frac{\sigma_{k,l}}{k!l!} t^k s^l \right)\right) \\ &=& \sum_{m=1}^\infty (-1)^{m-1}\frac{1}{m} \left(-1+ \sum_{k,l \in \mathbb{N_0}^2} \frac{\sigma_{k,l}}{k!l!} t^k s^l \right)^m \\ &=& \sum_{k,l \in \mathbb{N_0}^2} \frac{\kappa_{k,l}}{k!l!} t^k s^l \end{array}$$ Where the coefficients $\kappa_{k,l}$ in the power series are the cumulants and these can be found by writing out the before last expression. That expression is a bit nasty and needs something like [Faà Di Bruno's formula](https://en.m.wikipedia.org/wiki/Fa%C3%A0_di_Bruno%27s_formula) but multivariate. In the case of $\kappa_{3,3}$ we can find out all the ways, for $m$ from $1$ to $6$, that the we get powers $t^3s^3$ $$\begin{array}{} &&& &(\frac{\sigma_{0,1}}{1} t^0s^1& + &\frac{\sigma_{0,2}}{2} t^0s^2& + &\frac{\sigma_{0,3}}{6} t^0s^3& + \\ &&\frac{\sigma_{1,0}}{1} t^1s^0& + &\frac{\sigma_{1,1}}{1} t^1s^1& + &\frac{\sigma_{1,2}}{2} t^1s^2& + &\frac{\sigma_{1,3}}{6} t^1s^3& + \\ &&\frac{\sigma_{2,0}}{2} t^2s^0&+ &\frac{\sigma_{2,1}}{2} t^2s^1& + &\frac{\sigma_{2,2}}{4} t^2s^2& + &\frac{\sigma_{2,3}}{12} t^2s^3& + \\ &&\frac{\sigma_{3,0}}{6} t^3s^0& + &\frac{\sigma_{3,1}}{6} t^3s^1& + &\frac{\sigma_{3,2}}{12} t^3s^2& + &\frac{\sigma_{3,3}}{36} t^3s^3)^m& \end{array}$$ Or not, I have to rethink this a bit. The expression is gonna contain a large sum with products of 6 terms $\sigma_{1,1}\sigma_{1,1}\sigma_{1,1}\sigma_{1,1}\sigma_{1,1}\sigma_{1,1}$ down to 1 term $\sigma_{3,3}$ Anyways, this is the derivation that is behind the table that you found and posted in the other answer.
null
CC BY-SA 4.0
null
2023-05-07T10:30:34.863
2023-05-07T10:30:34.863
null
null
164061
null
615139
1
null
null
0
40
These are the results obtained for the square gap decomposition using the Oaxaca-Blinder method in Python: ``` Oaxaca-Blinder Three-fold Effects Endowment Effect: -0.02663 Coefficient Effect: 0.19933 Interaction Effect: 0.05553 Gap: 0.22824 Oaxaca-Blinder Two-fold Effects Unexplained Effect: 0.23207 Explained Effect: -0.00383 Gap: 0.22824 ``` What does a negative value for the explained part mean?
How to interpret the results of Oaxaca-Blinder decomposition? [Python]
CC BY-SA 4.0
null
2023-05-07T10:33:00.513
2023-05-07T10:33:00.513
null
null
387409
[ "python", "decomposition", "blinder-oaxaca" ]
615140
2
null
275197
0
null
Models with L1/L2 regularisation (i.e. lasso/ridge) handle these hierarchies naturally. The reason is that they impose a trade-off between reducing the total error and adding or increasing a weight/coefficient: a 'region' weight can reduce error on more data points (i.e. across all of the region's departments) than each individual department weight. Think, for example, of the limiting case of only region-level effects: compare the norm from repeating the same coefficient on every department versus a single coefficient at the region level (and zeros at the department level). Essentially, penalising the weight norm means that the average region effect is placed on the region, and 'significant' deviations from the region effect are placed on the department coefficients. Cross-validating the regularisation term determines how 'significant' a deviation has to be: if you have little data for each department, you are likely to end up with strong regularisation and only learn region-level effects.
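A small sketch of what I mean (simulated data, not your setup; the lasso is free to put the signal on the region dummy, the department dummies, or both):

```
library(glmnet)
set.seed(1)

n          <- 500
region     <- factor(sample(c("R1", "R2"), n, replace = TRUE))
department <- factor(paste0(region, "_", sample(1:5, n, replace = TRUE)))

# true effect: mostly a region effect, plus one department-specific deviation
y <- 2 * (region == "R2") + 0.75 * (department == "R2_3") + rnorm(n)

# region and department dummies entered together
X   <- model.matrix(~ region + department, data = data.frame(region, department))[, -1]
fit <- cv.glmnet(X, y, alpha = 1)     # lasso with cross-validated penalty
coef(fit, s = "lambda.1se")
```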
null
CC BY-SA 4.0
null
2023-05-07T10:48:50.517
2023-05-07T10:48:50.517
null
null
27556
null
615141
1
null
null
0
11
I was wondering whether it is possible for a t-test and a point-biserial correlation to give different results (the t-test shows the groups differ significantly, while the correlation implies that the variable does not increase/decrease by group). If yes, why is that?
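For reference, this is the kind of comparison I mean (toy data, not my real data):

```
set.seed(1)
group <- rep(0:1, each = 30)
y     <- rnorm(60) + 0.4 * group

t.test(y ~ group, var.equal = TRUE)  # two-sample t-test
cor.test(y, group)                   # point-biserial = Pearson correlation with a 0/1 variable
```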
Different results of t test and biserial correlation?
CC BY-SA 4.0
null
2023-05-07T10:51:09.113
2023-05-07T10:51:09.113
null
null
277811
[ "r", "correlation", "t-test", "group-differences", "pearson-r" ]
615142
1
null
null
0
14
In a standard OLS setting, assume you have a dataset consisting of 12 variables $x_1$, ..., $x_{10}$, $y$, and $z$. $y$ is the dependent variable, $x_1$ to $x_{10}$ and $z$ are the regressors, with $z$ also being a dichotomous moderator for all of them, meaning that the effects of $x_1$ to $x_{10}$ may differ between $z=0$ and $z=1$. There are (at least) two procedures for regression: - Regress $y$ on all regressors (incl. $z$) and include interaction terms with $z$, that is: $y = \alpha + \beta_0z + \beta_1x_1 + ...+ \beta_{10}x_{10}+\beta_{11}x_1z+...+\beta_{20}x_{10}z+\varepsilon$. - Split the sample by $z$ into two subsamples for $z=0$ and $z=1$. Then, regress $y$ for both subsamples on the $x$-regressors: $y = \alpha + \beta_1x_1 + ...+ \beta_{10}x_{10}+\varepsilon$. Am I correct in the assumption that these procedures should, in theory, be equivalent? In particular, does the second procedure work? Whereas the first one tests the moderating effect directly, the second one provides the coefficient estimates for both subsamples directly (which I would prefer in my case).
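To make the comparison concrete, this is roughly how I would check it numerically (simulated data with only two regressors for brevity; just a sanity check, not a proof):

```
set.seed(1)
n  <- 500
z  <- rbinom(n, 1, 0.5)
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + x1 - 0.5 * x2 + z * (2 * x1 + x2) + rnorm(n)

# Procedure 1: one model with full interactions
coef(lm(y ~ z * (x1 + x2)))

# Procedure 2: separate models per subsample
coef(lm(y ~ x1 + x2, subset = z == 0))
coef(lm(y ~ x1 + x2, subset = z == 1))
```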
Equivalence of interaction effect and sample split
CC BY-SA 4.0
null
2023-05-07T10:56:04.657
2023-05-07T10:56:04.657
null
null
383321
[ "regression", "least-squares" ]
615143
2
null
614733
2
null
This is such a difficult question to answer because 'data quality checks' is so broad, and even the definition of anomaly detection is not completely clear. Moreover, to give the best suggestions we would need to know a lot more about your use case. However, with all that being said, I have prepared an answer which I hope raises interesting points which you could think about. # Brief description of Anomaly Detection It can be difficult to summarise anomaly detection (note that outlier is often used as a synonym for anomaly). The following summary is a reasonable one: > "Anomaly detection refers to the problem of finding patterns in data that do not conform to expected behavior. These non-conforming patterns are often referred to as anomalies, outliers, discordant observations, exceptions, aberrations, surprises, peculiarities or contaminants in different application domains. Of these, anomalies and outliers are two terms used most commonly in the context of anomaly detection; sometimes interchangeably." (From [Anomaly Detection: A Survey](http://cucis.ece.northwestern.edu/projects/DMS/publications/AnomalyDetection.pdf), Chandola, Banerjee, Kumar) The following pithy description of an outlier, from Hawkins's 1980 Identification of Outliers, may also be helpful to keep in mind: "An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism" (see Hawkins D., Identification of Outliers, Chapman and Hall, 1980.) I'll give some 'textbook' applications of anomaly detection: - Credit card fraud detection: Looking for anomalies (outliers) in credit card transaction data, for example unusual patterns in spending, such as an abnormally large purchase or spending in a new location. An anomaly detection model might detect this transaction and some action may be taken - for example the bank temporarily blocks the payment or contacts the customer. - Fault detection, for instance in manufacturing: We might monitor parameters like temperature, pressure, vibration and product dimensions in a production process. We can establish a baseline from historical data, and significant deviations from expected ranges or patterns can be flagged as anomalies or faults, e.g. sudden temperature spikes or irregular product dimensions. Anomaly detection methods can help us identify these anomalies (so we don't sell defective products) and allow us to take timely action. # Brief description of 'Data Quality' checks Defining data quality is difficult (even [Wikipedia](https://en.wikipedia.org/wiki/Data_quality) agrees!) - it depends a lot on what you are trying to do with the data as to how you would judge the quality. The same data may be high quality for one use case but low for another. Broadly speaking, we might consider data quality to refer to the overall reliability, accuracy, completeness, consistency and relevance of data, ensuring that it is fit for its intended purpose and can be trusted for decision-making and analysis. Examples of things to consider: - How will the data be used? Age categories Child/Adult may be fit for some purposes, but for others we may need more granular ages. - Missing data - are we missing data, and how is the missing data recorded - are rows completely omitted or do they appear with some null value? - Correctness. Is the data correct? Note that the data can be correct, and still be low quality. 
# Your specific problem and the overlap of the two The relevance of anomaly detection techniques to your problem depends on exactly what data quality issues you are trying to find. Without more details it is impossible to answer your question precisely, however I will raise a few points you might like to think about. You mention tables; it is possible to set up data quality tests for your tables (say in SQL) which allow you to check for things such as null values or non-uniqueness of ids ([see dbt tests](https://docs.getdbt.com/docs/build/tests)). You mention statistical models/machine learning, and so I wonder whether what you are really driving at here is detecting anomalies in your data and assuming that these anomalies are symptoms of data quality issues. However, I would caution you to make sure you are familiar with the underlying processes in play. For example, in the fraud detection example above, if you had a column in your table with the user transaction amount, then your model might detect an unusually large value - but this value may well be correct, and so there is no data correctness issue here - the data may also have been delivered to you in near real time, so in many regards this is high quality data. On the other hand, if you were analysing data collected by human researchers on children's heights, and you noticed an unusually large height of 113 metres, then you would probably conclude this is a genuine data quality issue (whereby the researcher has recorded in metres rather than centimetres). Some things to think about: - Can you use simple rules to highlight certain anomalous values? For example values which are physically impossible (e.g. the weight in kilograms of a component cannot be negative). - Do you have some baseline data and a general understanding of what your data should look like? -- You could use something as simple as a z-score to highlight values which are extreme in the sense that they are "far" from the mean value. In some contexts this might make sense but in others it will be meaningless. - How often is data recorded and when will you be running your checks (near real time? In batch at the end of the month?) - You could compare two sets of data to see if there is a statistically significant difference in some statistic. However, even if you find a difference, it does not necessarily mean there is a data quality issue. - If you have time series data you could consider something like comparing the 'distance' of the current value from a rolling mean/median (see Hampel filters). You should also think about what it would mean if you run some rule or model and find there is a potential data quality issue with a piece of data. Will a human review it? Will the data be omitted or will you replace it with some more 'suitable' value (mean? median?)? What is the cost of getting this wrong - for instance, if you fail to detect incorrect data, what would the consequence be for the business decisions? On the other hand, if you wrongly flag data as having some issue, what would the cost of that be (human operational costs?) To summarise, my advice would be to try to understand the underlying process which generates your data and bear this in mind when considering what 'unusual' means and whether unusual data is actually 'incorrect'.
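As a toy illustration of the two simplest ideas above (the global z-score rule and the rolling-median/Hampel idea), here is a sketch on made-up numbers - purely illustrative, not tailored to your data:

```
set.seed(42)
x <- c(rnorm(100, mean = 50, sd = 2), 113, rnorm(100, mean = 50, sd = 2))  # one injected spike

# 1) global z-score rule
z <- (x - mean(x)) / sd(x)
which(abs(z) > 3)

# 2) Hampel-style rule: distance from a rolling median, scaled by a rolling MAD
library(zoo)
roll_med <- rollapply(x, width = 11, FUN = median, fill = NA, align = "center")
roll_mad <- rollapply(x, width = 11, FUN = mad, fill = NA, align = "center")
which(abs(x - roll_med) > 3 * roll_mad)
```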
null
CC BY-SA 4.0
null
2023-05-07T10:57:44.423
2023-05-07T10:57:44.423
null
null
358991
null
615144
1
null
null
1
14
I am faced with the task, using the data of Sentinel-2 channels, to build a classifier based on time series (time series cassification), where the dependent variable is the wheat class (1) or another crop (0). I am using R language. For greater clarity, I will give a small example of data. ``` exampl=structure(list(date = c("19.12.2020", "24.12.2020", "03.01.2021", "08.01.2021", "13.01.2021", "18.01.2021", "23.01.2021", "28.01.2021", "02.02.2021", "07.02.2021", "17.02.2021", "22.02.2021", "27.02.2021", "04.03.2021", "09.03.2021", "14.03.2021", "24.03.2021", "29.03.2021", "03.04.2021", "13.04.2021", "18.04.2021", "28.04.2021", "08.05.2021", "01.10.2020", "06.10.2020", "11.10.2020", "16.10.2020", "21.10.2020", "31.10.2020", "05.11.2020", "10.11.2020", "20.11.2020", "30.11.2020", "05.12.2020", "10.12.2020", "20.12.2020", "25.12.2020", "04.01.2021", "29.01.2021", "18.02.2021", "23.02.2021", "28.02.2021", "05.03.2021", "10.03.2021", "15.03.2021", "20.03.2021", "30.03.2021", "04.04.2021", "09.04.2021", "19.04.2021", "29.04.2021", "09.05.2021", "14.05.2021", "01.10.2020"), red = c(1103L, 1084L, 1504L, 1259L, 1230L, 1393L, 1225L, 1482L, 1386L, 1316L, 1400L, 1418L, 1546L, 1540L, 1644L, 1568L, 1682L, 1828L, 1887L, 1992L, 2024L, 1965L, 1915L, 1600L, 1360L, 1520L, 1360L, 1528L, 1600L, 1634L, 1508L, 1380L, 1548L, 1456L, 1460L, 1732L, 1080L, 1008L, 1068L, 784L, 580L, 920L, 578L, 1016L, 1296L, 1184L, 1374L, 1224L, 1576L, 2094L, 1932L, 1799L, 1738L, 248L), green = c(964L, 940L, 1392L, 1054L, 999L, 1244L, 964L, 1350L, 1043L, 1044L, 1064L, 1045L, 1170L, 1148L, 1212L, 1176L, 1290L, 1404L, 1410L, 1485L, 1501L, 1496L, 1489L, 1392L, 1056L, 1248L, 1120L, 1160L, 1296L, 1426L, 1388L, 1172L, 1372L, 1352L, 1252L, 1524L, 936L, 1008L, 1092L, 640L, 660L, 884L, 602L, 808L, 1256L, 912L, 1162L, 952L, 1208L, 1598L, 1560L, 1495L, 1474L, 392L), blue = c(737L, 668L, 864L, 838L, 672L, 1060L, 628L, 1166L, 678L, 772L, 792L, 639L, 802L, 756L, 844L, 776L, 930L, 1044L, 986L, 1084L, 1084L, 1153L, 1117L, 960L, 712L, 816L, 784L, 664L, 960L, 1226L, 1204L, 932L, 1068L, 1200L, 1044L, 1292L, 728L, 832L, 860L, 272L, 420L, 644L, 362L, 408L, 1072L, 480L, 826L, 680L, 840L, 1166L, 1268L, 1275L, 1138L, 152L), nir = c(2488L, 2262L, 2120L, 2516L, 2504L, 2566L, 2414L, 2556L, 2568L, 2621L, 2616L, 2584L, 2744L, 2743L, 2840L, 2772L, 2676L, 2865L, 2952L, 3077L, 3051L, 3005L, 2890L, 3096L, 2696L, 2904L, 2680L, 2960L, 2392L, 2120L, 2020L, 2232L, 2088L, 1928L, 2072L, 2488L, 1912L, 1928L, 2296L, 2472L, 2696L, 2792L, 3000L, 2568L, 2740L, 2504L, 2456L, 2296L, 2584L, 2976L, 2624L, 2568L, 2696L, 2632L), swir = c(2976L, 2885L, 2720L, 3216L, 3104L, 3104L, 3020L, 3120L, 3104L, 3337L, 3488L, 3232L, 3603L, 3552L, 3808L, 3614L, 3760L, 3856L, 3859L, 3982L, 4172L, 4076L, 4240L, 3040L, 3104L, 3040L, 3040L, 3232L, 2848L, 2656L, 2256L, 2656L, 2656L, 2464L, 2592L, 2464L, 2080L, 2144L, 2080L, 2016L, 1744L, 2080L, 1888L, 2144L, 2128L, 2256L, 2400L, 2464L, 2848L, 3520L, 3280L, 3304L, 3168L, 1376L), B05 = c(1376L, 1440L, 1712L, 1568L, 1504L, 1712L, 1554L, 1773L, 1632L, 1632L, 1696L, 1632L, 1862L, 1888L, 2000L, 1923L, 2067L, 2189L, 2160L, 2322L, 2347L, 2327L, 2287L, 2016L, 1696L, 1952L, 1824L, 1952L, 1888L, 1824L, 1712L, 1696L, 1760L, 1648L, 1696L, 1952L, 1376L, 1376L, 1424L, 1184L, 1056L, 1296L, 1056L, 1376L, 1632L, 1568L, 1648L, 1632L, 1888L, 2520L, 2216L, 2080L, 2128L, 608L), B06 = c(1952L, 1952L, 1968L, 2050L, 1952L, 2208L, 2062L, 2224L, 2075L, 2081L, 2208L, 2208L, 2250L, 2283L, 2462L, 2336L, 2318L, 2448L, 2470L, 2574L, 2600L, 2551L, 2519L, 2656L, 2272L, 2512L, 2400L, 
2528L, 2208L, 2016L, 1872L, 1952L, 1952L, 1824L, 1888L, 2160L, 1696L, 1888L, 2016L, 1952L, 2096L, 2272L, 2208L, 2144L, 2352L, 2144L, 2144L, 2016L, 2208L, 2688L, 2416L, 2352L, 2464L, 2016L), B07 = c(2185L, 2123L, 2080L, 2314L, 2272L, 2400L, 2144L, 2452L, 2208L, 2395L, 2400L, 2330L, 2466L, 2400L, 2644L, 2541L, 2525L, 2736L, 2732L, 2838L, 2860L, 2762L, 2835L, 2976L, 2528L, 2784L, 2528L, 2784L, 2272L, 2080L, 2016L, 2016L, 2016L, 2016L, 2016L, 2320L, 1824L, 1952L, 2128L, 2272L, 2576L, 2656L, 2848L, 2464L, 2704L, 2464L, 2320L, 2144L, 2336L, 2912L, 2528L, 2480L, 2736L, 2592L), B08A = c(2455L, 2336L, 2272L, 2540L, 2524L, 2592L, 2388L, 2604L, 2588L, 2648L, 2699L, 2656L, 2833L, 2720L, 2953L, 2902L, 2800L, 2981L, 2992L, 3147L, 3167L, 3096L, 3108L, 3232L, 2848L, 2976L, 2848L, 2976L, 2464L, 2208L, 2080L, 2256L, 2144L, 2016L, 2128L, 2464L, 1952L, 2080L, 2192L, 2464L, 2656L, 2784L, 2976L, 2656L, 2784L, 2528L, 2464L, 2400L, 2592L, 3104L, 2784L, 2720L, 2784L, 2720L), B12 = c(2336L, 2144L, 2096L, 2592L, 2392L, 2528L, 2336L, 2576L, 2448L, 2656L, 2845L, 2512L, 2947L, 2842L, 3087L, 2784L, 2976L, 3232L, 3104L, 3253L, 3476L, 3412L, 3612L, 2400L, 2400L, 2400L, 2528L, 2496L, 2400L, 2336L, 1744L, 2144L, 2336L, 2224L, 2272L, 2144L, 1760L, 1616L, 1632L, 1552L, 1312L, 1568L, 1376L, 1696L, 1680L, 1760L, 1888L, 1888L, 2336L, 2976L, 2856L, 2784L, 2664L, 800L), wheat = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L)), class = "data.frame", row.names = c(NA, -54L)) ``` I am interested in the methodological aspect of solving the problem. What is the most correct time series classification algorithm i should use in the context of geodata for field crops. The matter is that it is necessary to use all independent variables (covariates) for forecasting of a class of crops. Does anyone have experience with this kind of data. Or is there somewhere an illustrative example of an algorithm in R where shown how best to make a classifier? Thank you for your help.
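P.S. For context, the simplest baseline I have been considering (I am not sure it treats the temporal structure properly) is to collapse each field's time series into summary features and fit an ordinary classifier. A rough sketch, assuming the full data has a field identifier column (here called `field_id`, which the excerpt above does not include):

```
library(dplyr)
library(randomForest)

# full_data: like `exampl` above but for all fields, with an added field_id column (assumed)
feats <- full_data %>%
  group_by(field_id, wheat) %>%
  summarize(across(c(red, green, blue, nir, swir, B05, B06, B07, B08A, B12),
                   list(mean = mean, sd = sd, min = min, max = max)),
            .groups = "drop") %>%
  mutate(wheat = factor(wheat))

fit <- randomForest(wheat ~ . - field_id, data = feats)
fit
```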
Is there an Optimal time series classification algorithm for classifying crops in R?
CC BY-SA 4.0
null
2023-05-07T11:09:18.800
2023-05-07T11:18:08.413
2023-05-07T11:18:08.413
1352
387408
[ "r", "time-series", "classification" ]
615145
2
null
614733
0
null
In general there may be more aspects of data quality checking than anomaly detection, as elaborated in the excellent answer by 8e9yQBKVlIDwoIVegfkJ. Also note that an anomaly is not necessarily a data quality issue. A data point/observation may be perfectly fine but still be an outlier or an "anomaly" because something special happened in the underlying process that is correctly identified by a good quality observation. It is important that anomalies are not necessarily wrong. They should be taken as indicating something of potential interest, but this may or may not be an issue with data quality. As written by 8e9yQBKVlIDwoIVegfkJ, this depends however on the exact aim of data analysis. Even a correct good quality observation that is actually an outlier can be a problem for certain statistical analyses, and may cause misleading results. As such it can still be seen as a data quality issue, but I think that this is a misnomer, and the problem really is potential lack of robustness of the planned analysis. That said, anomaly detection is very often a major part of data quality checking, highlighting potential issues with certain observations (optimally then it can be checked what led to the anomaly and if it's really a data quality issue). An alternative form of data quality checking that could be of interest is that you may have a good model of the existing understanding of the data generating process that is not driven by the data you want to check. If data deviate from this systematically, this might point to a data quality issue, and data might be systematically biased, even if none of the individual observations looks an anomaly compared to the others. (Alternatively it may point to flaws in the "existing understanding".)
null
CC BY-SA 4.0
null
2023-05-07T11:11:45.240
2023-05-07T11:11:45.240
null
null
247165
null
615146
1
null
null
0
11
Suppose that I have data with dimension $(N,H,F)$, where $N$ represents the number of different datasets, $H$ is the history size and $F$ is the input size. How would you split it into a train-validation-test split? Here are my thoughts, with the relevant problems for each: - based on the dataset index (syntaxed as (dataset_range, :, :)): we take the first $[0,1,...,T]$ datasets as train, $[T+1,...,T+V]$ as validation, and $[T+V+1,...,N-1]$ as test. In theory, this is good as my goal is to collect a new dataset and test the model's performance on it. One big problem I see is that when extracting training data statistics for normalization, this should ideally be done per dataset. It is then impossible to normalize the validation dataset (for example), as the splits have different shapes (namely, I cannot normalize a validation set shaped as $(V,H,F)$ using mean and std statistics shaped as $(T,F)$) - based on the history index (syntaxed as (:, history_range, :)): each train-val-test split contains all datasets but for different ranges of history. Namely, we consider the timestamps $[0,1,...,T]$ as train, $[T+1,...,T+V]$ as validation and $[T+V+1,...,H-1]$ as test. This seems like a problem as the model is trained to provide future values for datasets already seen, whereas my goal is to provide it with an entirely new dataset. PS - I currently use the first approach, but with global normalization statistics, which is quite bad as the datasets are from possibly different underlying distributions
multiple datasets train-val-test split for time series
CC BY-SA 4.0
null
2023-05-07T11:20:27.900
2023-05-07T11:20:27.900
null
null
365891
[ "time-series", "dataset", "validation", "train-test-split" ]
615147
1
null
null
2
25
I see that I can get polychoric correlation coefficients using package [girth](https://pypi.org/project/girth/). I have following data: ``` y1 = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, ] y2 = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, ] ``` I use the package to get polychoric correlation matrix: ``` data = np.array([y1, y2]) polychoric_corr = gcm.polychoric_correlation(data, start_val=0, stop_val=1) print(polychoric_corr) ``` The output is: ``` [[1. 0.56250022] [0.56250022 1. ]] ``` Hence, polychoric correlation coefficient is 0.56. How can I calculate P value (significance) for this? Thanks for your insight.
How to calculate P value in polychoric correlation
CC BY-SA 4.0
null
2023-05-07T11:41:03.077
2023-05-07T12:31:02.100
2023-05-07T12:31:02.100
56211
56211
[ "correlation", "p-value", "polychoric" ]
615148
1
null
null
0
29
Is it possible to calculate the weight vector analytically for a linear classifier? Just like we can do it for linear regression, where $w_* = (X^TX)^{-1}X^Ty$ is the vector of weights.
Is there an analytic solution for linear classifier?
CC BY-SA 4.0
null
2023-05-07T11:49:53.413
2023-05-07T13:26:04.663
null
null
387401
[ "classification", "linear-model" ]
615150
2
null
615148
1
null
There's no “the” linear classifier, but there are different linear classifiers. [Wikipedia](https://en.wikipedia.org/wiki/Linear_classifier) mentions examples of a few of them. [Linear probability model](https://en.wikipedia.org/wiki/Linear_probability_model) is one of them and the simplest one. It is just linear regression fitted to data with binary labels. Other models like the perceptron and logistic regression don't have closed-form solutions, but we have efficient algorithms for fitting them. (LDA is another exception: its weight vector can be written in closed form from the class means and the pooled covariance matrix.)
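For instance, a small sketch of the linear probability model's closed form in R (ordinary least squares on the 0/1 labels):

```
set.seed(1)
x <- matrix(rnorm(200), ncol = 2)
y <- as.integer(x[, 1] + x[, 2] + rnorm(100) > 0)   # binary labels

X        <- cbind(1, x)                             # add intercept column
w_closed <- solve(t(X) %*% X) %*% t(X) %*% y        # (X'X)^{-1} X'y
w_lm     <- coef(lm(y ~ x))                         # same numbers via lm()
```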
null
CC BY-SA 4.0
null
2023-05-07T13:26:04.663
2023-05-07T13:26:04.663
null
null
35989
null
615151
1
null
null
1
15
I would like to examine the relationship of a variable Y with a social construct. There are three tests (A, B, C) that more or less characterise different aspects of the construct (e.g., A = laboratory test versus B = questionnaire about everyday life, ...). Also, the biases that affect one test are partially resolved by the other test, and the other way around (e.g., A is biased because of reason x but not because of reason y; B is biased because of reason y but not because of reason x). Because of this situation I thought that it would make most sense to include all three predictors in my regression model. However, I am not sure what it means exactly when one of the predictors is significant. Does it mean that this is "the true construct with (all) influences of errors removed"? If one of the predictors only becomes significant after I have included the other one (let's say, A alone is only significant in the presence of B), would one also say that it is because of the removed error variance? I have also tested how the tests are related; B and C are significantly correlated. How do I handle this?
Inclusion of three predictors that explain each other's variance
CC BY-SA 4.0
null
2023-05-07T13:29:48.960
2023-05-07T13:29:48.960
null
null
277811
[ "regression", "multiple-regression", "multicollinearity", "predictor", "confounding" ]
615152
1
null
null
1
23
Here is the data: when the numeric variable is below 100 I can get its exact value, but when it is above 100 I can only group it into >100, >200 and >300. The explanatory variable is a categorical variable with only 2 groups. How can I analyse this kind of data, given that it exhibits both continuous and discontinuous (grouped) characteristics? Should I set dummies for each band of the numeric variable - e.g. when it is 1 to 99 then the code is 1, when it is 100 to 199 then the code is 2, and so on - and then do a statistical test?
How to analyse data with both continuous and discontinuous characteristics
CC BY-SA 4.0
null
2023-05-07T14:12:18.837
2023-05-07T14:12:18.837
null
null
387418
[ "hypothesis-testing" ]
615153
1
null
null
2
28
In 'big data' settings where the number of samples $n$ may be very large (for fixed number of features), is bagging less or more effective at reducing variance? I heard the claim that it is less effective but intuitively it should be more effective, since the chance of overlap between bootstrap samples must be smaller, and so correlation between bootstrap samples is lower.
Is bagging less useful in 'big data' settings?
CC BY-SA 4.0
null
2023-05-07T14:35:34.590
2023-05-07T15:35:14.797
2023-05-07T15:35:14.797
53690
55946
[ "machine-learning", "bootstrap", "large-data", "ensemble-learning", "bagging" ]
615154
1
null
null
0
7
I want to run an LMM with a 3-level fixed factor "block_condition" (levels: A, B, C) and a 2-level factor block_number (1, 2). SubjectID was also added as a random factor to the model. Now the problem is that for block_conditions A and B I have two repetitions/levels (A1, A2, and B1, B2), but in condition C I have only 1 repetition/level (C1), and I don't know how to run an LMM when one condition has one level fewer. I try to fit the following LMM: lmer(DV1 ~ block_number * block_condition + (1|sub), data = dfT1). Is there a way to run such an LMM? Or is there a legitimate way to complete/replicate the missing information in condition C - for the sake of the specific comparison? Thanks in advance, G
How to use Linear Mixed Models (LMMs) to compare a fixed factor with 3 levels that differ in size
CC BY-SA 4.0
null
2023-05-07T14:35:37.383
2023-05-07T14:35:37.383
null
null
387421
[ "lme4-nlme", "glmm" ]
615155
1
null
null
1
9
I am working on my school project, which is about studying household expenditure patterns using a regression approach. Say that I have the following linear OLS regression model, where $y$ is household expenditure: $$ y = \beta_0 + \sum_{k=1}^{K} \beta_k X_k + \varepsilon $$ where $y$ is the dependent variable, $X$ are the independent variables, $\beta_0$ is the intercept term, $\beta_1,...,\beta_K$ are the regression coefficients for the independent variables and $\varepsilon$ is the error term. And assume I have further information about household expenditure, namely the expenditure pattern $(y_1,...,y_K)$, such as food, transportation, and health, as denoted by the equation below: $$ y = y_1+y_2+...+y_K $$ I have coded a Python OLS regression and also a Random Forest regression using household expenditure as $y$, the dependent variable, and I obtained good r2 scores, about 0.84 for both models. Then, I started to further analyse the expenditure pattern by developing multivariate OLS and Random Forest models; however, the results are very bad for both models. Below is how I build the model (the linear one). ``` # target_list is the list of expenditure pattern y = data[target_list] x = data.drop(columns = target_list, axis=1) x_train, x_test, y_train, y_test = train_test_split(x, y,train_size = 0.8, test_size = 0.2, random_state = 42) y_test = np.array(y_test) lm = LinearRegression() wrapper = MultiOutputRegressor(lm) wrapper.fit(x_train, y_train) y_pred = wrapper.predict(x_test) r2 = r2_score(y_test, y_pred) mse = mean_squared_error(y_test, y_pred) print("r2: ",r2) -------------------------------- r2: 0.12448338493025608 ``` I have searched some papers about household expenditure patterns, but I cannot find a paper that uses the multivariate regression approach. Does anyone have similar studies or suggestions about my topic? Should I give up the multivariate regression approach for analysing the expenditure pattern? I appreciate any guidance.
estimate household expenditure pattern using a multivariate regression model, or other approach?
CC BY-SA 4.0
null
2023-05-07T14:45:20.363
2023-05-07T14:45:20.363
null
null
380642
[ "regression", "multiple-regression", "python", "random-forest", "multivariate-regression" ]
615156
1
null
null
0
15
What test would fit the following research question? Data: measured in 5 consecutive years, within-subjects design. Dependent variable: A, which is categorical and ordinal. Independent variables: B, C, D and E, where D and E are categorical and ordinal and B and C are both numerical. I would like to test whether a change in B or a change in C over two or more time measurements is more responsible for a change in A. Hypothesis: when B increases by x, this gives a stronger rise in A than when C increases by x - while controlling for D and E and keeping these variables constant so that they don't create noise. First I thought about a logistic regression, where the beta coefficients could show whether the effect of B or C on A is stronger. However, I would like to compare 2 years with each other. So how can this be done?
Statistical test for multiple time measures, categorical dependent variable, percentages
CC BY-SA 4.0
null
2023-05-07T14:53:46.890
2023-05-07T14:55:35.287
2023-05-07T14:55:35.287
387420
387420
[ "time-series", "categorical-data", "explanatory-models" ]
615157
1
null
null
0
6
I heard that a sufficient amount of variance/heterogeneity is needed in order to get reasonable results for a correlation test. Why is that, and what solutions exist when the variance is too low? Is splitting the observations by a cut-off reasonable, e.g. at the median? (This question also relates to floor and ceiling effects.)
Low variance and ceiling effect - split by median?
CC BY-SA 4.0
null
2023-05-07T15:08:25.213
2023-05-07T15:08:25.213
null
null
277811
[ "correlation", "variance", "group-differences", "heterogeneity", "range" ]
615159
2
null
611723
0
null
Given the large number of comments, I thought it might be better to place some of them in an answer. What I understand from the comments following the question is that the idea is about performing some form of MCMC sampling, but with a kernel that adapts after each new sample. And the motivation is to ensure that the sample will satisfy a certain condition. ### Simple example A simple example would be the sampling of a distribution symmetric around zero (e.g. a standard normal distribution) but with the constraint that the sample mean needs to be zero (typically, if the distribution has zero mean, a sample from it does not need to have zero mean). If we have a kernel that, every time the sample mean is unequal to zero, proposes a sample whose value makes the sample mean equal to zero, then we ensure that this property is fulfilled (at least at every odd step). The set of samples that can be sampled will not be iid variables and will not be the same as sampling the target distribution without the constraint. However, if the target distribution is symmetric around zero, then this algorithm will generate a sample whose empirical distribution approaches the true distribution. Below is a code example where the proposed sample is based on the complete history $$x^\star|x_0,x_1,\dots,x_t \sim \begin{cases} -\sum_{i=0}^t x_i & \qquad \text{if $\sum_{i=0}^t x_i \neq 0$} \\ N(x_t,0.04) & \qquad \text{if $\sum_{i=0}^t x_i = 0$} \\ \end{cases}$$ Below is an example of the histogram of a sample of size 50000 when the target distribution is a standard normal. [](https://i.stack.imgur.com/Hj7zO.png) This sample is not a typical sample from a normal distribution. The sample will have zero mean with probability 1, whereas a sample from a normal distribution will have zero mean with probability 0 (and also the samples will be relatively symmetric). However, the empirical distribution will approach the distribution function of the target distribution. So in that sense this sampling 'works'. For different, more complex cases it will depend. For example, when we use the method above with a non-symmetric distribution, then it stops 'working'. ``` set.seed(1) newsample = function(old_sample, LikelihoodFunction) { L = length(old_sample) m = sum(old_sample) ### if the current sample does not have zero mean then the suggestion will always be a sample that makes the mean zero if (m!=0) { suggest = -m } else { suggest = rnorm(1,old_sample[L],0.2) } ### compare the likelihood and base the next sample on it u = runif(1) l1 = LikelihoodFunction(suggest) l2 = LikelihoodFunction(old_sample[L]) if (u<l1/l2) { return(suggest) } else { return(old_sample[L]) } } ### start the mcmc in the point 0 sample = c(0) ### generate 50000 samples for (i in 1:50000) { sample = c(sample, newsample(sample, LikelihoodFunction = function(x) {dnorm(x)})) } ### plot histogram with curve for target density as comparison hist(sample, seq(-5,5,0.1), xlim =c(-3,3), freq = 0) xs = seq(-4,4,0.01) lines(xs, dnorm(xs)) ```
null
CC BY-SA 4.0
null
2023-05-07T15:23:29.057
2023-05-07T15:23:29.057
null
null
164061
null
615160
1
null
null
0
15
I am using [girth](https://pypi.org/project/girth/) package which uses polychoric correlation and then applies factor analysis. I am using the example code given on above webpage under the subheading "Polychoric Correlation Estimation": ``` import girth.synthetic as gsyn import girth.factoranalysis as gfa import girth.common as gcm discrimination = np.random.uniform(-2, 2, (20, 2)) thetas = np.random.randn(2, 1000) difficulty = np.linspace(-1.5, 1, 20) syn_data = gsyn.create_synthetic_irt_dichotomous(difficulty, discrimination, thetas) polychoric_corr = gcm.polychoric_correlation(syn_data, start_val=0, stop_val=1) results_fa = gfa.maximum_likelihood_factor_analysis(polychoric_corr, 2) ``` The data (20 columns and 1000 rows) is as follows: ``` 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 0 0 0 0 0 0 0 1 0 1 1 1 1 0 0 1 0 1 1 0 1 1 0 0 0 0 1 1 1 0 0 1 0 1 1 0 1 0 1 0 1 1 2 0 1 1 0 0 0 0 0 0 1 0 1 0 1 1 0 1 1 1 1 3 0 0 1 0 1 0 0 1 0 1 0 1 0 1 1 1 1 1 1 0 4 0 0 0 1 0 1 1 0 0 1 0 0 1 1 1 0 1 1 1 0 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. 995 1 1 0 0 0 1 1 0 1 0 1 0 1 0 0 1 0 0 1 0 996 0 1 1 1 0 0 0 1 0 1 0 1 1 1 1 0 0 1 1 1 997 1 0 0 1 1 1 1 0 1 0 0 0 0 1 0 1 1 1 0 1 998 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 1 0 1 1 1 999 0 0 1 1 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 1 ``` The correlation matrix created is as follows: ``` 0 1 2 3 4 5 6 ... 13 14 15 16 17 18 19 0 1.000000 -0.372014 -0.242639 -0.450080 0.130449 0.453910 0.016924 ... -0.340188 -0.507171 0.482642 -0.018359 -0.285936 -0.422905 0.226303 1 -0.372014 1.000000 0.149975 0.333496 -0.195981 -0.301749 0.010126 ... 0.248877 0.353242 -0.360966 -0.032249 0.300746 0.250678 -0.176474 2 -0.242639 0.149975 1.000000 0.020036 -0.187157 -0.149063 -0.502865 ... -0.129884 0.294766 0.026245 -0.240363 0.511740 0.374489 -0.117930 3 -0.450080 0.333496 0.020036 1.000000 -0.043634 -0.372508 0.201392 ... 0.328995 0.395505 -0.534622 0.075794 0.159446 0.353241 -0.218804 4 0.130449 -0.195981 -0.187157 -0.043634 1.000000 0.171968 0.192930 ... 0.123934 -0.156043 -0.011919 0.139299 -0.245182 -0.232658 0.080830 5 0.453910 -0.301749 -0.149063 -0.372508 0.171968 1.000000 0.022395 ... -0.276296 -0.418714 0.473535 -0.014285 -0.327602 -0.396633 0.226920 6 0.016924 0.010126 -0.502865 0.201392 0.192930 0.022395 1.000000 ... 0.344361 -0.125337 -0.234694 0.301752 -0.356692 -0.280085 -0.069264 7 -0.037914 0.011994 0.507751 -0.162141 -0.275842 -0.081360 -0.574436 ... -0.311638 0.158223 0.177512 -0.366945 0.529308 0.364736 -0.033857 8 0.562803 -0.378088 -0.071710 -0.498235 0.053983 0.418691 -0.184258 ... -0.443017 -0.400253 0.601103 -0.105491 -0.121025 -0.337324 0.243227 9 -0.529324 0.384604 0.154708 0.516636 -0.123017 -0.397763 0.155550 ... 0.321337 0.401681 -0.478831 0.009805 0.321941 0.389378 -0.307586 10 0.361794 -0.220212 0.048604 -0.334888 0.000692 0.273246 -0.304177 ... -0.394676 -0.211652 0.400769 -0.186524 -0.049662 -0.181061 0.131394 11 -0.518512 0.239324 0.148274 0.309680 -0.025221 -0.344541 0.025191 ... 0.282115 0.429718 -0.422606 -0.010329 0.232012 0.388386 -0.213912 12 0.024234 0.021528 0.358700 -0.120740 -0.177754 -0.043941 -0.356709 ... -0.237652 0.057160 0.188560 -0.237153 0.263305 0.191856 0.007850 13 -0.340188 0.248877 -0.129884 0.328995 0.123934 -0.276296 0.344361 ... 1.000000 0.210238 -0.519403 0.170950 -0.051980 0.103658 -0.209891 14 -0.507171 0.353242 0.294766 0.395505 -0.156043 -0.418714 -0.125337 ... 0.210238 1.000000 -0.429463 -0.028562 0.372955 0.461054 -0.194371 15 0.482642 -0.360966 0.026245 -0.534622 -0.011919 0.473535 -0.234694 ... 
-0.519403 -0.429463 1.000000 -0.071710 -0.240631 -0.385008 0.242634 16 -0.018359 -0.032249 -0.240363 0.075794 0.139299 -0.014285 0.301752 ... 0.170950 -0.028562 -0.071710 1.000000 -0.207235 -0.197238 -0.070399 17 -0.285936 0.300746 0.511740 0.159446 -0.245182 -0.327602 -0.356692 ... -0.051980 0.372955 -0.240631 -0.207235 1.000000 0.459764 -0.125293 18 -0.422905 0.250678 0.374489 0.353241 -0.232658 -0.396633 -0.280085 ... 0.103658 0.461054 -0.385008 -0.197238 0.459764 1.000000 -0.145590 19 0.226303 -0.176474 -0.117930 -0.218804 0.080830 0.226920 -0.069264 ... -0.209891 -0.194371 0.242634 -0.070399 -0.125293 -0.145590 1.000000 ``` The results of factor analysis are as follows: ``` (array([[ 0.0419671 , -0.73520752], [-0.03984031, 0.50963606], [-0.64915359, 0.24317354], [ 0.1862389 , 0.64211114], [ 0.31887008, -0.15306844], [ 0.07858975, -0.61287159], [ 0.73044949, 0.07450868], [-0.80897573, 0.02504498], [-0.21818786, -0.73632751], [ 0.04928449, 0.70744866], [-0.29856161, -0.49485133], [-0.02069669, 0.58686592], [-0.52482931, -0.04802906], [ 0.41904667, 0.48510999], [-0.20016447, 0.63688427], [-0.2389381 , -0.7398583 ], [ 0.41800295, 0.01914967], [-0.58684168, 0.42456658], [-0.40758167, 0.58997834], [-0.01106376, -0.33936853]]), array([3.12844677, 5.14338998]), array([0.45770867, 0.73868383, 0.51946626, 0.55300836, 0.87489192, 0.61821207, 0.46089202, 0.34493097, 0.41021586, 0.49708743, 0.66598313, 0.65516004, 0.72224741, 0.58906818, 0.55431261, 0.39551828, 0.82490682, 0.47536005, 0.48580274, 0.88470659])) ``` There are 3 arrays in the output. What do these 3 arrays represent? How does one interpret this output? Thanks for your insight.
How to interpret factor analysis output here
CC BY-SA 4.0
null
2023-05-07T15:24:48.370
2023-05-07T15:24:48.370
null
null
56211
[ "factor-analysis", "item-response-theory" ]
615163
1
615251
null
0
31
Maybe someone can help me with my data. I analyse how macroeconomic indicators affect a stock index. For this analysis I prefer a VAR model. In my case the data for all variables are non-stationary - I have checked this with plots and also with the ADF test. I also ran a unit root test and it says that all variables have a unit root. After that I decided to apply first-order differencing. The ADF test then indicates that all variables are stationary, because the p-value is less than 0.05, but the unit root test shows that some variables still have a unit root. My question is: should I also apply a cointegration test in this case, after finding that the differenced data still has unit roots? Or should it be applied before making changes to the data such as differencing? Basically, across a lot of sources I don't understand when I should use a unit root test and when a cointegration test when I want to create a VAR model. Thanks in advance!
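P.S. For reference, this is roughly the sequence of commands I have been running (the series name `y` and the deterministic-term/lag choices here are placeholders, not my exact settings):

```
library(tseries)
library(urca)

adf.test(y)                    # levels: fails to reject, so non-stationary
adf.test(diff(y))              # first differences: rejects, so stationary
summary(ur.df(diff(y), type = "drift", selectlags = "AIC"))  # second unit-root check on the differences
```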
Unit root test and cointegration
CC BY-SA 4.0
null
2023-05-07T16:38:39.547
2023-05-08T16:54:17.453
null
null
377935
[ "time-series", "vector-autoregression", "cointegration", "unit-root", "vector-error-correction-model" ]
615164
2
null
275220
0
null
The answer of @Danica provides several approaches for obtaining the distribution of $aX + bY$. This expression typically arises as the quadratic form in normal random variables. For completeness, I would like to add some additional approaches for computing the distribution of $aX + bY$ numerically, which could then be used to calculate the expected square root numerically. Imhof (1961) ([https://www.jstor.org/stable/2332763](https://www.jstor.org/stable/2332763)) provides an exact numerical method, for which there is an R package available [here](https://rdrr.io/cran/CompQuadForm/man/imhof.html). Bodenham and Adams (2016) ([https://link.springer.com/article/10.1007/s11222-015-9583-4](https://link.springer.com/article/10.1007/s11222-015-9583-4)) evaluate the accuracy of 6 different approximations, including the Satterthwaite-Welch approximation. These are less accurate than Imhof's method, but generally considerably less computationally intensive; e.g. they found that the Satterthwaite-Welch approximation is about 50 times faster than Imhof's method. There is an associated Python package called momentchi2 available [here](https://pypi.org/project/momentchi2/). In my experience Imhof's method is sufficiently fast for 10 to 20 terms, even when I use it in Monte Carlo simulations. Also see @Danica's answer [here](https://stats.stackexchange.com/questions/67533/sum-of-noncentral-chi-square-random-variables/96953#96953), which refers to an approximate approach by Bausch (2013) ([http://arxiv.org/pdf/1208.2691.pdf](http://arxiv.org/pdf/1208.2691.pdf)) suitable when the number of terms is large.
null
CC BY-SA 4.0
null
2023-05-07T16:47:25.850
2023-05-07T16:47:25.850
null
null
325940
null
615165
2
null
615040
1
null
Olkin & Tate (1961) presented a general location model for mixed continuous and categorical variables. The categorical variables are cross-tabulated, and an unstructured approach is to put a multinomial distribution over all the table's cells. Then, conditional on each cell in the table, the continuous variables are distributed as multivariate normal. You can also structure the model, e.g., with log-linear restrictions among the categorical items, a diagonal covariance matrix for the conditional multivariate normal distributions, or a pooled, common multivariate normal distribution across all or parts of the contingency table.
null
CC BY-SA 4.0
null
2023-05-07T17:57:37.670
2023-05-07T17:57:37.670
null
null
227075
null