Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
617534 | 1 | null | null | 0 | 19 | I want to analyze my data measuring stress and anxiety at two different time points:
time point 1 before meeting dogs, and time point 2 after meeting dogs.
I would then compare the data from the two time points with each other.
What SPSS analysis should I use for this?
| What SPSS analysis to use to compare stress and anxiety in two different time points? | CC BY-SA 4.0 | null | 2023-06-01T11:11:21.407 | 2023-06-01T12:48:37.800 | 2023-06-01T12:48:37.800 | 56940 | 389334 | [
"repeated-measures",
"spss",
"psychology"
] |
617535 | 1 | null | null | 0 | 12 | I have a complicated model and I want to fit it to my data. I used NonlinearModelFit. The fit quality is very poor (the P-value is infinitesimal) and it comes with a warning that I don't know how to avoid:
FittedModel::constr: The property values {ParameterTable} assume an unconstrained model. The results for these properties may not be valid, particularly if the fitted parameters are near a constraint boundary.
```
data2 = {{0.26, 0.002}, {2.61, 0.0011}, {6.39, 0.0029}, {6,
0.003575}, {19, 0.002405}, {63, 0.004875}, {13, 0.007995}, {87,
0.002665}, {122, 0.00364}, {31, 0.00546}, {361, 0.007475}, {491,
0.007085}};
dataerror2 = {0.0008, 0.0004, 0.0005, 0.001235, 0.00104, 0.00078, 0.00364, 0.000455, 0.00039, 0.001495, 0.00104, 0.000975};
Ieff = 83.2 ;
lambdamin = LETmin/Ieff*1000 ;
LETmin = 0.26;
Np = 0.0811;
Ns = 0.080;
Nt = 0.1611;
lambdaLET[let_] = lambdamin*let/LETmin;
NpCumulative[k_, r0_, lambda_] = Np*(1. - CDF[PoissonDistribution[r0*lambda], k])/(r0*lambda);
NsCumulative[k_, r0_, lambda_] = Ns*(1. - CDF[PoissonDistribution[r0*lambdamin], k])/(r0*lambdamin);
NtCumulative[k_, r0_, lambda_] = NpCumulative[k, r0, lambda] + NsCumulative[k, r0, lambda];
datalist2 = Join[data2, List /@ dataerror2, 2];
Data2 = ListLogLogPlot[{#1, Around[#2, #3]} & @@@ datalist2,Axes -> False, PlotRange -> {{0.2, 1000}, {.0001, 0.5}}, Frame -> True, ImageSize -> 500, PlotStyle -> {Red, PointSize[0.008]},FrameLabel -> {Style["LET", 14], Style["rate ", 14]}, PlotLabel -> Style[" rate", 18]];
datafittD2 = NonlinearModelFit[data2, {Eff2*NtCumulative[1, r0, lambdaLET[let]], {0.001 < r0 < 0.1}, {0.1 < Eff2 < 0.9}}, {{r0, 0.003}, {Eff2, 0.6}}, {let}, Weights -> 1/dataerror2^2, VarianceEstimatorFunction -> (1 &)];
datafittD2plot = LogLogPlot[datafittD2[let], {let, 0.2, 1000}];
datafittD2["BestFitParameters"];
datafittD2["ParameterTable"]
DOF2 = Length[dataerror2] - 2
datachisquare2 =Sum[(datafittD2["FitResiduals"][[i]]/dataerror2[[i]])^2, {i, 1, Length[dataerror2]}]
datafittDplot2 = Show[Data2, datafittD2plot, Frame -> True]
```
| Nonlinear model fit using Mathematica | CC BY-SA 4.0 | null | 2023-06-01T11:26:40.323 | 2023-06-01T11:34:54.910 | 2023-06-01T11:34:54.910 | 110833 | 389335 | [
"predictive-models",
"data-visualization",
"nonlinear-regression",
"lognormal-distribution",
"logarithm"
] |
617536 | 1 | null | null | 0 | 25 | [](https://i.stack.imgur.com/uArPK.jpg)
As the number of values sampled from the uniform distribution increases, the distribution of the mean tends towards a Gaussian. Why is that?
| Uniform distribution and Gaussian distribution | CC BY-SA 4.0 | null | 2023-06-01T11:53:15.040 | 2023-06-01T12:00:16.630 | 2023-06-01T12:00:16.630 | 388783 | 388783 | [
"normal-distribution",
"uniform-distribution"
] |
617537 | 2 | null | 616899 | 1 | null | I'd suggest giving the p value and CI, but complement them with one additional number, an estimate of the false positive risk. If it's sensible to put a lump of prior probability on a point null (as it often is), then [Benjamin & Berger's approach](https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1543135) gives a maximum Bayes factor in favour of H1
$$\text{min BF} = \left(-e \, p \ln(p)\right)^{-1},$$
and if we are willing to assume that that H0 and H1 are equally probable, a priori, it follows that the false positive risk is at least
$$\text{FPR} = \Pr(H_0 \mid p) = \frac{1}{1+\text{BF}}$$
If you've observed p = 0.05 and declare a discovery, the chance that it's a false positive is 29%. In your case, you've found p = 0.04, so this approach implies a false positive risk of 26%.
If you had been doing a t test, [my 2019 approach](https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1529622) gives similar results. I suggested calling the FPR for prior odds of 1 the FPR_50 (on the grounds that H0 and H1 are assumed to be 50:50, a priori). For p = 0.05 the FPR_50 for a well-powered experiment is 27%, and for p = 0.04 it's 22% (easily found with the [web calculator](http://fpr-calc.ucl.ac.uk/)).
These approaches emphasize the weakness of the evidence against the null provided by marginal p values.
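As a quick numerical check, the bound described above is easy to compute directly; here is a minimal R sketch (the `fpr_bound` helper is just an illustrative name):
```
# False positive risk bound from the minimum Bayes factor described above:
# min BF in favour of H1 = 1 / (-e * p * ln(p)), and with 50:50 prior odds
# FPR = 1 / (1 + BF).
fpr_bound <- function(p) {
  bf <- 1 / (-exp(1) * p * log(p))  # maximum Bayes factor in favour of H1
  1 / (1 + bf)                      # false positive risk at prior odds of 1
}
fpr_bound(0.05)  # ~0.29
fpr_bound(0.04)  # ~0.26
```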
| null | CC BY-SA 4.0 | null | 2023-06-01T11:56:36.653 | 2023-06-01T11:56:36.653 | null | null | 73215 | null |
617538 | 1 | null | null | 0 | 9 | I'm currently studying multicategory logit models and practicing the code with the `VGAM` package in R. I'm also studying the residuals, and I have a question about this part.
```
library(VGAM)
data(iris)
iris
fit <- vglm(Species ~., family = multinomial, data = iris)
res <- residuals(fit)
head(res)
# log(mu[,1]/mu[,3]) log(mu[,2]/mu[,3])
# 1 0.3378172 -0.6558926
# 2 0.5259338 -0.4686902
# 3 0.4453859 -0.5491334
# 4 0.4062911 -0.5892529
# 5 0.5543963 -0.4394571
# 6 0.3686853 -0.6259055
pr <- residuals(fit, "pearson")
head(pr)
# log(mu[,1]/mu[,3]) log(mu[,2]/mu[,3])
# 1 1.367196e-06 -1.371712e-06
# 2 1.128209e-05 -1.128116e-05
# 3 6.597292e-06 -6.598873e-06
# 4 4.817509e-05 -4.817845e-05
# 5 1.470626e-06 -1.468789e-06
# 6 5.607686e-06 -5.611555e-06
```
Practicing with the iris dataset, I fit the multinomial logistic regression and calculated the default residuals and Pearson residuals above. I would like to calculate these residuals without using the `resid` or `residuals` functions. I was able to calculate all the residuals in the binomial case by following the instructions on [this website](https://data.library.virginia.edu/understanding-deviance-residuals/). Unlike the binomial case, however, it is not easy for me to calculate the residuals from a multinomial logistic regression. Can someone help with this problem?
| Calculating residuals in Logistic Regression with Multicategories | CC BY-SA 4.0 | null | 2023-06-01T12:20:41.417 | 2023-06-01T12:20:41.417 | null | null | 375779 | [
"r",
"regression",
"logistic",
"residuals"
] |
617540 | 1 | null | null | 0 | 15 | I am conducting a simulation study to assess the performance of several confidence intervals.
My approach is to simulate (N = 10000) datasets, estimate the parameters, construct a confidence interval, and for each simulated path check whether the true parameter lies in the interval estimate. This yields an empirical coverage rate.
This simulation study is a replication of an existing study, and I have the author's code. The author takes a different approach: simulate (N = 10000) datasets, estimate the parameters, and calculate the p-value of the test with H_0: estimate = true parameter; the empirical coverage rate is then given by the fraction of times the p-value exceeds a, for a confidence interval with level 1 - a.
The results of our approaches differ. If I apply the author's procedure in my code, I get the same results as the author, so my estimation procedures etc. work well. This raises the question of whether the author's procedure is correct.
Thanks for your time.
| Is the empirical coverage rate of a confidence interval equal to the test's acceptance rate for a correct estimate (in a simulation study)? | CC BY-SA 4.0 | null | 2023-06-01T12:25:29.517 | 2023-06-01T12:52:45.860 | 2023-06-01T12:52:45.860 | 56940 | 305108 | [
"confidence-interval",
"p-value",
"simulation",
"coverage-probability"
] |
617541 | 1 | null | null | 0 | 8 | I have been using a CNN-LSTM for action recognition with the dataset split 70% for training, 20% for validation, and 10% for testing. After training, the validation loss and training loss were very close to each other and both were low, but the model performed poorly on the test set. I believe it might be because of data leakage, or because my test set wasn't representative enough of the training and validation sets. But is this truly the problem, or are there other possible causes?
| The validation loss and training loss are low and close to each other but there exists high test loss? | CC BY-SA 4.0 | null | 2023-06-01T12:25:51.120 | 2023-06-01T12:25:51.120 | null | null | 327943 | [
"machine-learning",
"neural-networks",
"conv-neural-network",
"lstm",
"data-leakage"
] |
617542 | 1 | 617576 | null | 1 | 39 | I have a random variable $X$ that follows a Gamma distribution.
$$ X \sim \text{Gamma}[\alpha, \beta] $$
I want to know the distribution of $Y$, i.e.,
$$ Y = a - \exp\left(-b X\right) $$
| Distribution of the exponential of a Gamma distributed random variable | CC BY-SA 4.0 | null | 2023-06-01T12:29:29.040 | 2023-06-01T15:53:08.580 | null | null | 327104 | [
"distributions",
"gamma-distribution",
"exponential-family"
] |
617543 | 1 | null | null | 0 | 4 | Consider this simplistic model:
```
mdl = pm.Model()
with mdl:
prior_p = pm.Beta("p", alpha = 1, beta = 1)
likelihood = pm.Binomial("likelihood",
n = [73, 100, 25, 125, 98],
p = prior_p,
observed = [46, 80, 5, 38, 13])
idata_mdl = pm.sample()
```
I can retrieve the `observed` data by:
```
idata_mdl.observed_data.likelihood.values.flatten()
```
But I can't find how to retrieve the `n` data (73, 100, ...)?
They must also be stored somewhere in the model, I suppose... What instruction did I forget?
| How to retrieve n from PyMC.Binomial? | CC BY-SA 4.0 | null | 2023-06-01T12:33:21.897 | 2023-06-01T12:33:21.897 | null | null | 258431 | [
"python",
"pymc"
] |
617544 | 2 | null | 617534 | 1 | null | You could use a paired samples t test to compare the sample means across time.
| null | CC BY-SA 4.0 | null | 2023-06-01T12:33:22.067 | 2023-06-01T12:33:22.067 | null | null | 388334 | null |
617545 | 1 | null | null | 0 | 12 | I have two population estimates a and b, each calculated by a random forest regression. Variable c is then subtracted from a and b. As expected, the estimates a-c and b-c show a nice correlation (r = .80).
Calculation A: For all features used in the estimation of a, I performed a separate linear regression with each feature as the independent variable and always b-c as the dependent variable. However, the effects of each feature on estimate b-c are significant but in the opposite direction; e.g., feature z is expected to have a positive effect on estimate b-c, but the resulting effect is negative.
Calculation B: In order to solve this issue, and also in line with previous research, I tried the following: I performed the same separate linear regressions between the features composing estimate a and estimate b-c, but now with variable c (the same as mentioned above) as a covariate. The resulting effects of the respective features shrank substantially, were no longer significant, and were still partially in the opposite direction.
My Question: How can it be explained that the population estimates minus c correlate with each other, but the features composing estimate a show no effects (Calculation B) or opposite effects (Calculation A) on estimate b-c? And is there a better way to approach this problem?
Thanks for your help!
| Why do two population estimates a and b show correlation, but the features composing estimate a have no/unclear effects on estimate b? | CC BY-SA 4.0 | null | 2023-06-01T12:36:05.533 | 2023-06-01T12:36:05.533 | null | null | 389332 | [
"regression",
"psychology"
] |
617546 | 2 | null | 617525 | 0 | null | Interaction effects that are significant when the main effect is not are always a bit challenging to unpack. In brief, this means that, on average, there is no overall difference between treatment groups. However, if you were to look at the time trend for each group (from pre to post to follow-up), the trend will not be consistent for the two treatment groups.
One example might be where the groups start out the same, then one treatment condition does better at post and they flip at follow-up. This is just one possible example...but the idea is that something changes in the progression of time for each comparison via treatment.
Now, the interesting part is that RM-ANOVA is higher powered than your basic t-test. So, this means that you might do a t-test comparison at each time (comparing treatments), and none of the tests come up statistically significant. But, the interaction might be significant in the RM-ANOVA. While it will be difficult to tease apart exactly where the group difference might be, the RM-ANOVA is probably the better result to report.
My suggestion would be to generate a means plot for each treatment group at the different time points. This may be more informative in revealing how the trend changes over time for the two groups.
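As a rough sketch of such a means plot in R (the data frame `dat` and the columns `treatment`, `time`, and `score` are hypothetical placeholders for your own variables):
```
library(dplyr)
library(ggplot2)

# Mean outcome per treatment group at each time point (pre/post/follow-up)
means <- dat %>%
  group_by(treatment, time) %>%
  summarise(mean_score = mean(score), .groups = "drop")

ggplot(means, aes(x = time, y = mean_score, colour = treatment, group = treatment)) +
  geom_point() +
  geom_line()
```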
| null | CC BY-SA 4.0 | null | 2023-06-01T12:38:39.287 | 2023-06-01T12:38:39.287 | null | null | 199063 | null |
617547 | 1 | null | null | 1 | 18 | Is there an agreement method that would be well-suited for a data annotation task where:
- the labels are discrete classes
- each datapoint belongs to exactly one class (multi-class classification)
- each datapoint is annotated by 3 or more annotators
- the labels are very imbalanced, with some labels occurring significantly more frequently than others.
Cohen's kappa is only for two annotators, so won't work here. Fleiss' kappa (allegedly) assumes that each annotator needs to assign a certain number of cases to each category, which is not the case here. Randolph's kappa seems to assume a uniform distribution of the classes instead, which is also not the case, and isn't very widely adopted.
Can anyone recommend a suitable metric to use? Or maybe one of the ones above is still applicable?
If the recommended metric has recommended thresholds for "acceptable" or "good" agreement levels, that would be even better.
Many thanks!
| Any good metric for measuring multi-annotator agreement on an imbalanced dataset? | CC BY-SA 4.0 | null | 2023-06-01T12:50:39.173 | 2023-06-01T17:05:13.907 | 2023-06-01T15:13:08.473 | 62290 | 62290 | [
"agreement-statistics",
"metric"
] |
617548 | 1 | null | null | 0 | 7 | I want to analyze the results of a survey. The survey consists of 16 questions. Answers to the first 15 questions take the values -1, 0, or 1 (where -1 is bad, 0 is medium, and 1 is good).
Question 16 takes values from -5 to 5: -5, -2, 0, 2, 5.
I want to check the relationship between the last question and the first 15 questions. Can I use ordinal logistic regression if my explanatory variables are categorical and the explained variable is also categorical?
Here is a picture of two two-dimensional graphs, where the y-axis shows the results for question 16 and the x-axis the results for question 14:
[](https://i.stack.imgur.com/l4yzu.png)
Thank you for your answers!
| Multivariable Ordinal logistic regression - explanatory variables are categorical and the explained variable is categorial also | CC BY-SA 4.0 | null | 2023-06-01T12:55:49.363 | 2023-06-01T12:55:49.363 | null | null | 389341 | [
"regression",
"machine-learning",
"logistic",
"descriptive-statistics",
"ordinal-data"
] |
617549 | 2 | null | 617468 | 1 | null | The second option (`... + (1|school)`) is probably correct. In general the variables that you use as a grouping variable should be categorical and exchangeable (i.e., you should be able to switch labels without changing the meaning). You could conceivably treat tuition as categorical ("low", "medium", "high"), but even then it probably wouldn't be sensibly thought of as exchangeable.
In general, modern mixed-model frameworks don't 'care' at what level the fixed effects vary. The level does determine whether you can model their variation across units — for example, you could conceivably model the variation of the effect of student family income across schools, but you can't model the variation of the effect of tuition across schools, because tuition doesn't vary within schools.
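As an illustrative sketch in lme4 (the variable names `outcome`, `tuition`, `family_income`, and the data frame `dat` are placeholders, not from the original question):
```
library(lme4)

# Random intercept for school; tuition enters only as a school-level fixed effect
m1 <- lmer(outcome ~ tuition + family_income + (1 | school), data = dat)

# family_income varies within schools, so its effect can be allowed to vary by school;
# the same is not possible for tuition, which is constant within each school
m2 <- lmer(outcome ~ tuition + family_income + (1 + family_income | school), data = dat)
```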
| null | CC BY-SA 4.0 | null | 2023-06-01T13:12:49.440 | 2023-06-01T13:12:49.440 | null | null | 2126 | null |
617550 | 1 | null | null | 0 | 17 | I am trying to implement [Soft Actor-Critic with automatic Entropy tuning](https://arxiv.org/abs/1812.05905).
One thing I noticed is that the authors, and also the majority of implementations of SAC use $-\text{dim}(|\mathcal{A}|)$ as entropy target $\bar{\mathcal{H}}$, and I am confused where this exact value came from. I have read the answer to [this question](https://stats.stackexchange.com/questions/561624/choosing-target-entropy-for-soft-actor-critic-sac-algorithm), but I am still not fully convinced.
I do understand the intuition behind setting $\bar{\mathcal{H}}$ proportional to $\text{dim}(|\mathcal{A}|)$. Let's say, for example, we consider the target distribution of $\text{tanh}(\bf{u})$ to be $\text{Uniform}(-1, 1)$ for each dimension (I am not sure if this is a good assumption though); then the entropy target should be $\text{dim}(|\mathcal{A}|) \ln(2)$. But I am not sure why the majority of implementations use exactly $-\text{dim}(|\mathcal{A}|)$.
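For reference, the differential entropy of a $\text{Uniform}(-1,1)$ variable works out to
$$h\left(\mathcal{U}(-1,1)\right) = -\int_{-1}^{1}\tfrac{1}{2}\ln\tfrac{1}{2}\,dx = \ln 2 \approx 0.69,$$
so under that assumption the target would be roughly $0.69$ per action dimension, compared with the $-1$ per dimension implied by the default $-\text{dim}(|\mathcal{A}|)$.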
Is there any specific reason for this setting? Or, are there any papers or articles about how the target entropy setting affects the performance of SAC?
| Why use $-\dim(|A|)$ as target entropy in soft actor critic | CC BY-SA 4.0 | null | 2023-06-01T13:17:33.593 | 2023-06-01T17:02:58.973 | 2023-06-01T17:02:58.973 | 22311 | 363043 | [
"reinforcement-learning"
] |
617551 | 2 | null | 617456 | 2 | null | Yes, the second scenario would be considered MAR. The usual definition of MCAR is something like $P(Y=1|data) = P(Y=1)$; the probability of missingness is independent of the data. That doesn't apply in this case unless $\beta_1, \beta_2 = 0$. Therefore this scenario is not MCAR.
I suppose technically one could imagine the other direction. If we have MCAR data, can we say it is also MAR? I think the answer is yes, depending on how exactly MAR is defined.
| null | CC BY-SA 4.0 | null | 2023-06-01T13:23:07.770 | 2023-06-01T13:23:07.770 | null | null | 282433 | null |
617552 | 1 | null | null | 0 | 6 | I have monthly nominal GDP / M1 data over several years (2005 to 2022).
I also have inflation rates, either as:
12-month inflation: normally considered the inflation rate, defined as the percentage change in the monthly consumer price index (CPI),
or as
Annual average inflation: the percentage change in the annual average consumer price index (CPI) of the corresponding months.
To get real GDP values it is normally: (nominal GDP / GDP deflator) * 100.
However, in my understanding, for this to work the GDP deflator must be expressed relative to a base year.
My question is: is it possible to calculate my real values somehow from those 12-month inflation rates?
| nominal GDP and 12 Month Inflationrate | CC BY-SA 4.0 | null | 2023-06-01T13:28:50.057 | 2023-06-01T13:28:50.057 | null | null | 389345 | [
"macroeconomics"
] |
617553 | 1 | null | null | 0 | 15 | I would like to know whether I can use the graph produced by `dotplot(ranef(model))` to judge whether the random effects are significant or not.
GMA is my random effect in this case. I have edited out the y-axis values, but every line belongs to a crop cultivar.
Can I say, based on this plot and the overlap with 0, that the upper 8 cultivars have a significantly positive GMA, the middle 9 cultivars a non-significant GMA, and the bottom 7 a significantly negative GMA?
Thanks!
[](https://i.stack.imgur.com/K4Lxu.jpg)
| R: Can judge significance based on overlap in graph produced by dotplot(ranef(model))? | CC BY-SA 4.0 | null | 2023-06-01T13:28:51.790 | 2023-06-02T13:14:34.973 | 2023-06-02T13:14:34.973 | 389257 | 389257 | [
"r",
"mixed-model"
] |
617554 | 2 | null | 617550 | 0 | null | If you are minimizing a loss function $f$, and there is a function $g$ that is proportional to $f$ something like this
$$ g = k \cdot f $$
then by minimizing $f$ you are also minimizing $g$.
It also extends to all transformations of $f$ that do not change the ordering: if, for a transformation $h(f)$, you were to sort both $f$ and $h(f)$ and the order of the results is identical, then minimizing on $f$ is the same as minimizing on $h(f)$.
Positive examples:
Let's say that $f=\operatorname{abs}(x)$; you could minimize $h(f)=f^2$ and the minimum would be in the same place.
Let's say that $f=\operatorname{abs}(x)$ and $x$ is constrained to $0<x$; you could minimize $h(f)=\log(f)$ and get to the same location.
Negative example (how to do it wrong):
Let's say that $f=\operatorname{abs}(x)$ and $x$ is NOT constrained to $0<x$; if you minimized $h(f)=\log(f)$ you would run into the discontinuity at $x=0$ and your optimization would fail.
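A quick numerical illustration of the positive case in R (the particular $f$ here is just an arbitrary example, not tied to SAC):
```
# f and an order-preserving transform of f share the same minimiser
f <- function(x) abs(x - 3)
h <- function(x) f(x)^2                     # monotone (order-preserving) transform of f
optimize(f, interval = c(-10, 10))$minimum  # ~3
optimize(h, interval = c(-10, 10))$minimum  # ~3
```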
| null | CC BY-SA 4.0 | null | 2023-06-01T13:34:34.017 | 2023-06-01T13:34:34.017 | null | null | 22452 | null |
617555 | 2 | null | 617553 | 1 | null | No, this is not an appropriate way to evaluate if including the random effects improves the model fit. You could formally test that with a likelihood ratio test by comparing the model with and without the random effects. In R, this is done via the `anova()` function.
| null | CC BY-SA 4.0 | null | 2023-06-01T13:35:43.323 | 2023-06-01T13:35:43.323 | null | null | 219012 | null |
617556 | 1 | null | null | 0 | 10 | I'm trying to calculate multivariate GARCH estimation of conditional value-at-risk, by adopting a three-step model from Girardi & Ergun (2013) paper entitled "Systemic risk measurement: Multivariate GARCH estimation of CoVaR". I'm using R and up until now I can't find any good references as to how to construct the model. For your reference, this is the three-step calculation from Girardi & Ergun (2013):
Step 1
[](https://i.stack.imgur.com/yRSsq.png)
Step 2
[](https://i.stack.imgur.com/VE9Ij.png)
[](https://i.stack.imgur.com/zT7A2.png)
Step 3
[](https://i.stack.imgur.com/hLygI.png)
I have tried to put some prompt into chatGPT, and this is what it gave me.
```
# Install and load required packages
install.packages("quantmod")
library(quantmod)
library(rmgarch)
# Step 1: Compute VaR of each stock using univariate model
# Define the stock symbols and date range
symbols <- c("YAHOO", "GOOG")
start_date <- "2018-01-01"
end_date <- "2022-12-31"
# Download stock price data
getSymbols(symbols, from = start_date, to = end_date)
# Extract adjusted close prices
prices <- merge(Ad(YAHOO), Ad(GOOG))
colnames(prices) <- symbols
# Compute daily returns
returns <- na.omit(ROC(prices))
# Function to compute VaR
computeVaR <- function(returns, alpha = 0.05) {
VaR <- matrix(NA, ncol = ncol(returns))
for (i in 1:ncol(returns)) {
VaR[, i] <- quantile(returns[, i], alpha)
}
return(VaR)
}
# Compute VaR for each stock
VaR <- computeVaR(returns)
# Step 2: Estimate bivariate GARCH model using Engle's (2002) DCC specification
# Combine the returns into a multivariate matrix
data <- as.matrix(returns)
# Function to estimate DCC-GARCH model
estimateDCC <- function(data) {
spec <- dccspec(uspec = multispec(replicate(2, ugarchspec(variance.model = list(model =
"sGARCH")))),
dccOrder = c(1, 1), distribution = "mvnorm")
fit <- dccfit(spec, data = data, out.sample = 0)
return(fit)
}
# Estimate DCC-GARCH model
dcc_fit <- estimateDCC(data)
# Step 3: Calculate CoVar measurement for each stock using double integral
# Function to calculate CoVaR
calculateCoVaR <- function(returns, VaR, dcc_fit, alpha = 0.05) {
CoVaR <- matrix(NA, ncol = ncol(returns))
for (i in 1:ncol(returns)) {
conditional_variance <- as.matrix(rmgarch::rcov(dcc_fit)[,,i])
exceedances <- t(matrix(VaR[, i] > returns[, i], nrow = nrow(returns), ncol =
ncol(returns)))
CoVaR[, i] <- alpha + (1/alpha) * sum(conditional_variance * exceedances)
}
return(CoVaR)
}
# Calculate CoVaR for each stock
CoVaR <- calculateCoVaR(returns, VaR, dcc_fit)
# Print the results
print("VaR:")
print(VaR)
print("CoVaR:")
print(CoVaR)
```
I have tried running this code and somehow it works, but I'm not 100% convinced. I have tried to calculate step 1 in Excel and step 2 using EViews, and the result from step 2 is quite different. The calculation in step 2 is supposed to produce the conditional variance series from the GARCH(1,1) model and then calculate the VaR from there.
Now what I have in mind is:
- Is there any way to verify that the output ChatGPT gave me follows the same calculation as Girardi & Ergun (2013)?
- And assuming it does, I am fairly sure the calculation in step 3 is wrong, because following Girardi & Ergun (2013) I have to calculate a double integral of the joint PDF of two different stocks. How can I construct the function to calculate step 3?
P.S.: I apologize in advance if my question seems messed up; I'm a beginner here and this is my first time asking.
| How to implement Girardi & Ergun's (2013) three-step multivariate GARCH estimation of CoVaR in R? | CC BY-SA 4.0 | null | 2023-06-01T13:45:10.983 | 2023-06-01T13:45:10.983 | null | null | 389346 | [
"r",
"mathematical-statistics",
"garch",
"risk",
"univariate"
] |
617558 | 1 | null | null | 0 | 12 | Can a Population stability index (PSI) be calculated for the XGBoost splits?
When I build a logistic regression with grouped (binned) variables, I can compute a PSI on the groups and see whether the population has shifted between groups.
Can this be done with the XGBoost splits? If it can't, what is the best option for tracking this metric? A mean or mode of the raw variables?
| Population stability index of XGBoost model | CC BY-SA 4.0 | null | 2023-06-01T13:50:50.533 | 2023-06-01T13:50:50.533 | null | null | 194458 | [
"boosting"
] |
617560 | 1 | null | null | 0 | 17 | Hi everyone!
Imagine I have a dataset which I'd like to use it to train a churn model (fot example: logistic regression, xgboost binary classifier, lgbm binary classifier, etc.).
The structure of my available data is similar to the one below, where Feature1,Feature2,...,FeatureN are just placeholders for the real features.
[](https://i.stack.imgur.com/GA5u2.png)
When splitting the data into train, validation, and test sets, I think I can't simply split it with train_test split functions (like the one in sklearn), because for each Company ID I have multiple instances based on the year/month, and for each year/month the Churn result can be different (and so can the features); but all the instances within an ID are somehow correlated, and splitting them between train and test would result in data leakage. Thus, I guess the right way to split the data is based on the whole ID, trying to sample the same proportion of IDs that had churn=1 in each of the train/test/val datasets.
Is that the correct way to approach this problem? And if I'd like to perform cross-validation with multiple folds, how do I define this split logic based on the ID grouping and not only on the split percentages?
Thank you in advance!
| Splitting data when object of study are correlated (based on "origin" group) | CC BY-SA 4.0 | null | 2023-06-01T14:13:09.907 | 2023-06-01T14:13:09.907 | null | null | 230056 | [
"machine-learning",
"cross-validation",
"train-test-split",
"churn",
"data-leakage"
] |
617562 | 1 | null | null | 0 | 6 | I am aware that a question very similar to mine has already been asked here ([Should AIC be reported on training or test data?](https://stats.stackexchange.com/questions/584998/should-aic-be-reported-on-training-or-test-data)), but some points remain unclear to me.
The accepted answer states:
> On the other hand, when the model is evaluated on test data (not the same as the training data), there is no bias to −2ln(L). Therefore, it does not make sense to penalize it by 2p, so using AIC does not make sense; you can use −2ln(L) directly.
Could someone elaborate more on this? I don't see why the number of parameters in a model is relevant in the train data, but not anymore in the test data.
Is it correct that the AIC is only a measure for in-sample performance then?
| Should I calculate a value for the AIC based on a test set or a training set? | CC BY-SA 4.0 | null | 2023-06-01T14:20:58.480 | 2023-06-01T14:20:58.480 | null | null | 384768 | [
"aic",
"bic",
"train-test-split"
] |
617564 | 1 | null | null | 0 | 11 | Q: Let $X_t$ be an ARIMA(1,1,1) process and $Y_t = Y_{t-1} + X_t$. What kind of process is $Y_t$?
$X_t$ is an ARIMA(1,1,1), i.e. $\nabla X_t = X_t - X_{t-1} = Z_t $ where $Z_t$ is a causal ARMA(1,1) process and satisfies $(1-\phi_1 B)Z_t = (1+\theta_1B)\epsilon_t. $ Since $Z_t$ is causal, we may write $Z_t = \frac{1 + \theta_1 B}{1-\phi_1 B} \epsilon_t $.
Then $X_t = \frac{1}{1-B} \frac{1 + \theta_1 B}{1-\phi_1 B} \epsilon_t$, and
\begin{equation}
\begin{split}
(1-B)Y_t &= X_t \\
(1-B)Y_t &= \frac{1}{1-B} \frac{1 + \theta_1 B}{1-\phi_1 B} \epsilon_t \\
(1-B)^2(1-\phi_1 B)Y_t &= (1+\theta_1B)\epsilon_t
\end{split}
\end{equation}
So $Y_t$ is an ARMA(3,1) process?
| Let $X_t$ be an ARIMA(1,1,1) process and $Y_t = Y_{t-1} + X_t$. What kind of process is $Y_t$? | CC BY-SA 4.0 | null | 2023-06-01T14:27:34.573 | 2023-06-01T14:27:34.573 | null | null | 384994 | [
"arima"
] |
617565 | 1 | null | null | 0 | 8 | I have a medical data set with missing data in a few variables. On careful inspection, the missingness of some variables depends on the response to another variable. For example, "age when becoming pregnant" has missing data, but many of these values are missing / not entered because the question does not make sense for male participants. Hence, this missingness is conditional on gender - the age in years can only be recorded for females and not for males.
In such cases, how do I impute the genuinely missing data - i.e., cases where the gender is female and age at pregnancy is missing - and not the cases with male gender?
I use the `mice` package in R a lot, but I am unable to find a correct solution.
I cannot model such a dataset without imputation - many conventional as well as AI models need a complete dataset. I will end up with only a few rows of data if I use the "complete cases" option.
Need help...
| Imputation of a variable with missing data conditional on another variable in R | CC BY-SA 4.0 | null | 2023-06-01T14:43:27.110 | 2023-06-01T14:43:27.110 | null | null | 335358 | [
"r",
"missing-data",
"conditional"
] |
617566 | 1 | null | null | 0 | 8 | Assume I have time series count data that come from a zero-inflated (generalized) Poisson distribution (ZIGP).
My idea was to estimate a ZIGP regression including AR features (lagged values) among other features.
However, in
[https://timeseriesreasoning.com/contents/generalized-linear-regression-models/#:~:text=Therefore%20GLMs%20cannot%20be%20used,auto%2Dcorrelated%20time%20series%20data](https://timeseriesreasoning.com/contents/generalized-linear-regression-models/#:%7E:text=Therefore%20GLMs%20cannot%20be%20used,auto%2Dcorrelated%20time%20series%20data).
there was a statement that
[](https://i.stack.imgur.com/o6Uzq.png)
Why is that the case? I know that one can fit AR models with an OLS regression, which is a special case of a GLM, with the result being biased but consistent coefficient estimates. Nevertheless, many textbooks use OLS as a viable way to estimate AR processes. Are there any other disadvantages of including AR features in a ZIGP regression or, more generally, in a GLM regression?
| ZIGP-Regression for count data/with AR features | CC BY-SA 4.0 | null | 2023-06-01T14:45:40.180 | 2023-06-01T14:45:40.180 | null | null | 209359 | [
"time-series",
"generalized-linear-model",
"least-squares",
"poisson-regression"
] |
617567 | 2 | null | 617496 | 0 | null | If you are observing the same individuals each week for 4 or 5 weeks, and the values of the outcome variable can change arbitrarily during each week between observations, then you have Scenario 1. As the quote puts it, you only take "a 'snapshot' of the process" once a week, and "the states are unknown between observation times."
Based on your description of the outcome values, you might be better off modeling this as an [ordinal regression](https://stats.stackexchange.com/tags/ordered-logit/info) instead. If you have outcome values on a scale of 1 to 10, it seems that you would be losing information by further categorizing the outcomes into "low," "middle" and "high." The [ordinal](https://cran.r-project.org/web/packages/ordinal/index.html) and [GLMMadaptive](https://cran.r-project.org/package=GLMMadaptive) packages in R can handle the repeated measures on the same individuals.
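For example, a mixed-effects ordinal model in the `ordinal` package might be set up roughly like this (the column names `outcome`, `week`, `id`, and the data frame `d` are placeholders for your own data):
```
library(ordinal)

# outcome on the 1-10 scale treated as an ordered factor,
# with a random intercept for each individual observed repeatedly
d$outcome <- factor(d$outcome, ordered = TRUE)
fit <- clmm(outcome ~ week + (1 | id), data = d)
summary(fit)
```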
| null | CC BY-SA 4.0 | null | 2023-06-01T15:00:59.523 | 2023-06-01T15:00:59.523 | null | null | 28500 | null |
617568 | 1 | null | null | 0 | 9 | I'm interested in developing a set of clusters pertaining to shopping habits using a dataset from a survey that a colleague recently fielded. The survey contains a wide array of topics--some related to shopping habits, some not. There's a pretty big set of demographic items as well (race, gender, political affiliation, SES, etc.)
What are the best practices for selecting items to include in my LCCA? Should I only use items relevant to shopping habits? Should I throw the whole dataset in? Is it better to include individual items, or should I combine scales of multiple items into single indicators?
Any advice or relevant sources would be greatly appreciated. Thanks!
| Best practices for item selection in Latent Class Cluster Analysis? | CC BY-SA 4.0 | null | 2023-06-01T15:07:41.107 | 2023-06-01T15:07:41.107 | null | null | 366813 | [
"clustering",
"latent-class"
] |
617569 | 2 | null | 616921 | 0 | null | There is no concern per se--it just means the single factor, 3-indicator model with different loadings cannot be falsified--except if some or all covariances between the three indicators were (near) zero or negative (in which case you'd run into estimation problems).
A model in which the single factor with the three indicators serves as outcome (dependent) variable would be overidentified with > 0 degrees of freedom and would contain testable restrictions.
| null | CC BY-SA 4.0 | null | 2023-06-01T15:10:48.033 | 2023-06-01T15:10:48.033 | null | null | 388334 | null |
617570 | 1 | null | null | 0 | 14 | My aim is to determine whether teacher salaries across all 50 US states between 2013 and 2023—adjusted for inflation and the cost of living—differ significantly. I would like to ask what the wisest approach might be to adjust the raw teacher salary averages (there is one average for each state) to account for these effects.
Afterwards, I would like to graph these adjusted salaries for a few of these states and examine whether changes in revenue receipts within all schools in a particular state lead to a significant difference in average salaries.
I am open to your insight on how I might best tweak teachers’ salaries to account for these effects and things I ought to consider when graphing the relationship I’ve described. Please bear in mind that I am referring to information from the National Education Association, which sources from public schools.
Thank you!
| Accounting for differences in average teacher salaries, adjusted for inflation and the cost of living in each state | CC BY-SA 4.0 | null | 2023-06-01T15:12:22.157 | 2023-06-02T05:39:59.627 | 2023-06-02T05:39:59.627 | 121522 | 389344 | [
"econometrics"
] |
617571 | 1 | null | null | 0 | 4 | I'm new to R and trying to create a grouped bar chart, but I'm having a few issues.
I have written some code, but the chart doesn't look the way I want it to.
```
library(ggplot2)
dveR <- data.frame(values = c(3.921, 3.557, 3.793, 3.154, 2.387, 1.906), group = rep(c("Cardia","Anterior","Posterior"), each = 2), subgroup = LETTERS[1:2])
ggplot(dveR, aes(x = group, y = values, fill = subgroup)) + geom_bar(stat = "identity", position = "dodge") + scale_fill_manual(values = c("springgreen4","orange2"))
```
[](https://i.stack.imgur.com/OtdYD.png)
I want the axis labels to be Y = Log Re and X = Tissue, and I want the order of the bars to be Cardia, Anterior, Posterior, but ggplot seems to sort them into alphabetical order. The subgroup labels should also be "0 hours" and "24 hours", but I'm not sure how to change this. Any help would be appreciated, thanks!
| Plotting a grouped bar chart using ggplot | CC BY-SA 4.0 | null | 2023-06-01T15:22:07.060 | 2023-06-01T15:22:07.060 | null | null | 389090 | [
"r",
"ggplot2",
"barplot"
] |
617572 | 1 | null | null | 0 | 8 | I have just started learning to fit ARIMA models to time series, so I am not very sure which AR and MA orders I should use. What are some possible ARIMA models for the following ACF and PACF plots?
[](https://i.stack.imgur.com/rN8ny.png)
| arima model for time series prediction based on the given acf and pacf plots | CC BY-SA 4.0 | null | 2023-06-01T15:26:12.717 | 2023-06-01T16:03:42.283 | 2023-06-01T16:03:42.283 | 362671 | 389356 | [
"time-series",
"arima",
"acf-pacf"
] |
617573 | 1 | 617578 | null | 2 | 134 | Please consider the Table below
|Item |Sample Mean |Sample count, n |Sample Standard Deviation |Standard Error SD/sqrt(n) |
|----|-----------|---------------|-------------------------|-------------------------|
|Unit A |95.9461 PSI |n=430 |3.8397 |0.185166776 |
|Unit B |94.488 PSI |n=25 |2.5344 |0.50688 |
Now when I want to show if these two tests are significantly different from each other.
$$
\frac{94.488 - 95.9461}{ \sqrt{0.50688^2+0.185166776^2}} \approx -2.7
$$
Now, according to this [Website](http://homework.uoregon.edu/pub/class/es202/ztest.html), these two units are significantly different!? Even if I had the true standard deviation (in this case it's probably similar), I would still get similar results. Dividing by the sample count completely destroys any kind of SE, as you can see: as n approaches infinity, SE approaches 0. You could make any two averages significantly different from each other with a large enough sample size, simply because you are dividing by a smaller and smaller number. Intuitively, I don't think there is any difference.
[Suggestion 1](https://stats.stackexchange.com/questions/342938/less-sensitive-t-tests-for-large-samples): I don't think this addresses the problem I am getting at. When doing a Z-test, we would be using the standard deviation, but with a t-test, or in this case, we divide by n. This doesn't make sense to me: we should inherently have a larger SE with a sample than with a population, because we don't have as many data points.
Edit: for those asking for the formula from the website.
```
If the Z-statistic is less than 2, the two samples are the same.
If the Z-statistic is between 2.0 and 2.5, the two samples are marginally different
If the Z-statistic is between 2.5 and 3.0, the two samples are significantly different
If the Z-statistic is more then 3.0, the two samples are highly signficantly different
```
$$
Z = \frac{\bar{X_1}-\bar{X_2}}{\sqrt{\sigma^{2}_{x_1}+\sigma^{2}_{x_2}}}
$$
Edit: To further clarify, below is a histogram of my tests. How can we possibly look at these two distributions and say that they are significantly different? Especially when I only have 25 samples for one!
[](https://i.stack.imgur.com/jkc9a.png)
| T-test fails as sample size increases. Is there a solution? | CC BY-SA 4.0 | null | 2023-06-01T15:29:05.247 | 2023-06-01T16:23:35.400 | 2023-06-01T16:05:14.180 | 389357 | 389357 | [
"statistical-significance",
"t-test",
"sample-size"
] |
617574 | 1 | null | null | 0 | 16 | I am reading [Bai and Ng (2002)](https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-0262.00273). In their proof of Corollary 1, they said “Next, consider k > r. Lemma 4 implies that $V(k)/V(r)=1 + O_p(C_{NT}^{-2})$”. However, Lemma 4 just states that $V(k)-V(r)=O_p(C_{NT}^{-2})$. Is there something wrong?
As a counterexample, if $V(k)\sim C_{NT}^{-2}$ and $V(r) \sim C_{NT}^{-3}$, then $V(k)/V(r)$ will not converge in probability to $1$.
How does the deduction in the paper make sense?
| How can $V(k)-V(r)=O_p(C_{NT}^{-2})$ imply $V(k)/V(r)=1 + O_p(C_{NT}^{-2})$, where $C_{NT}=\min \{ \sqrt{N} , \sqrt{T} \}$? | CC BY-SA 4.0 | null | 2023-06-01T15:31:27.120 | 2023-06-01T15:34:30.870 | 2023-06-01T15:34:30.870 | 362671 | 308327 | [
"econometrics",
"factor-analysis",
"asymptotics"
] |
617575 | 1 | null | null | 2 | 30 | I generated binomial data showing a logistic correlation to a predictor. I analysed such dataset with a generalised linear model which assumes a binomial distribution of residuals and a "logit" link function, but the residuals of such a model still fail to respect the model assumptions: the quantile-quantile plot shows that the residuals' distribution departs from the theoretical one, and residuals seem heteroscedastic. This happens regardless of sample size. Why is that?
Here is my example, simulated and analysed in R. To give the data some realism: let's imagine that I am interested in the probability of bird predation under different degrees of vegetation cover (expressed as percentage). For each level of vegetation cover I tracked 15 birds and recorded if they had been killed by a predator by the end of the study (data coded as 1 or 0).
```
# Generate data:
set.seed(666)
predictor <- c(0, 20, 40, 50, 70, 90)
y = 4 - 0.2 * predictor # linear combination of the variables
pr = 1/(1+exp(-y)) # pass y through an inv-logit function to get probability
# build dataset:
df <- data.frame(vegetation.cover = rep(predictor, 15), prob = rep(pr, 15))
df$predation.01 <- rbinom(length(df[,1]), 1, pr)
# df$predation.01 refers to the probability of a bird to be killed by a predator.
# To model predation.01 vs vegetation.cover:
glm.01 <- glm(predation.01 ~ vegetation.cover,
data= df,
family=binomial(link="logit")
)
```
These are the model diagnostic plots:
[](https://i.stack.imgur.com/pSGxj.png)
For comparison, these are the diagnostic plots for the same model fitted on a dataset with a replication of 1500 for each value of the predictor:
[](https://i.stack.imgur.com/PlLR6.png)
Here are the data together with the model predictions for n=15:
[](https://i.stack.imgur.com/VEIud.png)
I added a pinch of artificial noise to the observations to avoid a perfect overlap of all the 0s and 1s in an attempt to aid the interpretation of the graph.
| Why doesn't a binomial GLM successfully fix violated assumptions of simulated binomial data? | CC BY-SA 4.0 | null | 2023-06-01T15:33:27.957 | 2023-06-01T17:06:47.120 | 2023-06-01T15:38:46.943 | 214127 | 214127 | [
"r",
"generalized-linear-model",
"binomial-distribution",
"assumptions"
] |
617576 | 2 | null | 617542 | 1 | null | So, as mentioned on the comment the easiest way to find the distribution of $Y$ is to calculate the $\mathbb{P}(Y\leq y)$ since the $Y$ it is a function of $X$ for which you know the distribution.
$$\mathbb{P}(Y\leq y)=...= \mathbb{P}(X\leq - \frac{log(a-y)}{b}) = \frac{1}{\Gamma(\alpha)}\gamma(\alpha,\beta(-\frac{log(a-y)}{b}))$$
where the $\gamma(\alpha,\beta(-\frac{log(a-y)}{b}))$ is the lower bound incomplete gamma function. Also, in order to find the density of $Y$ you have to calculate the derivative of $\mathbb{P}(Y\leq y)$ with respect to $y$.
$$f_{Y}(y)=\frac{d}{dy}\mathbb{P}(Y\leq y)=\frac{1}{\Gamma(\alpha)}\frac{d}{dy}\gamma(\alpha,\beta(-\frac{log(a-y)}{b}))=\frac{1}{\Gamma(\alpha)}(-\frac{log(a-y)}{b})^{\alpha-1}e^{\frac{log(a-y)}{b}}\frac{d}{dy}(-\frac{log(a-y)}{b}) = \frac{1}{\Gamma(\alpha)}(-\frac{log(a-y)}{b})^{\alpha-1}e^{\frac{log(a-y)}{b}}\frac{1}{b(a-y)}$$
Now something to take care of is that since we are dealing with a gamma distribution for $X$ in order for the calculations to hold the $-\frac{log(a-y)}{b}$ must be positive, so $Y$ must be great of $a-1$ regardless of the value of $b$.
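A quick Monte Carlo check of the CDF above in R, with arbitrary example values for $\alpha$, $\beta$, $a$, and $b$ (assuming $b>0$ and interpreting $\beta$ as a rate parameter):
```
set.seed(1)
alpha <- 2; beta <- 3; a <- 1.5; b <- 0.7   # arbitrary example values
x <- rgamma(1e5, shape = alpha, rate = beta)
y <- a - exp(-b * x)                        # Y = a - exp(-bX), supported on (a-1, a)

y0 <- 1.2                                   # any point with a - 1 < y0 < a
mean(y <= y0)                               # empirical CDF of Y at y0
pgamma(-log(a - y0) / b, shape = alpha, rate = beta)  # formula derived above
```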
| null | CC BY-SA 4.0 | null | 2023-06-01T15:39:35.173 | 2023-06-01T15:53:08.580 | 2023-06-01T15:53:08.580 | 208406 | 208406 | null |
617577 | 2 | null | 617547 | 1 | null | The idea behind Cohen's $\kappa$ is to give context to the agreement rate by considering the agreement rate for a random annotater. This way, you do not make the mistake of regarding what seems to be a high agreement rate as actually being a high agreement rate when random annotaters would have almost as much agreement (perhaps even higher agreement).
From an [answer](https://stats.stackexchange.com/a/617474/247274) of mine yesterday, Cohen's $\kappa$ is defined by:
$$
\kappa = (p_a - p_r) / (1 - p_r)
$$
In this notation, $p_a$ is the actual agreement proportion. For this task, that would be the number of times all of the annotators agree divided by the number of annotations.
Then there is the $p_r$, which is the random agreement proportion. This has been worked out for two annotators, with references given in the link. For three or more annotators, this might have a closed-form solution (that is my suspicion), but even if it does not, you can use a simulation to get quite close.
The random annotators sample with replacement from the true labels. You can do this hundreds or thousands of times in a loop, tracking how much agreement the random annotators have in each iteration of the loop. Then take the mean agreement over all of the iterations, and that should be a good approximation of $p_r$, particularly as you do many iterations of the loop.
I give an example in `R` below.
```
set.seed(2023)
# Define sample size
#
N <- 10000
# Define the number of loop iterations
#
R <- 9999
# Define some labels given by four annotaters
#
x1 <- rbinom(N, 4, 0.2)
x2 <- rbinom(N, 4, 0.2)
x3 <- rbinom(N, 4, 0.2)
x4 <- rbinom(N, 4, 0.2)
# Determine the agreement between the four annotaters
#
p_a <- length(which(x1 == x2 & x2 == x3 & x3 == x4))/N
# Loop R-many times to determine agreement for random annotations
#
random_agreement <- rep(NA, R)
for (i in 1:R){
# Define random labels
#
x1_random <- sample(x1, length(x1), replace = T)
x2_random <- sample(x2, length(x2), replace = T)
x3_random <- sample(x3, length(x3), replace = T)
x4_random <- sample(x4, length(x4), replace = T)
# Store the agreement between the four random annotaters
#
random_agreement[i] <- length(which(
x1_random == x2_random & x2_random == x3_random & x3_random == x4_random
))
}
# Calculate the p_r as the mean of the random agreement values
#
p_r <- mean(random_agreement)/N
# Calculate the Cohen-style agreement statistic
#
(p_a - p_r)/(1 - p_r) # I get 0.005398978
```
That agreement score of `0.005398978` indicates that the annotating is only slightly better than random. Given that the labeling indeed is random, this is not surprising. Contrast this with a situation where the labeling is not random.
```
set.seed(2023)
# Define sample size
#
N <- 10000
# Define the number of loop iterations
#
R <- 9999
# Define some labels given by four annotaters
# Create one starting set of labels (x1) and then add 1 to the labels with
# varying probabilities to allow for disagreements
#
x1 <- rbinom(N, 4, 0.2)
x2 <- (x1 + rbinom(N, 1, 0.1)) %% 5 # 0.1 probability of disagreement from x1
x3 <- (x1 + rbinom(N, 1, 0.2)) %% 5 # 0.2 probability of disagreement from x1
x4 <- (x1 + rbinom(N, 1, 0.3)) %% 5 # 0.3 probability of disagreement from x1
# Determine the agreement between the four annotaters
#
p_a <- length(which(x1 == x2 & x2 == x3 & x3 == x4))/N
# Loop R-many times to determine agreement for random annotations
#
random_agreement <- rep(NA, R)
for (i in 1:R){
# Define random labels
#
x1_random <- sample(x1, length(x1), replace = T)
x2_random <- sample(x2, length(x2), replace = T)
x3_random <- sample(x3, length(x3), replace = T)
x4_random <- sample(x4, length(x4), replace = T)
# Store the agreement between the four random annotaters
#
random_agreement[i] <- length(which(
x1_random == x2_random & x2_random == x3_random & x3_random == x4_random
))
}
# Calculate the p_r as the mean of the random agreement values
#
p_r <- mean(random_agreement)/N
# Calculate the Cohen-style agreement statistic
#
(p_a - p_r)/(1 - p_r) # I get 0.483549
```
With the labels being created to have some agreement, the agreement score is much higher, at `0.483549`. With that above modified to have even more agreement, that score gets even higher (`0.9341579` in my particular simulation).
```
set.seed(2023)
# Define sample size
#
N <- 10000
# Define the number of loop iterations
#
R <- 9999
# Define some labels given by four annotaters
# Create one starting set of labels (x1) and then add 1 to the labels with
# varying probabilities to allow for disagreements
#
x1 <- rbinom(N, 4, 0.2)
x2 <- (x1 + rbinom(N, 1, 0.01)) %% 5 # 0.01 probability of disagreement from x1
x3 <- (x1 + rbinom(N, 1, 0.02)) %% 5 # 0.02 probability of disagreement from x1
x4 <- (x1 + rbinom(N, 1, 0.03)) %% 5 # 0.03 probability of disagreement from x1
# Determine the agreement between the four annotaters
#
p_a <- length(which(x1 == x2 & x2 == x3 & x3 == x4))/N
# Loop R-many times to determine agreement for random annotations
#
random_agreement <- rep(NA, R)
for (i in 1:R){
# Define random labels
#
x1_random <- sample(x1, length(x1), replace = T)
x2_random <- sample(x2, length(x2), replace = T)
x3_random <- sample(x3, length(x3), replace = T)
x4_random <- sample(x4, length(x4), replace = T)
# Store the agreement between the four random annotaters
#
random_agreement[i] <- length(which(
x1_random == x2_random & x2_random == x3_random & x3_random == x4_random
))
}
# Calculate the p_r as the mean of the random agreement values
#
p_r <- mean(random_agreement)/N
# Calculate the Cohen-style agreement statistic
#
(p_a - p_r)/(1 - p_r) # I get 0.9341579
```
Following logic similar to the logic I use [here](https://stats.stackexchange.com/a/605819/247274), I would consider this to be the reduction in disagreement rate of your annotators compared to the expected disagreement rate for random annotators.
PROOF
$$
\kappa = \dfrac{p_a - p_r}{1 - p_r}
$$
Define $A$ as the total number of agreements in the annotations; $R$ as the expected number of agreements by random annotations; and $N$ as the total number of annotations by each annotator. Then $p_a = A/N$ and $p_r = R/N$.
Following the logic given [here](https://stats.stackexchange.com/a/605451/247274), the reduction in disagreement rate, compared to the expected disagreement rate given by random annotators, is given by:
$$
1 - \dfrac{
\text{Disagreement rate of the true annotations}
}{
\text{Expected disagreement rate of random annotations}
} =
1-\left(
\dfrac{
1 - p_a
}{
1 - p_r
}\right)
$$
Next...
$$
1 - p_a = 1 - \dfrac{A}{N} = \dfrac{N - A}{N}\\
1 - p_r = 1 - \dfrac{R}{N} = \dfrac{N - R}{N}
$$
Thus...
$$
\dfrac{
1 - p_a
}{
1 - p_r}
= \dfrac{
\dfrac{N - A}{N}
}{
\dfrac{N - R}{N}
} = \dfrac{N - A}{N - R}
$$
Thus...
$$
1 - \left(\dfrac{
1 - p_a
}{
1 - p_r}\right) \\=
1 - \left(
\dfrac{N - A}{N - R}
\right) \\= \left(\dfrac{N-R}{N-R}\right)-\left(\dfrac{N - A}{N-R}\right)\\ \\= \dfrac{
N - R - (N - A)}{N - R} \\= \dfrac{N - R - N + A}{N - R}\\=
\dfrac{A - R}{N - R}\\=
\dfrac{
\dfrac{
A - R
}{
N
}
}{
\dfrac{
N - R
}{
N
}
}\\=
\dfrac{
\dfrac{
A
}{
N
}-\dfrac{
R
}{
N
}
}{
1 - \dfrac{R}{N}
}\\=
\dfrac{p_a - p_r}{1 - p_r} \\
\square
$$
As this is the same interpretation as the usual Cohen's $\kappa$, if you are comfortable with guidelines for "acceptable" or "good" agreement levels of Cohen's $\kappa$, those might be a start, with the caveat that it will be harder and harder for random agreement to be high as the number of annotators increases.
| null | CC BY-SA 4.0 | null | 2023-06-01T16:23:09.047 | 2023-06-01T17:05:13.907 | 2023-06-01T17:05:13.907 | 247274 | 247274 | null |
617578 | 2 | null | 617573 | 5 | null | The description of approach is crude to say the least, but the inference is correct - those two samples have a statistically significant mean difference. A histogram reveals very little here because you are dealing with over 430 observations, binning at intervals of 5 PSI when the mean difference is 1.5 PSI.
Note: the title says T test, but you are performing a Z test here. There are separate camps when it comes to qualifying statistical significance with descriptors like "marginally" or "highly". I'm in the don't-do-it-camp.
Industrial statistics often seeks very, very high levels of precision. As anyone who has worked with high pressure equipment can testify, a mean difference of 1.5 PSI can have far reaching consequences in terms of the safety and reliability of equipment. The range of the samples seems to be a completely separate issue.
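For reference, the two-sided p-value implied by the summary statistics in the question can be computed in R as:
```
# Z statistic from the two sample means and their standard errors
se <- sqrt(3.8397^2 / 430 + 2.5344^2 / 25)
z  <- (94.488 - 95.9461) / se
z                   # ~ -2.7
2 * pnorm(-abs(z))  # two-sided p-value, ~ 0.007
```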
| null | CC BY-SA 4.0 | null | 2023-06-01T16:23:35.400 | 2023-06-01T16:23:35.400 | null | null | 8013 | null |
617579 | 1 | null | null | 0 | 13 | I'm interested in training a Random Forest model to predict conditional quantiles (e.g., P75, P90) given the features, instead of the conditional mean. [Sklearn's RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) has a mean absolute error (MAE) loss which builds trees using the median of each of the terminal nodes, thus minimizing MAE - naturally, the resulting model is a P50 model.
If I now want a P75 model, it makes sense to use the quantile loss with q=0.75 and grow the trees by predicting the empirical P75 (a.k.a. Q3). However, this is not implemented in sklearn.
The best solution I've found is [Quantile Regression Forests](https://stat.ethz.ch/%7Enicolai/quantregforests.pdf). Here the authors build trees "normally" (which I assume means with the regular MSE loss), store the targets of all terminal nodes of all trees, and use these to output the predicted quantile.
My questions are: why would this work if trees are built "normally", meaning the individual trees are grown to optimize MSE, not quantiles? By growing the trees with the desired quantile loss (as described previously), wouldn't we get better splits?
Thanks!
| Why Quantile Regression Forest grows trees in standard way (i.e. using MSE to decide splits)? | CC BY-SA 4.0 | null | 2023-06-01T16:38:58.703 | 2023-06-01T16:38:58.703 | null | null | 377182 | [
"machine-learning",
"random-forest",
"scikit-learn",
"quantiles",
"quantile-regression"
] |
617580 | 1 | null | null | 0 | 5 | I have a pandas Dataframe with columns A,B and C.
```
import pandas as pd
import numpy as np
size = 30
column_a_values = np.random.randint(1, 4, size=size)
column_b_values = np.random.choice(['x', 'y', 'z'], size=size)
column_c_values = np.random.rand(size)
data = {'A': column_a_values, 'B': column_b_values, 'C': column_c_values}
df = pd.DataFrame(data)
df.head()
```
```
A B C
0 2 y 0.351586
1 1 y 0.353851
2 2 x 0.404016
3 1 y 0.298571
4 1 x 0.018221
```
I want to split the rows into a train and a test set. My requirements are that the split is stratified over column B but grouped by column A, meaning that no value of A may appear in both train and test. Depending on the split ratio, the current example data might not be fit for this purpose, but I'm able to drop some of the rows if needed.
| Split pandas dataframe stratified by one column but grouped by another | CC BY-SA 4.0 | null | 2023-06-01T16:54:43.017 | 2023-06-01T16:54:43.017 | null | null | 389361 | [
"pandas",
"train-test-split"
] |
617581 | 1 | null | null | 0 | 24 | Let's say we run a model with a three-way interaction that includes a quadratic term with its associated linear term:
```
library(emmeans)
library(ggplot2)
library(tidyverse)
car_data <- mtcars[,c("mpg" , "vs" , "am" , "disp")]
car_data$vs <- as.factor(car_data$vs)
car_data$am <- as.factor(car_data$am)
model <- lm(mpg ~ vs * am * disp + vs * am * I(disp^2), data=car_data)
```
And then we calculate the response (mpg) for disp = 100 based on the summary output of the model:
```
intercept_method1 = data.frame(coef(model))[1,]
coef1_method1 = data.frame(coef(model))[4,]
coef2_method1 = data.frame(coef(model))[5,]
x = 100
mpg_x100__method1 = intercept_method1 + coef1_method1 * x + coef2_method1 *x^2 # 0.85
```
Then we learn mpg is around 0.85. Now I tried to do the same in emtrends (as I am actually interested in obtaining SEs or CIs). I calculated mpg at dist=100 as follows:
```
intercept_method2 = data.frame(emmeans(model, c("vs", "am", "disp"),
at = list(vs="0", am="0", disp = 0)))$emmean
trends <- emtrends(model, ~ vs * am, var = "disp", max.degree=2)
df_trends <- data.frame(summary(trends))
coef1_method2 = subset(df_trends, vs == "0" & am == "0" & degree == "linear")$disp.trend
coef2_method2 = subset(df_trends, vs == "0" & am == "0" & degree == "quadratic")$disp.trend
mpg_x100__method2 = intercept_method2 + coef1_method2 * x + coef2_method2 *x^2 # -12.44307
```
This yields -12.5 for mpg, which is very different from 0.85. The discrepancy comes from the difference in coef1 (the linear coefficient), as coef2 and the intercept are the same with both methods: coef1_method1 = 0.192 and coef1_method2 = 0.059. Why are they different? How do I obtain the coefficient of the linear term with emtrends (and how can I obtain its SE and CI)?
Also I was surprised to see that adding the argument
```
at = list(disp = 100)
```
to the emtrends call to have
```
trends <- emtrends(model, ~ vs * am, var = "disp", max.degree=2, at = list(disp = 100))
```
slightly changes coef1_method2, even though it remains very different. Shouldn't the slope/coef1 be the same for any disp value, since it represents the change per one-unit change in disp?
PS. `library(ggeffects) ggemmeans(model, c("disp[100]", "vs[0]", "am[0]"))` provides the same result as method1
| Why do emtrends and summary(model) provide different coefficients for a linear effect? | CC BY-SA 4.0 | null | 2023-06-01T17:02:58.523 | 2023-06-01T17:10:13.773 | 2023-06-01T17:10:13.773 | 87337 | 87337 | [
"r",
"regression",
"regression-coefficients",
"lsmeans",
"marginal-effect"
] |
617583 | 2 | null | 617575 | 1 | null | It may help you to read my answer to [Interpretation of plot (glm.model)](https://stats.stackexchange.com/a/139624/7290). The plots that R is generating are not specific to a logistic regression. They are the plots for a standard OLS regression. For example, the qq-plot is not of the residuals vs a binomial distribution. It shows them relative to a normal distribution. However, there is no theoretical reason that the residuals from a logistic regression 'should' be normally distributed. On the other hand, the residuals from a logistic regression very much should be heteroscedastic, because the variance is a function of the mean. Your model seems fine, as far as I can tell.
| null | CC BY-SA 4.0 | null | 2023-06-01T17:06:47.120 | 2023-06-01T17:06:47.120 | null | null | 7290 | null |
617584 | 1 | null | null | 0 | 8 | I am using ggplot2 to visualise map-related data. I have coloured regions according to a continuous value, and I would like to add a legend with colors and region names. My own data is a bit cumbersome to share, but I have recreated the scenario with public data ([Mapping in ggplot2](https://r-charts.com/spatial/maps-ggplot2/)). The following code creates the included map:
```
library(ggplot2)
library(sf)
# Import a geojson or shapefile
map <- read_sf("https://raw.githubusercontent.com/R-CoderDotCom/data/main/shapefile_spain/spain.geojson")
ggplot(map) +
geom_sf(color = "white", aes(fill = unemp_rate)) +
geom_sf_text(aes(label = name), size = 2)
```
[](https://i.stack.imgur.com/MeDKN.png)
Instead of the continuous default legend, I would like to have a legend with names, numbers and colors. Basically, a legend that shows the `name` and `unemp_rate` columns of the data with colors matching the map (eg. `unemp_rate`). Somewhat like the legend of the second included picture (but the colors are not right).
|name |unemp_rate |
|----|----------|
|"Andalucía" |18.68 |
|"Aragón" |8.96 |
|"Principado de Asturias" |11.36 |
|"Islas Baleares" |9.29 |
|"Islas Canarias" |17.76 |
|"Cantabria" |8.17 |
|"Castilla y León" |10.19 |
|"Castilla-La Mancha" |14.11 |
|"Cataluña" |9.29 |
|"Comunidad Valenciana" |12.81 |
|"Extremadura" |16.73 |
|"Galicia" |11.20 |
|"Comunidad de Madrid" |10.18 |
|"Región de Murcia" |12.18 |
|"Comunidad Foral de Navarra" |8.76 |
|"País Vasco" |8.75 |
|"La Rioja" |10.19 |
|"Ceuta y Melilla" |23.71 |
[](https://i.stack.imgur.com/QdRUG.png)
| Adding a legend to a ggplot map | CC BY-SA 4.0 | null | 2023-06-01T17:12:31.123 | 2023-06-01T17:12:31.123 | null | null | 38227 | [
"r",
"data-visualization",
"ggplot2"
] |
617585 | 1 | null | null | 0 | 25 | I have a data set in which my dependent variable is a score that can vary between 8 and 400.
```
set.seed(2023)
n <- 50
v <- rexp(n, .01)
obs <- mean(v)
```
I'd like to calculate if the effect I observe `obs` is larger than expected by chance. I understand that I need to build a null distribution. I tried
```
set.seed(2023)
prms <- replicate(10^4, mean(sample(v,n,rep=T)))
mean(abs(prms)>=abs(obs))
```
Is this the correct approach?
| calculate if observed effect is larger than chance | CC BY-SA 4.0 | null | 2023-06-01T17:34:25.520 | 2023-06-01T17:34:25.520 | null | null | 212831 | [
"r"
] |
617586 | 1 | null | null | 1 | 46 | I had used the GPT chat to ask some straightforward questions. However, one of the responses left me feeling quite confused. Currently, I am studying the classical regression model given by
$$y_t = x_t\beta + u_t,$$
where $t = 1,...,T$ and $x_t$ is a deterministic (fixed) variable. As a result, I requested GPT to generate two realizations of this model using Julia. Here is the provided answer:
```
# Realization 1
T = 100
x = fill(3, T) # Fixed value of x for all observations (3 in this case)
y = 2 .* x .+ randn(T) # Y values with a linear relationship to x
# Realization 2
T = 100
x = fill(3, T) # Fixed value of x for all observations (3 in this case)
y = 2 .* x .+ randn(T) # Y values with a linear relationship to x
```
So its response led me to wonder about the concept of a population model in the context of a classical linear regression model with fixed coefficients. This model is commonly used in introductory econometrics courses for pedagogical purposes, but in my case, it has been somewhat confusing.
In my understanding, the population model would be represented as:
$$y = x \beta + u $$
where $x$ is deterministic and fixed, while $y$ and $u$ are random variables. So, GPT's answer aligns with this definition. However, I had anticipated a slightly different response
```
T = 100
x = rand(T) # Fixed
# Realization 1
y = 2 .* x .+ randn(T) # Y values with a linear relationship to x
# Realization 2
y = 2 .* x .+ randn(T) # Y values with a linear relationship to x
```
And now I'm confused. Could you help me?
| Understanding and simulating a classic linear regression model with fixed coefficients | CC BY-SA 4.0 | null | 2023-06-01T17:36:49.640 | 2023-06-02T06:48:17.157 | 2023-06-02T06:48:17.157 | 373088 | 373088 | [
"regression",
"self-study",
"simulation"
] |
617587 | 1 | null | null | 0 | 15 | I'm trying to learn on my own about confidence intervals for multinomial proportions, and I stumbled upon several resources on the subject talking about various methods (e.g. Goodman, or Sison-Glaz) . Yet, the resources I found are mathematically dense, and without guidance I'm afraid I have a hard time understanding how to interpret exactly such confidence intervals (beyond the general idea that they show some degree of uncertainty around each estimate).
For the moment, the two "hypotheses" I have about their interpretation are the following (but I guess that I may be completely on the wrong track):
- given a confidence level of 95% (just an example), if we were to repeat the sampling procedure many times, then the true proportions (all of them) would fall within the simultaneous confidence intervals 95% of the time. In other words, taking an example with 4 exclusive categories: if 3 of the confidence intervals correctly capture their corresponding true population proportions but 1 confidence interval fails to capture its true proportion, the whole set is considered a failure, and the simultaneous confidence intervals for this sample (all 4 of them) count toward the 5% we expect to fail.
- or, taking the example above, this is not considered a complete failure, but considered 3 successes and 1 failure.
Is one of these two interpretations correct? If not, what is the correct interpretation? Do different methods require different interpretations?
I'm not against mathematical notation or explanation, but some concrete/numerical examples would also certainly help me wrap my head around it. I'm totally comfortable with explanations using code snippets (in R or other languages).
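For instance, to make my first hypothesis concrete, this is the kind of simulation I have in mind; I use crude Bonferroni-adjusted Wald intervals only as a placeholder so that the code runs (the methods I actually care about are Goodman and Sison-Glaz), and I am not claiming this reading is the correct one:
```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p_true = np.array([0.4, 0.3, 0.2, 0.1])      # hypothetical true proportions, 4 categories
n, n_sim, k = 200, 5000, 4
z = stats.norm.ppf(1 - 0.05 / (2 * k))       # Bonferroni: split alpha over the k intervals

all_covered = 0
for _ in range(n_sim):
    p_hat = rng.multinomial(n, p_true) / n
    half = z * np.sqrt(p_hat * (1 - p_hat) / n)
    covered = (p_true >= p_hat - half) & (p_true <= p_hat + half)
    all_covered += covered.all()             # reading 1: the whole set of 4 must cover

print(all_covered / n_sim)                   # under reading 1, this is the "95%" being promised
```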
Thanks.
| How to interpret simultaneous confidence intervals for multinomial proportions? | CC BY-SA 4.0 | null | 2023-06-01T17:38:52.413 | 2023-06-01T17:38:52.413 | null | null | 389365 | [
"confidence-interval",
"multinomial-distribution"
] |
617588 | 1 | null | null | 0 | 12 | Lets say I have $n$ observations of $Y$, $X_1$, and $X_2$. All three are normalized (mean 0, variance 1). I run two regressions:
- $Y=\beta_1X_1$
- $Y=\beta_2X_2$
Given these $n$ observations, is it possible for $\beta_1 > \beta_2$, but $tstat(\beta_1) < tstat(\beta_2)$ or for $\beta_1 < \beta_2$, but $tstat(\beta_1) > tstat(\beta_2)$?
| In standardized regression with same number of samples, can one X variable have higher beta but lower T-stat than another | CC BY-SA 4.0 | null | 2023-06-01T17:57:33.173 | 2023-06-01T18:36:46.550 | null | null | 233845 | [
"regression",
"regression-coefficients",
"standard-error"
] |
617589 | 2 | null | 533338 | 0 | null | You went looking for a difference and did not find one. You now want to claim that there is no difference.
>
Well how hard did you look?
If you do not find something but did not look hard, then your claim that the something is not there is a weak one. If you did look hard, then your claim that the something is not there means more.
In hypothesis testing, test [power](/questions/tagged/power) is what quantifies how hard you looked for a difference. Power depends on the difference you want to find. If you have high power to find a difference of $\delta$ and do not find a difference, you can start to make claims about there not being a difference of $\delta$ or more.
Until you quantify how hard you looked by considering your test power, however, you cannot make strong conclusions about distribution similarity.
A standard claim is that an absence of evidence (high p-value) is not evidence of absence. However, if you looked hard for evidence (high power) and still have an absence of evidence, then you have a stronger case for that absence.
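As a rough illustration with made-up numbers (a sketch, not tied to any particular data), the power to detect a given difference can be computed with standard tools, e.g. for a two-sample $t$ test in Python:
```
from statsmodels.stats.power import TTestIndPower

# Hypothetical setup: two groups of 50, looking for a standardized difference
# (Cohen's d) of 0.3 at alpha = 0.05.
power = TTestIndPower().power(effect_size=0.3, nobs1=50, alpha=0.05, ratio=1.0)
print(power)  # if this is low, a high p-value says little about "no difference of that size"
```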
| null | CC BY-SA 4.0 | null | 2023-06-01T18:05:19.133 | 2023-06-01T18:05:19.133 | null | null | 247274 | null |
617590 | 1 | null | null | 0 | 18 | Consider the classic regression setting where we want to model Y using some design matrix X. However, let's say that part of the design matrix is a matrix of familial relationships up to some degree of relatedness. For example, let's say there are Z families in the dataset, with many individuals belonging to a single-person family. So this part of the design matrix has Z columns but is very sparse (e.g. many columns are all zeroes but for the single individual that is in that family). However, this cannot just be ignored because there are families with multiple people who also have highly correlated characteristics both in the rest of the design matrix and Y (for example, if Y is a disease, there is a higher chance of people in the same family having the disease).
What are ways to address this so that the effect of belonging to different families is accounted for?
| How to incorporate/account for family membership in regression | CC BY-SA 4.0 | null | 2023-06-01T18:08:34.383 | 2023-06-01T18:08:34.383 | null | null | 263924 | [
"regression"
] |
617591 | 2 | null | 617501 | 0 | null | You could study this as a model with main effects for 'female/male' and for 'framing' along with different intercepts per question. Then, test whether adding an interaction for the two main effects makes the estimates significantly better.
A problem, however, is how to specify this model. Will it be a typical linear model or some fancier function? And how will you incorporate the interaction? In addition, will you model the mean (is that a good parameter to describe the results), and how do you deal with the randomness and the distribution across different participants?
- The interaction might be difficult to interpret.
The result might be more difficult to interpret depending on whether you regard the scale as linear or not. Is a change from 6.5 to 7 the same as a change from 6 to 6.5? You will need to investigate the scale beforehand and use the studies that designed the scale and questions to get an idea about the range and distribution of the responses, based on which you can create a sensible scale for comparing results.
Example: say you test the performance of females and males on a running test and the effect of some training program. The men have on average times of 60 seconds and 59.4 seconds for with and without the training. The women have on average times of 70 seconds and 69.3 seconds for with and without training. In this case, do you consider there to be a different effect, with the men having a 0.6 second improvement and the women a 0.7 second improvement, or do you consider there to be the same effect, with both cases having a 1% improvement?
- The interaction might be incorporated in different ways. Is the difference about a change in magnitude or about an absolute difference.
For example in your data, the men had effects on the three questions of M = 0.592, 0.393 and -0.647 while the women had effects of F = 0.724, 0.525 and -0.708. Do you consider a difference in the magnitude F/M ≈ 1.223, 1.336 and 1.094, where the difference between men and women is in the same >1 direction on all three questions? Or do you consider a difference in the absolute level F-M ≈ 0.132, 0.132, -0.061, where the difference between men and women is not in the same (positive/negative) direction on all three questions?
- The distribution of the Likert scale is not a normal distribution. You could analyse it as a normal distribution, and for the mean this might work without much problem (the estimate of the mean could be approximately normally distributed). But possibly changes of the distribution in aspects other than the mean are interesting. Possibly the median is more interesting, or the percentage of people that differ by a specific amount. The same change of the mean on the Likert scale can arise if 50% of the people change by 2 points or if all change by 1 point, and the two can have different interpretations.
A useful model to learn about is [two-way ANOVA](https://en.m.wikipedia.org/wiki/Two-way_analysis_of_variance). But, as mentioned above, the situation might be more complicated and require a more complicated model (E.g a [Rasch model](https://en.m.wikipedia.org/wiki/Rasch_model)).
| null | CC BY-SA 4.0 | null | 2023-06-01T18:22:39.277 | 2023-06-01T18:44:56.450 | 2023-06-01T18:44:56.450 | 164061 | 164061 | null |
617592 | 1 | 617607 | null | 3 | 140 | Let $X$ be a symmetric random variable with bounded moments and standard deviation $\sigma$. I want to lower-bound $\mathbb E[|X|]$ in terms of $\sigma$. Here is the formal conjecture; I wonder if this is true or could be refuted:
There exists a global constant $C$ such that for every symmetric r.v. $X$ with bounded moments and standard deviation $\sigma$, it holds that $\mathbb E[|X|] \geq C \sigma$.
I suspect that this is a straightforward result, but I could not find anything about it nor could I prove it myself. Any ideas?
Edit: Symmetry w.r.t. zero, namely $f(x)=f(-x)$ for all $x$.
---
Some examples:
- If $X$ has Rademacher distribution (-1 w.p. 0.5 and 1 w.p. 0.5), then $\mathbb E[|X|]=1$ and $Var(X)=1=\sigma^2$; therefore, $\mathbb E[|X|] = 1\cdot \sigma$ (the above holds with $C=1$).
- For $X\sim Uniform(-b,b)$ for some $b>0$, $Var(X)=\frac{b^2}{3}=\sigma^2$ and $\mathbb E[|X|]=\frac{b}{2}$; thus, the above holds for $C=\frac{\sqrt{3}}{2}\approx 0.866$.
- For $X\sim N(0,\sigma^2)$, $|X|$ is half-normal and $\mathbb E[|X|]=\sigma \sqrt{\frac{2}{\pi}}$; hence, the above holds for $C=\sqrt{\frac{2}{\pi}}\approx 0.797$.
| Expectation of first of moment of symmetric r.v. in terms of variance | CC BY-SA 4.0 | null | 2023-06-01T18:23:32.883 | 2023-06-01T20:29:38.897 | 2023-06-01T20:01:50.613 | 79114 | 79114 | [
"variance",
"bounds",
"inequality"
] |
617593 | 1 | null | null | 1 | 8 | I'm analyzing quality of governance on food security indicators for the last 3 years for 74 countries. To be precise, I am interested in a within-country variation across time (before and during the Covid-19 pandemic), as well as the between-country variation. This choice was informed by [Bell et. al.’s (2019) article.](https://eprints.whiterose.ac.uk/134123/10/Bell2018_Article_FixedAndRandomEffectsModelsMak.pdf)
REWB is a specific type of mixed effects model but since it is a recent method, I am not sure how to go about it on SPSS (codes for R are available but I can only use SPSS)
Anyone knows how to perform REWB on SPSS?
| Can I perform a random-effect-within-between analysis on SPSS? | CC BY-SA 4.0 | null | 2023-06-01T18:25:05.320 | 2023-06-02T07:00:45.747 | 2023-06-02T07:00:45.747 | 389368 | 389368 | [
"mixed-model"
] |
617594 | 2 | null | 559814 | 1 | null | You can think of a neural network as being layers of feature extraction followed by a regression on the extracted features. That is, it is common for the final hidden layer of a neural network to connect every neuron to the output neuron(s).
This is a generalized linear model on the final extracted features that are given in that last hidden layer. The image below that [I posted yesterday](https://stats.stackexchange.com/a/617489/247274) shows how this works for the final hidden layer and an output neuron.
[](https://i.stack.imgur.com/AkvZD.png)
However, nothing requires you to have that final regression be a linear model. Instead of $y = \sigma\left(
b +
\omega_{blue}x_{blue} +
\omega_{red}x_{red} +
\omega_{purple}x_{purple} +
\omega_{grey}x_{grey}
\right)$, for some activation function $\sigma$, you could have something like $y = \sigma\left(
b +
\omega_{blue}x_{blue}^{\beta_{blue}} +
\omega_{red}x_{red}^{\beta_{red}} +
\omega_{purple}x_{purple}^{\beta_{purple}} +
\omega_{grey}x_{grey}^{\beta_{grey}}
\right)$. Then the final layer is a nonlinear regression but also differentiable.
Any differentiable nonlinear regression on that final set of extracted features (which will come from your convolutional layers) could be viable. I will give a few more examples below.
$$
y = \sigma\left(
\dfrac{
\omega_{blue}x_{blue} + \omega_{red}x_{red}
}{
\left(\omega_{purple}x_{purple}\right)^2 + \left(\omega_{grey}x_{grey}\right)^2 + 1
}
\right)
$$
$$
y = \sigma\left(
b +
\sin\left(\omega_{blue}x_{blue}\right) +
\omega_{red}x_{red} +
\omega_{purple}x_{purple} +
\omega_{grey}x_{grey}
\right)
$$
$$
y = \sigma\left(
b +
\exp\left(\omega_{blue}x_{blue} +
\omega_{red}x_{red} +
\omega_{purple}x_{purple} +
\omega_{grey}x_{grey} \right)
\right)
$$
If you have an idea for a differentiable nonlinear regression on the neurons in the final hidden layer after your convolutional layers, the math says you can use it. You might lose the ability for your network to be a [universal approximator](https://en.wikipedia.org/wiki/Universal_approximation_theorem), however.
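For concreteness, here is a minimal sketch of such a head written as a custom layer (assuming a PyTorch setting; the class, names, and the clamping of inputs to keep the power well defined are all just illustrative choices):
```
import torch
import torch.nn as nn

class PowerRegressionHead(nn.Module):
    """y = sigmoid(b + sum_i w_i * x_i ** beta_i), a differentiable nonlinear head."""
    def __init__(self, in_features: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(in_features) * 0.1)
        self.beta = nn.Parameter(torch.ones(in_features))  # learnable exponents
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clamp_min(1e-6)                               # keep x ** beta well defined
        return torch.sigmoid(self.b + (self.w * x.pow(self.beta)).sum(dim=-1))

head = PowerRegressionHead(in_features=4)                   # e.g. the four extracted features above
y = head(torch.rand(8, 4))                                  # batch of 8 feature vectors
```
The only real requirement is that the mapping stays differentiable so that gradients can flow through it during training.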
| null | CC BY-SA 4.0 | null | 2023-06-01T18:27:25.467 | 2023-06-01T18:27:25.467 | null | null | 247274 | null |
617595 | 2 | null | 617588 | 0 | null | If you allow for different signs, then yes. If the standardized slopes (i.e., the correlations) have the same sign, then assuming you are using the same data set (and same sample size), no...it would not be possible for the t-ratios to go in the other direction.
| null | CC BY-SA 4.0 | null | 2023-06-01T18:36:46.550 | 2023-06-01T18:36:46.550 | null | null | 199063 | null |
617596 | 1 | null | null | 0 | 6 | I've got a logistic model where the annual change for each subject is of interest. Regression, easy. Generate estimated marginal trend for each subject, easy. Now, I have a set of what I presume to be trends expressed as log-OR. How do I translate those into something easier to understand, like "percent change per year", for example?
| Interpreting marginal trends of logistic model | CC BY-SA 4.0 | null | 2023-06-01T18:39:47.907 | 2023-06-01T18:39:47.907 | null | null | 28141 | [
"logistic",
"odds-ratio"
] |
617597 | 1 | null | null | 0 | 5 | Given two groups of subjects (A & B) tested at multiple timepoints (A-T1, A-T2, A-T3 / B-T1, B-T2), with each subject tested multiple times at each timepoint - how can one build a single model to 1. Test if at specific timepoint (i.e. A-T2) group A is different from overall group B and 2. Test if there is a difference across time within groups?
Ideally, it would be great to test both hypotheses with a single model.
For now I am considering timepoint and group as fixed effects and subject as random effects:
```
lmer(measure ~ TP*group + (1|subject), df)
```
| How to obtain difference between groups at specific timepoints (mixed effects models) | CC BY-SA 4.0 | null | 2023-06-01T19:24:16.107 | 2023-06-01T19:24:16.107 | null | null | 294668 | [
"r",
"mixed-model"
] |
617598 | 1 | null | null | 0 | 11 | Can someone help me understand how exchangeability works for Latent Dirichlet Allocation (LDA) and how it enables us to treat words in the test set that are not present in the training set?
I know that LDA is by nature an exchangeable model, and I know the basic definition of exchangeability and how probability does not change subject to permutation. The thing that I am not sure about is the real-life implications of that on a model like LDA and how this really works in detail.
I would appreciate any thorough explanation or good references.
| exchangeability in LDA | CC BY-SA 4.0 | null | 2023-06-01T19:25:07.430 | 2023-06-01T19:25:07.430 | null | null | 351918 | [
"machine-learning",
"probability",
"inference",
"latent-dirichlet-alloc",
"exchangeability"
] |
617599 | 1 | null | null | 1 | 13 | >
Let $(X_i)$ be iid from a distribution proportional to $x^{\theta}1_{(0,1)}(x)$.
a) Find the most powerful test of size $\alpha$ of $H_0: \theta=0$ against $H_1: \theta=1$.
>
b) Find the UMP test of size $\alpha$ of $H_0: \theta=0$ against $H_1: \theta>0$.
I have solved part a): by the Neyman–Pearson lemma, the critical region turns out to be $\{-\sum\log x_i < F^{-1}(\alpha)\}$, where $F$ is the $\Gamma(n,1)$ cdf.
I was hoping that I would get the same critical region when I replace $1$ with $\theta_0$ in the alternative hypothesis, but instead we get $\{ -\sum\log x_i < F^{-1}_{\theta_0}(\alpha)\}$ where $F_{\theta_0}$ is the $\Gamma(n,\theta_0)$ cdf.
If we could find a $\theta_1$ such that $F^{-1}_{\theta_1} (\alpha)$ is minimal, then the size of such a test would be $\alpha$ and it would have maximal power over all $\theta_0>0$ And we would be done.
Please don’t quote Rubin’s theorem I would like to understand this
| Finding UMP test for composite hypothesis | CC BY-SA 4.0 | null | 2023-06-01T19:45:41.147 | 2023-06-01T19:45:41.147 | null | null | 389373 | [
"hypothesis-testing",
"statistical-significance"
] |
617600 | 2 | null | 414723 | 1 | null | >
This assumes that the analogous $Other$ term is zero for the linear quantile regression, bringing me to my question.
I was mistaken about this. It is totally acceptable to compare the performance on this "pinball loss" of a model of interest to some kind of baseline model, such as prediction the marginal/pooled quantile every time. Thus, that $R^2_q$ equation I gave four years ago would describe the reduction of pinball loss compared to predicting the marginal/pooled quantile every time. [This is quite analogous to the usual $R^2$ for square loss](https://stats.stackexchange.com/a/580261/247274) and seems to satisfy the original desire for something $R^2$-like for quantile regression.
| null | CC BY-SA 4.0 | null | 2023-06-01T19:53:22.387 | 2023-06-01T19:53:22.387 | null | null | 247274 | null |
617601 | 1 | null | null | 0 | 50 | As part of a proof, I need to take the first derivative of the log of the following multivariate normal density: $(2\pi)^{-k/2} |\Sigma|^{-1/2} \exp(\frac{-1}{2} x'\Sigma^{-1}x)$.
In this case, the mean is zero and $\Sigma$ has an exchangeable correlation structure, thus $\Sigma = D A(\rho)D$, where $D$ is a diagonal matrix with entries $d = (\sigma_1, \ldots , \sigma_k)$ and $A(\rho) = (1-\rho)I + \rho jj^\top$, where $I$ is the $k\times k$ identity matrix and $j = (1,\ldots,1)^\top$ is a vector of $1$s (of length $k$). I need to take derivatives with respect to $\sigma_j^2$, where $j \in \{1,\ldots,k\}$.
Clearly, the first algebraic steps are:
$$\frac{\partial}{\partial \sigma_j^2} \log \left[(2\pi)^{-k/2} |\Sigma|^{-1/2} \exp\left( \frac{-1}{2} x^{\top} \Sigma^{-1} x \right) \right]$$
$$= \frac{\partial}{\partial \sigma_j^2} \left[ -\frac{1}{2} \log |\Sigma| - \frac{1}{2} x^\top \Sigma^{-1} x \right],$$
but I am getting stuck at the point where the derivative actually needs to be taken. How can I take the derivative with respect to $\sigma_j^2$?
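My guess is that the standard matrix-calculus identities are the way in (writing them here in case I am simply misapplying them):
$$\frac{\partial}{\partial \sigma_j^2} \log |\Sigma| = \operatorname{tr}\!\left(\Sigma^{-1}\,\frac{\partial \Sigma}{\partial \sigma_j^2}\right), \qquad \frac{\partial \Sigma^{-1}}{\partial \sigma_j^2} = -\,\Sigma^{-1}\,\frac{\partial \Sigma}{\partial \sigma_j^2}\,\Sigma^{-1},$$
with $\Sigma = D A(\rho) D$, so that $\frac{\partial \Sigma}{\partial \sigma_j^2}$ only involves the derivative of $D$ (through $\sigma_j = \sqrt{\sigma_j^2}$), but I do not see how to carry this through cleanly for the exchangeable structure.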
I would appreciate any help on this! Eventually working towards showing that the first derivative (with respect to $\sigma^2$) is bounded, but for now, just need to derive a proper expression for the first derivative. Thanks!
| First derivative of multivariate normal density with exchangeable correlation structure | CC BY-SA 4.0 | null | 2023-06-01T19:57:41.047 | 2023-06-02T21:47:47.220 | 2023-06-02T16:26:51.777 | 229657 | 229657 | [
"normal-distribution",
"density-function",
"multivariate-normal-distribution",
"derivative",
"scoring-rules"
] |
617602 | 1 | null | null | 1 | 4 | Is there a website with a calculator to input 2 sets of data to perform the post paired t test adjusted with the Bonferroni correction? Or is anyone available to hire?
| posthoc paired t test | CC BY-SA 4.0 | null | 2023-06-01T20:00:17.143 | 2023-06-01T20:00:17.143 | null | null | 389375 | [
"hypothesis-testing",
"t-test",
"post-hoc",
"bonferroni"
] |
617603 | 2 | null | 549682 | 0 | null | This seems iffy to me, as it requires you to condition on the outcome, which is the quantity you want to predict (and will not always know).
Sure, `sklearn` can handle the calculations just fine. When you plug the predicted and true values into `r2_score`, you get a calculation based on the following.
$$
R^2_{\text{sklearn}}=
1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
This $\bar y$ is the mean of the values you have given the `r2_score` function, that is, the values in the percentile range of interest.
The interpretation of this is that you compare the square loss of your model on the input data to the square loss of a model that predicts the same value $(\bar y)$ every time.
I find it questionable to predict this $\bar y$ every time, however. If you do it for in-sample data, then you are not considering the possibility of overfitting. If you do it for out-of-sample data, then it is not clear what $\bar y$ should be, since you are not supposed to know the true outcomes to be able to calculate a mean of them.
Put yourself in the position of having data where you truly do not know the outcome. I do not see a reasonable way to calculate the denominator. At least for a more standard out-of-sample $R^2$-style statistic, [you can take $\bar y$ as the mean of the training data](https://stats.stackexchange.com/a/616976/247274). Once you start applying restrictions on the values of $y$, however, the interpretation gets iffy.
| null | CC BY-SA 4.0 | null | 2023-06-01T20:12:16.873 | 2023-06-01T20:12:16.873 | null | null | 247274 | null |
617605 | 1 | null | null | 1 | 11 | An interesting statistical curiosity I like is the following:
Suppose you have 3 random variables X, Y and Z.
Suppose that univariately X & Y can explain 2% and 5% of the variation in Z respectively.
The theoretical upper bound of how much of the variance you can explain in Z in a multivariate regression with X & Y is 100%.
Does anyone have an intuitive explanation for why this is the case, or, even better, can anyone think of a real-world example of this unintuitive fact?
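For concreteness, here is an artificial version of the phenomenon (a quick simulation sketch, with roughly 5%/5% rather than my 2%/5% numbers); what I am still missing is the intuition and a real-world instance:
```
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.9

# X and Y strongly correlated; Z is (exactly) their difference
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
z = x - y

r2_x = np.corrcoef(z, x)[0, 1] ** 2   # ~ 0.05: X alone explains ~5% of Z
r2_y = np.corrcoef(z, y)[0, 1] ** 2   # ~ 0.05: Y alone explains ~5% of Z
print(r2_x, r2_y)                     # yet a regression of Z on (X, Y) has R^2 = 1, since Z = X - Y
```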
| real world example of two variables that cannot explain a dependent variable univariately but perfectly explain the depend variable jointly | CC BY-SA 4.0 | null | 2023-06-01T20:14:42.867 | 2023-06-01T20:14:42.867 | null | null | 389376 | [
"regression",
"linear"
] |
617606 | 1 | null | null | 0 | 32 | My question come across from this post: [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/questions/222179/how-to-know-that-your-machine-learning-problem-is-hopeless)
Is there any mathematical or statistical way to prove that my machine learning problem is hopeless?
I have a project which tries to predict binary classes (0 and 1). However, I have tried several models (some deep learning and some classical ML), but the models always give about 50% validation accuracy, i.e. they do not learn. I have tried fine-tuning, feature engineering, dimensionality reduction, and normalization/standardization, but it did not work.
How can I use a mathematical or statistical approach to show that my ML problem is hopeless, or that the target is simply not predictable?
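For example, is something like a label-permutation test the kind of evidence I should be looking for? A sketch of what I mean (the data and classifier here are placeholders, not my real setup):
```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

# Placeholder data, just to show the idea
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 2, size=300)

score, perm_scores, p_value = permutation_test_score(
    LogisticRegression(max_iter=1000), X, y,
    scoring="accuracy", cv=5, n_permutations=200, random_state=0
)
# If `score` is no better than the permuted-label scores (large p_value),
# the features carry no signal detectable by this model family.
print(score, p_value)
```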
| How do you use mathmatical way to know that your machine learning problem is hopeless? | CC BY-SA 4.0 | null | 2023-06-01T20:20:33.483 | 2023-06-02T08:36:47.673 | null | null | 382355 | [
"machine-learning",
"neural-networks",
"mathematical-statistics",
"forecastability"
] |
617607 | 2 | null | 617592 | 6 | null | You can actually develop your first example a little more to refute this conjecture, that is put some mass at $0$.
Let $X$ be the random variable such that $P[X = 0] = 1 - p$, $P[X = -1] = P[X = 1] =
\frac{1}{2}p$, where $p \in (0, 1)$. It is then easy to verify that $E[|X|] = p$,
$\sigma = \sqrt{E[X^2]} = \sqrt{p}$, whence $\frac{E[|X|]}{\sigma} = \sqrt{p}$. So if your conjecture held, the global constant $C$ would have to be bounded above by $\sqrt{p}$, which can be made arbitrarily small. This forces $C$ to be $0$ rather than positive.
| null | CC BY-SA 4.0 | null | 2023-06-01T20:29:38.897 | 2023-06-01T20:29:38.897 | null | null | 20519 | null |
617608 | 2 | null | 255814 | 1 | null | You have an outcome of interest: if some stock market increases or decreases. While I have [my doubts](https://stats.stackexchange.com/a/616275/247274) about how useful this is, a binary outcome is a fine place to start practicing machine learning.
You also have predictor variables that you will consider, which you calculate from your text analysis.
Thus, you are asking if this text-based feature is predictive of whether or not the stock market increases or decreases.
After you did this, you got a model that has poor performance. There are a few reasons for that.
- Is there a reason to believe that the text-based data would be predictive of the stock market? There must be economic and political reasons for stock market movements, and you seem not to capture such data.
- Even if the Tweets contains a great deal of information that is predictive of stock market movements, is your way of extracting information from the Tweets one that should preserve that information?
- While investor sentiment about investments might be reasonably regarded as predictive of stock market movement, your Tweets capture much information that is unrelated. For instance, a Tweet like, "I feel so down after my date stood me up," has a negative sentiment, but I see little connection to the stock market. I would expect much of Twitter to deal with the kind of Tweet I gave above instead of investor sentiment about investing.
I see a huge issue leading to your poor performance being that your features probably just do not have much to do with investing and should not be expected to be predictive of stock market movements.
| null | CC BY-SA 4.0 | null | 2023-06-01T20:34:05.400 | 2023-06-01T20:34:05.400 | null | null | 247274 | null |
617609 | 1 | null | null | 1 | 21 | I am learning about the estimation of fractional response models (those with a lower and upper bound, say 0 to 1), using Stata. I came across [this example on the Stata page](https://www.stata.com/features/overview/fractional-outcome-models/), which I'm copy-pasting below for simplicity. I have 3 questions regarding the interpretation of coefficients, and any help is much appreciated.
"We are going to analyze an air-pollution index that is scaled 0 to 1, inclusive (...)
We model pollution as determined by the number of older, pollution-producing cars per capita; percentage of output due to industry; and annual rainfall. We use probit. We type:"
```
. fracreg probit pollution oldcars rainfall industrial
Iteration 0: log pseudolikelihood = -1001.8481
Iteration 1: log pseudolikelihood = -806.74595
Iteration 2: log pseudolikelihood = -806.55309
Iteration 3: log pseudolikelihood = -806.55309
Fractional probit regression Number of obs = 1,234
Wald chi2(3) = 116.91
Prob > chi2 = 0.0000
Log pseudolikelihood = -806.55309 Pseudo R2 = 0.0060
Robust
pollution Coefficient std. err. z P>|z| [95% conf. interval]
oldcars .7689171 .1748695 4.40 0.000 .4261791 1.111655
rainfall -.3165829 .0350128 -9.04 0.000 -.3852067 -.2479592
industrial .2295972 .053877 4.26 0.000 .1240002 .3351942
_cons -.3840791 .0393275 -9.77 0.000 -.4611596 -.3069986
```
"(...) `margins` will make interpreting our results easier. We can ask margins to report elasticities, which is to say, the percentage change in pollution for a 1% change in the covariate:"
```
. margins, dyex(_all)
Average marginal effects Number of obs = 1,234
Model VCE: Robust
Expression: Conditional mean of pollution, predict()
dy/ex wrt: oldcars rainfall industrial
Delta-method
dy/ex std. err. z P>|z| [95% conf. interval]
oldcars .0411578 .0093973 4.38 0.000 .0227393 .0595763
rainfall -.0581577 .0062469 -9.31 0.000 -.0704014 -.0459139
industrial .0347474 .0081857 4.24 0.000 .0187037 .050791
```
Question 1:
The page explains that we can state that "a 1% increase of older cars per capita increases pollution by 0.041". But can we also say that a 10% increase of older cars per capita increases pollution by $10 \times 0.041=0.41$? Or more generally, can I multiply the elasticity by any number X (say, X = the standard deviation of the variable older cars per capita) and claim that this is the effect on pollution of an increase of X% in older cars per capita?
Question 2:
I am also confused by the comment in that page regarding the use of `dyex` vs `eyex`: "we typed dyex(), not eyex(). The dependent variable is already a proportion and so is already on a percentage scale. We just need its change, not its percentage change."
Wouldn't it be more appropriate to talk about percentage points instead of percentages, if we are using `dyex`?
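(For reference, my understanding of these `margins` options, which may be part of my confusion, is
$$\texttt{dyex}:\ \frac{\partial y}{\partial \ln x} = x\,\frac{\partial y}{\partial x}, \qquad \texttt{eyex}:\ \frac{\partial \ln y}{\partial \ln x} = \frac{x}{y}\,\frac{\partial y}{\partial x},$$
i.e. `dyex` gives the change in the level of the outcome for a proportional change in the covariate, while `eyex` gives the full elasticity.)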
Question 3:
Would the answer to question 1 change depending on whether `dyex` or `eyex` was used?
Thanks!
| Interpretation of coefficients in fractional response (probit) model | CC BY-SA 4.0 | null | 2023-06-01T20:43:28.740 | 2023-06-01T20:43:28.740 | null | null | 389374 | [
"interpretation",
"stata",
"percentage",
"probit",
"marginal-effect"
] |
617612 | 1 | null | null | 0 | 19 | I know there are linear correlation measures such as Pearson that might not capture all cases of statistical dependence or nonlinear correlations. I know there are also nonlinear correlations, which can be measured by distance correlation
or mutual information.
Then there's the idea of dependence and association
[" association is synonymous with dependence and is different from correlation (Fig. 1a). Association is a very general relationship: one variable provides information about another."](https://www.nature.com/articles/nmeth.3587)
I know the [distance correlation ](https://en.wikipedia.org/wiki/Distance_correlation) and [mutual information](https://stats.stackexchange.com/questions/81659/mutual-information-versus-correlation) are other ways to measure dependence.
If we say 2 variables have a dependence/association, does that mean its either a linear correlation or nonlinear correlation? Or is it possible for it to exist without these?
In the following [article](https://theincidentaleconomist.com/wordpress/causation-without-correlation-is-possible/?fbclid=IwAR0FxpJlx7Gho4Vh_d247rxEOD-BFvChq4xOFVH3i90a-mgv99APvzC4VJ8), they discuss a case where
" x and y are uncorrelated but their magnitudes are not. That is, there are functions of x and functions of y that are correlated."
Would this be a nonlinear correlation?
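For concreteness, the kind of construction I have in mind (my own toy illustration of that quote, not taken from the article) is:
```
import numpy as np

# x and y are uncorrelated, but their magnitudes are perfectly correlated
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
s = rng.choice([-1.0, 1.0], size=100_000)       # random signs, independent of x
y = s * x                                        # so |y| = |x| exactly

print(np.corrcoef(x, y)[0, 1])                   # ~ 0: no linear correlation
print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])   # = 1: the magnitudes are identical
```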
| Does statistical association and dependence only include linear and nonlinear correlations | CC BY-SA 4.0 | null | 2023-06-01T21:06:27.553 | 2023-06-01T21:06:27.553 | null | null | 301533 | [
"correlation",
"independence",
"association-measure"
] |
617613 | 2 | null | 559814 | 1 | null | I would suggest considering a [Gaussian Process](https://en.wikipedia.org/wiki/Gaussian_process) regression (GPR).
GPR has all the properties asked for: it is non-linear, has good differentiability properties, and works well with relatively small samples. In that sense, if we are really into it, we can also control the differentiability of the resulting GP via the hyperparameters of its kernel (e.g. via the smoothness parameter $\nu$ of a Matérn kernel). There are quite a few implementations of GPs in Python; [GPyTorch](https://docs.gpytorch.ai/en/stable/index.html) seems the obvious one to try first if looking for something PyTorch based. Finally, it is worth mentioning that there is a general view of "Deep Neural Networks as Gaussian Processes" by Lee et al. (2018) in the work with [that same name](https://arxiv.org/abs/1711.00165), which provides a good theoretical background too; this work was further generalised/extended in [Neural Tangents: Fast and Easy Infinite Neural Networks in Python](https://arxiv.org/abs/1912.02803) by Novak et al. (2021).
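For a quick start, a minimal sketch of such a GPR fit (shown here with scikit-learn on toy 1-D data rather than GPyTorch) could look like:
```
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))                 # deliberately small sample
y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)

# nu controls the smoothness/differentiability of the sampled functions
kernel = Matern(length_scale=1.0, nu=2.5) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)     # predictive mean and uncertainty
```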
| null | CC BY-SA 4.0 | null | 2023-06-01T21:06:33.753 | 2023-06-01T21:06:33.753 | null | null | 11852 | null |
617614 | 1 | null | null | 9 | 425 | I am doing a simple Linear Regression (with intercept) which ends up presenting a negative R2, this should not be possible (cf comment 2 at the end)
---
## Reproducible examples of the issue:
Minimal `sklearn` reproducible code:
```
import numpy as np; print(np.__version__) # 1.23.5
import scipy; print(scipy.__version__) # 1.10.0
import sklearn as sk; print(sk.__version__) # 1.2.1
from sklearn.linear_model import LinearRegression
import pandas as pd
np.random.seed(8)
s = pd.Series(np.random.normal(10, 1, size=1_000))
l_com = np.arange(100)
df_Xy = pd.concat([s.ewm(com=com).mean() for com in l_com], axis=1)
df_Xy['y'] = s.shift(-1)
df_Xy.dropna(inplace=True)
X = df_Xy[l_com]
y_true = df_Xy.y
model = LinearRegression(fit_intercept=True) # fit_intercept=True by default anyways
model.fit(X, y_true)
print(model.score(X, y_true))
# -0.15802176533843926 = NEGATIVE R2 on VM 1
# -0.05854780689129546 on VM 2 (? dependent on CPU ?)
```
---
Minimal `scipy` reproducible code:
```
import numpy as np; print(np.__version__) # 1.23.5
import scipy; print(scipy.__version__) # 1.10.0
import pandas as pd
# Parameters:
(seed, N_obs, N_feat, mu_x, sigma_x, sigma_y) = (0, 100, 1000, 100, 10, 1)
# Building very weird X,y arrays (High Colinearity)
np.random.seed(seed)
s = pd.Series(np.random.normal(mu_x, sigma_x, N_obs))
X_raw = np.ascontiguousarray(np.stack([s.ewm(com=com).mean() for com in np.arange(N_feat)]).T)
y_raw = np.random.normal(0, sigma_y, N_obs)
# Center both arrays to zero
X_offset = X_raw.mean(axis=0)
y_offset = y_raw.mean()
X = X_raw - X_offset
y = y_raw - y_offset
# OLS: Finding parameters that minimise Square Residuals:
p, _,_,_ = scipy.linalg.lstsq(X, y) # <-- This is silently Failing! (resulting parameters are worst than the zero vector)
pred = np.matmul(X, p)
RSS = np.sum(np.power(y - pred, 2)) # 108.3406316733817
TSS = np.sum(np.power(y - np.mean(y), 2)) # 107.05357955882408
```
---
- Comment 1: Yes, X matrix is computed in a very specific way (Exponential Moving averages of the target). It seems that the problem arises particularly well in this case. I'm currently trying to find an example without this "complexity".
- Comment 2: If you are a beginner/intermediate Data Scientist, please refrain from commenting something like "R2 can sometimes be negative": we are in the case of simple OLS with intercept. The Sum of Squares should be minimised, by definition.
| Negative R2 on Simple Linear Regression (with intercept) | CC BY-SA 4.0 | null | 2023-06-01T21:09:42.093 | 2023-06-03T09:45:21.837 | 2023-06-03T08:41:41.553 | 53690 | 99438 | [
"regression",
"python",
"least-squares",
"r-squared",
"negative-r-squared"
] |
617616 | 1 | null | null | 1 | 27 | I have read about use of centering an explanatory variable in regression analysis on the mean (through demeaning) when there is presence of multicollinearity. I have applied this for a binary explanatory variable that takes only 0,1 values. Is there a valid way to demean an explanatory variable that is categorical?
| Is there a valid way to demean a categorical variable? | CC BY-SA 4.0 | null | 2023-06-01T21:26:19.790 | 2023-06-01T23:39:43.607 | 2023-06-01T23:34:38.703 | 17072 | 186153 | [
"probability"
] |
617617 | 1 | null | null | 0 | 13 | I have to compare two subgroups from the same population.
The two groups have partially the same records, so they are overlapping subsets of the same dataset.
Can I consider them as two independent groups and consequentially run analyses for independent groups?
| Subgroups comparison | CC BY-SA 4.0 | null | 2023-06-01T21:57:37.847 | 2023-06-01T21:57:37.847 | null | null | 210174 | [
"subset"
] |
617619 | 1 | null | null | 1 | 29 | Suppose $Y \in \{0, 1\}$ is a response variable and $X = (X_1, \cdots, X_p)$ are covariates with $X_j \in \{0, 1\}$ for each $j = 1, \cdots, p$.
In the Naive Bayes model, we assume conditional independence of the covariates given the class label, so that
$$p_{\theta}(y = 1 \mid x) = \frac{\prod_{j=1}^p p_{\theta}(x_j \mid y) p_\theta(y)}{p(x)}.$$
If we denote the parameters as $\theta_0 := P(Y = 1)$ and $$\theta_{j,k} = P(X_j = 1 \mid Y = k),\ 1 \leq j \leq p, k \in \{0, 1\}$$
then we can write
$$\begin{align*}
p_{\theta}(y = 1 \mid x) &= \frac{\prod_{j=1}^p \theta_{j, 1}^{x_j} (1 - \theta_{j, 1})^{1 - x_j} \theta_0}{
\prod_{j=1}^p \theta_{j, 1}^{x_j} (1 - \theta_{j, 1})^{1 - x_j} \theta_0 +
\prod_{j=1}^p \theta_{j, 0}^{x_j} (1 - \theta_{j, 0})^{1 - x_j} (1 - \theta_0)
} \\
&= \frac{1}{1 + \prod_{j=1}^p \left(\frac{\theta_{j,0}}{\theta_{j,1}}\right)^{x_{j}} \left(\frac{1 - \theta_{j,0}}{1 - \theta_{j, 1}} \right)^{1 - x_j} \frac{1 - \theta_0}{\theta_0}} \\
&= \frac{1}{1 + \exp \log \prod_{j=1}^p \left(\frac{\theta_{j,0}}{\theta_{j,1}}\right)^{x_{j}} \left(\frac{1 - \theta_{j,0}}{1 - \theta_{j, 1}} \right)^{1 - x_j} \frac{1 - \theta_0}{\theta_0}} \\
&= \frac{1}{1 + \exp(-(\beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p))}
\end{align*}$$
where $$\beta_0 = \log(\frac{1 - \theta_)}{\theta_0}) + \sum_{j=1}^p \log(\frac{1 - \theta_{j,0}}{1 - \theta_{j,1}})$$
and
$$\beta_j = \log(\frac{\theta_{j,0}}{\theta_{j,1}}) - \log(\frac{1 - \theta_{j,0}}{1 - \theta_{j,1}}).$$
So, in a sense, Naive Bayes is a special case of logistic regression. The difference is that by using a generative model, we are able to make specific assumptions about the form of the likelihood $p_{\theta}(x \mid y)$ and the prior $p_{\theta}(y)$. A similar derivation can be shown for GDA (Gaussian Discriminant Analysis), where $p_{\theta}(x \mid y)$ is normally distributed.
My question is, which other generative models $p_{\theta}(x, y)$ where $p_{\theta}(x \mid y)$ is an exponential family, can be thought of as being a "special case" of logistic regression, in this way? For what other distributions is a derivation like this possible?
| Naive Bayes is a "special case" of logistic regression - which other models? | CC BY-SA 4.0 | null | 2023-06-01T22:06:19.920 | 2023-06-03T18:01:35.040 | null | null | 340035 | [
"regression",
"naive-bayes"
] |
617620 | 1 | null | null | 1 | 12 | Is there a generic way of generating prediction (not confidence) intervals for any machine learning model?
For example, instead of one single point forecast y_hat, could I use the residuals from cross validation and calculate the 25th and 75th percentiles (i.e. Q1 and Q3) and let lower_bound = y_hat + Q1 and upper_bound = y_hat + Q3 ?
It makes sense to me, but I see very little information about this online.
Thanks!
| Generic regression prediction interval | CC BY-SA 4.0 | null | 2023-06-01T22:43:44.030 | 2023-06-01T22:43:44.030 | null | null | 377182 | [
"regression",
"machine-learning",
"prediction-interval"
] |
617621 | 2 | null | 617616 | 0 | null | No need for that. Pick the appropriate contrast coding. I've found that sum contrasts work nicely if you have any interactions. Otherwise, old-fashioned dummy/treatment contrasts work nicely. No need to get excessively fancy if your predictor is plain multinomial. However, recoding a single variable will do nothing for multicollinearity.
| null | CC BY-SA 4.0 | null | 2023-06-01T23:39:43.607 | 2023-06-01T23:39:43.607 | null | null | 28141 | null |
617622 | 1 | null | null | 0 | 18 | I have a python script that ran fine on the old machine. On the new laptop, I suddenly get lower R-squared and Mean Absolute Error values. The python on the old machine is 3.9. on this one is 3.10. Strange. Anyone had any similar experience? Thanks so much.
I am trying to predict the future account openings count. Libraries I'm using: pyodbc, pandas, statsmodels.api, from sklearn.metrics import r2_score, mean_absolute_error ,numpy. I am using SARIMA model.
Today on the new machine:
R2 Score: 0.855523878838846
Mean Absolute Error: 769.7681919832539
Two days ago on the old machine:
R2 Score: 0.9094327739695302
Mean Absolute Error: 666.2574774327614
| Different R-squared on different machine | CC BY-SA 4.0 | null | 2023-06-01T23:47:50.570 | 2023-06-02T00:13:03.913 | 2023-06-02T00:13:03.913 | 389385 | 389385 | [
"python",
"r-squared",
"mae"
] |
617623 | 2 | null | 617614 | 7 | null | I can reproduce it with `np.random.seed(15)` and dig a bit deeper. It seems a lot like a computational round-off error due to the high collinearity.
The steps I took
- I manually added a column with ones to the matrix X
#X = df_Xy[l_com]
X = pd.concat([pd.DataFrame(np.repeat(1, 999)),df_Xy[l_com]], axis=1)
- and used directly the function lstsq(X, y_true), for which the function LinearRegression is a wrapper, and manually compute the sum of squares
p, res, rnk, sin = lstsq(X, y_true)
pred = np.matmul(X, p)
RSS = np.sum(np.power(y_true - model.predict(X=X), 2)) ## 1007.6190
RSS2 = np.sum(np.power(y_true - pred, 2)) ## 1007.6190
TSS = np.sum(np.power(y_true - mean(y_true), 2)) ## 995.24937
The R-squared is still negative.
(the above is for `intercept=False` interestingly, when I set it true then the result becomes even worse, but I can't yet figure out what the [sourcecode](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/linear_model/_base.py) does with that boolean parameter)
A reason for this behavior might be that the parameters are very large because you generated the coefficients with some exponential weighting (I do not know enough about Python to easily figure out what you are doing there, and you might provide more comments about your code) and created highly correlated features. The `lstsq` gives as output a very small effective rank, and the coefficients are on the order of $\pm 10^{10}$.
At the moment this is as far as I can get. I find the code in the Python libraries difficult to decipher. The [linearmodel](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/linear_model/_base.py) from sklearn refers to the function [lstsq](https://github.com/scipy/scipy/blob/main/scipy/linalg/_basic.py) from scipy/linalg, and that function refers to the 'gelsd', 'gelsy', 'gelss' routines from LAPACK, which is Fortran code; through all the layers of wrappers and imported packages it is difficult to figure out what happens in the black box between the input and output of linearmodel.
---
### The influence of the processor
>
print(model.score(X, y_true))
# -0.15802176533843926 = NEGATIVE R2 on VM 1
# -0.05854780689129546 on VM 2 (? dependent on CPU ?)
In the answer to this question on stack overflow it is explained that the algorithm might take slightly different steps for different CPU architectures and as a consequence there can be small differences in results for different CPUs: [Is it reasonable to expect identical results for LAPACK routines on two different processor architectures?](https://stackoverflow.com/questions/69120608/)
Example: for a computer we have $(7+8) \times (1/9) \neq (7/9+8/9)$. Demonstration with R-code
```
options(digits=22)
(7/9) + (8/9) # 1.666666666666666518637
15/9 # 1.666666666666666740682
```
An algorithm might make such subtle changes in the computation to optimize the calculation speed for a processor.
In the case of a matrix inversion of a nearly singular matrix these small errors might get amplified due to the sensitivity of the computation on small differences.
(I don't have the ability to verify this with different processors, but I will check whether changing the number of cores can have a similar influence.)
| null | CC BY-SA 4.0 | null | 2023-06-01T23:52:27.100 | 2023-06-02T08:37:18.950 | 2023-06-02T08:37:18.950 | 164061 | 164061 | null |
617624 | 1 | null | null | 1 | 19 | I am attempting to calculate a 95% prediction interval from an adaptive lasso model using the `glmnet::` package in R. I adapted my method from the Python code in [this blog post](https://www.saattrupdan.com/2020-03-01-bootstrap-prediction). Essentially it boils down to bootstrapping the training data, and then getting a distribution of model variance, model bias, and sampling error, adding those error sources together, and then finding the 0.025 and 0.975 quantiles of the that final distribution. What is perplexing to me is why the bootstrapped prediction interval from adaptive lasso is so much wider than an analogous prediction interval from an OLS model on the same data in the artificial example below. At first, I thought this might be a result of the bias introduced through Lasso, but that should come with a concomitant reduction in variance. Additionally, the final model with adaptive lasso is much closer to the true data generating mechanism than the final OLS model. Finally, plotting the observed vs. predicted values from the prediction data set shows that almost all of the data are within the 95% prediction interval of the OLS (as expected). Am I including too many sources of error or miscalculating them? Note, I have tried increasing the number of bootstrap iterations, not setting a fixed lasso tuning parameter (i.e., lambda), and yet I get essentially the same results. Any guidance would be much appreciated.
```
##Loading Necessary Packages##
if(!require(glmnet)){
install.packages("glmnet")
}
## Bootstrapped Prediction Interval Function ##
pred_interval<-function(x0, x_train, y_train, lambda, w, alpha=1, alpha_thresh = 0.05){
require(glmnet)
n <- nrow(x_train)
b <- 50
val_resids<-NULL
bootstrap_preds<-NULL
for (i in 1:b){
N<-c(1:n)
train_ids <- sample(N, n, replace=TRUE)
val_ids <- N[-train_ids]
alasso <- glmnet(x_train[train_ids,], y_train[train_ids], alpha=alpha, lambda=lambda, penalty.factor=w)
preds <- predict(alasso, x_train[val_ids,])
val_resids[[i]] <- y_train[val_ids] - preds
bootstrap_preds[[i]] <- predict(alasso, x0)
}
bootstrap_preds <- do.call(rbind, bootstrap_preds)
bootstrap_preds1 <- bootstrap_preds - mean(bootstrap_preds)
val_resids <-do.call(rbind, val_resids)
alasso<-glmnet(x_train, y_train)
preds<-predict(alasso, x_train)
train_resids<-y_train - preds
val_resids<-quantile(val_resids, seq(0,1,0.01))
train_resids<-quantile(train_resids, seq(0,1,0.01))
no_info_error <- mean(abs(sample(y_train, n, replace=FALSE)/sample(preds, n, replace=FALSE)))
generalization <- abs(mean(val_resids) - mean(train_resids))
no_info_val <- abs(no_info_error - train_resids)
rel_overfit<-mean(generalization/no_info_val)
weight <- .632 /(1-.368 * rel_overfit)
residual <- (1 - weight)*train_resids + weight*val_resids
k <- 1
C <- NULL
for (m in 1:length(bootstrap_preds1)){
for(o in 1:length(residual)){
C[k]<-bootstrap_preds1[m]+residual[o]
k<-k+1
}
}
qs <- c(alpha_thresh / 2, (1 - alpha_thresh / 2))
pred_qs<-quantile(C, qs)+mean(bootstrap_preds)
return(cbind(mean(bootstrap_preds),pred_qs))
}
##Fake Training Data##
set.seed(63)#For Reproducibility
x1<-runif(100, 30, 50)
x2<-runif(100, 23, 150)
x3<-runif(100, 400, 1500)
x4<-runif(100, 56, 123)
x5<-runif(100, 3, 12)
e<-rnorm(100, 10, 5) #
Y<-15.3+2.1*x1 + 6.3*x2 + 1.5*x4 +e
X<-data.frame(x1,x2,x3,x4,x5)
dtf<-data.frame(cbind(Y,X))
##Running Adaptive Lasso to get tuning parameter values##
mod.full<-lm(Y~., data=dtf)
w<-1/abs(matrix(mod.full$coefficients[-c(1)]))
alss<-cv.glmnet(x=as.matrix(X), y=Y, alpha=1, family="gaussian", penalty.factor=w)
##Fake Prediction Data##
set.seed(12)#To get a reproducible new dataset
x1<-runif(100, 30, 50)
x2<-runif(100, 23, 150)
x3<-runif(100, 400, 1500)
x4<-runif(100, 56, 123)
x5<-runif(100, 3, 12)
x0<-data.frame(x1,x2,x3,x4,x5)
##Calculating Bootstrapped Prediction interval using Adaptive Lasso##
out<-apply(x0, 1, pred_interval, x_train=as.matrix(X), y_train=Y, w=w, lambda=min(alss$lambda))
useit<-t(out)
testit<-15.3+2.1*x0[,1] + 6.3*x0[,2] + 1.5*x0[,4] +e
##Calculating prediction interval from OLS##
detm<-predict(mod.full, x0, level=0.95,interval="prediction")
##Plotting Results##
plot(useit[,2], testit)
points(useit[,3], testit, col="red") #Bootstrapped Lower Bound
points(useit[,4], testit, col="red") #Bootstrapped Upper Bound
points(detm[,2], testit, col="blue") #OLS Lower Bound
points(detm[,3], testit, col="blue") #OLS Upper Bound
```
| Bootstrapped Prediction Interval for Adaptive Lasso | CC BY-SA 4.0 | null | 2023-06-02T00:34:39.757 | 2023-06-03T00:23:45.257 | 2023-06-02T15:47:51.523 | 354118 | 354118 | [
"r",
"bootstrap",
"lasso",
"prediction-interval",
"glmnet"
] |
617625 | 2 | null | 601696 | 2 | null | Overfitting occurs when the error of a fitted model is dominated by the noise. (another source of error is bias)
In your case with only three points, the upperwards slope of the fitted linear line may easily be a wrong fit and the true pattern of the population, from which these three points are sampled, can be different.
[](https://i.stack.imgur.com/61RV3.png)
| null | CC BY-SA 4.0 | null | 2023-06-02T00:43:45.737 | 2023-06-02T00:43:45.737 | null | null | 164061 | null |
617626 | 1 | null | null | 0 | 30 | I am running a binomial logistic regression with a dependent variable is 0/1 and the explanatory variables are categorical - binary and multiple categories. When I add an interaction term, and run a VIF test for presence of multicollinearity, several of the coefficients of predictor variables take a VIF value of 5 or greater. I have read that as a rule of thumb 5 indicates issues with multicollinearity and that the model should be respecified. Thoughts on whether that threshold is appropriate.
Is there a rule of thumb recommended for a generalized variance inflation factor value to assess severe multicollinearity?
| If VIF value is greater than 5, | CC BY-SA 4.0 | null | 2023-06-02T01:07:26.870 | 2023-06-02T12:00:31.647 | 2023-06-02T12:00:31.647 | 186153 | 186153 | [
"r",
"regression",
"logistic",
"variance-inflation-factor"
] |
617627 | 2 | null | 348057 | 0 | null | Just look at a boxplot of the differences, and make sure it's symmetrical.
Watch this video, at 5:53 : [https://www.youtube.com/watch?v=Y4-wAT4SNM4](https://www.youtube.com/watch?v=Y4-wAT4SNM4)
| null | CC BY-SA 4.0 | null | 2023-06-02T01:38:16.430 | 2023-06-02T01:38:16.430 | null | null | 294655 | null |
617628 | 1 | null | null | 0 | 8 | I am looking for the best optimizer for a multiclass classification problem, because my results aren't very good and I am pretty sure the optimizer is the problem. Also, a good learning rate for an optimizer or a learning rate schedule would also be appreciated. I tried google search but it didn't give me optimizers. To summarize, I am looking for an optimizer, a good learning rate, or a good learning rate schedule for a multiclass classification problem.
| Best optimizer for multiclass classification | CC BY-SA 4.0 | null | 2023-06-02T01:40:07.590 | 2023-06-02T01:40:07.590 | null | null | 389011 | [
"classification"
] |
617629 | 1 | null | null | 0 | 15 | I am trying to teach myself more about the mathematical background, properties and theory involved in calculating different types of "weighted mean estimators" - specifically, if I have sample means from different experiments, how can I combine these sample means into an estimate of the "mean of means" (i.e. a "weighted mean")?
- As I understand it, there are different types of means that can be calculated. For example, a "weighted mean" based on sample sizes can be calculated in such a way that it places more emphasis on individual means that were calculated from larger samples.
- On the other hand, a different type of "weighted mean" can be calculated based on the variances associated with each mean, such that individual means with smaller variances influence the final estimate of the "weighted mean" more than means with larger variances. In a hypothetical example where we have access to the population variances of each mean and all individual means are normally distributed with a common mean (but different variances), this approach can be shown to provide a final estimate of the "weighted mean" with the lowest overall variance (https://en.wikipedia.org/wiki/Inverse-variance_weighting). Note that when the assumption of known population variances is relaxed and we only have access to sample variances, the following paper (https://www.jstor.org/stable/3001633) shows how to adjust the estimates of the "weighted mean" accordingly.
I am interested in further studying the mathematical properties of the "weighted mean" in more "realistic situations" where some of the above assumptions are not satisfied (e.g. unequal sample means and sample variances). I found the following paper ([https://www.jstor.org/stable/3001666](https://www.jstor.org/stable/3001666)) in which a "weighted mean estimator" is presented.
Part 1: First, let's consider an instance where the variances of all means are the same. Suppose we collected sample data (e.g. heights) from $k$ different countries. For a given country $i$, we can estimate the average height of country $i$ using $x_i$, where $\mu_i$ is the actual mean of the population for country $i$ and $\mu$ is the mean of the population for all countries. In this case, the unweighted mean of all means, $\bar{x}$, is still the "best estimator" of $\mu$ (and $s_b^2$ is an unbiased estimator of $\sigma_{\mu}^2 + \sigma^2$):
$$x_i = \mu_i + e_i = \mu + (\mu_i - \mu) + e_i$$
$$Var(x_i) = \sigma_{\mu}^2 + \sigma^2$$
$$\bar{x} = \frac{\sum_{i=1}^{k} x_i}{k}$$
$$s.e.(\bar{x}) = \frac{s_b}{\sqrt{k}} = \sqrt{\frac{\sum(x_i - \bar{x})^2}{k(k-1)}}$$
Part 2: Now, let's consider a more realistic case where the variances of the individual means are different:
$$x_i = \mu_i + e_i = \mu + (\mu_i - \mu) + e_i$$
$$Var(x_i) = \sigma_{\mu}^2 + \sigma_i^2$$
In this situation, we can consider a new type of "semi-weighted estimator" (sw):
$$\bar{x}_{sw} = \frac{\sum_{i=1}^{k} \hat{W_i} x_i}{\sum \hat{W_i}}$$
$$\hat{W_i} = \frac{1}{S_{\mu}^2 + S_i^2}$$
$$S_{\mu}^2 = \frac{\sum_{i=1}^{k} (x_i - \bar{x})^2}{k-1} - \bar{\sigma}^2$$
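For concreteness, here is a small numerical sketch of this semi-weighted estimator in R (the data are made up; I take $\bar{\sigma}^2$ to be the average of the $S_i^2$, and I truncate $S_{\mu}^2$ at zero when the estimate comes out negative, which the formulas above do not state explicitly):
```
# Hypothetical sample means from k = 5 experiments and their estimated variances
x  <- c(1.2, 2.3, 3.4, 1.9, 2.9)
s2 <- c(0.10, 0.25, 0.40, 0.15, 0.30)

k     <- length(x)
s2_mu <- sum((x - mean(x))^2) / (k - 1) - mean(s2)  # between-experiment component
s2_mu <- max(s2_mu, 0)                              # truncate at zero if negative
W     <- 1 / (s2_mu + s2)                           # semi-weights
x_sw  <- sum(W * x) / sum(W)                        # semi-weighted mean
x_sw
```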
My Question: On page 122 (page 23 of the PDF - [https://www.jstor.org/stable/3001633](https://www.jstor.org/stable/3001633)), in (22), the author writes that the variance of this semi-weighted mean (given known population variances) can be written as:
$$W_i = \frac{1}{\sigma_{\mu}^2 + \sigma_i^2}$$
$$V(\bar{x}_{sw}) = \frac{1}{\sum W_i}$$
(Note: In the following paper [https://www.jstor.org/stable/1267323](https://www.jstor.org/stable/1267323), the above weights are demonstrated to be the "optimal weights", i.e. produce the lowest variance)
I am interested in adjusting this variance formula when we have sample variances $S_{\mu}^2$ and $S_i^2$.
In this paper ([https://www.jstor.org/stable/3001633](https://www.jstor.org/stable/3001633)), a similar adjustment is provided for estimating the variance of the weighted mean when sample variances are used and all sample means share a common underlying mean (note that a different notation is being used here, e.g. here $\mu$ is actually the sample mean):
$$\hat{w_i} = \frac{n_i}{s_i^2}$$
$$\hat{w} = \sum \hat{w_i}$$
$$\bar{\mu} = \sum_{i=1}^{k} \hat{\theta}_i u_i$$
$$\hat{\theta}_i = \frac{(s_i^2)^{-1}}{\sum_{i=1}^{k}(s_i^2)^{-1}}$$
$$\text{Var}(\bar{\mu}) = V^* = \frac{1}{\hat{w}}\left(1 + 4\sum_{i=1}^{k}\frac{1}{n_i}\hat{\theta}_i(1-\hat{\theta}_i)\right)$$
Using similar logic, I want to apply a similar adjustment factor to the situation where the sample means are different and the weights are now given by:
$$\hat{W_i} = \frac{1}{s_{\mu}^2 + s_i^2}$$
$$\text{Var}(\bar{x}_{sw}) = V^* = \frac{1}{\hat{w}}\left(1 + 4\sum_{i=1}^{k}\frac{1}{n_i}\hat{\theta}_i(1-\hat{\theta}_i)\right)$$
$$\text{Var}(\bar{x}_{sw}) = V^* = \frac{1}{\sum \frac{1}{s_{\mu}^2 + s_i^2}}\left(1 + 4\sum_{i=1}^{k}\frac{1}{n_i}\hat{\theta}_i(1-\hat{\theta}_i)\right)$$
Is this modification correct?
Thanks!
| Modifying the Variance of the Sample Mean | CC BY-SA 4.0 | null | 2023-06-02T01:42:59.920 | 2023-06-02T01:42:59.920 | null | null | 77179 | [
"sampling",
"mean"
] |
617630 | 1 | null | null | 1 | 8 | I have data that I am trying to analyse, but I am not entirely sure I am taking the right steps. It is a mixed design: every participant experienced condition A, and then they were randomly assigned to condition B or C.
- Condition A: within-subjects
- Condition B & C: between-subjects (randomly assigned)
Each condition had 16 trials in which a participant responded to a question on a slider from 0 to 100 and then rated their confidence in the answer on a similar slider. The data are not normally distributed (the distribution looks roughly bimodal, with peaks around 35 and 65).
Most material I have come across so far on mixed designs uses different variables for the between and within conditions. My guess is that I would run separate ANOVAs for the between and within trials.
Below is example data (not perfect, but close enough):
```
library(purrr)
# Simulate 832 responses (26 participants x 32 trials):
# 665 draws from a bimodal mixture (peaks near 65 and 35)
# plus 167 extreme responses of exactly 0 or 100.
sample_numbers <- map_dfc(
  1:2, ~ map_dbl(
    1:665,
    ~ {
      if (.x <= (665 / 2)) {
        mean <- 65
      } else {
        mean <- 35
      }
      rnorm(1, mean = mean, sd = 10)
    }
  ) %>%
    pmin(100) %>%   # clamp to the 0-100 slider range
    pmax(0) %>%
    round() %>%
    c(., sample(c(0, 100), size = 167, replace = TRUE, prob = c(.5, .5))))
test_df <- data.frame(
id = rep(LETTERS, each = 32),
question_n = rep(1:16, times = 52),
value = sample_numbers[[1]],
confidence = sample_numbers[[2]],
condition = c(replicate(n = 13, c(rep('A', times = 16), rep('B', times = 16), rep('A', times = 16), rep('C', times = 16))))
)
```
| Mixed factorial ANOVA with one IV | CC BY-SA 4.0 | null | 2023-06-02T01:48:10.050 | 2023-06-02T01:48:10.050 | null | null | 389308 | [
"hypothesis-testing",
"anova"
] |
617631 | 2 | null | 616761 | 0 | null | A rule-of-thumb formula for the minimum sample size is $n = \frac{16\sigma^2}{\delta^2}$. Two things need clarification:
- $n$ is the minimum sample size for each group: if you have unequal sample sizes for the treatment/control groups, you can refer to [R - power.prop.test, prop.test, and unequal sample sizes in A/B tests](https://stats.stackexchange.com/questions/108226/r-power-prop-test-prop-test-and-unequal-sample-sizes-in-a-b-tests).
- $n$ counts unique active users during the experiment for each group, not daily active users (DAU) multiplied by the number of days the experiment runs.
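For illustration, a minimal sketch of the rule of thumb in R (the values of $\sigma$ and $\delta$ are hypothetical; here $\sigma$ is the standard deviation of the metric and $\delta$ the minimum detectable effect):
```
sigma <- 0.9    # standard deviation of the metric
delta <- 0.05   # minimum detectable effect (absolute difference in means)

n_per_group <- 16 * sigma^2 / delta^2
ceiling(n_per_group)   # unique users needed in EACH group

# Compare with an exact power calculation (alpha = 0.05, power = 0.80):
power.t.test(delta = delta, sd = sigma, sig.level = 0.05, power = 0.80)
```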
| null | CC BY-SA 4.0 | null | 2023-06-02T02:03:45.630 | 2023-06-02T02:03:45.630 | null | null | 347393 | null |
617632 | 2 | null | 617626 | 1 | null | Some of the trouble with going by the VIF is that there is more to the story than how the standard errors get inflated by multicollinearity, which is what VIF captures.
You could drop some of the variables that cause this variance inflation, but this puts you at a risk of increasing the residual variance. Since the coefficient standard error is a function of both the VIF and the residual variance, it is not a given that taking steps to lower the VIF actually works toward your goal of lowering the standard error; your steps to lower the VIF might be successful but lead to such an increase in the residual variance that the coefficient standard error winds up higher, despite the lower VIF.
Another issue to consider is omitted-variable bias. When you drop variables from a regression in order to lower multicollinearity, you risk biasing the estimates of the remaining coefficients. Thus, you can wind up with a low estimation variance but a high estimation bias. You might not want this. You might prefer more variance with less bias to little variance with considerable bias.
Consequently, there is much more to the story than just a VIF threshold of, say, $5$ or $10$, as is sometimes recommended in introductory courses.
Finally, the usual variance-inflation factor from OLS linear regression does not describe how the standard errors are inflated by multicollinearity in a logistic regression. Generalized variance-inflation factors would be considered for generalized linear models.
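For instance, here is a quick sketch of obtaining generalized VIFs in R with the `car` package (the simulated data are purely illustrative):
```
library(car)

set.seed(42)
d <- data.frame(
  x1 = factor(sample(letters[1:3], 500, replace = TRUE)),
  x2 = factor(sample(c("yes", "no"), 500, replace = TRUE)),
  x3 = rnorm(500)
)
d$y <- rbinom(500, 1, plogis(-0.5 + 0.8 * (d$x2 == "yes") + 0.5 * d$x3))

fit <- glm(y ~ x1 + x2 + x3, family = binomial, data = d)
vif(fit)  # reports GVIF, Df and GVIF^(1/(2*Df)) for multi-df terms
```
Fox and Monette (1992), who proposed the GVIF, suggest judging $\text{GVIF}^{1/(2 \cdot Df)}$ against the square root of whatever VIF cut-off you would otherwise use, rather than applying a threshold like $5$ to the raw GVIF.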
| null | CC BY-SA 4.0 | null | 2023-06-02T02:33:27.497 | 2023-06-02T02:33:27.497 | null | null | 247274 | null |
617633 | 1 | null | null | 0 | 10 | I'm trying to reproduce the results in [this](https://clinicaltrials.gov/ProvidedDocs/64/NCT02108964/Prot_001.pdf) protocol with the [OncoBayes2 package](https://cran.r-project.org/web/packages/OncoBayes2/OncoBayes2.pdf), to learn [BLRM with EWOC](https://onlinelibrary.wiley.com/doi/10.1002/sim.3230) (Neuenschwander 2008) for dose-finding studies. Section 14.2.2.1 (Appendix 2) of the protocol gives the priors for log alpha and log beta as mean: -3.068, 0.564; sd: 2.706, 0.728; corr: -0.817. The documentation of the OncoBayes2 package seems to use a diagonal matrix for the prior sd, so the correlation is not incorporated into the priors.
I tried the following code, with no correlation specified:
```
library(tibble)
EGF816_prior = blrm_trial(data = tibble(group_id = as.factor("All"),
drug_A = NA,
num_patients = NA,
num_toxicities = NA),
dose_info = tibble(group_id = as.factor("All"),
drug_A = c(50,75,150,300,450,600,800,1000),
dose_id = seq(1, 8, 1), stratum_id = "all"),
drug_info = tibble(drug_name = "drug_A",
dose_ref = 300,
dose_unit = "mg/kg"),
simplified_prior = FALSE,
EXNEX_inter = FALSE,
EX_prob_inter = 1,
interval_prob = c(0, 0.16, 0.33, 1),
interval_max_mass = c(prob_underdose = 1,
prob_target = 1,
prob_overdose = 0.25),
prior_EX_mu_mean_comp = matrix(c(-3.068,0.564), 1, 2, TRUE),
prior_EX_mu_sd_comp = matrix(c(2.706,0.728),1, 2),
prior_EX_tau_mean_comp = matrix(c(0, 0), 1, 2),
prior_EX_tau_sd_comp = matrix(c(1, 1), 1, 2),
prior_EX_prob_comp = matrix(1, nrow = 1, ncol = 1),
prior_tau_dist = 0,
prior_PD = FALSE)
summary(EGF816_prior,"dose_prediction")
```
It gives:
```
group_id drug_A dose_id stratum_id mean sd `2.5%` `50%` `97.5%` prob_underdose prob_target prob_overdose ewoc_ok
1 All 50 1 all 0.0387 0.118 0.0000000278 0.00134 0.423 0.932 0.0348 0.0332 TRUE
2 All 75 2 all 0.0515 0.136 0.000000419 0.00281 0.514 0.905 0.0488 0.0462 TRUE
3 All 150 3 all 0.0894 0.182 0.0000208 0.0104 0.721 0.839 0.07 0.0912 TRUE
4 All 300 4 all 0.173 0.252 0.000236 0.0468 0.899 0.692 0.112 0.196 TRUE
5 All 450 5 all 0.260 0.308 0.000527 0.109 0.970 0.573 0.124 0.303 FALSE
6 All 600 6 all 0.328 0.340 0.000811 0.178 0.990 0.481 0.136 0.383 FALSE
7 All 800 7 all 0.396 0.361 0.00118 0.278 0.998 0.404 0.130 0.466 FALSE
8 All 1000 8 all 0.447 0.372 0.00160 0.377 0.999 0.355 0.120 0.526 FALSE
```
The result is similar to the prior distribution listed in Table 14-7 of the protocol. Given that the package uses MCMC, it is acceptable not to reproduce the numbers exactly; can I therefore assume that the correlation between log alpha and log beta does not actually affect the results? Moreover, where in the code should I specify the correlation if I want to include it?
Any help is appreciated!
| Correlation for priors in BLRM with EWOC in R package OncoBayes2 | CC BY-SA 4.0 | null | 2023-06-02T03:02:56.057 | 2023-06-02T03:02:56.057 | null | null | 389388 | [
"r",
"bayesian",
"correlation",
"prior"
] |
617634 | 1 | null | null | -3 | 28 | While "statistical significance" has served well for testing the significance of an experiment via a NULL test, in my experience this measure can become highly inadequate for decision making. The problem arises when a target is added to descriptive statistics.
As an example, consider a public election poll on approval of a single person running for public office. The mean, standard deviation, margin of error and a confidence interval are calculated for a chosen confidence level (Z-value). Even if the confidence level is chosen very stringently, say 99.73% or 99.99%, the problem remains that this confidence level relates only to the position of the data mean. If 60% of the voters voted "yes", there is a corresponding vote mean based on the "yes" and "no" percentages (or we can use the standard statistical equation for the data mean), and with a very large number of voters we can be very confident of that computed mean. While the confidence level is useful in the way it has traditionally been used, it falls dramatically short as a decision-making tool. Coding a "yes" vote as 1 and a "no" vote as -1, the vote mean in this example is 0.2, and that value of 0.2 is meaningless for evaluating the vote unless there is a vote target.
Just as a gun's accuracy is judged against the centre of a target (100% if every bullet hits the centre, with the distance from the centre giving the error %), a poll has an "ideal" target of 100% "yes" support, and a person running for office has an ideal target of winning 100% of the voters (0% missed opportunity, since missed opportunity indicates problems or error with the candidate). The distance of the vote mean of 0.2 from the target of 1 (100%) is 0.8, and this distance is the error. With an error of 0.8 (80%), a person can now make a judgement, because there is an "error" value for the vote: with that much error, the vote looks highly unacceptable. I have simplified the factors involved to keep the focus on the metric "error", or its complement "accuracy", since accuracy = 1 - error = 1 - 0.8 = 0.2. Now we have a measure of accuracy that we can use.
Here we can see the inadequacy of the confidence level, which basically says only that we are sure of the mean's location. A statistician can tell the public "the vote results computations are highly reliable", which may sound impressive; but if the statistician adds "and the vote accuracy is 20%, with a vote error of 80%", the public has a different and more important metric to focus on. Doing the mathematics, the NULL test for a vote at z = 2 is equivalent to 82% "yes" support. This implies that a poll with less than 82% "yes" support can be classified as a random event and discarded, while above 82% the vote is accepted as "not random". At this point, the utility of the NULL test ends.
The "accuracy" (or "error") measure takes us to the next level, where we can say: "The vote is not a random event; however, at 82% 'yes', the accuracy is only 0.82 - 0.18 = 0.64, or 64%", and equivalently "the vote error is 36%". What may previously have sounded like fantastic poll results at 82% "yes" support now looks like an error-laden decision.
To see the full complexity involved in this issue, I can provide full references and will be glad to answer thoughtful questions. I ask for your support in adding "Accuracy" and/or "Error" as a new statistics subject category, well defined by an equation, just as the standard deviation is well defined by an equation; I have shown and explained the basic form of the equation above. To check your understanding, see if you can calculate (without any outside help) the accuracy of a USA Supreme Court decision adopted by a 5-to-4 majority. If you cannot compute this very basic statistical quantity, which has significant scientific value, then statistics students and professionals are missing knowledge in an important new subject category, "accuracy". If you can, post the answer along with how you computed it.
| Is there support for adding a new statistics subject tag "Accuracy"? | CC BY-SA 4.0 | null | 2023-06-02T03:36:55.930 | 2023-06-03T01:14:50.870 | 2023-06-03T01:14:50.870 | 389390 | 389390 | [
"statistical-significance",
"accuracy"
] |
617635 | 1 | null | null | 1 | 10 | I'll try to keep this short. I am using the R fixest package. I have the following regression that I've simplified for the example:
`feols(y ~ x1 + i(x1, x2, "reference") | x2 ~ instrument, data = myData)`
Basically, I am running a 2SLS where I use "instrument" as an instrument for one of my variables, x2, but I also want to interact the fitted values of x2 with another variable, x1. When I run this code, the output calls x2 "fit_x2" as usual, but it only says "x2" for the interaction terms. I'm unsure whether the function is actually using the fitted values of x2 from the first stage in the interaction terms, and I would like this clarified. So, if anyone has experience with this package, your help would be greatly appreciated.
Econometrically, I'm 90% sure it is OK to have interactions with an instrumented variable, so I believe I am asking for something sensible. But if I am mistaken, please let me know; perhaps I am supposed to explicitly instrument the interaction terms with the instrumental variable as well.
Thanks!
| Does 2SLS Endogenous Interactions Use Fitted Values? | CC BY-SA 4.0 | null | 2023-06-02T04:13:57.110 | 2023-06-02T18:23:54.990 | 2023-06-02T18:23:54.990 | 389397 | 389397 | [
"r",
"regression",
"interaction",
"instrumental-variables"
] |
617636 | 2 | null | 617113 | 1 | null | I think in this instance it would be best to give some high-level answers to your questions and direct you to appropriate sources for details.
Your original question was:
>
How do we select the functional forms of the variational distribution $q$?
Recall that we wish to consider a restricted family of distributions $\mathcal{Q}$ and that we wish to find a member of this family that minimises the Kullback-Leibler divergence. Typically, this family needs to be both tractable and sufficiently expressive.
From Chapter 10: Approximate Inference in Bishop (2006), there are two ways of thinking about this. We could either:
- Specify $\mathcal{Q}$ to be some known parametric distribution family, consisting of distributions $q(\mathbf{Z} ; \omega)$ indexed by a parameter $\omega$.
- Specify $\mathcal{Q}$ to be the mean-field family of distributions.
In the second case, this distribution family is defined by the assumption that $q(\mathbf{Z})$ has the following factorisation structure
$$q(\mathbf{Z}) = \prod^M_{i=1} q_j(\mathbf{Z}_j)$$
Strictly, one does not explicitly specify the functional forms/parametric distribution families of each of the $q_j$. Variational inference only requires a restriction on $\mathcal{Q}$ that it is a mean-field family.
Instead, the parametric distribution family one should use for each $q_j$ emerges naturally when deriving the update equations for each of the variational factors.
This is a position that can be found in earlier presentations of variational inference e.g. in Bishop (2006).
In practice however, the 'natural emergence' of what distribution family to use for the $q_j$ is reliant on (conditional) conjugacy, and this is essential to being able to derive closed form updates for the original version of variational inference that uses a co-ordinate ascent algorithm for updating. You will need to have a look at the worked example of variational linear regression in Bishop (2006) to see this in action.
All that being said, in more modern presentations of the variational inference and its more developed variants in the literature, such as in Hoffman et al. (2013) and Blei et al. (2016), you will find that the setting assumed for co-ordinate ascent variational inference is that all the complete conditionals in the model are in the exponential family.
From Hoffman et al. (2013):
>
With the
assumptions that we have made about the model and variational distribution—that each conditional
is in an exponential family and that the corresponding variational distribution is in the same exponential
family—we can optimize each coordinate in closed form.
In this case, you just specify the distribution family of each of the $q_j(\mathbf{Z}_j; \phi_j)$ to be the same as that of the model distribution $p_j(\mathbf{Z}_j; \theta_j)$, and index the $q_j$ with its own set of variational parameters $\phi_j$.
>
How are the optimised variational distribution $q^*$ and the iterative update equations determined?
The review by Blei et al. (2017) is slightly terse for my liking. A similar more verbose derivation of the following update equations can be found in Bishop (2006):
$$q_j(\mathbf{Z}_j) \propto \exp\{\mathbb{E}_{-j}[\log p(\mathbf{Z}_j \vert \mathbf{Z}_{-j}, \mathbf{X})]\} \quad j = 1, \dots M$$
Where the expectation is with respect to the product of all variational distributions $\prod_{i \neq j} q_i(\mathbf{Z}_i)$ except that of the $j$th factor.
Briefly, each update equation is the result of maximising the evidence lower bound functional $\mathcal{L}(q)$ with respect to the $j$th variational distribution $q_j(\mathbf{Z}_j)$ whilst holding all other variational distributions $q_{i}(\mathbf{Z}_{i})$ for $i \neq j$ fixed.
Note that in this abstract derivation, there is no functional form/parametric distribution family specified for each of the $q_j$ in advance, and that maximisation occurs over all possible functional forms that $q_j$ can take, with the only caveat being that $\mathcal{Q}$ is the mean-field family.
Rewriting the maximisation of the evidence lower bound functional $\mathcal{L}(q)$, the update equations arise as solutions to the minimisation of the Kullback-Leibler divergence between $q_j$ and $\tilde{p}(\mathbf{X}, \mathbf{Z}_j)$, with the latter defined as
$$\log \tilde{p}(\mathbf{X}, \mathbf{Z}_j) = \mathbb{E}_{i \neq j}[\log p(\mathbf{X}, \mathbf{Z})] + \text{const.}$$
Where again, the expectation is with respect to $\prod_{i \neq j} q_i(\mathbf{Z}_i)$.
Another way to derive the update equations is to use context from the outset; this kind of strategy is used in Blei et al. (2003).
Under the assumption that the complete conditionals in the model belong to the exponential family, then we can just assume that each of the variational distributions $q_j(\mathbf{Z}_j)$ are in the same parametric distribution family as the corresponding model distributions $p_j(\mathbf{Z}_j; \theta_j)$. Meaning that we can write each of the $q_j$ as $q_j(\mathbf{Z}_j ; \phi_j)$, indexing each variational distribution with its own variational parameter $\phi_j$
In this case, we now have functional forms for all the variational distributions $q_j(\mathbf{Z}_j; \phi_j)$, the corresponding model distributions $p_j$ and also any other model distributions. All we do now is to substitute these into the evidence lower bound, which then becomes a function of the model parameters $\boldsymbol{\Theta}$ and variational parameters $\boldsymbol{\Phi}$.
$$\mathcal{L}(\boldsymbol{\Theta}, \boldsymbol{\Phi}) = \mathbb{E}_{q(\mathbf{Z})}[\log p(\mathbf{X}, \mathbf{Z})] - \mathbb{E}_{q(\mathbf{Z})}[\log q(\mathbf{Z})]$$
The update equations for each of the variational parameters then arise by maximising $\mathcal{L}(\boldsymbol{\Theta}, \boldsymbol{\Phi})$ with respect to the variational parameters $\boldsymbol{\Phi}$.
The co-ordinate ascent algorithm then proceeds with an initialisation of the variational parameters $\boldsymbol{\Phi}^{t=0}$. Using what is derived above, at time step $t = 1$, you would update $\phi_1$, given $\phi_2 = \phi_2^{t=0}, \dots , \phi_M = \phi_M^{t=0}$. After this is done for all $M$ variational parameters, you would compute the evidence lower bound using $\boldsymbol{\Phi}^{t=1}$.
You would then repeat until your evidence lower bound has converged to some local maximum at $\boldsymbol{\Phi}^*$. As the evidence lower bound is in general non-convex, you will need to re-initialise the algorithm using different starting values $\boldsymbol{\Phi}^{t=0}$ and repeat.
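As a small, runnable illustration of this co-ordinate ascent scheme, here is the factorized approximation to a correlated bivariate Gaussian discussed in Bishop (2006, Section 10.1); this is only a toy example (not the linear regression case), and the numbers below are arbitrary:
```
# Target: p(z) = N(mu, solve(Lambda)), with precision matrix Lambda.
# Mean-field CAVI: q(z) = q1(z1) q2(z2), each factor Gaussian with
# variance 1/Lambda[j, j]; only the means need iterative updates.
mu     <- c(0, 0)
Lambda <- matrix(c(2, 1.2,
                   1.2, 2), 2, 2)

m <- c(5, -5)   # arbitrary initialisation of the variational means
for (t in 1:50) {
  # update q1 holding q2 fixed
  m[1] <- mu[1] - Lambda[1, 2] / Lambda[1, 1] * (m[2] - mu[2])
  # update q2 holding q1 fixed
  m[2] <- mu[2] - Lambda[2, 1] / Lambda[2, 2] * (m[1] - mu[1])
}
m   # converges towards the true mean (0, 0)
```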
Depending on the coupling structure of the update equations and your motivations, there is also scope for maximising this evidence lower bound with respect to model parameters $\boldsymbol{\Theta}$ also, in which case you would alternate between updating all variational parameters $\boldsymbol{\Phi}$ in a 'variational E-step', followed by all model parameters in a 'variational M-step'.
>
I will appreciate if you can a provide comprehensive example as answer while highlighting important concepts which are surely missing.
The application of variational inference to Bayesian linear regression as a worked example is too long to place here, and others have done it more comprehensively than I ever could. Instead, see
- Chapter 10: Approximate inference of Bishop has a simple example of Bayesian linear regression.
- For a more general worked example of Bayesian linear regression see Drugowitsch (2013) and the supplementary note by Rapela (2017)
Some suggestions:
- Have a look at the theoretical derivations in the sources I've outlined above, but also work through the Bayesian linear regression worked examples.
- Personally, when looking at the derivations in abstracto, it can be difficult to see why or how certain steps are being made, only a worked example will advance your understanding.
- In particular, for Bayesian linear regression, you would have, in your notation, $\mathbf{Z} = \{\beta, \sigma \}$ and $\boldsymbol{X} = Y$.
References
Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer New York.
Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. (2017). Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518), 859–877. [https://doi.org/10.1080/01621459.2017.1285773](https://doi.org/10.1080/01621459.2017.1285773)
Blei, D. Variational inference: Foundations and innovations. [http://www.cs.columbia.edu/~blei/talks/Blei_VI_tutorial.pdf](http://www.cs.columbia.edu/%7Eblei/talks/Blei_VI_tutorial.pdf)
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022. [https://doi.org/10.1162/jmlr.2003.3.4-5.993](https://doi.org/10.1162/jmlr.2003.3.4-5.993)
Blei, D., Ranganath, R., & Mohamed, S. (2016). Variational inference: Foundations and modern methods. NIPS 2016 Tutorial.
Drugowitsch, J. (2013). Variational Bayesian inference for linear and logistic regression.
arXiv preprint arXiv:1310.5438.
Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1), 1303–1347.
Rapela, J. (2017) Derivation of a Variational-Bayes linear regression algorithm using a Normal Inverse-Gamma generative model.
[https://sccn.ucsd.edu/~rapela/docs/vblr.pdf](https://sccn.ucsd.edu/%7Erapela/docs/vblr.pdf)
| null | CC BY-SA 4.0 | null | 2023-06-02T05:05:03.553 | 2023-06-03T21:34:36.237 | 2023-06-03T21:34:36.237 | 294515 | 294515 | null |
617637 | 1 | null | null | 9 | 960 | I have been told both things with regard to e.g. summing noisy time series, to justify opposing expectations.
On the one hand, I have been told to expect that summing multiple noisy inputs should lead to noise reduction for the output (the sum), because the noise components average to zero. I imagine this explains why larger sample sizes increase sensitivity, or why brains can generate exquisitely patterned activity despite individual neurons being very noisy. It all cancels out in aggregate.
On the other hand, I have been told to expect that variance sums, so the combination of multiple highly variable inputs should lead to a many times more variable output (N times). This too seems very intuitive, fitting with the idea that "error accumulates" like in dead reckoning.
When faced with the sum of noisy inputs, when should we expect that the noise cancels versus accumulates in the output? What am I missing here that makes these two facts appear contradictory?
| Noise cancels but variance sums - contradiction? | CC BY-SA 4.0 | null | 2023-06-02T05:20:00.227 | 2023-06-03T13:23:26.197 | null | null | 11472 | [
"measurement-error",
"noise",
"sum"
] |
617638 | 1 | null | null | 0 | 11 | I have the dataset:
```
df <- data.frame(
station = rep(c("A", "B", "C", "D"), each = 20),
vla = rpois(80,2),
latitude = c(rep(40.7128, 20), rep(34.0522, 20), rep(41.8781, 20), rep(39.9526, 20)),
longitude = c(rep(-74.0060, 20), rep(-118.2437, 20), rep(-87.6298, 20), rep(-75.1652, 20))
)
df$year <- rep(1979:1998, times = 4)  # 20 years for each of the 4 stations
```
and I would like to create a correlogram to show that the correlation of vla between stations decreases with distance. My idea is to calculate the Spearman correlation between each pair of station series and plot it against the corresponding distance. Do you have any suggestions?
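To make the intended approach concrete, here is a rough sketch using the example data frame above (plain Euclidean distances on the coordinates, purely for illustration; great-circle distances would be more appropriate):
```
# One column of vla per station, one row per year
wide <- reshape(df[, c("station", "year", "vla")],
                idvar = "year", timevar = "station", direction = "wide")
cors <- cor(wide[, -1], method = "spearman")

coords <- unique(df[, c("station", "latitude", "longitude")])
dists  <- as.matrix(dist(coords[, c("latitude", "longitude")]))

keep <- lower.tri(cors)
plot(dists[keep], cors[keep],
     xlab = "Distance between stations",
     ylab = "Spearman correlation of vla")
```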
| Spatial correlogram of time series | CC BY-SA 4.0 | null | 2023-06-02T06:02:53.110 | 2023-06-02T06:02:53.110 | null | null | 355434 | [
"r",
"time-series",
"correlation",
"spatial"
] |
617639 | 2 | null | 617637 | 12 | null | I think you've confused averaging and summing. You're almost there with "because the noise components average to zero."
Consider a series of independent variables $\{X_i\}$ (with $i$ running from $1$ to $n$). Let's say they all have expectation $\mu$ and variance $\sigma^2$.
Variance does sum. If we define the sum $T=\sum_{i=1}^n X_i$, then by some fairly simple properties of variances of sums, we find the expectation is $n\mu$ and the variance is $n \sigma^2$. So that aligns with your "I have been told to expect that variance sums" comment. And it does - as $n$ increases, so does the variance of the sum.
Your "the noise components average to zero" correctly reflects means. The mean is $\frac{1}{n} \sum_{i=1}^n X_i = T/n$. Using some algebra, we find that the expectation of this function is $\mu$ and the variance is $\sigma^2/n$. In this case, as $n$ increases, the variance decreases.
In conclusion - there's nothing contradictory in these facts. You just need care and clarity in language.
| null | CC BY-SA 4.0 | null | 2023-06-02T06:09:54.160 | 2023-06-02T06:09:54.160 | null | null | 369002 | null |
617640 | 2 | null | 476041 | 0 | null | By adding a random intercept per nest (and possibly also per location), you are adding a lot of flexibility to the model and with it the potential to overfit.
With logistic regression this may cause full separation (even when the effect is random instead of fixed) and large parameter estimates.
- For an explanation of overfitting and high dimensionality, see Why is logistic regression particularly prone to overfitting in high dimensions?
- A case of overfitting with random effects is discussed in Perfect separation, perhaps? In binary outcome and repeated measure (random effect) with multiple independent variables (using R)
You have measured both nest types at the same locations. You can therefore use a [proportional hazards model](https://en.m.wikipedia.org/wiki/Proportional_hazards_model), which eliminates the varying baseline risk between locations.
It can also be useful to plot the survival curves (e.g. using the empirical distribution) to get a visual impression of the difference between the two nest types and risk treatments. This helps you see whether the output of your function makes sense or not.
| null | CC BY-SA 4.0 | null | 2023-06-02T06:28:02.480 | 2023-06-02T06:46:32.177 | 2023-06-02T06:46:32.177 | 164061 | 164061 | null |
617641 | 2 | null | 617637 | 10 | null | It depends on whether you are summing or averaging the random variables. By the basic properties of [variance](https://en.wikipedia.org/wiki/Variance), for a sum of independent variables the variance is
$$
\mathrm{Var}\Big(\sum_{i=1}^n X_i\Big) = \sum_{i=1}^n \mathrm{Var}(X_i)
$$
If you average them, however, the variance is smaller than the average of the individual variances
$$
\mathrm{Var}\Big(\tfrac{1}{n} \sum_{i=1}^n X_i\Big) = \frac{ \sum_{i=1}^n \mathrm{Var}(X_i)}{n^2}
$$
and as $n \to \infty$ the variance goes to zero.
| null | CC BY-SA 4.0 | null | 2023-06-02T06:35:05.513 | 2023-06-02T08:58:31.420 | 2023-06-02T08:58:31.420 | 35989 | 35989 | null |
617642 | 1 | null | null | 0 | 21 | I need to do a PCA on a big dataset with multiple variables.
But for some of these variables, I have only one constant value for each location.
My question is: will these repeated values influence my PCA unduly? Is there a better way than PCA to combine a dataset that has multiple values per location with another one that has only one value per location?
It is important for me to include both types of variables.
Here is a simplified dataset to illustrate my problem (factor is the constant, repeated value, while temp and sal change over time):
```
temp <- sample(1:30, 40, replace=TRUE)
sal <- sample(30:35, 40, replace=TRUE)
factor <- c(rep(9, 10), rep(5, 10), rep(7, 10), rep(1, 10))
time <- c(1:10, 1:10, 1:10, 1:10)
zone <- c(rep("A", 10), rep("B", 10), rep("C", 10), rep("D", 10))
d <- data.frame(time, zone, factor, temp, sal)
```
In my case, there are many more variables of each type, so I need to do a PCA or a similar type of analysis.
EDIT: I'll try to clarify my question.
Essentially, I have two datasets with different resolutions. Dataset A includes stations AND time, with variables evolving over time. Dataset B does not include time, so there is only one observation per station.
I then merge these two datasets to create a new one.
To use the previous example, the variable "factor" was in dataset B, while time, temp and sal were in dataset A.
I want to do a PCA with this new dataset (or with the two datasets separately). How can I do a PCA or another multivariate analysis with two datasets of different resolution?
EDIT2: I want to do a PCA for several purposes: 1) reduce the number of variables to k dimensions; 2) see where my stations lie in the new space and how they evolve according to the variables; 3) classify my stations; etc.
| Is it the best choice to use PCA when some values do not change? | CC BY-SA 4.0 | null | 2023-06-02T07:01:56.213 | 2023-06-02T16:32:02.043 | 2023-06-02T16:32:02.043 | 11887 | 389349 | [
"r",
"pca"
] |
617643 | 1 | null | null | 0 | 10 | I have an experiment where groups received one of two treatments over several rounds, and I observe whether an outcome happens for a group in any given round. The outcome is thus binary: it either happens or not.
I would like to test the hypothesis that the outcome happens more frequently under treatment 1 than under treatment 2. I would normally fit a logit/probit model and test whether the coefficient for treatment 1 is positive and significant. However, the problem is that in my data the outcome happened 0 times under treatment 2. Is there a way I can test whether the difference between the two treatments is significant? I am using RStudio.
Below I constructed a reduced form of my data; in reality I have 20 groups playing over 10 rounds.
```
group <- c("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L",
"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L")
round <- c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2)
t1 <- c(1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0)
t2 <- c(0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1)
outcome <- c(1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0)
df <- data.frame(group, round, t1, t2, outcome)
df
```
```
group round t1 t2 outcome
1 A 1 1 0 1
2 B 1 1 0 1
3 C 1 1 0 0
4 D 1 1 0 1
5 E 1 1 0 0
6 F 1 1 0 0
7 G 1 0 1 0
8 H 1 0 1 0
9 I 1 0 1 0
10 J 1 0 1 0
11 K 1 0 1 0
12 L 1 0 1 0
13 A 2 1 0 0
14 B 2 1 0 1
15 C 2 1 0 1
16 D 2 1 0 1
17 E 2 1 0 0
18 F 2 1 0 1
19 G 2 0 1 0
20 H 2 0 1 0
21 I 2 0 1 0
22 J 2 0 1 0
23 K 2 0 1 0
24 L 2 0 1 0
```
```
library(dplyr)

df %>%
group_by(t2) %>%
summarise(t2_mean = mean(outcome),
t2_sd = sd(outcome))
```
```
# A tibble: 2 × 3
t2 t2_mean t2_sd
<dbl> <dbl> <dbl>
1 0 0.583 0.515
2 1 0 0
```
| Test frequency of outcome between two treatments when for one treatment it's 0 | CC BY-SA 4.0 | null | 2023-06-02T07:49:45.493 | 2023-06-02T08:27:28.570 | 2023-06-02T08:27:28.570 | 22047 | 389404 | [
"r",
"logistic",
"statistical-significance"
] |
617644 | 1 | null | null | 0 | 18 | I fitted a model on a sample of 20k observations, and the model's AIC is 388,109. I then began adding other variables to the model. Whenever I add just one variable to the original model, the AIC drops by a large amount (more than 1,000), regardless of which variable I add.
So my question is: are added variables more likely to appear significant when the original model has a very high AIC value?
| Is an added variable more likely to be significant when the model has a high AIC? | CC BY-SA 4.0 | null | 2023-06-02T08:07:10.357 | 2023-06-02T13:57:11.287 | 2023-06-02T13:57:11.287 | 110833 | 389406 | [
"aic",
"parameterization"
] |