Column schema (fields appear in this order in each record below):
Id: string, 1-6 chars
PostTypeId: string, 7 classes
AcceptedAnswerId: string, 1-6 chars
ParentId: string, 1-6 chars
Score: string, 1-4 chars
ViewCount: string, 1-7 chars
Body: string, 0-38.7k chars
Title: string, 15-150 chars
ContentLicense: string, 3 classes
FavoriteCount: string, 3 classes
CreationDate: string, 23 chars
LastActivityDate: string, 23 chars
LastEditDate: string, 23 chars
LastEditorUserId: string, 1-6 chars
OwnerUserId: string, 1-6 chars
Tags: list
613816
1
null
null
0
36
I am interested in understanding the relationship between a group of four variables: one is a response variable, `Y`, and the other variables (`A`, `B` and `C`) are being considered as predictors. It has been previously shown that changing `A` changes `Y`, but nothing has been said about the relationship between `B` and `Y` or the relationship between `C` and `Y`. To test whether any of these untested relationships actually exist, I was going to use a full model, `Y ~ A + B + C`. However, I am also considering an incremental model-building method where I have the following four models: - Y ~ A - Y ~ A + B - Y ~ A + C - Y ~ A + B + C I will check whether the model assumptions are satisfied by all these models, and report all the estimates and confidence intervals as well. The idea behind this model-building method lies in developing a narrative that conveys information about simpler models first and then about more complex models, describing any significant relationship between `Y` and `B` or between `Y` and `C` while at the same time confirming whether the previously reported relationship between `Y` and `A` exists in my data. The articles about hypothesis testing or estimation I have read often ask readers to consider and report a full model instead of using any variable selection method. In light of such readings, I have felt uncomfortable about the model-building method described above. Could you share your opinions about whether such a model-building method makes sense?
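For concreteness, here is a minimal R sketch of the incremental scheme described above; the simulated data frame `dat` and the effect sizes are illustrative assumptions, not the asker's data:
```
set.seed(1)
n <- 100
dat <- data.frame(A = rnorm(n), B = rnorm(n), C = rnorm(n))
dat$Y <- 1 + 0.5 * dat$A + rnorm(n)   # toy setup in which only A truly affects Y

# The four candidate models from the question
m1 <- lm(Y ~ A, data = dat)
m2 <- lm(Y ~ A + B, data = dat)
m3 <- lm(Y ~ A + C, data = dat)
m4 <- lm(Y ~ A + B + C, data = dat)

# Report estimates and 95% confidence intervals for every model rather than selecting one
lapply(list(m1, m2, m3, m4), function(m) cbind(estimate = coef(m), confint(m)))
```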
Is model building without variable selection a sensible idea?
CC BY-SA 4.0
null
2023-04-23T09:03:15.533
2023-04-23T09:03:15.533
null
null
298817
[ "hypothesis-testing", "estimation", "modeling", "multiple-comparisons" ]
613817
1
613821
null
1
25
Say that we want to model a complex-valued signal using the RV $S$, where $S$ can be expressed by its real and imaginary parts, i.e. $S = X + iY$, where $X$ and $Y$ are real-valued random variables. Suppose that $X$ and $Y$ are both Gaussian, have zero mean, are independent and have the same variance. Is the joint PDF $f_{X,Y}(x,y)$ complex-valued? My intuition says that the joint PDF is real-valued, since the joint distribution can be viewed as a joint PDF of two orthogonal Gaussian RVs (Re{S} = X, Im{S} = Y), and thus we can view $f_{X,Y}(x,y)$ as a real-valued bivariate Gaussian distribution, where $f_{X,Y}(x,y)=f_X(x)f_Y(y)$. Is this line of thinking correct, or is my intuition about the problem completely wrong?
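To make the intuition concrete: under the stated assumptions (zero mean, independence, common variance $\sigma^2$), the joint density is $$ f_{X,Y}(x,y) = f_X(x)\,f_Y(y) = \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right), $$ which depends on $(x,y)$ only through $x^2+y^2 = |s|^2$ for $s = x + iy$, and is manifestly real-valued.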
Understanding of bivariate Gaussian distributions in connection with complex random variable
CC BY-SA 4.0
null
2023-04-23T10:23:23.067
2023-04-23T11:02:32.677
2023-04-23T11:02:32.677
247165
386325
[ "distributions", "normal-distribution", "bivariate", "complex-numbers" ]
613818
1
null
null
0
19
It's well known that the MLE $\hat{\theta}$ maximizes $f(y|\theta)$ and under regularity conditions has asymptotic distribution $$N\left(\theta, \frac{I(\theta)}{J^2(\theta)} \right)$$ where $I(\theta)=\operatorname{Var}[\partial_\theta\ell(y_i,\theta)]$ and $J(\theta)=-E[\partial^2_\theta\ell(y_i,\theta)]$ (for simplicity I treat the single-parameter case). The MAP estimate gives $\theta$ a prior density and seeks to maximize $f(\theta|y)$. It can also be shown that $f(\theta|y)$ in the limit converges to $$N\left(\hat{\theta},\ \left[-\partial^2_\theta\ell(\theta|y)\big|_{\theta=\hat{\theta}}\right]^{-1}\right)$$ where $\hat{\theta}$ denotes the mode of the posterior distribution, i.e. the MAP. Can I use this result to attain the asymptotic distribution of the MAP, similarly to the MLE? I thought it might have something to do with conjugate distributions (about which I know little)? It would already be very helpful if this were true in a special case, i.e. the normal case.
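Not an answer to the asymptotics question itself, but a small R sketch of the conjugate normal special case mentioned at the end (known data variance 1, prior $\theta \sim N(0,\tau^2)$; all numbers here are my own choices). In this case the posterior is normal, the MAP equals the posterior mean, and its sampling spread across replications matches the MLE's $1/\sqrt{n}$ scale:
```
set.seed(1)
theta <- 2; n <- 200; tau2 <- 1; reps <- 5000
mle <- map <- numeric(reps)
for (r in seq_len(reps)) {
  y <- rnorm(n, mean = theta, sd = 1)
  mle[r] <- mean(y)                    # MLE: sample mean
  map[r] <- sum(y) / (n + 1 / tau2)    # posterior mode (= mean) under the N(0, tau2) prior
}
c(sd_mle = sd(mle), sd_map = sd(map), asymptotic = 1 / sqrt(n))  # all roughly 0.07
```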
Confusion about asymptotic distribution of the MLE and of the MAP
CC BY-SA 4.0
null
2023-04-23T10:26:19.983
2023-04-23T10:26:19.983
null
null
371599
[ "bayesian", "maximum-likelihood", "asymptotics", "fisher-information", "point-estimation" ]
613819
1
null
null
0
21
The Savage-Dickey ratio is an equivalent, but more usable, form of the Bayes factor for two models, $M_{0}$ and $M_{1}$. The definition of the Bayes factor $BF_{01}$ is as follows: $BF_{01} = \frac{p(X | M_{0})}{p(X | M_{1})} = \frac{p(\delta = \delta_{0} | x, M_{1})}{p(\delta = \delta_{0} | M_{1})}$ Problem: Consider a model with likelihood $L(\theta | x) = 1_{|\theta - x| < 1/2} (\theta)$ and prior $\pi(\theta) = U(\theta | -w/2, +w/2)$. Let $x = 0, w > 1$. What is the Bayes factor for the two hypotheses $\theta = 0$ and $\theta \neq 0$? Attempt: Based on the above, the likelihood and prior can be explicitly stated to give the model $M_{1}$: $L(\theta = 0 | x) = 1_{|- x| < 1/2} (\theta = 0)$, $\pi(\theta = 0) = U(\theta = 0 | -w/2, +w/2)$, so that $M_{1} = 1_{|- x| < 1/2} (\theta = 0)\, U(\theta = 0 | -w/2, +w/2)$. Any hints to allow me to continue this would be greatly appreciated.
application of the Savage - Dickey ratio
CC BY-SA 4.0
null
2023-04-23T10:36:32.267
2023-04-23T16:16:01.633
2023-04-23T16:16:01.633
56940
109101
[ "self-study", "bayesian", "likelihood", "likelihood-ratio" ]
613820
2
null
613489
1
null
If the studies involve a random effect that varies between studies, then the ['effective degrees of freedom'](https://en.m.wikipedia.org/wiki/Degrees_of_freedom_(statistics)#Regression_effective_degrees_of_freedom) will be different, and will also relate to the number of studies aside from the sum of sample sizes. It can be illustrated intuitively with an extreme case where the random effect of the study is large. Imagine you have three studies, each with 100 observations. [](https://i.stack.imgur.com/BVVDa.png) The mean of the points is around -2.087, and if you assume that all those points come from the same normal distribution, then the standard error is ~0.143 based on the sample size of 300. But these three samples are obviously not from a single normal distribution, and the effective sample size looks more like 3 (the number of studies), so the standard error should be ~1.55. --- The studies above might be represented in a table like:
```
study id    mean    standard error
1          -3.025   0.090
2           0.932   0.098
3          -4.167   0.104
```
And the mean effect could be estimated as $$\hat{m} = \frac{-3.025+0.932-4.167}{3} = -2.087$$ and the standard error is (wrongly) computed by combining the standard errors of the different studies $$SE(\hat{m}) = \sqrt{\frac{0.090^2+0.098^2+0.104^2}{9}} = 0.056$$ This relates to the question: [Statistics question: Why is the standard error, which is calculated from 1 sample, a good approximation for the spread of many hypothetical means?](https://stats.stackexchange.com/questions/549501/) We can use the variance within a sample to estimate the standard error of the estimate of the population mean from which the sample is taken. However, the standard errors of the studies are a measure of the variance within the individual studies and not of the variance between the studies. --- > I know this can happen in random effects models, but often one of the reason for using random effects models is to gain power to detect effects. I am not sure how random effects can increase power in comparison to fixed effects. But possibly this relates to the introduction of interaction terms that reduce the degrees of freedom on the one hand, but can improve the model fit and reduce the residuals on the other hand. In this case the improvement in power relates not to the cases 'random effect' versus 'fixed effect' but to 'additional effect' versus 'no additional effect'.
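For concreteness, the two standard errors discussed above can be reproduced in R directly from the study-level table:
```
study_mean <- c(-3.025, 0.932, -4.167)
study_se   <- c(0.090, 0.098, 0.104)

m_hat      <- mean(study_mean)             # ~ -2.087
se_wrong   <- sqrt(sum(study_se^2) / 9)    # ~ 0.056, combines only the within-study variances
se_between <- sd(study_mean) / sqrt(3)     # ~ 1.55, reflects the variation between study means
c(m_hat = m_hat, se_wrong = se_wrong, se_between = se_between)
```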
null
CC BY-SA 4.0
null
2023-04-23T10:37:54.500
2023-04-23T13:46:58.647
2023-04-23T13:46:58.647
164061
164061
null
613821
2
null
613817
1
null
There is nothing complex about the pair $(X,Y)$, so yes, the density is real-valued. As the density of a complex-valued $S$ is defined as the density of $(X,Y)$, this is real-valued, too. The subtlety here is that according to the standard definition obviously a density of a 2-d real-valued random variable is real valued. The standard definition does not cover complex-valued RVs $S$, so it has to be explicitly defined that $f_S=f_{(X,Y)}$, but this is what is done. [(Wikipedia - complex random variable)](https://en.wikipedia.org/wiki/Complex_random_variable)
null
CC BY-SA 4.0
null
2023-04-23T10:40:24.287
2023-04-23T10:40:24.287
null
null
247165
null
613822
1
613832
null
9
690
Suppose $\Sigma$ is a covariance matrix and $P$ is its corresponding correlation matrix. Let $\lambda_1, \dots, \lambda_p$ and $\tau_1, \dots, \tau_p$ denote the ordered eigenvalues of $\Sigma$ and $P$, respectively. - Is there any known relationship between $\lambda_i$ and $\tau_i$? - Is there any known relationship between their corresponding eigenvectors? At first glance I assumed that the eigenvectors of $\Sigma$ and $P$ must be the same, but this turns out not to be true. This can be seen from the axes of orientation of the resulting ellipses corresponding to $\Sigma$ and $P$:
```
library(ellipse)

Sigma <- matrix(c(2, 1.9, 1.9, 5), nrow = 2)
P <- cov2cor(Sigma)

plot(ellipse(Sigma), type = 'l')
lines(ellipse(P), col = "red")
```
[](https://i.stack.imgur.com/jE8vR.png) The most I've been able to conclude about the eigenvalues is by using the covariance-correlation decomposition: $$ \text{det}(\Sigma) = \left(\prod_{i=1}^{p} \sigma_i^2\right) \text{det}(P)\,, $$ where $\sigma_i^2$ is the $i$th diagonal entry of $\Sigma$.
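A quick base-R check of the observation that the eigenvectors differ, reusing the same covariance matrix (only the comparison of the eigendecompositions is new here):
```
Sigma <- matrix(c(2, 1.9, 1.9, 5), nrow = 2)
P <- cov2cor(Sigma)
eigen(Sigma)$vectors
eigen(P)$vectors   # for a 2x2 correlation matrix these are always (1,1)/sqrt(2) and (1,-1)/sqrt(2), up to sign
```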
Eigenvalues/Eigenvectors of Correlation and Covariance matrices
CC BY-SA 4.0
null
2023-04-23T10:56:49.670
2023-04-24T12:10:14.527
null
null
31978
[ "correlation", "multivariate-analysis", "covariance-matrix", "eigenvalues" ]
613823
2
null
613768
3
null
I think we should always keep in mind the distinction between formal modelling and reality. This affects at least two aspects of the problem given here. - Does the Cauchy model appropriately model the real situation? Chances are it doesn't, as both the casino and the player will have upper bounds on the amounts they can pay out, so in fact both players may find out that they cannot do what is required in case of certain results. The problem would then change into a problem with a truncated Cauchy, in which obviously expected values are well defined if you know where the truncation is. - How do we model "rationality"? The expected value is a popular candidate of course, but one can well consider other candidates. For example, one could well argue (ignoring problems with item 1) that the problem is symmetric around 100, so the fair price would be 100 because this makes the payout distribution identical for both players. However, considering potential consequences (which would involve reality again, i.e., what exactly would happen if the payout would be very high; also how is utility related to amount of money for both involved), a player may not consider rational playing at all due to a nonzero bankrupt probability and maybe also considering the utility of unbounded gain in terms of money in fact bounded. So one would arguably need to translate money payouts into actual utilities, and these may not be symmetric, and may again depend on the specific situation of everyone involved. > On one hand, there definitely should be a way to value this game. Wishful thinking, maybe? You may define one, see above, however this means that you enforce your definition of rationality on the situation at hand. There is no "meta-objective/meta-rational" way to ultimately argue in what unique way rationality should be formally defined.
null
CC BY-SA 4.0
null
2023-04-23T10:57:08.557
2023-04-23T10:57:08.557
null
null
247165
null
613824
1
null
null
1
29
I've run into the following problem that I can't seem to find good sources for: I have a 2-d binary matrix mapping various group compositions; in other words, every row is a group, with every `1` in the row meaning that a person is present in that group. Columns obviously correspond to the specific people. Given this data, I now want to predict the strength of those groups, which I have in the form of a continuous variable for every group in the dataset. What regression model would work best for such a task? The problem when researching this myself is that the keyword "binary" gives me mostly sources that either suggest logistic regression, which is used a lot more for binary classification (and while I'm aware it could be implemented in a way to give continuous values, this type of regression seems very wrong for the task), or deal in general with binary output. Random forest regression seemed a good alternative at first; however, with the team size being relatively small, the data matrix is quite sparse, meaning that if I were to pick random features for every tree, most trees would end up getting built either on zeros only or on a fraction of a team. I can imagine barely any trees getting exactly those features that correspond to multiple full teams, in which case, how good of a prediction can the tree even make? --- Is there something that would work best for this type of task, or is my only option taking five different regression algorithms and just seeing what works best?
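As a sketch of a plain-regression baseline for this setup (simulated data and an assumed additive per-person contribution, purely for illustration), an ordinary linear model can consume the binary membership matrix directly, with one coefficient per person:
```
set.seed(42)
n_groups <- 200; n_people <- 15
X <- matrix(rbinom(n_groups * n_people, 1, 0.3), nrow = n_groups)   # binary membership matrix
colnames(X) <- paste0("person_", seq_len(n_people))

skill <- rnorm(n_people)                                # hypothetical per-person contribution
strength <- as.vector(X %*% skill) + rnorm(n_groups, sd = 0.5)       # continuous group outcome

fit <- lm(strength ~ X)                                 # ordinary linear regression on binary inputs
head(coef(summary(fit)))                                # estimated per-person contributions
```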
Regression model for binary input
CC BY-SA 4.0
null
2023-04-23T11:02:07.453
2023-04-23T17:05:22.043
null
null
386128
[ "regression", "binary-data" ]
613825
2
null
613489
1
null
The standard regression without a random effect assumes that all observations in all studies can be used to estimate the same main effect. The effective sample size for estimating this main effect is then the number of observations overall. In a model with a random effect, there is an underlying study mean, which is the main effect plus the random deviation of the specific study from the main effect. Each observation contributes to estimating what I call the underlying study mean, so observations in different studies estimate different things. The overall main effect can only be estimated by bringing the different study effects together. So the effective sample size would be the number of studies (in fact less than this, because a study with a low number of observations will estimate its own study mean potentially very imprecisely), which is of course lower than the number of observations overall. This means that the estimation of the main effect will be much less precise under the model involving random study effects. Be wary, though, of taking this as an argument for using the model without a random effect, as this model may not capture the realistic uncertainty appropriately, so the larger precision you get may well be deceptive. It is not appropriate to compare the behaviour of a standard regression assuming that no random effect exists with a model with a random effect assuming that a random effect exists. Obviously each model will do better in a situation in which its model assumptions are fulfilled, but the random effects model is more general, i.e., even if the random effect variance is in fact zero, i.e., the random effect does not exist, the random effect model will do an OK job, whereas the standard regression may well not, in case a random effect exists. If I understand things correctly, this by the way may play out both in terms of the test level, i.e., the model without a random effect may reject a null model too easily in case a random effect exists, and in terms of power, or in only one of these respects, probably depending on the number of studies and on how the within-study variance relates to the variance between studies. (My intuition is that anticonservativity, i.e., too large a rejection probability under the null hypothesis, will likely be the dominating issue here, rather than potential loss of power.) Fun fact: If a single study is run, this will never be modelled involving a random study effect, as such an effect cannot be identified based on one study only. This means that if we consider a single study, the effective sample size will be the number of observations, say, 200. Now let's say three further studies are run, and somebody meta-analyses these using a random effect. This additional information reduces the effective sample size to 4, the number of studies! We become far less precise by observing more. The explanation is that any potential problem causing the first study to in fact deviate on average from the overall main effect cannot be detected as long as further studies do not exist, i.e., we rely on a model that makes stronger assumptions not because there is any more reason to believe it is true, but rather because this is our only chance to say anything. PS: I add something after having seen the added Orthodont data example in the question. In fact two things happen when adding a random effect. One is the change of the effective sample size, already discussed above.
The other one is that the random effect will explain some of the variation in the data, and for this reason there is less unexplained variation left, so the fixed effect can in principle be estimated more precisely. These two effects point in opposite directions, so the estimated precision of the fixed-effect estimators may go up as well as down when adding a random effect. Factors that make the estimated standard error of the fixed-effect estimator rather larger when adding a random effect are (1) the number of clusters is very small compared to the number of observations within clusters (as often happens in meta-analysis) and (2) the variance of the random effect is rather low compared to the within-clusters variance.
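Below is a rough numerical illustration of the effective-sample-size point (simulated data with numbers of my own choosing; it assumes the lme4 package is installed and is only a sketch, not the analysis from the question):
```
library(lme4)
set.seed(1)
n_study <- 3; n_per <- 100
study <- factor(rep(seq_len(n_study), each = n_per))
study_effect <- rnorm(n_study, sd = 2)                       # large random between-study effect
y <- study_effect[study] + rnorm(n_study * n_per, sd = 1)

summary(lm(y ~ 1))$coefficients                    # standard error based on n = 300 observations
summary(lmer(y ~ 1 + (1 | study)))$coefficients    # much larger SE, closer to what 3 studies support
```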
null
CC BY-SA 4.0
null
2023-04-23T11:16:26.197
2023-05-01T09:09:30.067
2023-05-01T09:09:30.067
247165
247165
null
613826
1
null
null
0
18
I've got time-varying predictor sampling events nested under each outcome sampling event (also sampled at multiple time points). So, I've got time structure at level-1 (outcomes), but also at level-2 (time-varying predictor). Level-0 is subject ID. How can I handle the time structure at both levels 1 & 2 in a hierarchical longitudinal model? The basic longitudinal model, using the outcome `troponin` sample times `troponin_time` with the time-varying predictor `glucose`, without considering predictor time structure: ``` lme(troponin~troponin_time*glucose,random=~1|ID,data=d,na.action=na.omit) ``` Here, `glucose` is sampled at multiple times defined by `glucose_time`. How should I make use of `glucose_time` in this model?
In a hierarchical longitudinal model, is it possible to define a time-varying covariate on its own time frame, nested into the top-level time frame?
CC BY-SA 4.0
null
2023-04-23T12:07:43.347
2023-04-23T12:07:43.347
null
null
45752
[ "regression", "lme4-nlme", "multilevel-analysis", "time-varying-covariate" ]
613827
2
null
613822
7
null
Expanding on my comment: [Since](https://math.stackexchange.com/a/198280/652310) $P = \text{diag}(\Sigma)^{-1/2} \Sigma \text{diag}(\Sigma)^{-1/2}$, where $\text{diag}(\Sigma)$ is the diagonal matrix obtained by considering only the diagonal entries of $\Sigma$, $P$ and $\Sigma$ are [congruent](https://en.wikipedia.org/wiki/Matrix_congruence). Then, according to [Sylvester's law of inertia](https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia), $P$ and $\Sigma$ have the same number of positive, negative, and zero eigenvalues. Because both $P$ and $\Sigma$ have the same rank, they have the same number of zero eigenvalues. Moreover, because they are both positive semi-definite, they will also have the same number of positive eigenvalues (thanks to @whuber for pointing this out). One more thing: if we let $P = Q_P\Lambda_PQ_P^T$ and $\Sigma = Q_\Sigma\Lambda_\Sigma Q_\Sigma^T$ be the eigendecompositions of $P$ and $\Sigma$ respectively, we can relate their eigenvalues as follows: \begin{align} P &= \text{diag}(\Sigma)^{-1/2} \Sigma \text{diag}(\Sigma)^{-1/2} \\ Q_P\Lambda_PQ_P^T &= \text{diag}(\Sigma)^{-1/2} Q_\Sigma\Lambda_\Sigma Q_\Sigma^T \text{diag}(\Sigma)^{-1/2} \\ \Lambda_P &= Q_P^T \text{diag}(\Sigma)^{-1/2} Q_\Sigma\Lambda_\Sigma Q_\Sigma^T \text{diag}(\Sigma)^{-1/2} Q_P \\ \Lambda_P &= B\Lambda_\Sigma B^T \end{align} where $$B = Q_P^T \text{diag}(\Sigma)^{-1/2} Q_\Sigma$$
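A quick numerical check of the relation $\Lambda_P = B\Lambda_\Sigma B^T$ derived above, using base R and a randomly generated covariance matrix (my own toy example):
```
set.seed(1)
A <- matrix(rnorm(9), 3, 3)
Sigma <- crossprod(A)                          # random covariance matrix
P <- cov2cor(Sigma)

eS <- eigen(Sigma); eP <- eigen(P)
Dinv <- diag(1 / sqrt(diag(Sigma)))            # diag(Sigma)^(-1/2)
B <- t(eP$vectors) %*% Dinv %*% eS$vectors

max(abs(B %*% diag(eS$values) %*% t(B) - diag(eP$values)))   # ~ 0 up to rounding error
```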
null
CC BY-SA 4.0
null
2023-04-23T12:25:03.453
2023-04-23T16:03:59.310
2023-04-23T16:03:59.310
296197
296197
null
613828
2
null
140925
0
null
Y Nehc, > You have a population of {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. Which of the following methods would give you a better estimate of the population's true maximum? (1) Collect one large sample of n = 10. Then, find max. (2) Collect two smaller samples of n = 5. Find max for each sample. Then, calculate the average. One of your premises is not valid. You are saying that your ideal sample IS your population. Doesn't this premise completely negate the rest of your subsequent statements and thus the conclusion? The concept of sampling exists only when it is impractical to study the entire population. If it were practical to examine the entire population, there would be no need for sampling at all. Your option 2, thus, would never exist when you are able to go for option 1. The problem here is: what do we do when it is not practical to analyze the population?
null
CC BY-SA 4.0
null
2023-04-23T12:26:44.157
2023-04-23T12:26:44.157
null
null
386330
null
613830
1
null
null
0
37
## Background
I am working with posterior probability distributions for parameters obtained from a Bayesian binomial generalised linear model with a logit link function. The parameters returned by the model are the log-odds intercept (a) and slope (b, also known as the logistic rate k). The logistic equation for these models can thus be written as $f(x) = \frac{1}{1 + e^{kx + a}}$ or $f(x) = \frac{1}{1 + e^{k(x + µ)}}$, where µ = $\frac{a}{k}$ is the inflection point of the sigmoid. I prefer the second form because µ is often more meaningful than a.
## Specific problem
Rather than just working with central tendencies, I would like to estimate the entire probability distribution of µ to calculate probability intervals etc. I tried doing this by dividing the posterior a by the posterior k. However, the result is a strange angular distribution, the mean of which is not at all similar to the quotient of the means of a and k. Here is an MRE in R:
```
a <- rnorm(1e4, 3, 1)
k <- rnorm(1e4, -0.2, 0.1)
µ <- a/k
mean(µ) - mean(a)/mean(k) # means are very different

require(ggplot2)
ggplot() + geom_density(aes(a))
ggplot() + geom_density(aes(k)) # distributions for a and k look fine
ggplot() + geom_density(aes(µ)) + coord_cartesian(xlim = c(-300, 400)) # distribution for µ is angular
```
[](https://i.stack.imgur.com/6vIog.png) I know for a fact that the mean of µ estimated as the quotient of the means of a and k is correct, while the mean of µ estimated as the mean of the quotient of a and k is incorrect, since inserting the former into $f(x) = \frac{1}{1 + e^{k(x + µ)}}$ corresponds with the model prediction in probability space (p) derived from the posteriors a and k.
## Question
Why is the quotient of two posteriors angular, and why does it lead to wrong inference? How can the mean of a quotient distribution be different from the quotient of the means of the dividend and divisor distributions? Would specifying µ rather than a as a parameter in the model make any difference?
Why is the quotient distribution of two probability distributions angular?
CC BY-SA 4.0
null
2023-04-23T12:45:14.173
2023-04-24T02:24:12.180
2023-04-24T02:24:12.180
303852
303852
[ "r", "distributions", "logistic", "binomial-distribution", "posterior" ]
613831
1
613835
null
2
60
Terminology: The terms permutation and randomisation tests often seem to be used interchangeably, but in some cases they have distinct meanings - with the meanings not always consistent. In this post, by randomisation test I mean a test where we randomly permute the observations in our sample(s) to recompute some test statistic, with the ultimate goal of comparing our observed statistic with the distribution of recomputed statistics and obtaining a simulated p-value. It will not be possible to compute every permutation, so in practice we randomly shuffle our data a 'large' number of times. Summary of question(s): I would like to know - What the "null hypothesis" of a randomisation (permutation) test should be, and how the results of the test should be interpreted? In particular focussing on an example where the two populations have the same mean, but different variances. Can we use this test to test for specific things like a difference in means, or can we only test for whether the distributions are different? - How does the choice of null hypothesis and statistic affect exchangeability, if at all? - I would appreciate a walkthrough of how a randomisation test could be applied to the example I give below. Motivating Example: (See related: [1](https://stats.stackexchange.com/questions/405665/papers-about-permutation-version-of-welchs-t-test) [2](https://stats.stackexchange.com/questions/473076/permutation-tests-and-exchangeability)) Two populations with the same mean but different variances. Group A, 30 people taken from $N(0,1)$. Group B, 10 people from a normal distribution with mean 0 and standard deviation 6. We use a randomisation (permutation) test to obtain a simulated p-value for the difference in means. We observe that the simulated p-value is less than 0.05 about 33% of the time. Python code:
```
import numpy as np

np.random.seed(0)

# Let's shuffle the labels of the groups and calculate the new difference in means.
# We will discuss later whether this is valid and how to interpret this

# We will count how many times we get a "p-value" less than 0.05
small_pval = 0

# We will simulate taking samples from our two populations many times,
# and running a randomisation test on them.
N = 1000
for i in range(N):
    # Take samples from our two distributions
    A = np.random.normal(loc=0, scale=1, size=30)
    B = np.random.normal(loc=0, scale=6, size=10)
    observed_diff = np.mean(A) - np.mean(B)

    # Combine into one list
    combined = np.concatenate((A, B), 0)

    differences_in_means = np.array([])
    for j in range(1000):
        np.random.shuffle(combined)
        # Re-split using the original group sizes (30 and 10) after shuffling
        diff = np.mean(combined[:30]) - np.mean(combined[30:])
        differences_in_means = np.append(differences_in_means, diff)

    # How often do we see a difference as extreme as the one we observed initially
    # Take absolute value for 2 tail but I think this is not very significant - :D
    simulated_pvalue = np.mean(np.abs(differences_in_means) >= np.abs(observed_diff))

    if simulated_pvalue < 0.05:
        small_pval += 1

print("We found simulated p-values less than 0.05 ", 100*small_pval/N, "% of the time")
```
Running the code above, we would find simulated p-values < 0.05 about 33% of the time. Now, the statistic we have 'tested' is the difference in means; however, we know that both distributions have the same mean. The first thing I am trying to grasp is: what does the simulated p-value actually mean, and in particular, what is the associated null hypothesis for these p-values?
I could say something like: my null hypothesis is that the mean of population A = mean of population B; I permute the labels many times, and find that the difference in means I have observed is very unusual (the simulated p-value is < 0.05), and so I reject my null hypothesis and conclude (wrongly) that the means of the two populations are actually different. As the simulation above shows, if I repeated this, I would make this mistake about 33% of the time, much more than the 5% we might expect. I think that part of the problem here is exchangeability and the setup of the "null hypothesis". I think that here, due to the difference in standard deviation, I cannot say that the labelling of groups A and B is completely arbitrary and that I can exchange them however I wish. However, I think the null hypothesis above is not right for the randomisation test, but I have commonly seen people use the test this way - essentially as a direct replacement for the Student or Welch t-test. What should the null hypothesis be here to use a randomisation test? These quotes from Howell [3](https://www.uvm.edu/%7Estatdhtx/StatPages/Randomization%20Tests/RandomizationTestsOverview.html) seem relevant: > "Our null hypothesis has nothing to do with parameters, but is phrased rather vaguely, as, for example, the hypothesis that the treatment has no effect on the how participants perform. That is why I earlier put "null hypothesis" in quotation marks. This is an important distinction. The alternative hypothesis is simply that different treatments have an effect. But, note that we haven't specified whether the difference will reveal itself in terms of means, or variances, or some other statistic. We leave that up to the statistic we calculate in running the test. That might be phrased a bit more precisely by saying that, under the null hypothesis, the score that is associated with a participant is independent of the treatment that person received" > "I need to say something about exchangeability. It applies to the null hypothesis sampling distribution--in other words, data are exchangeable under the null. Phil Good, for example, is a big fan of this term. He would argue that if the scores in one group have a higher variance than the other, then the data are not exchangeable and the test is not valid. BUT, if the hypothesis being tested is that treatments have no effect on scores, then under the null hypothesis why would one set of scores have a higher variance other than by chance? The problem is that we have to select the statistic to test with care. We normally test means, or their equivalent, but we also need to consider variances, for example, because that is another way in which the treatment groups could differ. If we are focussing on means, then we have to assume exchangeability including variance. But we need to be specific. So much for that little hobby horse of mine."
Thus the alternative hypothesis is that the measurement of at least one subject would have been different under one of the other treatment conditions. Inferences about means must be based on nonstatistical considerations; the randomization test does not justify them." (p. 531)" Based on this, I wonder, if instead my "null hypothesis" was that the two populations are from the same distribution and then I have a small simulated p-value I could conclude that because the difference in means we have observed is so unusual the distributions of the two populations must be different. This also seems strange to me, we have made the correct conclusion that the populations are from different distributions where the statistic we are using is difference in means, even though the means are actually the same. I note also if this was the null hypothesis, then in this case we have very low power here (33%), and wonder if that is precisely because the statistic we are using is the difference in means. If the 'correct' null hypothesis for a (two sample) randomisation test is "The two populations are the same" and the alternative is then "the two populations are not the same" would this mean: - I have more freedom with respect to exchangability, for example we now have exchangability in the example above. - The power (and type 1 error rate) of the permutation test may be quite sensitive to whatever statistic I decide to use (say difference in means or Welch's t). - If we find a significant result using some statistic, say difference in means, all we can conclude from this is the two populations are different, but we actually cannot conclude anything more (like the means are different). In the post here [1](https://stats.stackexchange.com/questions/405665/papers-about-permutation-version-of-welchs-t-test) , an example very similar to the above is considered. It is mentioned that the type 1 error rate is inflated, and this can be remedied by using the Welch t statistic instead of the difference in means. The problem that is mentioned is lack of exchangability. - Why does the Welch t-statistic have a lower type 1 error rate than just looking at the difference in means? - If the null hypothesis is "The means of these two distributions are the same", then does using Welch t-statistic mean we have exchangability, if so why? The answer of BruceET here says > "A permutation test with the Welch t statistic as metric treats samples with unequal variances as exchangeable (even if data may not be normal)" 3) Am I right in thinking if we want to specifically conclude something about the means, then we need to set up the null hypothesis carefully and choose our statistic carefully (to ensure exchangability), but if we just want to conclude the samples are from different populations then we have more freedom? --- A few references to help highlight my confusion The answer to [this](https://stats.stackexchange.com/questions/482392/is-this-an-appropriate-application-of-a-permutation-test) similar question: > "Strictly speaking, the null hypothesis is that the distributions are the same, not just that they have the same means. (If they had same means but difference variances, the test would have the wrong Type I error rate.)" whereas the answer to [this](https://stats.stackexchange.com/questions/405665/papers-about-permutation-version-of-welchs-t-test) question phrases the null hypothesis of the permutation test in terms of means. 
In [this](https://greenteapress.com/thinkstats2/html/thinkstats2010.html) copy of ThinkStats the permutation test is introduced, it seems to me that text is saying we can choose whatever statistic we are interested in, and see if there is a significant difference in it, and then make a conclusion about that statistic. However we have seen from the example above that is not right, we found that we had an 'extreme' difference in means even though the means of the two groups are the same. > "One way to model the null hypothesis is by permutation; that is, we can take values for first babies and others and shuffle them, treating the two groups as one big group..." > "Choosing the best test statistic depends on what question you are trying to address. For example, if the relevant question is whether pregnancy lengths are different for first babies, then it makes sense to test the absolute difference in means, as we did in the previous section. If we had some reason to think that first babies are likely to be late, then we would not take the absolute value of the difference; instead we would use this test statistic" [This](https://stats.stackexchange.com/questions/482580/how-to-properly-interpret-the-hypotheses-results-of-a-permutation-test-based-on) question, which seems relevant but I do not think addresses my exact questions, it does not mention exchangability nor how the test should be “set up” with a null and alternate hypothesis. [This](https://stats.stackexchange.com/questions/60129/does-permutation-test-change-the-null-hypothesis-of-the-original-test) question is closest. The asker posts a comment on the top answer which is one of the things I am getting at, but their comment is unanswered. > Thanks! I read the paper. Do I understand correctly that permutation test applied to a location test doesn't test the null of location well, but rather tests whether the distributions of the two groups are the same in the sense of their locations? --- I am a novice statistician. As you can no doubt tell, I am having trouble even articulating my questions. If I have misrepresented anything in any of the links or quotes provided that is not my intention, it is just due to my own confusion, I would gladly accept any edits/comments. I have also tried to link to other relevant questions, I do not think my question is a duplicate, if I have overlooked something relevant I would be glad to have it brought to my attention.
What is the null hypothesis for a randomisation (permutation) test, how should the p-value be interpreted and what conclusions can we draw?
CC BY-SA 4.0
null
2023-04-23T12:45:49.653
2023-05-21T16:54:24.767
null
null
358991
[ "hypothesis-testing", "statistical-significance", "t-test", "permutation-test", "randomized-tests" ]
613832
2
null
613822
7
null
If $\Sigma$ is diagonal (with arbitrary eigenvalues) then $P$ is just the unit matrix (all eigenvalues equal to one), so there cannot be any general relation between the eigenvalues of $\Sigma$ (alone) and those of $P$. Also notice that if $\Sigma$ is 2-dimensional then $P$ has the form $$ P = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} $$ whose eigenvectors are always $(1,1)$ and $(1,-1)$ regardless of $\rho$: $$ \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = (1+\rho) \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$ $$ \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = (1-\rho) \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$ so clearly the eigenvectors of $\Sigma$ cannot be related to those of $P$ either. (Of course, given both the eigenvalues and eigenvectors of $\Sigma$, one can determine $\Sigma$, and therefore $P$, and therefore the eigenvalues and eigenvectors of $P$).
null
CC BY-SA 4.0
null
2023-04-23T12:54:32.013
2023-04-23T12:54:32.013
null
null
348492
null
613833
1
null
null
0
13
I have two sets of questions in a survey. The first set is 6 Likert scales for how important a particular trait is for someone to be an elected official (How important is it that an elected representative be ambitious, for example). The second set is the same 6 Likert scales for how well the respondent thinks the trait describes them (Are you ambitious, for example). What I would like to know is the best way to determine, statistically, what proportion of the respondents consider themselves "qualified" to be an elected official (say, by requiring that 3/4 of the traits they said were important are traits that they say describe them well). Hopefully that makes sense. I'm sure there is something simple I am overlooking, but I just can't quite wrap my mind around how to approach this.
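Here is one possible sketch in R of the kind of per-respondent computation described above. The data frame, the column names (`imp_1`..`imp_6` for importance, `self_1`..`self_6` for self-ratings), and the cut-off of 4 on a 5-point scale for "important" / "describes me well" are all assumptions you would replace with your own definitions:
```
set.seed(1)
n <- 100
dat <- as.data.frame(matrix(sample(1:5, n * 12, replace = TRUE), nrow = n))
names(dat) <- c(paste0("imp_", 1:6), paste0("self_", 1:6))

imp  <- as.matrix(dat[, paste0("imp_", 1:6)])  >= 4   # traits the respondent rated as important
self <- as.matrix(dat[, paste0("self_", 1:6)]) >= 4   # traits rated as describing the respondent

n_imp     <- rowSums(imp)
n_matched <- rowSums(imp & self)
qualified <- n_imp > 0 & (n_matched / n_imp) >= 0.75  # >= 3/4 of the important traits matched
mean(qualified)                                       # proportion considering themselves "qualified"
```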
Test to Compare Two Sets of Likert Questions
CC BY-SA 4.0
null
2023-04-23T12:55:15.383
2023-04-23T12:55:15.383
null
null
290978
[ "survey", "likert" ]
613834
1
null
null
0
13
This is a small portion of the data (around 700 rows in total):
```
Ano, Sexo, Bips, APT, Peso, Altura, IMC
 7,00  1  46,00  AP  38,20  1,52  16,53
 7,00  1  47,00  AP  39,20  1,56  16,11
 7,00  1  61,00  AP  49,90  1,62  19,01
 7,00  2  52,00  AP  51,60  1,60  20,16
 7,00  2  50,00  AP  54,20  1,51  23,77
 7,00  1  58,00  AP  39,20  1,54  16,53
 7,00  2  54,00  AP  32,90  1,54  13,87
 7,00  1  45,00  AP  39,00  1,59  15,43
12,00  1  30,00  AP  41,80  1,51  18,33
11,00  1  61,00  AP  47,70  1,56  19,60
 7,00  2  54,00  AP  52,90  1,63  19,91
 7,00  2  15,00  I   67,70  1,60  26,45
 7,00  2  52,00  AP  45,40  1,60  17,73
 8,00  2  72,00  I   47,80  1,69  16,74
```
Knowing that Ano is the grade of a student, Sexo is the sex (1 for males, 2 for females), Bips is the number of laps on a track, Peso is the weight of a student, Altura is the height of a student, IMC is the body mass index, and APT is the fitness of a person (AP = fit, I = not fit, AT = athlete): what type of statistics can I do (like correlation, ANOVA, etc.) knowing this test of normality? [](https://i.stack.imgur.com/bMXqQ.png)
SPSS: what type of statistics can I do knowing the test of normality?
CC BY-SA 4.0
null
2023-04-23T13:18:06.383
2023-04-23T13:19:19.620
2023-04-23T13:19:19.620
386332
386332
[ "anova", "spss", "descriptive-statistics" ]
613835
2
null
613831
1
null
It makes sense to separate the discussion of the null hypothesis to test from the discussion of the test statistic chosen. The null hypothesis is generally the probability model based on which the test level, i.e., the rejection probability of the null hypothesis, is computed. The permutation test in this example will create a situation in which you have the same distribution in both samples. The permutation principle will simulate the correct rejection probability if both distributions are assumed the same. So this is the null hypothesis. This obviously implies (but is not equivalent with) equality of the means (assuming they exist). Note that there are other permutation tests where the null hypothesis is different (e.g., independence between two variables). The test statistic will define how you measure deviations from the null hypothesis, so it will implicitly define the alternative against which you are testing. For example, if you choose the mean difference, you will run the test in such a way that it will be particularly sensitive (i.e., have large power) against situations in which the means are different. So, on one hand one could say "you can use any test statistic you want" (if it is only about simulating a valid p-value by the permutation procedure), but on the other hand, you may want to choose a test statistic that has good power for detecting the specific deviations from equality of distributions you care about in the application of interest. So you are basically testing equality of distributions against differences in means with the mean differences statistic. If the truth is in neither of these classes (such as different variances but same mean), anything can happen. (This somehow explains why there is a temptation to say that even the null hypothesis here is "equal means", but it isn't, as for equal means if distributions are different, the permutation test cannot control the level properly.)
null
CC BY-SA 4.0
null
2023-04-23T13:26:46.580
2023-04-23T13:26:46.580
null
null
247165
null
613837
1
null
null
1
11
My research endeavors to explore the relationship between conflict exposure and attitudes towards international institutions. Specifically, my hypothesis posits that individuals who have experienced higher levels of conflict are more likely to exhibit lower levels of trust in international institutions. The data for my study was gathered through a survey of 1100 citizens from 75 different counties within a single country. I anticipate that individuals living in counties that experienced more frequent bombing during a conflict that took place in the 1970s, but with the survey conducted in 2015, are more likely to hold negative attitudes towards international institutions. However, the cross-sectional nature of my data presents an issue, as it is not balanced. While I have a significant sample size of 150 individuals from one county, other counties have much smaller sample sizes (ranging from 7 to 20 respondents). How can I address this imbalance in my data?
Unbalanced cross-sectional data
CC BY-SA 4.0
null
2023-04-23T13:53:12.653
2023-04-23T13:53:12.653
null
null
370256
[ "sampling", "cross-section" ]
613838
1
null
null
0
24
It's [said](https://stats.stackexchange.com/questions/171093/adaboost-why-does-test-error-decrease-even-after-training-error-hits-zero) that continuing to boost after reaching zero training error can improve test error in AdaBoost. However, I don't see how the algorithm can continue at all if it is reaching zero training error; $\alpha_i$ will become +inf if the error is zero. [](https://i.stack.imgur.com/QMPnI.png)
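For reference, assuming the standard discrete AdaBoost formulation (which is presumably what the screenshot shows), the weight of the $t$-th weak learner is $$ \alpha_t = \frac{1}{2}\ln\frac{1-\varepsilon_t}{\varepsilon_t}, $$ where $\varepsilon_t$ is the weighted training error of that single weak learner on the current sample weights, not the training error of the full ensemble; $\alpha_t \to \infty$ only as $\varepsilon_t \to 0$.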
How does AdaBoost continue to work when its training error reaches zero?
CC BY-SA 4.0
null
2023-04-23T13:56:55.333
2023-04-23T13:56:55.333
null
null
212190
[ "boosting", "adaboost" ]
613839
2
null
613824
0
null
You didn't find anything specific on this topic because any regression model can deal with binary features. You can use ordinary least squares regression, regularized regression, random forest, $k$NN regression, or any other model. There's [no single best one](https://en.wikipedia.org/wiki/No_free_lunch_theorem); if there were, we wouldn't have multiple possible models. Different models work for different datasets. So yes, to pick the model you can either do this empirically, based on performance, or based on other properties (e.g. ease of interpretability, etc.).
null
CC BY-SA 4.0
null
2023-04-23T14:42:29.840
2023-04-23T17:05:22.043
2023-04-23T17:05:22.043
35989
35989
null
613840
1
613849
null
4
220
I am analyzing two within-subject categorical variables (Factor A and Factor B) in R. Using a linear mixed-effects model, I got a significant interaction. When I started to analyze the simple effects, I first used `t.test` and then used the `emmeans` package. However, I got different results, and I don't know which one I should trust. In particular, I want to compare B1 and B2 at the level A1. The following is the code I have in R:
```
emm1 = emmeans(model, ~ A * B)
emm1
pairs(emm1, simple = "B")
```
Results:
```
structure = A1:
 contrast  estimate     SE   df  t.ratio  p.value
 B1 - B2      0.451  0.140  395    3.230   0.0013
```
For the `t.test`, I first subset the data and then run the `t.test`:
```
datasubset = data[data$A == "A1", ]
datasub.t = t.test(dv ~ B, data = datasubset)
```
Results:
```
t = 1.8013, df = 188.59, p-value = 0.07325
```
So I got different results; which one should I trust? Or which step in the code might be incorrect and lead to the different results?
Contradiction between emmeans and t.test in R
CC BY-SA 4.0
null
2023-04-23T15:09:33.857
2023-04-24T07:30:18.627
2023-04-24T07:30:18.627
53690
383846
[ "regression", "t-test", "interaction", "lsmeans" ]
613841
1
null
null
0
16
In order to determine the parameters b1, b2 and b3, this chapter simplifies the writing using the "deviation form" (red part). I can't understand why the term b1 disappears. Here is the chapter mentioned above. [](https://i.stack.imgur.com/dJnU1.png) [](https://i.stack.imgur.com/tepQj.png)
Multiple (3 variable) regression model
CC BY-SA 4.0
null
2023-04-23T15:13:34.933
2023-04-23T16:52:44.987
null
null
386338
[ "regression", "multiple-regression", "regression-coefficients" ]
613843
1
null
null
1
19
I have just been reading that quadratic programming can be used to solve the Support Vector Machine optimization. My solver can minimize this type of problem $$\text{J}_{min} = \frac{1}{2}x^TQx + c^Tx$$ where it is subject to: $$Ax \leq b$$ $$x \geq 0$$ All I need to do is to create the $Q, A$ matrices and the $b, c$ vectors. Assume that we have our data and the labels. Notice that this example is for two classes only. One class has the label $1$ and the other has the label $-1$.
```
X = [1, 2; 2, 3; 3, 3; 2, 1; 3, 2];
labels = [-1; -1; 1; 1; 1];
```
And we are using a polynomial kernel of the second degree
```
K = @(x1,x2) (1 + x1*x2').^2;
```
And then I create my symmetric $Q$ matrix
```
m = size(X, 1);
Q = zeros(m, m);
for i = 1:m
  for j = 1:m
    Q(i,j) = labels(i)*labels(j)*K(X(i,:), X(j,:));
  end
end
```
The objective-function vector $c$ is simply all $-1$s
```
c = -ones(m, 1);
```
And the constraints are a matrix $A$ and the vector $b$
```
A = labels';
b = 0;
```
Now I can solve this optimization problem by using [quadprog](https://github.com/DanielMartensson/CControl/blob/master/src/CControl/Sources/Optimization/quadprog.c). See both MATLAB-code and C-code.
```
[x, solution] = quadprog(Q, c, A, b)
```
And the solution $x$ was:
```
 0.36538
 1.4423
 2.6154
 2.1346
-2.9423
```
Or, for the lazy one who has GNU Octave's internal QP-solver, the following will give the same result for $x$.
```
[x, ~, e] = qp([], Q, c, [], [], [], [], [], A, b)
```
Question: What does $x$ mean here? How should I use $x$ now? Does $x$ describe quadratic boundaries?
How does quadratic programming solve the Support Vector Machine problem?
CC BY-SA 4.0
null
2023-04-23T15:30:28.847
2023-04-23T15:30:28.847
null
null
275488
[ "optimization", "svm", "constrained-optimization" ]
613844
1
null
null
0
17
I have implemented the basic sGARCH model using the code below:
```
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(1, 1)),
                   distribution.model = "norm")

ARMA_GARCH.models <- list()
for (i in 1:length(vol)) {
  ARMA_GARCH.models[[i]] <- ugarchfit(spec = spec,
                                      data = log_r1[[i]] %>% filter(time <= 480) %>% pull(log_return),
                                      solver = 'hybrid')
}
```
However, my issue is that when I change the model to eGARCH, I get the error `"Error in as.xts(log_r1[, -1], order.by = log_r1$time) : could not find function "as.xts".` If I could get some insight into why this error is occurring, I would highly appreciate it. Thanks!
Error when using eGARCH but not sGARCH in rugarch
CC BY-SA 4.0
null
2023-04-23T15:39:00.480
2023-04-23T18:58:19.820
2023-04-23T18:58:19.820
53690
386340
[ "r", "time-series", "garch", "error-message", "volatility" ]
613845
1
null
null
0
5
I surveyed 400 participants and each participant received 8 images to analyze, each image with the manipulation of 3 factors (A, B, C, AB, AC, BC, ABC, Control - no changes). Then, they answered the same question for each image, choosing one option of 4 (W, X, Y, Z). I am analyzing the data in SPSS. I used the Repeated Measures Mancova, including my IVs (A, B, C) as dummy variables (0 - not applied, 1 - applied). Then, I included my DVs (A_W, B_W, C_W, AB_W, AC_W, BC_W, ABC_W, Control_W...repeating for X, Y, and Z) also as dummy variables (0 - option not chosen and 1 - option chosen). My covariates are age, gender, and duration. I would like to know if this is a good model to analyze this data.
Is this a good model for Repeated Measures Mancova?
CC BY-SA 4.0
null
2023-04-23T16:15:41.360
2023-04-23T16:15:41.360
null
null
382816
[ "repeated-measures", "categorical-encoding", "mancova" ]
613847
1
613859
null
3
63
Here is an exercise in the book of author Achim Klenke. Let $(X_n)$ be iid non-negative random variables. By using Borel-Cantelli lemma, show that: $$ \limsup_n \dfrac{X_n}{n} = 0 \text{ a.s} $$ if $\mathbb{E}(X_1) < \infty$. Otherwise, show that $\limsup_n \dfrac{X_n}{n} = \infty$ a.s As suggested by the problem, I tried to express the event $\{\limsup_n X_n/n = 0\}$ as the limsup of events as follows: $$ \begin{align*} \left(\limsup_n \dfrac{X_n}{n} = 0 \right) &= \bigcap_{n \in \mathbb{N}} \bigcup_{m \in \mathbb{N}} \bigcap_{k \ge m} \left(\dfrac{X_k}{k} \le \dfrac{1}{n}\right) \end{align*} $$ Thus, $$ \mathbb{P}\left(\limsup_n \dfrac{X_n}{n} = 0\right) = 1 - \lim_{n \rightarrow \infty} \mathbb{P}\left(\limsup_m \left[\dfrac{X_m}{m} > \dfrac{1}{n}\right]\right) $$ From here, I want to show that $$ \sum_{m = 1}^\infty \mathbb{P}\left(\dfrac{X_m}{m} > \dfrac{1}{n}\right) < \infty \ \forall n $$ However, I can only prove this if we add the condition $\mathbb{E}(X_1^2) < \infty$, then $$ \sum_{m = 1}^\infty \mathbb{P}\left(\dfrac{X_m}{m} > \dfrac{1}{n}\right) = \sum_{m = 1}^\infty \mathbb{P}\left(\dfrac{X_m^2}{m^2} > \dfrac{1}{n^2}\right) \le n^2 \mathbb{E}(X_1^2)\sum_{m = 1}^\infty \dfrac{1}{m^2} < \infty $$ Without the assumption $\mathbb{E}(X_1^2) < \infty$, I'm pretty much stuck, so any hints for other ways are appreciated. Thanks
$\limsup_n \dfrac{X_n}{n} = 0$ if $\mathbb{E}(X_1) < \infty$?
CC BY-SA 4.0
null
2023-04-23T16:36:25.200
2023-04-23T18:29:39.103
2023-04-23T18:29:39.103
20519
350550
[ "probability", "random-variable", "convergence", "measure-theory" ]
613848
2
null
247871
2
null
Tongue slightly in cheek - the root cause of the class imbalance problem is calling it the class imbalance problem, which implies that the class imbalance causes a problem. This is very rarely the case (and when it does happen the only solution is likely to be to collect more data). The real problem is practitioners (and algorithm developers) not paying attention to the requirements of the application. In most cases it is a cost-sensitive learning problem in disguise (where the degree of imbalance is completely irrelevant to the solution, it depends only on the misclassification costs) or a problem of a difference in the distribution of patterns in the training set and in the test set or operational conditions (for which the degree of imbalance is again essentially irrelevant - the solution is the same as for balanced datasets). We should stop talking about class imbalance being a problem as it obscures the real problems (e.g. cost-sensitive learning) and prevents people from addressing them.
null
CC BY-SA 4.0
null
2023-04-23T16:40:16.280
2023-04-23T16:40:16.280
null
null
887
null
613849
2
null
613840
8
null
You said your conditions are within-subject but you did an independent samples t-test. If you do a paired t-test (i.e., setting `paired = TRUE` in the call to `t.test()`), the results will be closer, but still not the same. This is because your repeated measures ANOVA (what I assume you did, but you didn't show the code for it) uses the residual sums of squares across all conditions, whereas the t-test only uses the data from the slice you selected. You should use `emmeans` and not the t-test if you want accurate results. EDIT given comments: Because your model has two random effects, a t-test, paired or otherwise, is not appropriate to test your slice hypothesis. Again, `emmeans` was specifically designed to test these hypotheses, so use it.
null
CC BY-SA 4.0
null
2023-04-23T16:43:57.300
2023-04-24T05:06:40.253
2023-04-24T05:06:40.253
116195
116195
null
613850
2
null
613841
0
null
Just above equation 2.10, the author defines $x_i$ and $y_i$ as $X_i- \bar X$ and $Y_i- \bar Y$, respectively. This is what they mean by the "deviation form", i.e., using the deviations of the variables from their means rather than the raw variables themselves. When both the predictors and the outcome are centered, the [intercept is equal to zero](https://stats.stackexchange.com/q/43036/116195), so $\beta_1$ can be removed from the model.
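A short sketch of the algebra, assuming the chapter's three-variable model is $Y_i = \beta_1 + \beta_2 X_{2i} + \beta_3 X_{3i} + u_i$: averaging over $i$ gives $\bar Y = \beta_1 + \beta_2 \bar X_2 + \beta_3 \bar X_3 + \bar u$, and subtracting the two equations yields $$ y_i \equiv Y_i - \bar Y = \beta_2\,(X_{2i} - \bar X_2) + \beta_3\,(X_{3i} - \bar X_3) + (u_i - \bar u) = \beta_2 x_{2i} + \beta_3 x_{3i} + (u_i - \bar u), $$ so the intercept $\beta_1$ cancels out of the deviation form.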
null
CC BY-SA 4.0
null
2023-04-23T16:52:44.987
2023-04-23T16:52:44.987
null
null
116195
null
613851
2
null
613341
1
null
This seems unnecessarily convoluted. As I understand your code, you sample some simulated survival times based on a fixed Weibull model, put the sampled times together for a Cox model, then sample from the joint distribution of the Cox model coefficients to get "new" survival times from another Weibull model. (For that last step, it's not immediately clear that the Cox model coefficients from which you sample are related to the `shape` and `scale` of `qweibull()`,although I might be missing something there.) If you want to generate simulations to illustrate the vagaries of fitting a parametric Weibull model, cut out the intermediate Cox model. Sample from the joint distribution of `shape` and `scale` values from a parametric Weibull fit and show the corresponding variability in Weibull-based survival curves. If you want to illustrate the vagaries of fitting a Cox model, you would be better off with a resampling approach as the baseline hazard over time is fixed, estimated from the entire data sample. Sampling based on the variance-covariance matrix of Cox-model coefficients thus would not take into account the variability in estimating the baseline hazard. Repeating the Cox modeling on multiple bootstrapped samples of the data and showing the corresponding survival curves would take into account both the variability in coefficient estimates and the variability in the estimates of the baseline hazard.
null
CC BY-SA 4.0
null
2023-04-23T16:53:46.373
2023-04-23T16:53:46.373
null
null
28500
null
613852
2
null
613415
1
null
Had the interaction coefficient been "statistically significant," your interpretation would be correct. You wouldn't even need to do all of the other calculations to come to that conclusion, as a positive `abuse1:depression1` interaction coefficient means that the predicted values of the outcome are larger than you would have predicted based on the individual `abuse1` and `depression1` coefficients. The other calculations and plots, however, are certainly useful for demonstrating the practical implications of the model. In frequentist statistical analysis, however, you can't claim that there is any moderation by depression status, as the interaction coefficient didn't pass your pre-specified significance threshold. That probably says more about the limitations of frequentist analysis than about the interaction. You seem to have a very large data set and only a few predictors, so it's possible that a more flexible model might have been a better choice. For example, you only have a single linear term for `age`, while strictly linear associations are seldom correct. A regression spline for `age` might have been a better choice. The binary `depression` predictor also could be limiting your power. If you have some continuous measure of `depression` that would be better and could also be modeled as a spline. See [this page](https://stats.stackexchange.com/q/230750/28500) for the problems introduced by categorizing a potentially continuous predictor. See Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/) for further advice on building and testing regression models; Chapter 2 discusses ways to build flexible models with continuous predictors.
null
CC BY-SA 4.0
null
2023-04-23T17:12:47.317
2023-04-23T17:12:47.317
null
null
28500
null
613854
2
null
613238
1
null
If your understanding of the subject matter indicates that a `baseline` score of -1 represents "impaired" individuals, then what you propose is correct (given the ordering of the coefficient estimates in your `model_fit`). A display of the continuous association of outcome with `baseline` score as a function of treatment might be more useful to your readers. The [emmeans package](https://cran.r-project.org/package=emmeans), for example, simplifies this type of post-modeling calculation without your having to keep track of coefficient numbering. Although outside my expertise, this situation seems like it might benefit from some joint modeling of `baseline` scores among "impaired" and "non-impaired" individuals along with your hypothesis that the treatment effect is restricted to the "impaired."
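Here is a hedged sketch of the emmeans idea on simulated data (the data, model formula, and variable names are illustrative assumptions, not your `model_fit`):

```r
library(emmeans)
set.seed(1)
d <- data.frame(baseline  = rnorm(200),
                treatment = factor(rep(c("control", "active"), each = 100)))
d$outcome <- with(d, 1 + 0.5 * baseline +
                     0.8 * (treatment == "active") * (baseline < 0) + rnorm(200))
fit <- lm(outcome ~ baseline * treatment, data = d)

# estimated outcome by treatment at chosen baseline values
emmeans(fit, ~ treatment | baseline, at = list(baseline = c(-1, 0, 1)))

# treatment-specific slopes of outcome on baseline
emtrends(fit, ~ treatment, var = "baseline")
```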
null
CC BY-SA 4.0
null
2023-04-23T17:34:44.807
2023-04-23T17:34:44.807
null
null
28500
null
613856
1
null
null
0
18
I have calculated the first differences of a process and now created an ACF and PACF plot, and I honestly don't know how to interpret it. I thought that it is an AR(1) process, but I see some significant lags later than the first lag, although they look like they are close to the edge of significance. What should be the order of this process? [](https://i.stack.imgur.com/mZLfB.png) [](https://i.stack.imgur.com/2hZZP.png)
What do you do when the lags in the ACF are on the edge of significance? How would you describe this ACF and PACF plot?
CC BY-SA 4.0
null
2023-04-23T18:08:57.570
2023-04-23T18:57:20.593
2023-04-23T18:57:20.593
53690
386345
[ "time-series", "forecasting", "arima", "model-selection", "acf-pacf" ]
613857
1
null
null
1
43
[Stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) is a useful approach to improving iteration time by giving up some rate of convergence. For a parameter $w$, learning rate $\eta$, and smooth objective function $Q$ the update rule is: ![](https://wikimedia.org/api/rest_v1/media/math/render/svg/4dec506d9a4c822ef0a4519d823ccd80ad8b79bc) The term "stochastic" reminded me of stochastic processes and their role in [SDEs](https://en.wikipedia.org/wiki/Stochastic_differential_equation) and numerical algorithms like [stochastic Runge-Kutta](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_(SDE)). The idea of having an iterative algorithm which adds instances of a stochastic process seems like a possibility that someone would have explored by now for gradient descent. One could begin by modifying vanilla gradient descent, adding elements from a stochastic process to it: $$W_{n+1} = W_n - \eta \nabla Q(W_n) + X_n$$ where $X_n$ is the $n$th element of a stochastic process. Not all choices of stochastic process would result in the update rule having some form of [convergence](https://en.wikipedia.org/wiki/Convergence_of_random_variables) in the parameter random variable. Indeed, on the face of it, including noise like this could seem like a horrible idea, since you'll get an inferior rate of convergence and a noisier final parameter estimate. The main benefit I see is that it allows jumps in the parameter coordinates to help avoid getting stuck in local minima. Naturally, there are other approaches like adaptive learning rates to tackle this. Maybe such alternatives are generally better, but for me this other "stochastic gradient descent" is an under-examined possibility. I imagine the approach isn't popular for the inefficiencies it introduces, but I can also imagine that for sufficiently gnarly parameter spaces (e.g. lots of local minima) its ability to make jumps could be helpful. Is this idea a dead end that was already explored in the literature? Or is it a fruitful niche that exists in the literature? Or is it still off people's radar?
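For concreteness, here is a minimal numerical sketch of the update rule above, with $X_n$ taken to be i.i.d. Gaussian noise (just one possible choice of stochastic process) and a one-dimensional double-well objective; all the specific values are arbitrary:

```r
set.seed(1)
Q     <- function(w) (w^2 - 1)^2 + 0.3 * w        # double-well objective
gradQ <- function(w) 4 * w * (w^2 - 1) + 0.3      # its derivative
eta   <- 0.01
sigma <- 0.1                                      # scale of the injected noise
w <- 0.9                                          # start near the shallower minimum
path <- numeric(5000)
for (n in seq_along(path)) {
  w <- w - eta * gradQ(w) + rnorm(1, sd = sigma)  # W_{n+1} = W_n - eta * grad Q(W_n) + X_n
  path[n] <- w
}
plot(path, type = "l")  # with enough iterations the noise can carry the iterate over the barrier
```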
What happened to the "other" stochastic gradient descent?
CC BY-SA 4.0
null
2023-04-23T18:10:21.387
2023-04-23T20:43:13.473
null
null
69508
[ "stochastic-processes", "stochastic-gradient-descent" ]
613858
2
null
613591
3
null
A properly constructed contingency table lets a chi-square test take the overall sample size into account. For your situation, it's a cross-tabulation based on severe/not in one dimension and mutation present/absent in the other. In your first example, with 8000 total observations, 1000 with severe disease, 30 with the mutation and 27 of those having severe disease, a contingency table looks like this: ``` mat1 <- matrix(c(27,3,(1000-27), (7000-3)), byrow=TRUE, nrow=2, dimnames=list(mutation=c("present","absent"), severity=c("severe","not"))) mat1 # severity # mutation severe not # present 27 3 # absent 973 6997 chisq.test(mat1)$p.value ## warning omitted # [1] 2.62529e-36 ``` If you do similar tests based on 20% of those with a mutation having severe disease, however, you will not find a "significant" result if there are only 30 with the mutation. If there are 300 with the mutation and the same fraction with severe disease, then you will get a highly significant association. There are nevertheless a few problems with your approach. First, if you have many mutations then you must deal with the [multiple-comparisons problem](https://en.wikipedia.org/wiki/Multiple_comparisons_problem). Second, from your last paragraph it seems that you are undertaking a regression model or similar analysis for severity. In that case you should be incorporating the model's predictors into a binary (e.g., logistic) regression model instead of looking at the raw contingency tables from the original data set. Omitting any outcome-associated predictor from a binary regression model tends to [bias the magnitudes of estimated association downward](https://stats.stackexchange.com/q/113766/28500) for the included predictor. But that's just what you are doing if you ignore the other predictors and just look at mutation and disease status as you propose. Third, this one-mutation-at-a-time approach means that you will miss potentially important associations that depend on combinations of mutations. You might consider a principled approach like LASSO that can evaluate multiple genes together.
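To make that last suggestion concrete, here is a hedged sketch of a LASSO logistic regression over many mutations at once, using simulated data (the mutation matrix, outcome, and effect sizes below are made up purely for illustration):

```r
library(glmnet)
set.seed(1)
n <- 2000; p <- 50
mutations <- matrix(rbinom(n * p, 1, 0.05), n, p,
                    dimnames = list(NULL, paste0("mut", 1:p)))
severe <- rbinom(n, 1, plogis(-2 + 2 * mutations[, 1] + 1 * mutations[, 2]))

cvfit <- cv.glmnet(mutations, severe, family = "binomial")
coef(cvfit, s = "lambda.1se")   # mutations with nonzero coefficients are candidates
```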
null
CC BY-SA 4.0
null
2023-04-23T18:13:13.230
2023-04-23T20:29:44.403
2023-04-23T20:29:44.403
28500
28500
null
613859
2
null
613847
4
null
You have shown that the goal is to prove that for any $\epsilon > 0$ (which is equivalent to $n^{-1}$ in your post), it holds that \begin{align} \sum_{m = 1}^\infty P(X_m > m\epsilon) < \infty. \end{align} By the i.i.d. assumption (in fact, just the "identically distributed" condition would suffice), this is equivalent to proving \begin{align} \sum_{m = 1}^\infty P(X_1 > m\epsilon) < \infty. \tag{1} \end{align} Now the inequality (the first equality below uses that $X_1$ is nonnegative) \begin{align} E[X_1] &= \int_0^\infty P[X_1 > t]dt = \sum_{m = 1}^\infty \int_{(m - 1)\epsilon}^{m\epsilon}P[X_1 > t]dt \geq \sum_{m = 1}^\infty \epsilon P[X_1 > m\epsilon] \end{align} and $E[X_1] < \infty$ imply that $(1)$ holds. This completes the proof. The same trick can be used to prove $\limsup_n n^{-1}X_n = \infty$ almost surely given $E[X_1] = \infty$. But for this to hold, we now need the independence assumption as well (as opposed to the $E[X_1] < \infty$ case, which only needs that $X_1, X_2, \ldots$ be identically distributed) to deploy the second Borel-Cantelli lemma.
null
CC BY-SA 4.0
null
2023-04-23T18:22:22.463
2023-04-23T18:27:56.790
2023-04-23T18:27:56.790
20519
20519
null
613860
2
null
613285
0
null
A couple of thoughts. First, with two separate outcomes, this might best be handled by "[multivariate ANOVA](https://en.wikipedia.org/wiki/Multivariate_analysis_of_variance)," in which "multivariate" means "multiple outcomes." This [Penn State course](https://online.stat.psu.edu/stat505/lesson/8) explains what's involved. Although the point estimates for each outcome's associations with predictors will be the same as for your separate models, multivariate analysis has the advantage of directly evaluating outcome correlations, which is what you seem to be interested in. Second, this type of problem comes down to the tradeoffs that you are willing to make. Multivariate analysis is likely to be more powerful than throwing away a type of outcome, but perhaps there are business reasons for only wanting to use one. You can evaluate those tradeoffs directly by resampling the data. For example, you could do repeated cross-validation, building models on subsets of the data while testing against the held-out observations, to see whether one or the other outcome is more reliable and what you lose by throwing away the other.
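A small sketch of the multivariate idea in base R, with simulated data (the outcome and predictor names are placeholders, not the poster's actual variables):

```r
set.seed(1)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y1 <- 1 + 0.5 * x1 + rnorm(n)
y2 <- 2 + 0.3 * x1 - 0.4 * x2 + 0.6 * y1 + rnorm(n)   # the two outcomes are correlated

fit <- lm(cbind(y1, y2) ~ x1 + x2)   # multivariate linear model
anova(fit)     # multivariate (Pillai) tests for each predictor
summary(fit)   # the familiar per-outcome regressions
```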
null
CC BY-SA 4.0
null
2023-04-23T18:33:15.423
2023-04-23T18:33:15.423
null
null
28500
null
613861
2
null
613710
1
null
The question comes down to details of your data structure. If you only have 1 temperature value for all observations at each locality, then you will have a strict dependence between temperature and locality. You can model either temperature or locality, but not both. If you think that the localities are equivalent except for temperature, precipitation, etc. then you could build a model of associations of outcome(s) with those variables that doesn't include locality as a fixed predictor. If you have multiple observations over time and thus you have multiple temperature and precipitation values within each of the localities, then there should be no problem at all. There might be a problem if, for some reason, you still end up with a strict linear dependence between the localities and some combination of predictor values. The alternatives that you suggest won't help with the problem that concerns you. Correlations on their own can be very misleading, as they ignore the influences of other outcome-associated predictors. ANOVA is just a name for a particular type of regression model of categorical predictors, but you are interested in continuous predictors. Multiple regression, ideally in a way that takes the multiple outcome types into account, is a good choice.
null
CC BY-SA 4.0
null
2023-04-23T18:47:34.870
2023-04-23T18:47:34.870
null
null
28500
null
613862
2
null
612933
1
null
Formally speaking no, but we can fake it. In particular, we can make use of the capabilities of the `fit` method to continue training an old model. As such we do the following: - We make $X_{\text{whole}}$ to store the data from dataset $X_1$ as well as $X_2$. The $X_1$ features are the data available at the start of the project. The $X_2$ features are unavailable, so we explicitly set them to a fixed value (say 42). It is important that all of them are fixed because in that way: a. they don't covary with any of the known features in $X_1$, and b. any splitting algorithm will ignore them, as no split on them can ever result in a reduction in our loss. - We train for the first $k$ iterations normally using $X_{\text{whole}}$ such that $X_{\text{whole}} = [X_1 , X_2]$. Our booster up until that iteration uses (by necessity) only information from $X_1$. - After $k$ iterations have passed / we move to the new environment, etc., $X_2$'s feature information is now available. We update our $X_{\text{whole}}$ such that it still holds the relevant $X_1$ features it did before, but now $X_2$ contains the newly available information. - We train for the remaining iterations, from $k+1$ to whatever total number we expect, using the updated $X_{\text{whole}}$. The first $k$ trees are preserved, doing splits only with information from the features available in $X_1$, but the subsequent trees starting from $k+1$ can use features from $X_1$ or $X_2$ depending on each feature's informativeness. To my knowledge all three major Python GBM implementations have training continuation capabilities in their `fit` methods. [XGBoost](https://xgboost.readthedocs.io/en/stable/python/examples/continuation.html), [LightGBM](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html) and [CatBoost](https://catboost.ai/en/docs/concepts/python-reference_train) suggest that in their documentation. (Links provided, albeit with slightly different ways of achieving this - LightGBM & CatBoost use an `init_model` argument, while XGBoost straight-up retrains an existing booster.)
null
CC BY-SA 4.0
null
2023-04-23T19:04:33.270
2023-04-23T19:04:33.270
null
null
11852
null
613863
2
null
608300
0
null
I suspect that part of the problem, at least, is your having restricted your modeling to linear associations of outcome with the continuous predictors (or linear in log-predictor). I'd recommend that you look at Chapter 2 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/genreg.html) for better ways to model continuous predictors. If you omit a true non-linear association of a continuous predictor with outcome, you have a type of omitted-variable bias. With correlated predictors, you might end up with the coefficients of other predictors picking up some of that omitted non-linearity. In any event, what's most important is to make sure that the model as a whole is OK. If your model with the log-transformed `Acq_Size_MV42` performs more poorly overall than when that predictor is not transformed, then there isn't any need to worry about the results with its log transformation. Careful evaluation of model quality is the key.
null
CC BY-SA 4.0
null
2023-04-23T19:05:57.590
2023-04-23T19:05:57.590
null
null
28500
null
613864
1
613865
null
0
39
[This post](https://stats.stackexchange.com/questions/377686/linear-regression-assume-residual-mean-is-zero) answers for when you fit your data with a line, but my teacher's notes at university claim that the mean of residuals is zero for all linear (in its parameters) models, such as the parabolic model, while no proof is given for this general case. [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression) seems to agree that linear regression is just lines and does not include parabolas (my teacher mixes up linear and linear in its parameters...)
Is the average of residuals zero in a parabolic regression?
CC BY-SA 4.0
null
2023-04-23T19:08:12.680
2023-04-23T19:42:09.967
null
null
386347
[ "regression", "linear-model" ]
613865
2
null
613864
1
null
Let $x_1$ denote the raw values of your variable, and let $x_2$ be some other feature. The model below sure looks linear. $$ y_i =\beta_0 + \beta_1x_{i1}+ \beta_2x_{i2}+\varepsilon_i $$ Indeed, such a model is linear, and the usual theorems about linear regression apply. Next, define $x_2$ by $x_2=x_1^2$. You now have a parabolic model written in a form that is known to satisfy the usual properties of a linear model.
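Since the parabolic model is linear in its parameters and includes an intercept, the least-squares residuals average to zero, which is what the question asked about. A quick numerical check on simulated data:

```r
set.seed(1)
x <- runif(100, -2, 2)
y <- 1 + 2 * x - 3 * x^2 + rnorm(100)
fit <- lm(y ~ x + I(x^2))     # parabolic model, linear in the parameters
mean(residuals(fit))          # zero up to floating-point error
```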
null
CC BY-SA 4.0
null
2023-04-23T19:42:09.967
2023-04-23T19:42:09.967
null
null
247274
null
613866
2
null
386466
1
null
The null hypothesis $H_0$ of the test is that the likelihood $f(Y|\theta)$ is right. You also assume your prior $\pi(\theta)$ is "right". The first condition "if it depends only on observed data" is saying: $T(Y)$ cannot depend on $\theta$, which denotes the unknown parameters in the likelihood $f(Y|\theta)$. The second condition "its distribution is independent of the parameters of the model" is as follows. Let $Y^{obs}$ be your observed data. Under $H_0$, there is a density for $Y^{obs}$, which is $\int f(Y^{obs}|\theta)\pi(\theta)d\theta$, and a density for $T(Y^{obs})$. Let's call this density $g(T(Y^{obs}))$. What you want is that $g(T(Y^{obs}))$ does not depend on $\theta$. That being said, the notation you used "$T(y)|\theta$" is not entirely right, as we do not condition on a $\theta$ but integrate it out in $\int f(Y^{obs}|\theta)\pi(\theta)d\theta$. Regarding your follow-up question, the practical insight of both paragraphs is: if you cannot find a $T(Y)$ that can test the specific feature of interest and satisfies the above two conditions simultaneously, the posterior predictive p-value you will get does not follow uniform(0,1) but a dome-shaped density. This non-uniformity is not regarded as an issue by Gelman himself [http://www.stat.columbia.edu/~gelman/research/published/ppc_understand3.pdf](http://www.stat.columbia.edu/%7Egelman/research/published/ppc_understand3.pdf), but a number of statisticians find it unsatisfying, as this condition is key for hypothesis testing, e.g., [https://www.tandfonline.com/doi/abs/10.1080/01621459.2000.10474310](https://www.tandfonline.com/doi/abs/10.1080/01621459.2000.10474310) [https://www.sciencedirect.com/science/article/pii/S0167947314001522](https://www.sciencedirect.com/science/article/pii/S0167947314001522)
null
CC BY-SA 4.0
null
2023-04-23T19:57:47.137
2023-04-24T13:58:02.283
2023-04-24T13:58:02.283
386348
386348
null
613867
1
null
null
0
40
I would like to fix the states identified by depmixS4 in a certain order to prevent label switching. This question was asked here ([Fix state labels of HMM in depmixS4](https://stats.stackexchange.com/questions/525105/fix-state-labels-of-hmm-in-depmixs4)), and the answer given recommended the ?depmixS4::fit documentation for how to do this. However, the documentation only has an example of how to use conrows to establish an equality constraint, not a constraint that one state's mean has to be higher than the other's. I have been running the example code given in the documentation using the "speed" data and trying to modify it to fix the slow controlled state as state 1, but nothing works. I first fit the model without the constraint and replicate the tutorial results: ``` mod1 <- depmix(list(rt~1,corr~1),data=speed,transition=~Pacc,nstates=2, family=list(gaussian(),multinomial("identity")),ntimes=c(168,134,137)) # fit the model set.seed(3) fmod1 <- fit(mod1) ``` To switch the order of the states so that the longer RT state is first, I have tried: ``` conr <- matrix(0,1,18) conr[1,11] <- 6 conr[1,15] <- 4 mod1h <- setpars(mod1,pars) fmod3h <- fit(mod1h,conrows=conr) # using free defined above ``` and ``` conr <- matrix(0,1,18) conr[1,11] <- 1 conr[1,15] <- 0 fmod3h <- fit(mod1,conrows=conr) # using free defined above ``` and several other variations on this approach, but so far I am getting only nonsensical results, plus the warning: solnp--> Solution not reliable....Problem Inverting Hessian.
inequality constraints in mixture models using conrows in depmixS4
CC BY-SA 4.0
null
2023-04-23T20:07:23.707
2023-04-23T20:07:23.707
null
null
386253
[ "r", "hidden-markov-model" ]
613868
2
null
613778
2
null
In principle, yes. Except that you can only interpret the change in the new variable as a percentage if the old variable is a proportion. That does not seem to be the case. Instead you need to do some linguistic gymnastics to find a true and clear representation of what your unit now means. I think the effort is worth it, but it is still an effort.
null
CC BY-SA 4.0
null
2023-04-23T20:08:53.570
2023-04-23T20:08:53.570
null
null
23853
null
613869
2
null
272958
2
null
Here are three possible approaches: - You could compute a median of means, treating each cluster as a single point by replacing it with its mean. - You could compute the empirical distribution function along with a confidence interval. Since your measurements have errors, you could also estimate the error in the estimated median with a Monte Carlo approach. - (Possibly simpler) You could compute a mean of means, which allows an easier estimation of the error.
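As a minimal sketch of the first option (the clusters, values, and cluster sizes below are made-up stand-ins for the actual measurements):

```r
set.seed(1)
sizes   <- sample(1:10, 20, replace = TRUE)            # 20 clusters of varying size
cluster <- rep(seq_along(sizes), times = sizes)
values  <- rnorm(length(cluster), mean = cluster %% 5)
cluster_means <- tapply(values, cluster, mean)          # one mean per cluster
median(cluster_means)                                   # the median of means
```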
null
CC BY-SA 4.0
null
2023-04-23T20:09:07.873
2023-05-25T07:40:38.087
2023-05-25T07:40:38.087
225256
164061
null
613870
2
null
613857
0
null
Within the deep learning and adjacent communities, the biggest advantage of SGD is that there is an implementation that is an order of complexity faster per iteration than standard gradient descent. The basic idea is that by taking a sub-sample of the data (potentially even 1 row!), you can show that in many cases you have the expected gradient plus zero-mean noise. This is a version of SGD, and it's also one in which the computational cost per iteration does not scale with the sample size of your data. For problems in which there are billions (or trillions!) of examples, clearly this is a make-or-break feature. The fact that SGD may or may not dodge local minima is still of interest, but as an example, a version of SGD that required computation of the exact gradient first with noise added on after would be a non-starter for most use cases, even if it were better at dodging local modes. However, there are methods like [Adam](https://arxiv.org/abs/1412.6980) that are no more expensive than "vanilla SGD" but tame the step sizes to help stabilize the algorithm. In a sense, you could think of these methods as ones in which we alter the form of the noise in order to have more desirable properties of the algorithm. Likewise, the general class of methods with momentum components is believed to help "blow past" local minima.
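A small numerical illustration of that first point for a least-squares objective: the gradient computed from a random mini-batch agrees, on average, with the full-data gradient (all the specifics below are illustrative choices):

```r
set.seed(1)
n <- 10000
x <- rnorm(n)
y <- 2 * x + rnorm(n)
w <- 0.5                                         # current parameter value
grad_full <- mean(2 * (w * x - y) * x)           # gradient of the mean squared error
grad_minibatch <- replicate(5000, {
  i <- sample(n, 32)                             # a mini-batch of 32 rows
  mean(2 * (w * x[i] - y[i]) * x[i])
})
c(full = grad_full, minibatch_average = mean(grad_minibatch))   # nearly identical
```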
null
CC BY-SA 4.0
null
2023-04-23T20:19:09.060
2023-04-23T20:43:13.473
2023-04-23T20:43:13.473
76981
76981
null
613871
1
null
null
4
1083
The question "The dean randomly selected 25 students who took the calculus final exam and looked at their scores on it. In this sample the mean was 85.7 and the standard deviation was 4.2. The population of exam scores has a distribution that is approximately normal. Construct a 90% confidence interval for the mean score of all students who took this exam. Round your answers to 2 places after the decimal point, if necessary. If it is not appropriate to construct a confidence interval in this situation, then enter "0" in both answer boxes below. 90% confidence interval: (84.26) (87.14) this answer is correct. If you press the button "Show Detailed Solution" it says this In this scenario we have the following given information: n = 25 x with line above = 85.7 s = 4.2 (standard deviation symbol, sigma) is unknown The population is approximately normally distributed. 90% is the desired confidence level of the interval. it is much faster and easier to use the TInterval function on our calculators: TInterval Inpt: Stats "x" with a line above: 85.7 Sx: 4.2 n: 25 C-Level: 0.9 Answer: (84.263, 87.137) My question is why does the explanation say the (standard deviation symbol) is unknown when the standard deviation is clearly stated in the problem?
Why does the question state standard deviation is unknown, when it is written in the question? See problem Below
CC BY-SA 4.0
null
2023-04-23T20:19:28.390
2023-04-24T20:24:08.430
null
null
386349
[ "confidence-interval", "standard-deviation", "t-confidence-interval" ]
613872
2
null
613391
1
null
When you ask for `expected` or `survival` predictions from a Cox model and don't specify `newdata`, then you get results directly from the original data and model. You can see what's going on by typing `survival:::predict.coxph` at the R command prompt. Here's the key code: ``` if (type == "expected") { if (missing(newdata)) pred <- y[, ncol(y)] - object$residuals ... } ``` Here `y` is the `Surv` object of the original data, whose last column is the set of event markers, and `object$residuals` are the corresponding martingale residuals, the differences between the observed number of events (here, 1 or 0) and the model-predicted values. [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842) have an accessible explanation of how martingales and martingale residuals play roles in the counting-process approach to Cox models. Each `pred` value in the above code is thus the model-predicted number of events for that case at the corresponding observation time. (The code for `clogit` sets `time=1` for all observations.) If you specify `type="expected"`, that's what `predict.coxph()` returns directly. As the `predict.coxph` help page says, "The survival probability for a subject is equal to `exp(-expected)`." That's a straightforward result for a continuous-time survival model, as the `expected` number of events at time $t$ is the [cumulative hazard](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function) $H(t)$, in turn related to the survival probability by $S(t)=\exp(-H(t))$. For your first question, you aren't quite right. What you get is the predicted survival probability if you repeated the experiment again (effectively to `time=1`). Cox models can make no predictions beyond the last observation time, and the way that `clogit` is coded all observation times are set to a value of 1. For your second question, the event probability when there can be at most 1 event is 1 minus the survival probability. I'd recommend using that. For your third question (related to part of your second question about using "risk"), you are correct that Cox models of "risk" are always relative to something. The choice of reference is thus critical. The `Details` section of the help page for `predict.coxph` explains the default choices, why they were made, and other options. In your application, where stratification is a programming trick to fit a conditional logit model via a Cox model routine and you presumably care more about the regression-coefficient estimates than using them to make within-stratum predictions, I'd be reluctant to use the "risk" type of predictions at all. If you do want such predictions, read the help page very carefully first.
null
CC BY-SA 4.0
null
2023-04-23T20:21:22.060
2023-04-23T20:21:22.060
null
null
28500
null
613873
1
null
null
0
5
I am trying to model predictors of infections for a group of patients. The rates of infection vary by calendar time and so i was attempting to use a calendar time scale Cox model. However patients entered the study starting in 2022 and were observed until the the end of 2022. So some patients enter as late of april of 2022. Given the staggered entry date and calendar time dependent outcome of interest, what options do i have for modeling the data? Could i change the outcome to a rate and use poisson regression?
time varying event
CC BY-SA 4.0
null
2023-04-23T20:26:55.040
2023-04-23T20:26:55.040
null
null
386351
[ "regression", "time-series" ]
613874
1
null
null
0
13
I am in need of a ZI genpois model in glmmTMB for my analysis (modelling nested annual bird counts ~ weather covariates). A random slope/intercept glmmTMB model (1 + year | site) cannot resolve the temporal autocorrelation in my time-series data, but a site-varying smooth term on year in mgcv can. Since mgcv does not have a generalized Poisson family, I am trying to import the year smoother into glmmTMB. However, the approach that I am using to do so does not work in the end, as with 31 unique sites, the final glmmTMB is far too overparameterized (>100 parameters for all the basis dimensions of each site-specific smooth, but <400 observations in the input data). Is there an easier way to convert my s(year, by=site, m=2, bs="cr") smooth in mgcv::gam into a non-penalized smoother (for given, but different edf/knots per site) in glmmTMB?
Is there a way to reassemble smooth terms from mgcv in glmmTMB?
CC BY-SA 4.0
null
2023-04-23T20:29:14.857
2023-04-23T20:29:14.857
null
null
286723
[ "generalized-additive-model", "mgcv", "glmmtmb" ]
613875
1
null
null
1
14
Suppose I have a population of N elements. For the sake of example, imagine these elements take either a value of 1 or a value of 0. My goal is to estimate the sum of the N elements. To do so, I wish to set a maximum standard error threshold, rather than set a fixed sample size, as is typically done in SRS. The idea is to sequentially select one element at a time, using SRS of n=1 for each iteration, estimate the standard error, and continue until the standard error threshold is met, then stop. This of course is not SRS itself, since the number of collected elements will be random. The standard error made at each iteration is probably not unbiased for the true standard error either. Despite these shortcomings I see it as a potentially advantageous way to collect a random sample with little prior knowledge of the population standard deviation. I am curious if this strategy has a name or has been studied before?
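A minimal simulation of the procedure described above, drawing one element at a time and stopping once the estimated standard error of the estimated total drops below a threshold (the population, the threshold, and the use of a finite-population correction are my own illustrative choices):

```r
set.seed(1)
N <- 1000
pop <- rbinom(N, 1, 0.3)        # 0/1 population values
threshold <- 15                 # target standard error for the estimated total
idx <- sample(N)                # a random permutation = sequential SRS draws
for (n in 2:N) {
  s <- pop[idx[1:n]]
  se_total <- N * sqrt(var(s) / n) * sqrt(1 - n / N)   # with finite-population correction
  if (se_total < threshold) break
}
c(sample_size = n, estimated_total = N * mean(s), true_total = sum(pop))
```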
Sample until a desired condition is met
CC BY-SA 4.0
null
2023-04-23T20:32:58.490
2023-04-23T20:32:58.490
null
null
139793
[ "sampling" ]
613876
2
null
613251
2
null
As it turns out, this model is known. It is either called the 3-parameter lognormal distribution, or sometimes TPLN (Three Parameter LogNormal). It appears to be used in hydrologic analysis. Unfortunately, [there is no global MLE](https://digitalcommons.fiu.edu/cgi/viewcontent.cgi?article=1677&context=etd) (or rather, the global MLE is useless, converging towards the trivial case of infinite standard deviation, negative-infinite mean, and an offset of the smallest order statistic of the sample.) There are however several local MLEs that appear to work well enough. I can't answer the question of if there is a more appropriate model to use for this sort of data, however.
null
CC BY-SA 4.0
null
2023-04-23T20:33:54.457
2023-04-23T20:33:54.457
null
null
100205
null
613877
1
null
null
0
50
My question is very much of a statistical nature. I have two groups of data about surface temperature: - from the past, on the surface temperature of a land cover class - from the present, on the surface temperature of the same land cover class as in the past The area of the classes changed through the years, so the lengths of the data differ, and the data also do not have a normal distribution. I wanted to check the significance of the differences between those two groups and their medians. For now I used a "Wilcoxon rank sum test with continuity correction": ``` wilcox.test(trees_1980$LST1980,trees_2021$LST2020) ``` The result gives me the overall significance of the difference between those two groups of different lengths and non-normal distributions, if I'm getting it right? (I'm not a statistician.) And, as in the title: how could I check the significance of the difference between the medians of these two groups in a statistically correct way?
Checking the statistical significance of the difference between medians of two groups with non normal distribution and different length
CC BY-SA 4.0
null
2023-04-23T20:00:06.777
2023-05-21T23:55:37.823
2023-05-21T23:55:37.823
11887
386352
[ "r", "hypothesis-testing", "median" ]
613878
1
null
null
1
26
I've been playing around with the random forest algorithm to classify a binary Y vector using classification and regression trees. Classification trees output class probabilities and regression trees output an average number between 0 and 1. By setting a threshold of 0.5 on either the probability (classification trees) or the average (regression trees), I get very similar performance in terms of classification accuracy and a similar ranking of feature importance. My question is: is it OK to use regression trees for classification tasks? If so, do you have any reference or example of such use?
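For reference, here is a hedged sketch of the comparison described above on simulated data with the randomForest package (the data, predictors, and train/test split are illustrative, not my actual data):

```r
library(randomForest)
set.seed(1)
n <- 1000
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
y <- rbinom(n, 1, plogis(X$x1 - X$x2))
train <- 1:800; test <- 801:1000

rf_class <- randomForest(x = X[train, ], y = factor(y[train]))   # classification forest
rf_regr  <- randomForest(x = X[train, ], y = y[train])           # regression forest

p_class <- predict(rf_class, X[test, ], type = "prob")[, "1"]
p_regr  <- predict(rf_regr,  X[test, ])
c(accuracy_classification = mean((p_class > 0.5) == y[test]),
  accuracy_regression     = mean((p_regr  > 0.5) == y[test]))
```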
Can you use regression trees for classification tasks in random forest?
CC BY-SA 4.0
null
2023-04-23T20:43:14.900
2023-04-23T20:47:45.923
2023-04-23T20:47:45.923
22311
131079
[ "classification", "references", "random-forest", "cart" ]
613879
1
614127
null
1
57
I have done WGCNA; based on the modules that showed association with the trait, I narrowed down key modules, from which I proceeded with a Cox regression model. The steps I followed in R are as follows: ``` fit.coxnet <- glmnet(x_tr, y_tr, family = "cox",alpha=.95,lambda = lambda_seq) plot(fit.coxnet,xvar="lambda") plot(fit.coxnet) cv.coxnet <- cv.glmnet(x_tr,y_tr, family="cox", type.measure="C", alpha = .95) plot(cv.coxnet) co_mat1 <- as.matrix(coef(cv.coxnet,s="lambda.min")) co_mat1[co_mat1[,1]!=0,] ``` where x_tr and y_tr are both from the training set. I get the `co_mat1` output for Module1 as such: ``` structure(list(gene = c("ENSG00000006659", "ENSG00000008853", "ENSG00000070808", "ENSG00000084734", "ENSG00000086570", "ENSG00000091490", "ENSG00000100228", "ENSG00000105672", "ENSG00000106772", "ENSG00000108179", "ENSG00000116985", "ENSG00000124196", "ENSG00000125740", "ENSG00000127412", "ENSG00000127946", "ENSG00000128578", "ENSG00000138622", "ENSG00000140836", "ENSG00000149635", "ENSG00000153208", "ENSG00000159228", "ENSG00000160999", "ENSG00000164086", "ENSG00000166866", "ENSG00000166922", "ENSG00000171885", "ENSG00000177138", "ENSG00000185338", "ENSG00000186510", "ENSG00000205856", "ENSG00000213214"), `co_mat1[co_mat1[, 1] != 0, ]` = c(0.111315518531421, 0.000571117486479822, 0.0891201630635949, 0.0598057712435711, -0.131144750854546, 0.182391613168578, 0.19326085436214, -0.191567837804389, 0.00796721001734388, 0.085953634941934, 0.00554035926198626, 0.0776288760670583, -0.187116328081864, -0.0327269478695253, 0.216471914721977, -0.176956226796014, 0.0230560752481754, -0.109709077697175, 0.170102961363829, -0.00023210509664439, 0.275551962171425, -0.0235573355772408, 0.389779353352752, -0.0143858241673411, 0.00550239038776184, -0.102658476410967, -0.0673222763256406, 0.0582474104970146, 0.0386658549097694, 0.0852155443694458, 0.0923302099305247)), row.names = c(NA, -31L), class = "data.frame") ``` The above is for one module, which I subsequently tested on the test data set (which I had split off from the main data set) and also on the validation data sets. I followed the same process for another module and get the coefficients below for Module2: ``` structure(list(gene = c("ENSG00000080546", "ENSG00000087589", "ENSG00000110002", "ENSG00000131482", "ENSG00000137843", "ENSG00000144821", "ENSG00000149218", "ENSG00000157388", "ENSG00000170522", "ENSG00000174500", "ENSG00000178104", "ENSG00000188522"), `co_mat1[co_mat1[, 1] != 0, ]` = c(0.0554274278043017, 0.120477299631229, 0.088005988354015, 0.366288261878325, -0.0369065377791416, 0.425559760303767, -0.0949215442527281, -0.157374794133298, -0.165132912096827, 0.0604885449805194, 0.405129979244698, 0.357046644274689)), row.names = c(NA, -12L), class = "data.frame") ``` So far, what I have read and understood is that I can compare these using ROC curves. The doubt I have is how to compare which one is better, and using which data set: the test data set or the validation data sets?
For the ROC part I have tried something like this: ``` library(ROCR) library(rms) library(pROC) #fit.cph <- cph(Surv(OS_MONTHS, Status)~ turqoise_module, data=df2, # x = TRUE, y = TRUE, surv = TRUE) fit <- coxph(Surv(OS_MONTHS, Status) ~ turqoise_module, data = df2) fit pred_scores <- predict(fit, newdata = df1, type = "risk") aa <- as.matrix(y_te) ab<- as.data.frame(aa) roc_obj <- roc(ab$status, pred_scores) ggroc(roc_obj, legacy.axes = TRUE, title = "ROC Curve for Cox Model") ``` where df2 is the data prepared using the suggested [solution](https://stats.stackexchange.com/questions/610125/using-multiple-genes-building-gene-signature-and-survival-analysis), i.e., I take the coefficients and categorize samples based on the median of those genes. This gives me one ROC curve. If I have to compare two models, what is the statistical way (and the R way) to tell whether Module1 is better than Module2, or vice versa? A more fundamental doubt: ``` pred_scores <- predict(fit, newdata = df1, type = "risk") ``` The above code works only for the data which I had split into training and testing sets; when I tried it with the validation set it failed due to dimension issues. My question: am I doing it the right way?
How to compare two signature for prediction models?
CC BY-SA 4.0
null
2023-04-23T21:37:32.657
2023-04-27T13:11:27.050
null
null
334559
[ "survival", "cox-model" ]
613880
1
null
null
1
24
Are stock fixed effects necessary for panel data where 80% of the firms are small firms and only 20% are medium to large? I suspect there would be little to no variation in the majority of the sample in such a case, and adding stock fixed effects would just kill the results. This is what I appear to understand from the paper ([https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3699777](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3699777)). In my scenario, I would probably go with industry fixed effects and time (year) fixed effects then. A bit confused. Any advice is appreciated. Thanks. Note: I have panel data for 15 years.
Fixed effects for panel data consisting mostly small stocks
CC BY-SA 4.0
null
2023-04-23T21:38:56.130
2023-04-23T21:38:56.130
null
null
263776
[ "fixed-effects-model" ]
613881
1
null
null
1
14
What results exist for computing (or approximating) the entropy of a mixture of von Mises-Fisher distributions?
What is the entropy of a mixture of von Mises-Fisher distributions?
CC BY-SA 4.0
null
2023-04-23T21:55:34.533
2023-04-23T21:55:34.533
null
null
62060
[ "circular-statistics", "von-mises-distribution" ]
613882
1
null
null
1
23
Consider the following two random variables: - a random vector $S$ with law $h$ and support $\mathcal{S}$, and - a random vector $X$ with law $c$ and support $\mathcal{X}$. Assume - $\text{Dim}(\mathcal{S}) \geq \text{Dim}(\mathcal{X})$, - $\mathcal{S}, \mathcal{X}$ are finite, and - $|\mathcal{S}| \geq |\mathcal{X}|$. I want to find a surjective function $T: \mathcal{S} \to \mathcal{X}$ such that $c(x) = \sum_{s \in \tau_x} h(s)$ for all $x \in \mathcal{X}$, where $\tau_x = \{s: T(s) = x\}$. If such a surjective function does not exist, I want to find the closest approximation, e.g., one that minimizes $\sum_{x \in \mathcal{X}}\left (c(x) - \sum_{s \in \tau_x} h(s) \right )^2 $. Is there a name for such a problem? Are there problems related or similar to this?
Transforming one distribution into another with different support
CC BY-SA 4.0
null
2023-04-23T21:58:31.747
2023-04-23T22:06:05.723
2023-04-23T22:06:05.723
228809
228809
[ "probability", "distributions", "optimization", "variational-bayes" ]
613883
2
null
613871
15
null
It could certainly have been written more clearly though the distinction you need to make here should already be clear in your mind by the time you're tackling this sort of question, or you're sure to get very muddled. They mean that the population standard deviation ($\sigma$) is unknown. It can of course be estimated from the data, e.g. by a sample standard deviation, ($s$) $-$ note the change of symbol, this change marks the distinction between the parameter and the estimate [1]. However if you're correctly reproducing the question and answer, it does muddle this important distinction itself at one point, which is bordering on the egregious in something that's supposed to be clarifying what's going on. Such slapdash writing in a detailed solution is not okay. The noise in such an estimate has an effect on the behavior of the statistic you'd use if you had the population standard deviation so the distinction matters. This will be explained in detail in a reasonable textbook. $[$Feel free to skip this parenthetic digression. I think a better answer would be along the lines of 'we don't actually know for certain whether a t confidence interval is appropriate, because there's insufficient information to judge whether the $90\%$ coverage could be attained sufficiently closely in these circumstances. If we assume that we did have such information, we would compute as follows [...]'. They clearly intend that the $n=25$ should be dispositive, but it is not, since the behavior of the interval could, for example, be impacted by substantial skewness in the population of scores -- imagine a test with mostly very easy questions but one or two very hard ones that were beyond almost every student; most students will score just below some very high score (like, oh, some value in the mid-80s, say, representing near to $100\%$ on the very easy questions), but there could be a pretty long tail to the left and a very short one on the right. In that case, $n=25$ (or indeed even substantially more) might not be quite sufficient for our specific purposes, on which we have been given no detail. It's an assumption we're making $-$likely reasonable, but an assumption nonetheless$-$ that something like this is not the case. If we wanted a one-sided interval such considerations become much more important, because the error in the other tail won't be there to help 'balance out' the error in the tail we use in a one-sided interval. It's less of an issue with a two-sided interval, but the potential problem remains. As a check, I was able to simulate plausible looking scores, with mean and standard deviation like those in the question, where in samples of size $25$ the coverage of "$90\%$" intervals was around $80\%$; lower coverage values would have been quite simple to obtain.$]$ --- [1]: conventionally, Greek letters are used for population parameters, Latin letters (normally lower case) for estimates.
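For what it's worth, here is a sketch of the kind of simulation alluded to in the last paragraph; the particular left-skewed score distribution below is my own assumption, not necessarily the one used above, so the exact coverage will differ:

```r
set.seed(1)
draw_scores <- function(n) 90 - rgamma(n, shape = 1.5, scale = 6)  # long left tail of scores
mu <- 90 - 1.5 * 6                        # true mean of this score distribution
covered <- replicate(20000, {
  x <- draw_scores(25)
  ci <- t.test(x, conf.level = 0.90)$conf.int
  ci[1] <= mu && mu <= ci[2]
})
mean(covered)   # typically lands somewhat below the nominal 0.90
```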
null
CC BY-SA 4.0
null
2023-04-23T22:00:10.220
2023-04-24T05:15:38.063
2023-04-24T05:15:38.063
805
805
null
613884
1
null
null
1
14
> Question: Suppose there is a job training program rolled out in Georgia from 2011 to 2015 . The aim of the job training program is to provide work experience for a period of 12 to 18 months to individuals who had faced economic and social problems prior to enrollment in the program. There are 60 out of 159 counties in Georgia implemented this program over the five years, and each year there are 5 counties starting this program. The program will continue as long as the county started the program. Suppose you have data on county-level employments from 2010 to 2020, and you run a two-way fixed effects event-study model to estimate the average treatment effect of the program on the treated (i.e., ATT). Will this give you an unbiased estimate of the ATT? If your answer is no, how would you estimate the ATT? Two-way fixed effects works when effects are homogeneous across units and across time periods; the comparison is fine for newly treated units vs. controls, but problematic for newly treated vs. already-treated units. Since the data only contain employment, there can be bias in the effect, e.g., from different age groups. Also, we do not know whether employed people are already treated or not. So two-way fixed effects does not work, and we should do propensity score matching first and evaluate using this formula: $$ A T E=E\left[\frac{\left(T_i-\mu_i\right) y_i}{\mu_i\left(1-\mu_i\right)}\right], \quad A T T=E\left[\frac{\left(T_i-\mu_i\right) y_i}{P\left(T_i=1\right)\left(1-\mu_i\right)}\right] $$ where $\mu_i \equiv \mu\left(x_i\right) \equiv P\left(T_i=1 \mid x_i\right)=E\left(T_i \mid x_i\right)$ is the propensity score. Is my reasoning correct? Please let me know if I need to provide more information. Thanks.
About two way fixed effect
CC BY-SA 4.0
null
2023-04-23T22:58:19.527
2023-04-23T22:58:19.527
null
null
386358
[ "econometrics", "propensity-scores" ]
613885
1
613892
null
2
62
Suppose I have a time series process $\{X_t\}$ that is strictly stationary in the sense that the joint distributions of $[X_{t_1},...,X_{t_k}]$ and $[X_{t_1+a},...,X_{t_k+a}]$ are the same for any set of integers $t_1,...,t_k$ and any integer $a$. Suppose that, in addition, I know that this process has mutually independent observations. Does this imply that $\{X_t\}$ is an i.i.d. process? My guess is that it is, because strict stationarity implies that every term in this time series has the same distribution as $X_1$, and thus when independence also holds, it is an i.i.d. process. Does this look correct?
If a strictly stationary process is also independent, does this imply i.i.d.?
CC BY-SA 4.0
null
2023-04-23T22:59:27.437
2023-05-06T09:10:07.700
2023-04-23T23:19:49.957
224576
224576
[ "time-series", "independence", "stationarity", "iid" ]
613886
2
null
126110
0
null
There are two methods of showing $\operatorname{Var}(a'\theta^*) \geq \operatorname{Var}(a'\hat{\theta})$, where $\hat{\theta} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$ is the weighted-least squares estimate of $\theta$ with the "theoretical" weights $\Sigma$, and $\theta^* = (X'WX)^{-1}X'Wy$ is an arbitrary weighted-least squares estimate. The first method links the problem to an OLS problem and then applies the Gauss-Markov theorem (as @Danilo attempted but he did not clearly finish the argument). The second method is a brute-force evaluation of the difference $\operatorname{Var}(a'\theta^*) - \operatorname{Var}(a'\hat{\theta})$. #### Method 1 Rewrite the linear model $y = X\theta + \epsilon$ as $y_0 = X_0\theta + u$, where $y_0 = \Sigma^{-1/2}y$, $X_0 = \Sigma^{-1/2}X$, $u = \Sigma^{-1/2}\epsilon$. The latter representation then corresponds to an OLS problem as the error $u$ is homoscedastic in view of $\operatorname{Var}(u) = \Sigma^{-1/2}\Sigma\Sigma^{-1/2} = I_{(n)}$. The Gauss-Markov theorem then applies: since $a'\theta^* = a'(X'WX)^{-1}X'Wy = a'(X'WX)^{-1}X'W\Sigma^{1/2}y_0$ is an unbiased linear estimate of $a'\theta$ (i.e., $E[a'\theta^*] = a'\theta$), it follows that \begin{align} \operatorname{Var}(a'\theta^*) \geq \operatorname{Var}(a'(X_0'X_0)^{-1}X_0'y_0) = \operatorname{Var}(a'\hat{\theta}). \end{align} This completes the proof. #### Method 2 Since $\operatorname{Var}(a'\theta^*) - \operatorname{Var}(a'\hat{\theta}) = a'((X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} - (X'\Sigma^{-1}X)^{-1})a$, if we can show that the matrix $(X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} - (X'\Sigma^{-1}X)^{-1} \geq 0$ (i.e., the difference is a positive semi-definite matrix), the result then follows. To this end, note that \begin{align} & (X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} - (X'\Sigma^{-1}X)^{-1} \\ =& (X'WX)^{-1}[X'W\Sigma WX - (X'WX)(X'\Sigma^{-1}X)^{-1}(X'WX)](X'WX)^{-1} \\ =& (X'WX)^{-1}X'W[\Sigma - X(X'\Sigma^{-1}X)^{-1}X']WX(X'WX)^{-1} \\ =& (X'WX)^{-1}X'W\Sigma^{1/2}[I_{(n)} - \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}(\Sigma^{-1/2}X)']\Sigma^{1/2}WX(X'WX)^{-1}, \end{align} hence it suffices to prove $I_{(n)} - \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}(\Sigma^{-1/2}X)' \geq 0$, which follows from the fact that the matrix ("hat matrix") $H := \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}(\Sigma^{-1/2}X)'$ is symmetric and idempotent. This completes the proof.
null
CC BY-SA 4.0
null
2023-04-23T23:01:53.600
2023-04-25T22:17:01.353
2023-04-25T22:17:01.353
20519
20519
null
613887
1
null
null
4
69
I'm interested in the following question. It seems pretty elementary, but I don't know where to actually find a reference on it. Suppose we have a scale (one-parameter) distribution family $\{\mathcal{F}_{\sigma}, \sigma>0\}$. And further assume $\{X_i\}$ with $i\ge 1$ are random variables from this family, such that $X_i\sim \mathcal{F}_{\sigma_i}$ (with scale parameter $\sigma_i$). If the parameter $\sigma_n\xrightarrow{n\to\infty}\sigma$, do we have $X_n\xrightarrow[weak]{n\to\infty} X$ (convergence in distribution), where $X$ follows $\mathcal{F}_{\sigma}$? Note that commonly seen distribution families, e.g. $N(0,\sigma)$, $Laplace(location=0,\ scale=\sigma)$, the discrete distribution $Geom(p)$, etc., all satisfy this.
Convergence in parameter implies convergence in distribution
CC BY-SA 4.0
null
2023-04-23T23:14:42.500
2023-04-25T03:24:23.697
2023-04-24T01:41:27.520
20519
353714
[ "probability", "convergence", "location-scale-family" ]
613888
1
null
null
0
50
I came up with this problem and I am not sure if my solution is correct. > $X, Y \sim \text{Exp}(\lambda)$. Find pdf of $W = X/(X+Y)$. Please see my attempt below: [](https://i.stack.imgur.com/A44gG.png) [](https://i.stack.imgur.com/HsS8q.png) At Eq (17) I don't know if my result is correct or not.
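Independently of the algebra, a quick Monte Carlo check can flag whether the density in Eq (17) is plausible; the sketch below assumes $X$ and $Y$ are independent (as in the derivation) and uses an arbitrary $\lambda$, with a placeholder for whatever pdf was derived:

```r
set.seed(1)
lambda <- 2                      # arbitrary rate for the check
X <- rexp(1e5, rate = lambda)
Y <- rexp(1e5, rate = lambda)
W <- X / (X + Y)
hist(W, breaks = 50, freq = FALSE, main = "Simulated W = X/(X+Y)")
# curve(my_candidate_pdf(x), from = 0, to = 1, add = TRUE)  # overlay the Eq (17) density here
```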
Find pdf of $W = X/(X+Y)$ - Result check
CC BY-SA 4.0
null
2023-04-23T23:33:13.137
2023-04-24T01:01:06.960
2023-04-24T01:01:06.960
20519
182835
[ "probability", "distributions", "mathematical-statistics", "density-function" ]
613889
1
null
null
1
14
A popular measure of "closeness" between probability distributions $\vec{p_1}$, $\vec{p_2}$ is the Bhattacharyya coefficient $\sum_j \sqrt{p_{1,j} p_{2,j}}$. Consider two statistical models $p(X|\Theta)$, $q(X|\Theta)$ with $\Theta\in \{1,\ldots, k\}$ finite, i.e. conditional probability distributions of some observation $X$ depending on the underlying parameter $\Theta$. Consider the $k\times k$ matrices of Bhattacharyya coefficients of these probability distributions. As I understand, the matrix entries are related (by some inequalities) to the failure probability when we try to distinguish different values of $\Theta$ by sampling $X$. But is there a statement that this matrix is, in some sense, actually sufficient information to judge how much observing $X$ is helpful to learn something about $\Theta$? In other words, that the Bhattacharyya coefficient matrices of $p$ and $q$ are equal if and only if the maximum success probabilities/best achievable figures of merit for some class of distinguishing/learning tasks are equal, possibly asymptotically?
Is there a sense in which collections of probability distributions with the same matrix of Bhattacharyya coefficients are "essentially the same"?
CC BY-SA 4.0
null
2023-04-23T23:39:42.610
2023-04-23T23:39:42.610
null
null
386365
[ "experiment-design", "signal-detection", "bhattacharyya" ]
613890
2
null
161876
1
null
In Keras the posterior distribution is specified explicitly. If multimodality of posteriors is expected, it has to be specified. Here is one example [http://ezcodesample.com/Multimodal.html](http://ezcodesample.com/Multimodal.html), showing how to change a unimodal posterior into a multimodal one. That means Keras or TensorFlow provides you with the best parameters for the distribution that you specify in your model, not the one that actually holds. It is your responsibility to specify it correctly.
null
CC BY-SA 4.0
null
2023-04-23T23:53:03.257
2023-04-23T23:53:03.257
null
null
386366
null
613891
2
null
613887
1
null
Yes, this is correct. By convention, in probability "$\mathcal{F}$" is reserved to denote $\sigma$-fields rather than distribution functions. For this reason, I will replace $\mathcal{F}$ in your post with $F$ below. Recall that by definition, $\{F_\tau: \tau > 0\}$ is a scale family if $F_\tau(x) = F_1(\tau^{-1}x)$ for all $x \in \mathbb{R}$, where $F_1$ is a fixed distribution function. Let $x_0$ be a continuity point of $F_\sigma$, i.e., $F_\sigma(y) \to F_\sigma(x_0)$ as $y \to x_0$. It then follows by $\sigma_n \to \sigma$ as $n \to \infty$ that $\sigma\sigma_n^{-1}x_0 \to x_0$, whence \begin{align} F_{\sigma_n}(x_0) = F_1(\sigma_n^{-1}x_0) = F_1(\sigma^{-1}\sigma\sigma_n^{-1}x_0) = F_\sigma(\sigma\sigma_n^{-1}x_0) \to F_\sigma(x_0), \end{align} which shows $F_{\sigma_n} \to_d F_\sigma$ as $n \to \infty$. This completes the proof.
null
CC BY-SA 4.0
null
2023-04-24T01:38:21.723
2023-04-25T03:24:23.697
2023-04-25T03:24:23.697
20519
20519
null
613892
2
null
613885
3
null
Yes, that is correct. Strict stationarity implies a common marginal distribution for the variables in the series, which is the ID part in IID. If you combine this with an assumption of independence you then get IID.
null
CC BY-SA 4.0
null
2023-04-24T02:52:39.463
2023-05-06T09:10:07.700
2023-05-06T09:10:07.700
173082
173082
null
613893
1
null
null
1
20
I am trying to estimate confidence bounds for prediction metrics of a binary-outcome logistic regression model. In particular, the AUC and the Brier score. I've looked at many other posts on this website, but each seems to have differing and sometimes conflicting ways of computing CIs using either bootstrapping or cross validation. I decided to use the `rms` package and the `validate` method to estimate these metrics. The `validate` method uses optimism-corrected bootstrap resampling to compute metrics. However, it does not provide confidence intervals. A [comment](https://discourse.datamethods.org/t/confidence-intervals-for-bootstrap-validated-bias-corrected-performance-estimates/1990/10) on a post on another website by the author of the package indicates the following method of obtaining confidence bounds: ``` Instantiate metric vector v Loop 1...n: Perform bootstrap resampling of original data d, store in d' Compute optimism-corrected bootstrap metrics of model w.r. to d' using B bootstrap samples Accumulate in v Compute CIs using quantiles on v ``` However, due to the larger size of my dataset (~350K samples), this method of calculating CIs takes a long time to run. I'm trying to replace parts of the bootstrapping process with cross-validation. Would it be valid to replace the computation of optimism-corrected bootstrap metrics with CV as follows? ``` Instantiate metric vector v Loop 1...n: Perform bootstrap resampling of original data d, store in d' Compute averaged K-fold CV metrics of model w.r. to d' Accumulate in v Compute CIs using quantiles on v ``` Or further, is it also appropriate to replace the outer most bootstrap sampling with CV as well? ``` Instantiate metric vector v Loop 1...n: Compute averaged K-fold CV metrics of model w.r. to d Accumulate in v Compute CIs using quantiles on v ``` --- I've seen other posts such as [this](https://stats.stackexchange.com/questions/305804/get-ci-and-p-values-for-cross-validated-performance-measures-auc-rho?rq=1) showing how to compute confidence intervals and a t-test statistic. However, I wasn't sure if this would work for bounded metrics such as AUC and the Brier score. I've also seen another [post](https://stats.stackexchange.com/questions/388941/am-i-allowed-to-average-the-list-of-precision-and-recall-after-k-fold-cross-vali) that explains that recall metrics can't be just averaged across CV folds which lays some doubts about the `Compute averaged K-fold CV metrics ...` step. Meanwhile for yet another [post](https://stats.stackexchange.com/questions/69831/confidence-intervals-for-cross-validated-statistics) I'm unable to figure out what the commenter meant by `bootstrapping of the resampled mean` and the corresponding paragraph in their paper. Appreciate any help!
Using the Bootstrap in conjunction with Cross Validation for Confidence Intervals
CC BY-SA 4.0
null
2023-04-24T03:23:11.220
2023-04-24T03:23:11.220
null
null
341873
[ "logistic", "confidence-interval", "cross-validation", "bootstrap", "auc" ]
613894
2
null
613169
2
null
Consider the case of a Poisson GLM. In this case, the variance is directly equal to the mean $$\mathbb{E}[Y\,|\,X]=\text{Var}(Y\,|\,X)=\mu_{i}$$ For example: [](https://i.stack.imgur.com/wovkN.png) Obviously for higher $\mu_{i}$, the model admits larger deviations away from $\mu_{i}$ with higher probabilities. Depending on how you define prediction error, those larger deviations result in lower error for larger $\mu_{i}$. Essentially, the model is happier with seeing large deviations from the prediction for increasing $\mu_{i}$. This is a property of the chosen model. In practice, this might not be something you want for your model. You may want the model to have a variance that is homogenous across the support of the distribution (homoscedasticity).
null
CC BY-SA 4.0
null
2023-04-24T03:32:59.487
2023-04-24T03:32:59.487
null
null
102399
null
613895
1
null
null
1
25
I'm experimenting with a standard GARCH(1, 1) model with non-normal innovations $$\epsilon_t = \sqrt h_t z_t$$ $$h_t = \omega + \alpha \epsilon_{t-1}^2 + \beta h_{t-1}$$ where $E[z_t] = 0$, $E[z_t^2] = 1$, but $z_t$ could be non-normal. I've worked through Bai, Russell, and Tiao's 2003 paper "Kurtosis of GARCH and stochastic volatility models with non-normal innovations", which provides a helpful theorem (2.1) on the various moments of the variables in the GARCH process. I took the time to verify these results on my own, so I'm fairly sure they're correct. To test the moments, I generate an ensemble of 200,000 samples, where each sample is an array of $N$ realizations of $\epsilon$. So I end up with a 200,000 by $N$ array. This allows me to quickly compare various sample moments with their theoretical values. For example, I can take the mean of a column, which should be near the theoretical $E[\epsilon_t] = 0$. This in particular works without issues. However, on some of the moments, something is happening that I can't formally explain. For example, consider the first autocovariance of the squared realizations $$\gamma_1 = E[(\epsilon_t^2 - \mu)(\epsilon_{t-1}^2 - \mu)]$$ where I've defined $\mu = \omega / (1 - \phi)$, and $\phi = \alpha + \beta$. By using the ARMA representation of $\epsilon_t^2$ (eq 2 in Bai et al) along with some tricks from Chapter 3 of Hamilton's "Time Series Analysis" (his 3.3.18 in particular), I was able to derive: $$\gamma_1 = \alpha \frac{1 - \phi^2 + \alpha \phi}{1 - \phi^2} Var(u_t)$$ where $u_t = \epsilon_t^2 - h_t$. Since I used this result later to derive Bai et al's (15), I'm pretty confident it's correct. But when I check this via simulation, comparing the sample covariance of adjacent columns to this theoretical $\gamma_1$, I only get an approximately correct result if I calculate the sample covariance among the last few columns of the array. If I instead use some of the first few columns, the sample covariance can be less than half of the value of the theoretical $\gamma_1$. I suspect this has to do with how I'm initializing the simulation. Following [this discussion](https://stats.stackexchange.com/questions/133286/initial-value-of-the-conditional-variance-in-the-garch-process), I'm setting the initial value of $h$ to be $\mu$, its unconditional expectation. I wonder if there's a more informed choice I can make; unfortunately, the literature seems to be quite sparse on this topic. In "Small sample properties of GARCH estimates and persistence", Hwang and Pereira simulate a GARCH. They mention that "the first 1000 observations are not used to avoid any problems from initial values". But what are these problems, formally? Is there any way I can avoid having to generate 1000 extra samples, only to discard them?
Inaccuracies due to initial values in GARCH(1, 1) simulation
CC BY-SA 4.0
null
2023-04-24T03:46:02.413
2023-04-24T03:46:02.413
null
null
386376
[ "time-series", "arima", "monte-carlo", "garch", "kurtosis" ]
613896
1
613918
null
1
27
After having read about the VEC model (VECM), I thought that cointegration and the VECM loading forces were strongly, not to say numerically, correlated. I thought that the more cointegration we have, the lower the cointegration $p$-value and the higher the loading forces $\alpha_i$. To confirm that, I plotted a point cloud of 100 pairs of stocks for which at least one of their 2 cointegration $p$-values is < 0.03. There are 2 cointegration tests per pair because the result depends on the variable you set as endogenous. Each point in this plot is the couple `(x_axis=avg(coint p-value), y_axis=average(abs(vecm_forces)))` [](https://i.stack.imgur.com/lrsR0.png) We see a little bit of a relationship: the higher the $p$-value, the less likely the pair is to be cointegrated, and the smaller the $\alpha_i$s. However, I find the correlation very low compared to what I expected. In particular, those 2 things mean basically the same thing, right? Whether or not the pairs return fast to the equilibrium. With this said, how do I interpret a pair that has a low cointegration $p$-value but a low force? Am I right to average the 2 loading forces and the 2 cointegration $p$-values? Maybe I should take the max and the min of them, respectively? Should I compare the loading forces themselves, or would their $t$-values be easier to interpret (the $t$-value of the loading force shows a stronger correlation with the cointegration $p$-value)? I basically want to find a strong correlation between both things so that the second dimension disappears, because they mean the same thing (unless I'm wrong and I'm missing some interpretation?). The goal, in the end, is to sort the pairs by a single homogeneous quantity that represents the "speed" at which they go back to their equilibrium, in the context of a pairs trading strategy.
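For concreteness, here is a rough Python sketch of how the two numbers behind each plotted point could be computed for one pair. It uses `statsmodels` and a simulated cointegrated pair as stand-ins; the actual pipeline in the question may differ:

```python
import numpy as np
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import VECM

# Simulated cointegrated pair, purely for illustration
rng = np.random.default_rng(1)
common = np.cumsum(rng.standard_normal(1000))   # shared stochastic trend
x = common + rng.standard_normal(1000)
y = 0.8 * common + rng.standard_normal(1000)

# Two Engle-Granger tests, one per choice of dependent variable
p_xy = coint(x, y)[1]
p_yx = coint(y, x)[1]

# VECM loading coefficients (alpha), one per equation
res = VECM(np.column_stack([x, y]), k_ar_diff=1, coint_rank=1).fit()
alphas = res.alpha.ravel()

print("cointegration p-values:", p_xy, p_yx)
print("loading coefficients:  ", alphas)
print("point for the scatter: ", np.mean([p_xy, p_yx]), np.mean(np.abs(alphas)))
```

This only illustrates the bookkeeping behind averaging the two $p$-values and the two $|\alpha_i|$; it does not settle which summary (mean, max/min, or $t$-values) is the right one to sort on.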
Cointegration $p$-value and VECM loading forces show low correlation
CC BY-SA 4.0
null
2023-04-24T03:49:51.690
2023-04-24T14:45:40.537
2023-04-24T14:45:40.537
372184
372184
[ "cointegration", "vector-error-correction-model" ]
613897
2
null
300399
1
null
The problem is rooted in not importing the train/test data properly. As you can see in hamza's answer, the imported data is parsed as float data. Therefore, you just need to add `astype("float")` when importing the dataset:

```
Xtr.reshape((50000, 3, 32, 32)).transpose(0, 2, 3, 1).astype("float")
Xte.reshape((10000, 3, 32, 32)).transpose(0, 2, 3, 1).astype("float")
```
null
CC BY-SA 4.0
null
2023-04-24T03:54:29.410
2023-04-24T03:54:29.410
null
null
386380
null
613898
1
null
null
1
9
I have a retrospective study evaluating the effectiveness of an intervention in reducing COVID infections in a subgroup of patients. Using a control and an intervention group, I was hoping to use a calendar-based time scale to avoid having to account for the variation in local COVID infection rates (as noted by this paper [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9933854/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9933854/)). However, this intervention was offered on a rolling basis in the first half of the year, and so I don't have a uniform start date for each patient. Would I be able to apply a time-dependent covariate so that only the population at risk on that calendar date is analyzed at the time of each event? I have a little under 1000 patients, but here is a sample of the data, with time of entry and time of last visit along with dates of COVID infections.
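If it helps to see the data layout, here is a minimal sketch of the counting-process (start, stop] format that a calendar-time analysis with staggered entry typically uses. Python's `lifelines` is used purely as an illustration and all numbers and column names are invented; in R the same layout feeds `survival::coxph` with `Surv(start, stop, event)`. A patient whose intervention status changes during the year would simply contribute more than one row.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)

# Hypothetical patients with staggered (rolling) entry on a calendar day scale
n = 200
entry = rng.integers(0, 180, size=n)        # calendar day of entry into the cohort
treated = rng.integers(0, 2, size=n)        # intervention indicator (made up)
exit_day = entry + rng.integers(60, 365, size=n)
event = rng.integers(0, 2, size=n)          # infection observed at `stop`? (made up)

long_df = pd.DataFrame({
    "id": np.arange(n),
    "start": entry,     # interval opens at the patient's own entry date
    "stop": exit_day,   # interval closes at infection or last visit
    "treated": treated,
    "event": event,
})

# With (start, stop] intervals on calendar time, the risk set at each event
# date only contains patients whose intervals cover that date.
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()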
Can I apply a time dependant covariate to a cox regression with calendar-based time scale using data with staggered entry
CC BY-SA 4.0
null
2023-04-24T04:00:13.720
2023-04-24T13:52:12.637
null
null
386351
[ "regression", "survival", "censoring", "dependent-variable" ]
613900
1
null
null
3
47
Suppose I was tasked with finding out whether the mean difference between the current and starting salary of employees is greater than 15,000. What should my null and alternative hypotheses be? Can they be: Ho: Current - Starting is less than or equal to 15,000. Ha: Current - Starting is greater than 15,000. Will it be a right-tailed test? Also, does the order of the difference matter (current minus starting vs. starting minus current)? I was told that paired t-tests should always have a null of equal to zero, so I'm confused.
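As an illustration of the mechanics (the salary numbers below are invented, not from any real dataset), the one-sided version amounts to testing whether the mean of the paired differences exceeds 15,000:

```python
import numpy as np
from scipy import stats

# Hypothetical paired salaries, for illustration only
starting = np.array([40_000, 52_000, 45_000, 61_000, 48_000, 55_000])
current  = np.array([58_000, 70_000, 59_000, 80_000, 66_000, 69_000])

diff = current - starting   # this order matches "Current - Starting"

# H0: mean(diff) <= 15,000   vs   Ha: mean(diff) > 15,000  (right-tailed)
t_stat, p_value = stats.ttest_1samp(diff, popmean=15_000, alternative="greater")
print(t_stat, p_value)
```

Reversing the order of subtraction flips the sign of every difference, so the same question would then be phrased as a left-tailed test against -15,000.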
Null Hypothesis in Paired t-test
CC BY-SA 4.0
null
2023-04-24T04:26:40.113
2023-04-24T21:04:48.320
2023-04-24T20:12:27.780
56940
370191
[ "hypothesis-testing", "distributions", "t-test", "paired-data" ]
613901
1
null
null
0
15
I was trying to fit a Poisson rate regression and predict the rate using StanMoMo in R. However, when I extracted mufor (the variable in which the predictions are saved) from the result, a few of the predicted rates were greater than 1, which was very strange. When I tried the sample dataset that the author provided, there was no such problem. So I am wondering why this is the case. Please give me some suggestions, thank you! I may not be able to provide the original dataset; I am still trying my best to generate a reproducible example.
Is it possible to have rate predictions which are greater than 1 with Poisson regression in r?
CC BY-SA 4.0
null
2023-04-24T05:40:38.323
2023-04-24T05:40:38.323
null
null
368723
[ "r", "poisson-regression", "stan", "offset" ]
613902
1
null
null
1
760
Can we transform the variables of a regression in MLR to sine, cosine, or tan? If so, how should the results be interpreted if I get a good $R^2$ and a good adjusted $R^2$?
Can we do sine , cosine , tan and cot transformation in regression?
CC BY-SA 4.0
null
2023-04-24T05:43:00.827
2023-04-25T13:42:47.443
2023-04-25T06:18:31.383
53690
325928
[ "regression", "data-transformation", "feature-engineering", "trigonometry" ]
613903
1
613904
null
2
8
Let's say I have a list of lists of integers, with a large variance in length, and I want to get the weighted mean of each record. How would I go about doing that? For example:

```
record_a = [1]
record_b = [8, 7, 5, 6, 7, 2, 9]
record_c = [8]
record_d = [8, 4, 5]
record_e = [8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
```

There will always be fewer than 10,000 records in this list. To clarify: if, for instance, I want to compare the mean of record E and record A, there is a large difference in the amount of data, and I am not sure how to weight to account for that.
how to weight means based on data set size
CC BY-SA 4.0
null
2023-04-24T05:47:09.497
2023-04-24T06:12:08.320
2023-04-24T05:48:27.243
386384
386384
[ "distributions" ]
613904
2
null
613903
1
null
Use the mean and standard error for each record. For example, record b: the mean is 6.29 and the standard error ($=\frac{\text{sample standard deviation}}{\sqrt{n}}$) is 0.87. Contrast this to record d: the mean is 5.67 and the standard error is 1.2. So, if we wanted to compare them, we can say we are a lot more certain about the mean of record b, compared with that of record d. There is a lot of available information on standard errors on the Internet.
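A quick way to compute these quantities (illustrative only, using two of the records from the question; records with a single value have an undefined sample standard error):

```python
import numpy as np
from scipy import stats

records = {
    "b": [8, 7, 5, 6, 7, 2, 9],
    "d": [8, 4, 5],
}

for name, values in records.items():
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    se = stats.sem(values)   # sample sd / sqrt(n), ddof=1 by default
    print(f"record {name}: mean = {mean:.2f}, standard error = {se:.2f}")
```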
null
CC BY-SA 4.0
null
2023-04-24T06:12:08.320
2023-04-24T06:12:08.320
null
null
369002
null
613905
1
614146
null
1
76
Suppose I have an AR(1) process $X_t=aX_{t-1}+e_t$, where $e_t$ is white noise with zero mean and finite variance. Under what conditions is $\{X_t\}$ strictly stationary, in the sense that the joint distributions of $[X_{t_1},...,X_{t_k}]$ and $[X_{t_1+h},...,X_{t_k+h}]$ are the same for any set of integers $t_1,...,t_k$ and any integer $h$ (writing the time shift as $h$ to distinguish it from the AR coefficient $a$)?
When is an AR(1) process strictly stationary?
CC BY-SA 4.0
null
2023-04-24T06:31:48.913
2023-04-25T22:18:29.543
null
null
224576
[ "stationarity", "autoregressive", "joint-distribution" ]
613907
1
null
null
0
15
I am trying to retrieve the rotation matrix from the covariance matrix of a rotated 3D point cloud, using an SVD decomposition (as done in [SimNet](https://arxiv.org/abs/2106.16118) and [MVTrans](https://arxiv.org/abs/2302.11683)). Here is how I computed the covariance matrix from the 3D point cloud (CAD model of the rotated object):

```
def cov_mat(self, rot, pos):
    """Returns the 3x3 covariance matrix of the point cloud in camera coordinates"""
    point_cloud = self.raw / 1000.0
    points_cam = np.dot(point_cloud, rot.T)
    centered_points_cam = points_cam - np.mean(points_cam, axis=0)
    cov = np.cov(centered_points_cam, rowvar=False)
    return cov
```

Here is the function to retrieve the rotation from the covariance:

```
def _solve_for_rotation_from_cov_matrix(self, cov_matrix):
    assert cov_matrix.shape[0] == 3
    assert cov_matrix.shape[1] == 3
    U, D, Vh = np.linalg.svd(cov_matrix, full_matrices=True)
    d = (np.linalg.det(U) * np.linalg.det(Vh)) < 0.0
    if d:
        D[-1] = -D[-1]
        U[:, -1] = -U[:, -1]
    # Rotation from world to points.
    rotation = np.eye(4)
    rotation[0:3, 0:3] = U
    return rotation
```

Using this implementation, I do not get the correct (ground-truth) rotation matrix; can you explain why?
Rotation Matrix from Covariance of 3D point-cloud
CC BY-SA 4.0
null
2023-04-24T07:01:54.447
2023-04-24T07:01:54.447
null
null
386226
[ "covariance", "eigenvalues", "svd", "rotation" ]
613908
2
null
515227
1
null
You cannot use an IV that is time-invariant in a fixed effects approach. Citing [Wooldridge](https://www.statalist.org/forums/forum/general-stata-discussion/general/1618845-can-i-use-time-invariant-instrument-variable-and-country-fixed-effects-in-pooled-cross-sectional-analysis): > It's clear that a time-constant IV cannot be used in fixed effects, so one shouldn't try. As Sebastian noted, Stata will drop collinear variables, but not always the one that it should. Whenever one does fixed effects manually, this can happen.
null
CC BY-SA 4.0
null
2023-04-24T07:14:14.483
2023-04-24T07:14:14.483
null
null
288885
null
613910
2
null
613902
13
null
Yes. In particular, sine and cosine are not just convenient but often natural as ways of handling predictors in problems with periodic structure, in which at least some variation can be related to time of day (clock), time of year (calendar), or spatial direction (compass). See for example [this paper](https://journals.sagepub.com/doi/pdf/10.1177/1536867X0600600408) as an introductory review. Interpretation is often straightforward, so that with compass direction in particular, sines and cosines are associated with East-West and North-South effects respectively, assuming that direction is measured conventionally with $0^\circ \equiv 360^\circ$ as North. If you are modelling e.g. more or less direct or indirect effects of time of year, reflecting climatic variations, you may be lucky to find that even one (sine, cosine) pair of predictors works very well or at least helpfully. Other way round, modelling say effects of seasonality in socio-economic data may require several (sine, cosine) pairs, thus resisting simple interpretation. For that and other reasons people in economics or business may be more inclined to reach for a bundle of indicator variables to match religious or civic holidays, vacation times, and so forth. Time of week, although often obvious in effect, also tends to call for indicator variables, not sines and cosines. Tangent and cotangent are much less common as transformations in regression in my experience. I would be especially wary about working with tangents if arguments were even close to odd multiples of $\pi/2$ radians or its equivalent. There is a similar pitfall with the cotangent for arguments that are even multiples of $\pi$ radians.
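As a small illustration of the (sine, cosine) pairing for a single annual harmonic, here is a sketch in Python (simulated data; the variable names, amplitudes, and use of statsmodels are all assumptions for the example, and the same construction works in any package):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

day_of_year = rng.integers(1, 366, size=500)
angle = 2 * np.pi * day_of_year / 365.25

# Simulated response with one annual cycle plus noise
y = 10 + 3 * np.cos(angle) - 2 * np.sin(angle) + rng.normal(scale=1.5, size=500)

X = sm.add_constant(np.column_stack([np.cos(angle), np.sin(angle)]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # intercept, cosine coefficient, sine coefficient

# The fitted cycle is amplitude * cos(angle - phase)
b_cos, b_sin = fit.params[1], fit.params[2]
amplitude = np.hypot(b_cos, b_sin)
phase = np.arctan2(b_sin, b_cos)
print(amplitude, phase)
```

Converting the two coefficients to an amplitude and a phase is often the easiest way to report such a fit: the amplitude measures the size of the seasonal swing and the phase locates its peak within the year.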
null
CC BY-SA 4.0
null
2023-04-24T08:00:24.090
2023-04-25T08:43:07.920
2023-04-25T08:43:07.920
22047
22047
null
613911
1
null
null
0
12
I have read in [this paper](https://vwo.com/downloads/VWO_SmartStats_technical_whitepaper.pdf), about how to do Bayesian AB testing for conversions in digital advertising. One can start with uniform priors, and update them to Beta distributions based on `num_trials_A`, `num_successes_A`, `num_trials_B` and `num_successes_B`. However it seems that the authors take the conversion rates to be constants there. What if there is some seasonality, or a trend in the rates over time? Is there something about modifying the test for this in the literature? My intuition is to check whether I can get this seasonality/trend to go away by taking the difference between the two sample rates.
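For reference, here is a minimal sketch of the static Beta-Binomial update from that setup, simply repeated per time period so that any drift in the posteriors across periods becomes visible. The weekly counts are invented, and this is only the constant-rate model applied per slice, not a genuinely time-varying model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly counts (trials, successes) for variants A and B
weeks = [
    {"A": (1000, 48), "B": (1000, 55)},
    {"A": (1200, 62), "B": (1180, 70)},
    {"A": (900, 40),  "B": (950, 52)},
]

n_draws = 100_000
for i, week in enumerate(weeks, start=1):
    (nA, sA), (nB, sB) = week["A"], week["B"]
    # Uniform Beta(1, 1) prior -> Beta(1 + successes, 1 + failures) posterior
    draws_A = rng.beta(1 + sA, 1 + nA - sA, size=n_draws)
    draws_B = rng.beta(1 + sB, 1 + nB - sB, size=n_draws)
    print(f"week {i}: P(rate_B > rate_A) ~ {np.mean(draws_B > draws_A):.3f}")
```

If randomization keeps the two arms balanced within each period, common seasonal effects should move both posteriors together, which is one way to look at whether the weekly comparisons stay consistent.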
Bayesian AB testing for Bernoulli Events with Time Patterns
CC BY-SA 4.0
null
2023-04-24T08:09:15.860
2023-04-24T08:09:15.860
null
null
186119
[ "probability", "hypothesis-testing", "bayesian" ]
613912
2
null
586244
1
null
I stumbled upon this question by looking for something similar; here's what I know: Your question highlights two different types of forecasting: - Point Forecast: In this interpretation, a forecast horizon of 30 days means predicting the value of the time series on the 30th day in the future. This approach is used when you only care about a specific point in time. - Multi-Step Forecast: In this interpretation, a forecast horizon of 30 days means predicting the values for all 30 days in the future. There are two main types of multi-step forecasting: - Direct Multi-step Forecast: Separate models are created for each step in the forecast horizon, e.g., one model for predicting the value one day ahead, another for two days ahead, and so on, up to 30 days ahead. - Iterative (Recursive) Multi-step Forecast: A single model is used to iteratively predict future values. The model predicts the value one step ahead, and then this predicted value is fed back into the model to predict the value two steps ahead, and so on. Both interpretations are correct and correspond to different types of forecasting. The choice of approach depends on your specific use case and requirements. Indeed, there are several other multi-step forecasting methods that may be relevant depending on your problem and dataset. You can find more information on various multi-step forecasting techniques in this article: [https://towardsdatascience.com/6-methods-for-multi-step-forecasting-823cbde4127a](https://towardsdatascience.com/6-methods-for-multi-step-forecasting-823cbde4127a) I hope this helps!
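A tiny sketch contrasting the two multi-step strategies on a simulated AR(1)-type series (everything here is made up and deliberately simplified; a real workflow would use a proper forecasting library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(1) series
n, phi = 500, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

horizon = 30

def fit_lag_model(y, lag):
    """Least-squares fit of y[t] on y[t - lag] (one slope, no intercept)."""
    x, target = y[:-lag], y[lag:]
    return float(np.dot(x, target) / np.dot(x, x))

# Iterative (recursive): one one-step model, fed back into itself
phi1 = fit_lag_model(y, 1)
recursive = []
last = y[-1]
for _ in range(horizon):
    last = phi1 * last
    recursive.append(last)

# Direct: one model per horizon h, each predicting y[t + h] from y[t]
direct = [fit_lag_model(y, h) * y[-1] for h in range(1, horizon + 1)]

print(recursive[:5])
print(direct[:5])
```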
null
CC BY-SA 4.0
null
2023-04-24T08:14:35.240
2023-04-24T08:14:35.240
null
null
141021
null
613913
1
null
null
1
19
I am currently working with longitudinal data using a linear mixed model (`lme4` R package). The linearity plot (produced with the `check_model` function from the `performance` R package) indicates an upward trend. Would one consider this violation serious, given that the trend is monotonic and does not show a funnel shape, etc.? For linear regression it is recommended to examine whether fixed terms or interactions between the terms are missing. In the case of an LMM, could the plot additionally indicate that the model is lacking a random effect? [](https://i.stack.imgur.com/s7qw5.jpg)
Upward trend in Residuals vs. Fitted - Violation of linearity?
CC BY-SA 4.0
null
2023-04-24T08:50:18.410
2023-04-24T08:55:49.233
2023-04-24T08:55:49.233
277811
277811
[ "regression", "lme4-nlme", "residuals", "assumptions", "linearity" ]
613914
2
null
613341
0
null
In the below code I reflect the advice in the 2nd paragraph of EdM's answer, "to generate simulations to illustrate the vagaries of fitting a parametric Weibull model" by removing the intermediate Cox model shown in the OP. The code below takes into account the joint distribution of shape and scale values from the parametric Weibull fit by using the `mvrnorm()` function from the `MASS` package to simulate from a bivariate normal distribution with mean and covariance matrix equal to the estimates obtained from fitting the Weibull distribution. I use the `vcov()` function to extract the covariance matrix from the Weibull fit. The code generates new survival times using the simulated shape and scale parameters. Code: ``` library(survival) library(MASS) # Simulate survival data n <- 100 cens <- rbinom(n, size = 1, prob = 0.2) x1 <- rnorm(n) x2 <- rnorm(n) data <- data.frame(cens, x1, x2) # Estimate parameters of Weibull distribution t <- rexp(n, rate = 1/5) fit <- fitdistr(t, "weibull") shape <- fit$estimate[1] scale <- fit$estimate[2] # Generate new survival times based on estimated parameters mean_est <- c(shape, scale) cov_est <- vcov(fit) new_params <- mvrnorm(n = n, mu = mean_est, Sigma = cov_est) new_t <- rweibull(n = n, shape = new_params[,1], scale = new_params[,2]) # Fit survival curve fit <- survfit(Surv(new_t, cens) ~ 1, data = data) sfit <- summary(fit) tseq <- seq(min(sfit$time), max(sfit$time), length = 100) surv <- survfit(Surv(new_t, cens) ~ 1, data = data.frame(new_t = tseq)) haz <- -diff(log(surv$surv))/diff(surv$time) # Plot survival and hazard curves, and histogram of simulated survival times par(mfrow=c(1,3)) plot(surv, xlab = "Time", ylab = "Survival Probability", main = "Survival Function") plot(surv$time[-1], haz, type = "l",xlab = "Time", ylab = "Hazard Function", main = "Hazard Function") hist(new_t, breaks = 20, xlab = "Survival Time", ylab = "Frequency", main = "Simulated Survival Times") ```
null
CC BY-SA 4.0
null
2023-04-24T08:59:19.183
2023-04-27T13:02:42.940
2023-04-27T13:02:42.940
378347
378347
null
613915
1
null
null
1
17
I used the multiple imputation method to fill in my missing data points in a big dataset. My dataset now contains values for 5 imputations. I know there is an option to analyze with the pooled value of the 5 imputations, but how do I create a dataset with the pooled values of the multiple imputations?
How to create one pooled datafile after (5) multiple imputations in order to fill in the missing values in SPSS?
CC BY-SA 4.0
null
2023-04-24T09:08:07.780
2023-04-24T12:22:09.037
2023-04-24T12:22:09.037
237901
383653
[ "spss", "missing-data", "multiple-imputation" ]
613917
2
null
613047
1
null
## What are we testing in measurement invariance? The first theorem of [Meredith (1993) (p. 528)](https://link.springer.com/article/10.1007/BF02294825) states that a random variable $X$ is measurement invariant with regards to selection on $V$ if and only if they are locally independent when conditioned on $\eta$, for every $\eta$ in the sample space. Consider now that $X$ are your depression scale items, $\eta$ is your latent depression construct score and $V$ is sex. What measurement invariance implies is that, in the diagram $V \rightarrow \eta \rightarrow X$, conditioning on the mediator $\eta$ should block any effect that $V$ could have on $X$, i.e. there is no direct effect from $V$ to $X$. The only way that sex ($V$) can influence depression item scores ($X$) is through a difference in the latent construct score ($\eta$). In scalar invariance (or strong factorial invariance), both the slopes of the effect of the latent common scores on the item scores and the item intercepts are assumed equal across groups. Under this assumption, if girls report higher average scores in the observed items $X$, it must be due to higher average latent depression scores $\eta$. ## Your question Note that you incorrectly state > This does not hold for my analysis. Because girls report more depression, their factor intercepts are higher. Thus, this measure fails the measurement invariance test. Girls having higher latent factor intercepts does not violate measurement invariance nor scalar invariance. As we saw in the diagram, there is no problem with sex influencing directly on the latent factor score ($V \rightarrow \eta$). The issue is when sex would influence the item scores directly without reflecting a change in the latent factor ($V \rightarrow X$). Then, your question is no longer a statistical one, but at least a psychological one if not an epistemological one. What you would like to know is whether the average increase in item scores for girls is reflecting an increase in actual latent depression score, or just a difference in social desirability for reporting in contrast to boys. Also note that you say that girls over-report depression, which already reflects some unwarranted implicit assumptions (i.e. it is not boys that under-report depression). Without further assumptions which would strongly depend on domain knowledge (that might not even exist), you cannot turn this into a testable statistical problem. For instance, what specific functional form would you expect for the effect of desirability on reporting? Should it affect all sub-scales or items in the same way? Would only a subgroup of girls be affected? etc.
null
CC BY-SA 4.0
null
2023-04-24T09:17:15.420
2023-04-25T13:54:22.607
2023-04-25T13:54:22.607
180158
180158
null
613918
2
null
613896
1
null
> I thought that the more we have cointegration, the less is the coint. $p$-value, the higher are the loading forces $\alpha_i$ First, cointegration is a discrete YES/NO phenomenon. Either a system of integrated variables is cointegrated or it is not. It is not a gradual phenomenon, thus we cannot have more or less of it. Second, there is a distinction between effect size and statistical significance. You can have cointegrated systems with their VECM representations with fast error correction (a large loading coefficient, a short half-life) or slow error correction (a small loading coefficient, a long half-life). Depending on the sample size and the error variance, you could have small or large $p$-values in each case. All else being equal, faster error correction would also yield smaller $p$-values* for a given sample size, but "all else" is not always equal. *As also more generally, larger effect sizes yield smaller $p$-values, ceteris paribus.
null
CC BY-SA 4.0
null
2023-04-24T09:18:09.020
2023-04-24T09:18:09.020
null
null
53690
null
613919
1
null
null
0
27
I'm fitting a linear regression between two variables, and to reduce the problem of heteroskedasticity I have log-transformed the outcome variable y. However, this makes it difficult for me, a non-statistician, to understand how I should back-transform to the MAPE (mean absolute percentage error), as it is a metric that is wanted for the comparison. I have understood that, in general, back-transformations should be done on the final values, and not on partial calculations. But I don't know what form this transformation should take: T(mean absolute percentage error on the log scale) = MAPE of y. Any help is appreciated. Edit: I had mistakenly written percentile instead of percentage; now changed.
regression question: backtransforming MAPE for log(y)
CC BY-SA 4.0
null
2023-04-24T09:35:50.660
2023-04-24T10:21:04.923
2023-04-24T10:21:04.923
379186
379186
[ "regression", "logarithm", "mape" ]
613921
2
null
596182
0
null
This seems like a multi-class analogue of the problem addressed by [King and Zeng (2001).](https://www.jstatsoft.org/article/download/v008i02/904) As is mentioned in the comments, class imbalance is much less of a problem than many believe. After all, if an event is rare, the predictive model should be skeptical of such an event occurring. However, when it comes to collecting data, it can become a nightmare to have to slug through so many cases to get to a point where you have enough observations of the minority classes to do quality work with reasonable estimates. That’s where King and Zeng (2001) come in. Their paper addresses how to sample in situations where events are rare so that you do not waste your time on the majority cases; then you account for the artificial balancing. King and Zeng (2001) address the binary case, but this philosophy makes a lot of sense and might lead somewhere useful. (A crucial difference between King and Zeng (2001) and other artificial balancing like downsampling is that King and Zeng (2001) operate at the phase of data collection instead of data modeling. In that regard, they treat class imbalance as an issue for experimental design and how to be efficient in collecting data that might be time-consuming and/or expensive to acquire, rather than discarding data like downsampling does once imbalanced data have been collected.) REFERENCE King, Gary, and Langche Zeng. "Logistic regression in rare events data." Political analysis 9.2 (2001): 137-163.
null
CC BY-SA 4.0
null
2023-04-24T10:13:02.917
2023-04-24T10:13:02.917
null
null
247274
null
613922
1
614453
null
4
101
This question arose in a real Argentine card game called [truco](https://en.wikipedia.org/wiki/Truco). Sometimes we need to choose who will play because there are too many of us, so we deal cards and the first of 4 players to get a king will join the group and play. During the game, I was wondering whether this way of selecting the player is fair. So this is not homework at all! There is a stack of 40 cards containing 4 kings. 4 participants receive cards, one after another, and we stop as soon as a participant gets a king and wins. Each card that is dealt stays on the table; it is not returned to the stack. For each participant, what is the probability of winning? I think the solution is as follows (let me know if I am right). For the first round of dealing the cards: First participant: 1/10 of getting a king. Second participant: (36/40) * (4/39). Third participant: (36/40) * (36/39) * (4/38). Fourth participant: (36/40) * (36/39) * (36/38) * (4/37).
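A quick Monte Carlo check of this kind of reasoning (illustration only): deal the shuffled deck to players 1-4 in turn and record who receives the first king.

```python
import numpy as np

rng = np.random.default_rng(0)

n_players, n_cards, n_kings = 4, 40, 4
n_sims = 200_000

deck = np.array([1] * n_kings + [0] * (n_cards - n_kings))

wins = np.zeros(n_players, dtype=int)
for _ in range(n_sims):
    shuffled = rng.permutation(deck)
    first_king = np.argmax(shuffled)     # position of the first king (0-based)
    wins[first_king % n_players] += 1    # cards go to players 1, 2, 3, 4, 1, 2, ...

print(wins / n_sims)
```

The estimated winning probabilities come out unequal, decreasing from player 1 to player 4.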
Players take turns to draw cards until the winner gets the first King, does it help to go first?
CC BY-SA 4.0
null
2023-04-24T10:29:56.690
2023-05-09T14:21:08.230
2023-04-28T16:54:44.823
22228
386398
[ "probability", "games" ]
613923
1
null
null
0
7
The problem I'm facing is as follows: There are 42 regions in a health system, each of which administers 5 hospitals. 12 of these regions have an active intervention that is applied by all hospitals in the region, to try to reduce the readmission rate. I want to assess the association of the intervention with readmissions at a point in time. My mixed effects model specifies the intervention variable (0 vs 1) for individual hospitals, adjusts for fixed effects (continuous and categorical covariables) at the individual hospital level, and clusters hospitals into regions, allowing for random effects at the region level. Average readmission rate is the outcome variable. I have concerns that the intervention, as it is designed at the level of regions, might be correlated with any regional random effects, and potentially make the model hard to interpret. Is this a valid concern (or are there any other issues with my approach)? If so, what might be a good alternative method? Many thanks in advance!!
Mixed vs fixed effects for analysing clustered data where interventions are specified at cluster level but applied to individuals?
CC BY-SA 4.0
null
2023-04-24T10:51:21.683
2023-04-24T10:51:21.683
null
null
386400
[ "regression", "mixed-model" ]
613924
1
null
null
0
33
I need help with the following. Using our alternative data for external clients, we have built a model for identifying fraudulent customers (classification). We used the auto-ml package to arrive at the best model and gave the results back by scoring the client's holdout data, which did not include the target. Now, the client wants us to check model stability by performing 50 bootstrap iterations, and we are not sure how to benchmark the stability after the bootstrapping. Any guidance would be appreciated. Note: the client is a banking institution.
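One common reading of this request is: refit the model on 50 bootstrap resamples of the training data, score the same holdout each time, and report the spread of the performance metric. A generic sketch of that idea, with a stand-in dataset and a plain logistic regression because the actual auto-ml model isn't specified here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Stand-in data and model; replace with the real training data and auto-ml model
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

aucs = []
for b in range(50):
    Xb, yb = resample(X_train, y_train, replace=True, random_state=b)
    model = LogisticRegression(max_iter=1000).fit(Xb, yb)
    aucs.append(roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1]))

aucs = np.array(aucs)
print(f"AUC over 50 bootstraps: mean {aucs.mean():.3f}, sd {aucs.std():.3f}, "
      f"2.5%-97.5% range {np.percentile(aucs, [2.5, 97.5])}")
```

The spread (standard deviation or percentile range) of the metric across the 50 refits is then the "stability" number to report, possibly alongside the variability of the chosen features or coefficients if the client also cares about model composition.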
Evaluating the model stability using bootstrapping
CC BY-SA 4.0
null
2023-04-24T10:55:29.293
2023-05-01T17:03:36.793
null
null
386401
[ "machine-learning", "bootstrap", "uncertainty" ]
613925
1
null
null
0
9
I'm working on a project where I need to model continuous positive data. I found that a Tweedie distribution with a variance power around 1.2 and a dispersion parameter just above 1 fits my data quite well, except for the high probabilities of exact zeros. There are no zeros in the training data since actual zeros would be missing (i.e., such values are not missing at random). However, there could be actual zeros among the missing values, but I estimate that they should be at most around half a percent. Is there any other model or approach that allows me to fit a distribution that has a variance structure similar to Tweedie with a variance power of 1.2 (i.e., variance increases not much more than linearly with the mean) but doesn't produce excess zeros? Profile likelihood suggests a variance power of 1.6, which would be more reasonable regarding the number of zeros, but when looking at the data (after grouping by estimated expected values) it seems that the variance increases less rapidly than that, closer to a variance power of 1.2. I'd also be fine with not allowing any zeros at all. However, Gamma and inverse Gaussian distributions have even higher variance powers than the Tweedie with a variance power of 1.6, which already looked too high. Any advice or suggestions on alternative models, transformations, or approaches that could help with this problem would be really helpful! Background information: - One goal of my analysis is to estimate confidence intervals of aggregate totals using multiple imputation. - I'm currently using Tweedie GLM with cubic splines as predictors. - In my data, there is higher variance in the group with the smallest expected values than for the following group, but after this, the variance appears to increase with the expected values.
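For reference, the exact-zero mass of a Tweedie distribution with $1 < p < 2$ follows directly from its compound-Poisson representation, $P(Y=0) = \exp\!\big(-\mu^{2-p}/(\phi\,(2-p))\big)$, which makes the trade-off between the two candidate variance powers explicit. A small sketch with placeholder values (the actual fitted means and dispersion would go here):

```python
import numpy as np

def tweedie_prob_zero(mu, phi, p):
    """P(Y = 0) for a Tweedie distribution with 1 < p < 2.

    Y = 0 exactly when the underlying Poisson count is 0, which happens
    with probability exp(-lambda), lambda = mu**(2 - p) / (phi * (2 - p)).
    """
    lam = mu ** (2.0 - p) / (phi * (2.0 - p))
    return np.exp(-lam)

# Illustrative values only; plug in the fitted dispersion and typical fitted means
for p in (1.2, 1.6):
    for mu in (0.5, 2.0, 10.0):
        print(f"p = {p}, mu = {mu:>4}: P(Y = 0) = "
              f"{tweedie_prob_zero(mu, phi=1.05, p=p):.4f}")
```

Tabulating this over the range of fitted means is a quick way to quantify how many exact zeros each variance power implies, and hence how far each candidate is from the "at most around half a percent" bound mentioned above.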
Modeling continuous positive data with variance structure similar to Tweedie with 1.2 variance power, but without excess zeros
CC BY-SA 4.0
null
2023-04-24T10:55:29.340
2023-04-24T10:55:29.340
null
null
141256
[ "generalized-linear-model", "gamma-distribution", "multiple-imputation", "tweedie-distribution" ]
613926
1
null
null
0
25
I'm attempting to find out if there's any correlation between a dependent variable and lagged independent variables. Simple question yet I can't find any answers to this - should the dataset be trimmed by the longest lag or filled with 0? What would be the rationale behind it? Thanks.
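As a concrete picture of what the two options look like mechanically (toy data in pandas; this shows the bookkeeping, not a recommendation of either choice):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"y": rng.normal(size=100), "x": rng.normal(size=100)})

max_lag = 3
for k in range(1, max_lag + 1):
    df[f"x_lag{k}"] = df["x"].shift(k)

lag_cols = [f"x_lag{k}" for k in range(1, max_lag + 1)]

# Option 1: trim the first `max_lag` rows, whose lags are undefined
trimmed = df.dropna()
print(trimmed.corr().loc["y", lag_cols])

# Option 2: fill the undefined lags with 0 and keep all rows
filled = df.fillna(0)
print(filled.corr().loc["y", lag_cols])
```

The two correlation vectors generally differ, since filling with 0 injects artificial values into the first `max_lag` rows rather than dropping them.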
Lagged variables for Correlation analysis - fill or trim?
CC BY-SA 4.0
null
2023-04-24T11:03:45.373
2023-04-24T11:03:45.373
null
null
386402
[ "correlation", "lags" ]