Id: string (1–6 chars)
PostTypeId: string (7 classes)
AcceptedAnswerId: string (1–6 chars)
ParentId: string (1–6 chars)
Score: string (1–4 chars)
ViewCount: string (1–7 chars)
Body: string (0–38.7k chars)
Title: string (15–150 chars)
ContentLicense: string (3 classes)
FavoriteCount: string (3 classes)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars)
LastEditorUserId: string (1–6 chars)
OwnerUserId: string (1–6 chars)
Tags: list
616484
1
null
null
0
22
In a crossover trial: Outcome: continuous: cholesterol level. Factor 1: categorical, two levels: within-subject repeated measure [treatment A (standard) and treatment B (new)]. So the same subject randomly received A then B or B then A in two sessions, with a washout period in between. Factor 2: categorical, two levels: between-subject measure (smoker/non-smoker). Covariates: age, sex, race, BMI. What I want to see: whether there is any difference between smokers and non-smokers in terms of cholesterol reduction in response to the new treatment compared to the standard treatment. N = 30 (for now; it will increase to 90 at the end of the study).
What test is appropriate to analyse this data?
CC BY-SA 4.0
null
2023-05-21T14:45:39.317
2023-05-21T14:45:39.317
null
null
388485
[ "repeated-measures", "crossover-study" ]
616485
1
null
null
0
13
I am writing my thesis, and I am trying to test the relationship between transnational counterterrorism and state repression in countries of intervention. My independent variable is counterterrorism aid (continuous) and my dependent variable is human rights violations (ordinal). I have unbalanced panel data, with information on the CT aid that a country received in a given year and the human rights violations perpetrated by the state. I want to find out whether counterterrorism aid increases state repression in the countries that receive this assistance. How can I conduct an ordered logistic regression in R for panel data? Is that even possible? If not, I would very much appreciate suggestions for other models.
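One possible way to fit this in R (a sketch only, on simulated stand-in data, since the real variable names are unknown here): handle the repeated observations per country with a country-level random intercept and fit a mixed-effects ordered logit, for example with `clmm()` from the ordinal package.

```r
library(ordinal)
set.seed(1)

# Simulated stand-in for the real panel: 30 countries x 10 years, a continuous
# aid variable and a 3-level ordinal violations score (illustrative only)
panel <- data.frame(country = factor(rep(1:30, each = 10)), aid = rexp(300))
latent <- 0.5 * panel$aid + rep(rnorm(30), each = 10) + rnorm(300)
panel$violations <- factor(cut(latent, c(-Inf, 0.5, 1.5, Inf), labels = 1:3),
                           ordered = TRUE)

# Mixed-effects (random-intercept) ordered logit: the country intercept
# absorbs the panel structure; coefficients are on the log-odds scale
fit <- clmm(violations ~ aid + (1 | country), data = panel)
summary(fit)
```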
How can I code an ordered logistic regression for panel data in R?
CC BY-SA 4.0
null
2023-05-21T14:55:12.600
2023-05-21T14:55:12.600
null
null
375799
[ "r", "panel-data", "ordered-logit" ]
616486
1
null
null
4
260
Suppose I have two groups of users, A and B. In case it is relevant, B is much smaller than A. I have a feature of products purchased across two groups. I want to find items whose popularity is different between the groups and ultimately classify group membership based off this for unlabelled data. Example: Suppose the feature is “product”. My data might look like: Product, Percentage Group A, Percentage Group B, Apple, 90%, 80% Banana 50%, 49% Chicken 0%, 90% Lamb 1%, 60% Kale 50%, 20% … more stuff So 90% of people in group A purchased an apple. Nobody in group A purchased chicken. (I also have the raw counts if that is useful). Data notes: - The lists are very long, Group A may buy products that do not appear in Group B and vice versa, like chicken above. - In the real use case the product names may not carry a semantic meaning. - There are many products which only one user has purchased. So far I am just dropping items which only a “few” users have purchased. Goal: I am trying to find differences in the purchases between the two groups and any related insights, ultimately to determine likely group membership based on products purchased. For example above we could observe that chicken is very popular in group B but does not appear in group A. What I am particularly interested in is any stand out differences, like chicken in the example above, where if someone purchased chicken I would be much more likely to think they belong to group B instead of group A. What have I tried: One thing I have tried is to calculate the difference in ranking between the two groups and look for items where the difference is large. Domain experts tell me there are practical differences between the groups. Above it might turn out there is a statistically significant difference between the two groups in purchasing bananas, but the difference between 50 and 49 percent is so small, I doubt my stakeholders would be that interested, compared to the insight about the chicken.
Comparing frequencies or proportions between two groups to find differences
CC BY-SA 4.0
null
2023-05-21T15:25:56.580
2023-05-23T00:41:42.297
null
null
358991
[ "machine-learning", "ranking", "anomaly-detection" ]
616487
1
null
null
0
16
(Full disclosure: I'm trained as a physicist, so maybe the answer to this question exists and I just need to be pointed in the right direction). I have been working with a material properties data set for several years now. The data comes in the form of pass/fail binary data. A stimulus of varying (numerical) degrees is applied to the material, and the material either passes or fails. The stimulus is numerically continuous. There is some sampling scheme that is used to explore the stimulus space which purports to do a pretty good job of sampling the relevant stimulus space. For technical reasons, the pass/fail test can only be conducted once per sample, and only a single stimulus can be applied to a given sample. (I think what I've just described is a pretty general problem...) Historically, the data set has been analyzed using a binary probit analysis. Admittedly, I don't know exactly how the algorithm is conducting the fit (probably MLE), but it is, of course, somehow fitting the equation: $\theta^{-1}(y;0,0)=mx+b$, where $\theta$ is the normal cdf. One can (and we have reason to) convert $m/b$ to $\mu/\sigma$ in order to recover the typical parameterization of the normal distribution. One could plug this reparameterization (and I increasingly have) into the equation for a normal pdf and create visualizations that could be interpreted as the distribution of failure thresholds for the material. Since I've been working with this dataset, I've learned about censoring. It has occurred to me that under the assumption that there is some hidden threshold random variable that the pass/fail stimulus test is probing, one can view the pass/fail as completely censored data on a continuous random variable. A pass tells you that the failure threshold for that particular sample occurred at a stimulus somewhere to the right, and a fail tells you that the failure threshold for that particular sample occurred at a stimulus somewhere to the left. I've found some algorithms that will fit left/right/interval censored data to a normal pdf and tried fitting these data sets using those algorithms. Perhaps unsurprisingly, I get more or less the same answer as the probit fits. (Maybe there is some theorem somewhere that says there is a mathematical equivalence or something like that...) As the title to this question implies, are there any known advantages or disadvantages to viewing the data this way? I guess I should add that my reasoning for even thinking in this direction is two-fold: (1) intuition about the meaning and (2) injecting more information into the analysis. For number (1), as I mentioned at the top, I'm trained as a physicist, so I'm used to dealing with measurements on continuous variables. It seems much easier to recognize changes in the behavior of the material by looking at pdfs, whose mode or width can very easily be visually assessed for a change, rather than at cdfs or linearized probits, to try to assess whether two measurements represent the same behavior. Using the censored continuous variable ontology motivates and in some sense justifies this visualization beyond just a mathematical trick. For number (2), although I expressed pass/fail as right/left censored data, I think one might be able to leverage techniques for interval censored data to potentially inject more information into the analysis. Although properly speaking a pass only tells me that the fail threshold was to the right (i.e. somewhere between the applied stimulus and positive infinity), I can assert that there is some reasonable stimulus beyond which the material definitely would have failed. This is, of course, an assumption, a guess. But one can perhaps reasonably say that although the exact failure threshold is unknown, it is probably less than some finite boundary. In the case of the failure I'm analyzing, one can probably set that boundary relatively close to the stimulus at which the measurement is taking place. I've looked around for any discussion on treating binary data as censored data, and haven't found any. This leads me to believe that it is either a stupid idea on the face of it or that it hasn't been suggested. I'm really hoping it is not the former.
Binary and Censored Data: What are the advantages and disadvantages of considering binary data as a completely censored random variable?
CC BY-SA 4.0
null
2023-05-21T15:26:52.227
2023-05-21T15:26:52.227
null
null
388484
[ "binary-data", "censoring" ]
616488
2
null
616456
0
null
The question of how to compute $E[T|K = k]$ has been addressed by Xi'an in the comment. More explicitly, assume $p_k := P(K = k) > 0$ for each $k$; then the conditional density of $T$ given $K = k$ is $f(t|K = k) = \frac{f(t, k)}{p_k}$, where $f(t, k) = \frac{dF_k(t)}{dt}$ and $F_k(t) = P[T \leq t, K = k]$. It thus follows that \begin{align} E[T|K = k] = \int_\mathbb{R} tf(t|K = k)dt = p_k^{-1}\int_\mathbb{R}tf(t, k)dt. \end{align} Your second question about determining $E[K|T = t]$ is much more technical and a complete answer requires measure-theoretic machinery. Compared with the first question, the difficulty lies in the fact that the conditional distribution of $K$ given $T = t$ can no longer be evaluated by the elementary event-based conditional probability formula $P(A | B) = P(A \cap B)/P(B)$, as $P[T = t] = 0$ when $T$ is continuous. To begin with, it is OK to expand $E[K | T = t]$ as \begin{align} E[K | T = t] = \sum_k kP(K = k|T = t). \tag{1} \end{align} Turning to $P(K = k|T = t)$: because of the aforementioned technicality, it should be understood as the a.s. value of the measure-theoretic conditional probability (which is a random variable of $T$) $g(T) := P(K = k|T) = P(K = k|\sigma(T))$ on the set $[T = t]$. Exercise 33.16 in Probability and Measure (3rd edition) by Patrick Billingsley connects this value with the joint distribution function $P[T \leq t, K = k]$: \begin{align} & P(K = k | T = t) = \lim_{h \to 0}P[K = k | t - h < T \leq t + h] \\ =& \lim_{h \to 0} \frac{P(T \leq t + h, K = k) - P(T \leq t - h, K = k)}{P(t - h < T \leq t + h)}. \tag{2} \end{align} With the available information you stated, $(1)$ and $(2)$ together should be sufficient to determine $E[K | T = t]$.
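A small Monte Carlo sketch of both formulas, using a made-up example in which $K \in \{1, 2\}$ with $P(K=1)=0.3$ and $T \mid K=k \sim \text{Exponential}(k)$, so that $E[T \mid K=k] = 1/k$:

```r
set.seed(42)
n  <- 1e6
k  <- sample(1:2, n, replace = TRUE, prob = c(0.3, 0.7))  # the discrete label K
tt <- rexp(n, rate = k)                                   # T | K = k ~ Exp(k)

tapply(tt, k, mean)                  # close to 1 and 0.5, i.e. E[T | K = k] = 1/k

# E[K | T = t0] via the limiting-window idea in (2), with a small half-width h
t0 <- 1; h <- 0.01
w  <- abs(tt - t0) <= h
sum(1:2 * prop.table(table(k[w])))   # Monte Carlo estimate of E[K | T = t0]
```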
null
CC BY-SA 4.0
null
2023-05-21T15:30:49.057
2023-05-21T15:30:49.057
null
null
20519
null
616489
1
null
null
-2
11
For instance, the Normal distribution arises as a limiting case of the Binomial distribution. Can the Gamma distribution similarly arise as a limiting case of the Weibull distribution?
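One concrete, easily checked point of contact between the two families (a shared special case rather than a limiting case): with shape parameter 1, both the Gamma and the Weibull reduce to the exponential distribution.

```r
x <- seq(0.01, 5, by = 0.01)
# With shape = 1, both densities coincide with the Exponential(1) density
all.equal(dgamma(x, shape = 1, rate = 1), dweibull(x, shape = 1, scale = 1))
all.equal(dgamma(x, shape = 1, rate = 1), dexp(x, rate = 1))
```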
Is there any relationship between gamma distribution and weibull distribution?
CC BY-SA 4.0
null
2023-05-21T15:46:13.427
2023-05-21T15:46:13.427
null
null
383804
[ "normal-distribution", "binomial-distribution", "gamma-distribution", "exponential-distribution", "weibull-distribution" ]
616492
2
null
616468
4
null
First, it's important to distinguish between the Schoenfeld residuals themselves and the score tests for trends of residuals over time that are used (starting with version 3 of the `survival` package) to evaluate deviations from proportional hazards (PH). Second, with respect to those score tests, it might be simpler to think about the tests on individual coefficients or on groups of coefficients (multiple-level categorical predictors, spline coefficients, etc) as special cases of the global test that's effectively done "under the hood" by `cox.zph` whether you ask for it or not. Third, the multiple residuals associated with a single predictor in the model can be combined with their associated regression-coefficient estimates to obtain the corresponding net residual in the linear predictor of the model. Schoenfeld residuals [This page](https://stats.stackexchange.com/q/547078/28500) explains that > The Schoenfeld residuals are calculated for all covariates for each individual experiencing an event at a given time. Those are the differences between that individual's covariate values at the event time and the corresponding risk-weighted average of covariate values among all those then at risk. If there are multiple coefficients associated with a predictor in a model, as in the situations you describe, there are multiple "covariates" in the above sense associated with that predictor, each with its own estimated regression coefficient. The residuals are scaled inversely with respect to their covariances to get scaled Schoenfeld residuals $s^*_{k,j}$ for covariate $j$ at event time $t_k$. [Grambsch and Therneau](https://www.jstor.org/stable/2337123) showed that the expected value of that residual is the difference between the time-fixed Cox-model coefficient estimate $\hat\beta_j$ and a potentially time-varying coefficient value at that event time: $$E(s_{k,j}^*) + \hat \beta_j \approx \beta_j(t_k).$$ If PH holds, $\beta_j$ is constant over time. In that case, there should be no trend of those residuals over time. Score tests for trends Section 6.2 of the [Therneau and Grambsch text](https://www.springer.com/us/book/9780387987842) explains how to test whether there is evidence of time trends in coefficients that would be inconsistent with PH. First, define some transformation of time $g(t)$; that's the `transform` argument to `cox.zph()`. Then, for covariate $j$, evaluate the following regression of the residual-based $\beta_j(t)$ against that function of time: $$\beta_j(t) = \beta_j + \theta_j(g(t)-\bar g_j) $$ where $\bar g_j $ is the mean of the transformed event times. Then, for the set of $p$ covariates: > The null hypothesis of proportional hazards corresponds to $\theta_j \equiv 0, j = 1, ... ,p.$ As Therneau and Grambsch explain, this isn't done with a regression model per se but rather with a set of calculations that leads to a [multi-parameter score test](https://en.wikipedia.org/wiki/Score_test#Multiple_parameters) of that joint hypothesis. The test statistic, evaluated against $\chi^2_p$, is based on the $p \times 1$ score vector $U$ and the $p \times p$ Fisher information matrix $I$ evaluated at the above null hypothesis: the quadratic form $U^T I^{-1} U$. That's the global test. Once you have $U$ and $I$ you can perform tests on subsets of covariates or on individual covariates, by restricting the calculation of the test statistic to the corresponding elements of $U$ and $I$ and evaluating against $\chi^2$ with degrees of freedom equal to the number of coefficients involved. 
In that sense, the test reported for a multi-coefficient predictor is a special case of the global test based on the subset of coefficients associated with it, and that for an individual coefficient for such a predictor is a special case of the test on the subset. Combined scaled Schoenfeld residuals The (unscaled) Schoenfeld residuals are first calculated for each individual event time and covariate. When `terms=TRUE`, the set of residuals for each event time for a multi-coefficient predictor is then combined as the inner product of the corresponding residuals and Cox model regression coefficients. That puts together all the residuals associated with that predictor, for that individual at that event time, into their net effect on the overall linear predictor of the Cox model. I've annotated below the critical code, which you can see by typing `cox.zph` at the command prompt: ``` ## unscaled Schoenfeld residuals for all covariates from Cox fit ## event times in rows, covariates in columns sresid <- resid$schoen ## if any predictor, indexed by nterm, has >1 covariate if (terms && any(sapply(asgn, length) > 1)) { temp <- matrix(0, ncol(sresid), nterm) for (i in 1:nterm) { j <- asgn[[i]] if (length(j) == 1) ## single covariate temp[j, i] <- 1 else temp[j, i] <- fit$coefficients[j] ## multiple covariates } sresid <- sresid %*% temp ## the inner product ## some lines omitted } ``` That's done before additional calculations including scaling the residuals, so you can't readily reproduce that by a doing a similar calculation on the reported scaled residuals.
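For concreteness, a short usage sketch with the `veteran` data that ships with the survival package, where `celltype` is a multi-coefficient (4-level) factor and `karno` is known to violate proportional hazards:

```r
library(survival)

# Cox model with a multi-coefficient term (celltype) plus trt and karno
fit <- coxph(Surv(time, status) ~ trt + celltype + karno, data = veteran)

cox.zph(fit)                   # one score test per term, plus the global test
cox.zph(fit, terms = FALSE)    # one test per individual coefficient instead

# Scaled (and, with terms = TRUE, combined) Schoenfeld residuals against time
plot(cox.zph(fit, transform = "km"))
```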
null
CC BY-SA 4.0
null
2023-05-21T15:47:35.200
2023-05-22T12:42:32.080
2023-05-22T12:42:32.080
28500
28500
null
616494
1
null
null
0
36
Let $y_t = \Delta{p_t}$ denote a time series of asset log-returns, where $p_t$ are logarithmic prices; $y_t$ is generated by the conditionally heteroscedastic MA(1) process $y_t = \epsilon_t + \theta \epsilon_{t-1}$, where $\epsilon_t = \sqrt{h_t}z_t$ and $z_t\sim \text{i.i.d. } N(0,1)$ with $|\theta|<1$, $h_t = \omega+\alpha \epsilon_{t-1}^2+\beta h_{t-1},\quad \omega>0, \alpha>0, \alpha+\beta<1$. - Derive the expressions for the unconditional mean $E(y_t)$, unconditional variance $Var(y_t)$ and autocorrelation function of $y_t$, $\rho(k), k=1,2,\dots$ I computed the unconditional mean as equal to 0 because $E(\sqrt{h_t} z_t)+\theta E(\sqrt{h_{t-1}}z_{t-1})=0$. For the unconditional variance, $Var(y_t)= E(y_t^2)= E(h_t z_t^2+\theta^2 h_{t-1} z_{t-1}^2) = E(h_t+\theta^2 h_{t-1})$ because $z_t\sim \text{i.i.d. } N(0,1)$. From here on I am not sure. Should I substitute for $h_t$ to finish? Any help would be really appreciated.
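If the displayed step is continued, $E(h_t) = \omega/(1-\alpha-\beta)$ under the stated constraints, which would give $Var(y_t) = (1+\theta^2)\,\omega/(1-\alpha-\beta)$. A rough simulation sketch to sanity-check that value (parameter values are arbitrary):

```r
set.seed(1)
n <- 2e5; omega <- 0.1; alpha <- 0.1; beta <- 0.8; theta <- 0.5
z <- rnorm(n); h <- numeric(n); eps <- numeric(n)
h[1] <- omega / (1 - alpha - beta); eps[1] <- sqrt(h[1]) * z[1]
for (t in 2:n) {
  h[t]   <- omega + alpha * eps[t - 1]^2 + beta * h[t - 1]   # GARCH(1,1) recursion
  eps[t] <- sqrt(h[t]) * z[t]
}
y <- eps + theta * c(0, head(eps, -1))        # MA(1) in the GARCH innovations
var(y)                                        # sample unconditional variance
(1 + theta^2) * omega / (1 - alpha - beta)    # candidate closed form
```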
Unconditional variance of MA(1)-GARCH(1,1) process
CC BY-SA 4.0
null
2023-05-21T16:21:53.417
2023-05-22T13:52:53.553
2023-05-22T13:52:53.553
362147
362147
[ "time-series", "self-study", "variance", "garch" ]
616495
1
null
null
1
40
I am relatively new to statistics and dealing with a dataset where I've studied 3 different treatment groups for color change in bell peppers. I scored the bell peppers every day for one week based on their skin color (score 1, score 2, ..., score 7) and wanted to see whether there are differences between the treatment groups on any given day based on the scores. A lower score (score 1) would mean that the bell peppers are not ripening. I was thinking of performing a chi-square-based rank test (Kruskal-Wallis) on a frequency distribution table and then using a post-hoc test (Dunn test?) to see which group is significantly different. Is my approach statistically okay? Are there any alternative tests that you would recommend? Thanks!
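A minimal base-R sketch of this kind of analysis on made-up scores for a single day. Dunn's test itself lives in add-on packages (for example FSA or dunn.test); pairwise Wilcoxon tests with a multiplicity adjustment are shown here as a base-R stand-in for the post-hoc step.

```r
# Hypothetical data: one ripeness score (1-7) per pepper, per treatment, on one day
day5 <- data.frame(
  score     = c(2, 3, 3, 4, 1, 2, 2, 3, 5, 6, 5, 7),
  treatment = factor(rep(c("A", "B", "C"), each = 4))
)

kruskal.test(score ~ treatment, data = day5)   # omnibus test across the three groups

# Post-hoc pairwise comparisons with a Holm adjustment (exact = FALSE because of ties)
pairwise.wilcox.test(day5$score, day5$treatment,
                     p.adjust.method = "holm", exact = FALSE)
```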
Statistical analysis for sensory datasets
CC BY-SA 4.0
null
2023-05-21T14:39:41.810
2023-05-21T23:06:05.723
null
null
233065
[ "r" ]
616496
2
null
613831
1
null
Nice discussion, and yes, the culprit is the difference in variance between the two populations; that is why the data are not exchangeable. I have often observed that permutation tests for a difference in means react very sensitively to differences in variance if the test statistic used is just the difference in sample means, in particular when sample sizes are different. Using the Welch t statistic makes the test much more robust against differences in population variance. The problem of testing a difference in means when variances are also different, but we are not interested in them, goes under the name Behrens-Fisher problem. Here is a paper by A. Janssen (1997) which shows that the Welch statistic asymptotically gives the correct test size for the permutation test, even when variances are different: [Janssen 1997](https://www.sciencedirect.com/science/article/abs/pii/S0167715297000436) You can find more follow-up papers on the topic on the net.
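A minimal R sketch of such a permutation test with the Welch statistic, on made-up samples with equal means but unequal variances and unequal sizes:

```r
set.seed(1)
x <- rnorm(20, mean = 0, sd = 1)   # group 1
y <- rnorm(50, mean = 0, sd = 3)   # group 2: same mean, larger variance, larger n

welch_stat <- function(a, b) t.test(a, b, var.equal = FALSE)$statistic
obs <- welch_stat(x, y)

z  <- c(x, y); n1 <- length(x)
perm <- replicate(5000, {
  idx <- sample(length(z))                         # random relabelling
  welch_stat(z[idx[1:n1]], z[idx[-(1:n1)]])
})
mean(abs(perm) >= abs(obs))                        # two-sided permutation p-value
```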
null
CC BY-SA 4.0
null
2023-05-21T16:54:24.767
2023-05-21T16:54:24.767
null
null
237561
null
616497
1
null
null
0
12
I want to classify diabetic retinopathy grades using SVM. I have 32 extracted features, and those features won't all be used in the classification stage. Before entering feature selection, I want to clean the data to improve the correlation and the classification accuracy. If I have normal and abnormal data, how do I find and delete the uncorrelated normal data in my dataframe? Is there any specific method? Thank you!
Cleaning Data Before SVM
CC BY-SA 4.0
null
2023-05-21T17:48:42.230
2023-05-21T17:48:42.230
null
null
387494
[ "svm", "data-preprocessing" ]
616498
1
null
null
0
18
How should I do the p-value adjustment for several post hoc comparisons? Should I adjust them all at once, or should I adjust per comparison? Suppose ``` Anova-pvalue Pvalue:A1-A2 Pvalue:A1-A3 Pvalue:A2-A3 Cholesterol: 0.04 0.1 0.03 0.02 Height 0.01 0.01 0.03 0.1 ``` Should I adjust by taking into account all possible multiple comparisons (post hoc) (6 tests), or should I adjust just for A1-A2, which would be two tests, and then separately for A1-A3 (two tests) and A2-A3? Thanks for the advice.
how to perform the pvalue correction in this analysis
CC BY-SA 4.0
null
2023-05-21T18:00:50.483
2023-05-22T00:59:21.280
null
null
383609
[ "adjustment" ]
616500
1
null
null
0
61
This may be a question of philosophy or metrology, but I'd like to know if there are any methods that are available to differentiate between variance that is caused by a true spread in the population and variance that is as result of imprecise measurements. Example: Imagine you have a population with a property of interest that you have reason to believe is normally distributed. Let's make this concrete and say it is the strength of some manufactured plastic. You have some measurement equipment that characterizes this strength, but it is suspected that the equipment yields only imprecise measurements. The measurement precision tolerance of the equipment is unknown. You know that the plastics are heterogeneous on the same scale as the measurement equipment, so you expect that there will be some variation in the measurement that is due to the individual plastic samples being different. So, you make a series (more than one and enough to fit to a distribution) of measurements, fit them to a normal distribution and extract a sample mean and sample variance. Let's further say you calculate errors on the mean and variance using some robust method. You might suspect that a component of the sample variance is due to issues with error in the measurement and some of it is due to the actual population distribution. Then you take another series of measurements on what you have reason to believe is the same population, i.e. it is the same lot of manufactured plastic or something like that. The result of your second set of measurements is another sample mean and sample variance, and you again calculate errors on the mean and variance. You find that the sample mean or sample variance (or both) differ between the two sets of measurements by greater than the error on the mean and/or variance. That is, you would conclude that there is a significant difference between the two sets measurements. They look like two different populations. Now let's say you get serious and want to figure out what the fundamental equipment precision is. So you get a standard: a material that you've tried to construct so that the different individual samples are as identical as you can possibly make them. You don't know a priori what the measurement of the standard should be, but you have really good reason to believe that the individual samples are as identical as is physically possible. You again do a set of measurements and get sample mean/variance and errors on the parameters. You find that the sample variance on the standard is larger than it is for your heterogeneous, manufactured plastic. That is, according to the test, the standard has a higher variance than your material under test. You do another set of measurements on the standard, again get a sample mean/variance and the parameter errors. And like the two measurements on heterogeneous plastic, you find that the two sample means and/or variances have a statistically significant difference. Again, the sample variance for the standard is larger than that of the heterogeneous plastic. So just to give a visual, the situation might look something like the graphs below: [](https://i.stack.imgur.com/CUUzY.jpg) [](https://i.stack.imgur.com/jN2Eh.jpg) *Note: these are completely made up for the example. In all cases, you've tried to keep your measurement methodology uniform (e.g. same number of samples in each set of measurements, etc.). You've conduct all the measurements on the same equipment. 
You have no other way to "calibrate" the equipment other than the measurement on the standard (which as mentioned, you do not have an a priori value for what the measurement should yield). Further, there are no alternative "gold standard" tests that you can go to for a better answer. The test that exists is the only one that measures this property and is the de facto best measurement. There are other sets of testing rigs (e.g. physically different pieces of equipment), but they all implement the same type of test. Given these measurements, parameter fits and errors on the parameters, what, if anything, can one conclude about the true population variance of the plastic and the measurement precision of the equipment? Are there any follow-up tests that could be conducted to elucidate more information about the equipment precision or the population variance?
What methods (if any) are available for differentiating between true population variance and variance caused by measurement error?
CC BY-SA 4.0
null
2023-05-21T18:12:08.203
2023-05-22T00:01:40.087
2023-05-21T18:35:24.310
388484
388484
[ "standard-deviation", "measurement-error" ]
616501
1
616528
null
9
1545
Let $X_1,X_2,\ldots,X_n$ be (iid) random variables and define $Y_n:=\sum_{j=1}^na_jX_j$ with $a_j\in \mathbb{R}$. Can we then say that the $a_jX_j$ are independent as well? Can we then express the MGF in the following way? $$M_{Y_n}(t)=\mathbb{E}(e^{t(a_1X_1+\ldots+a_nX_n)})=\mathbb{E}(e^{ta_1X_1}\cdot\ldots\cdot e^{t a_nX_n})=M_{X_1}(a_1t)\cdot\ldots \cdot M_{X_n}(a_nt)$$
Are linear combinations of independent random variables again independent?
CC BY-SA 4.0
null
2023-05-21T18:17:46.520
2023-05-23T10:52:24.037
2023-05-22T10:31:50.313
53690
386534
[ "random-variable", "expected-value", "independence", "moment-generating-function" ]
616502
1
null
null
0
7
I am currently using STATA to estimate ARDL. I am wondering if one can interpret the signs of the coefficient in the output? My question arises as they don't make much sense and if I estimate ECM the signs are opposite for some variables.
signs interpretation ardl?
CC BY-SA 4.0
null
2023-05-21T18:23:34.450
2023-05-21T18:23:34.450
null
null
383188
[ "ardl" ]
616503
2
null
616400
0
null
You evaluate the quality of matches after Mahalanobis distance matching exactly the same way as you would after propensity score matching: by assessing balance on the covariates and ensuring the effective sample size of the matched sample is adequate. The form of matching has nothing to do with how balance is assessed. The propensity score is not relevant for balance assessment and is only a heuristic for assessing overlap, and therefore it is not at all necessary for evaluating matches. If your question is about how to actually write a Stata command to do that, that is a programming problem, not a statistics problem, and it should be asked on [StackOverflow](https://stackoverflow.com/).
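For illustration only (the programming question itself is out of scope here), this is roughly what such a balance check can look like in R with the MatchIt package, on simulated stand-in data:

```r
library(MatchIt)
set.seed(1)

# Simulated stand-in data: a treatment indicator and three covariates
d <- data.frame(x1 = rnorm(500), x2 = rnorm(500), x3 = rbinom(500, 1, 0.4))
d$treat <- rbinom(500, 1, plogis(-1 + 0.5 * d$x1 + 0.5 * d$x3))

# Nearest-neighbour matching on the Mahalanobis distance (no propensity score)
m.out <- matchit(treat ~ x1 + x2 + x3, data = d,
                 method = "nearest", distance = "mahalanobis")

summary(m.out)        # standardized mean differences and remaining sample sizes
plot(summary(m.out))  # Love plot of covariate balance
# cobalt::bal.tab(m.out) and cobalt::love.plot(m.out) give the same information
```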
null
CC BY-SA 4.0
null
2023-05-21T18:29:10.843
2023-05-21T18:29:10.843
null
null
116195
null
616504
1
null
null
0
36
In the context of the Central Limit Theorem (CLT), which postulates that the distribution of sample means will approximate a normal distribution given a sufficiently large number of samples and sample size, how can we reconcile this with the following concept presented by Rowntree in "Statistics Without Tears"? He claims that: "Even though we take only one sample, and therefore have only one sample means, we can think of it as belonging to a distribution of possible sample means. And, provided we are thinking of samples of reasonable size, this distribution will be normal"." This concept seems to be in contrast with the binomial distribution resulting from a large single sample of a million coin flips, which does not resemble a normal distribution. But if we consider the means of multiple independent samples of coin flips, they do approximate a normal distribution as per the CLT. So, how can we interpret Rowntree's claim?
Interpreting the Concept of 'Single Sample Normality' in the Context of the Central Limit Theorem
CC BY-SA 4.0
null
2023-05-21T18:53:22.950
2023-05-21T20:22:10.747
null
null
276238
[ "mathematical-statistics", "normal-distribution", "sampling", "mean", "central-limit-theorem" ]
616506
2
null
547514
0
null
Both of these models represent similar processes; they differ only in the distribution of the error (innovation, noise) term. If you were to plot the differences between consecutive observations in realizations of either process, you would observe independent residuals following the corresponding distribution. The definition of stationarity you give here is not necessarily incorrect; as claimed in the comments, you are most likely (perhaps mistakenly) describing strict stationarity. Strictly stationary processes are indeed defined as having the same joint probability distribution when shifted in time, given that the frame (interval) length is held constant. According to Brockwell and Davis' Introduction to Time Series and Forecasting (1996), a time series is said to be weakly stationary if (i) its mean function is independent of time and (ii) its covariance function is independent of time and changes only with respect to the interval size. Referring again to Brockwell & Davis (page 17 in my copy), it can be seen that the covariance of the series you gave above changes with respect to time, hence the series are non-stationary. Furthermore, both of these are unit root processes, as the roots of their corresponding characteristic equations are equal to 1.
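A quick simulation sketch of the unit-root point, using a Gaussian random walk with drift as a stand-in for the processes in question: the variance of $x_t$ grows with $t$, so the process cannot be weakly stationary.

```r
set.seed(1)
n <- 200; reps <- 5000
# Random walk with drift: x_t = 1 + x_{t-1} + e_t, e_t iid N(0, 1)
paths <- replicate(reps, cumsum(1 + rnorm(n)))   # one column per simulated path
v <- apply(paths, 1, var)                        # Var(x_t) across paths, for each t
round(v[c(10, 50, 100, 200)])                    # grows roughly like t: not stationary
```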
null
CC BY-SA 4.0
null
2023-05-21T19:55:24.483
2023-05-21T19:55:24.483
null
null
282477
null
616507
1
null
null
0
12
I've tried to research this question but have had to rely on answers to only somewhat similar scenarios which have led me in different directions. For example, I've been advised to calculate variance using the eigenvalues of the dissimilarity matrix, but elsewhere advised to use eigenvalues of the stress matrix. Can anyone please explain first of all whether R squared is most appropriate or if I would be better using another metric, and how exactly I can obtain variance explained in a way that is suitable for a 2D MDS generated using cmdscale in matlab? Both the dissimilarity matrix and the original data (MUAe from a 64 channel utah array) are available. Thanks.
How is variance explained of a classical mds model calculated (in matlab)
CC BY-SA 4.0
null
2023-05-21T20:03:01.043
2023-05-21T20:03:01.043
null
null
388501
[ "variance", "matlab", "multidimensional-scaling" ]
616508
2
null
246234
1
null
Mixing together the correlation coefficients with the p-values using one dimension (hue) doesn't make sense. If the sample size is the same, then the same correlation coefficient will lead to the same p-value. If the sample size is not the same, then collapsing them both into a single variable is unhelpful and will rely on arbitrary and confusing choices. There's a simple solution: as Ian_Fin suggests, just show both the correlation coefficient and the p-value using separate design elements. Any two of size, shape, colour, and text can be used to convey them separately. The `corrplot` package in R for visualising correlation matrices offers a range of plotting options and can be used for this. Or, since a correlation matrix is symmetric, you could also show the correlation coefficient elements above the diagonal and the p-values below it (or vice versa, as in [this example](https://www.researchgate.net/profile/Naveed-Khan-21/publication/346107399/figure/fig2/AS:964711206113283@1607016483924/Matrix-of-Pearsons-Correlation-Coefficients-r-Left-Triangle-and-p-values-Right_W640.jpg)).
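A base-R sketch of the triangle idea on made-up data (the corrplot package offers more polished versions of the same display): correlation coefficients above the diagonal, p-values below it.

```r
set.seed(1)
X <- data.frame(a = rnorm(50), b = rnorm(50), c = rnorm(50))  # hypothetical data
r <- cor(X)
p <- matrix(NA, ncol(X), ncol(X), dimnames = dimnames(r))
for (i in 1:(ncol(X) - 1)) {
  for (j in (i + 1):ncol(X)) {
    p[i, j] <- p[j, i] <- cor.test(X[[i]], X[[j]])$p.value
  }
}
out <- r
out[lower.tri(out)] <- p[lower.tri(p)]   # r above the diagonal, p-values below
round(out, 3)
```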
null
CC BY-SA 4.0
null
2023-05-21T20:10:36.750
2023-05-21T20:10:36.750
null
null
121522
null
616509
1
null
null
1
35
I am creating virtual species distributions on a 100x100 grid (for now). Each layer of the grid represents one environmental variable. The "suitability function" defines the probability of a presence (and inversely, an absence) at a given point based on the value of each variable (grid layer). Stratified random sampling is used. Actual probabilities are used (using Bernoulli trials), rather than the threshold approach (where all probabilities above, say 70%, are set to true). The sampling and determination of whether a point is a presence or absence is done in the same step (so detection is perfect). Random forests are used to create a species distribution model (SDM). Is there a way to do a power analysis which tells you what sample size you need to achieve some measurable metric, such as AUC-ROC or Brier score (or other type of metric)?
How to do a power analysis for a virtual species distribution model which uses random forests
CC BY-SA 4.0
null
2023-05-21T20:13:05.447
2023-05-21T20:13:05.447
null
null
294655
[ "random-forest", "sample-size", "statistical-power" ]
616510
2
null
616504
2
null
It depends on how you are looking at the outcome of a million coin flips. If you register the outcomes 1 or 0, then sure, you get a distribution that concentrates on these two values. But these are then like a million samples of size 1 each. Otherwise, when you look at the mean of a million coin flips, and repeat this, you get results that are extremely close to 0.5 each time, but they still vary randomly. In the CLT, we look at the distribution of the mean $\bar{X}$ from a sample of size $n$, multiplied by $\sqrt{n}$. More precisely, we study the distribution of $Y = \sqrt{n}(\bar{X}-\mu)$, where $\mu$ is the theoretical mean of one outcome -- for a coin flip hopefully $\mu = 0.5$. The multiplication by the factor $\sqrt{n}$ makes up for the fact that the distribution of $\bar{X}$ concentrates more and more around $\mu$ when $n$ increases. When you flip $n=4$ coins, say, the possible outcomes for the sample mean are 0, 0.25, 0.5, 0.75 and 1, so $Y= \sqrt{n}(\bar{X}-\mu)$ would have possible outcomes -1, -0.5, 0, 0.5 and 1. The probabilities for these outcomes are 1/16, 1/4, 3/8, 1/4, 1/16 respectively. You can graph this: [](https://i.stack.imgur.com/kJyoH.png) Doing the same with $n=100$ coin flips, $\bar{X}$ can be $0, 0.01, \dots, 1$, but $Y$ takes values $-5, -4.9, \dots, 5$. [](https://i.stack.imgur.com/n52cC.png) Already for $n=4$, the shape of the distribution was somewhat bell-shaped, but for $n=100$, it looks really normal.
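A short R sketch that reproduces the $n = 100$ picture by simulation (10,000 replications of 100 coin flips):

```r
set.seed(1)
n <- 100; mu <- 0.5
y <- replicate(10000, sqrt(n) * (mean(rbinom(n, 1, 0.5)) - mu))
hist(y, breaks = 50, freq = FALSE,
     main = "sqrt(n) * (xbar - mu), n = 100 coin flips")
curve(dnorm(x, 0, 0.5), add = TRUE)   # limiting normal: sd = sqrt(p * (1 - p)) = 0.5
```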
null
CC BY-SA 4.0
null
2023-05-21T20:19:36.763
2023-05-21T20:22:10.747
2023-05-21T20:22:10.747
237561
237561
null
616512
2
null
157473
0
null
Answered in comments by Gavin Simpson: > Looks like a biplot, a correlation biplot hence all arrows have unit length. And this could easily be produced from a PCA. > ...By correlation biplot I mean a biplot drawn from results of PCA on the correlation matrix rather than the covariance matrix.
null
CC BY-SA 4.0
null
2023-05-21T20:23:22.157
2023-05-21T20:23:22.157
null
null
121522
null
616516
1
null
null
3
68
I am a bit confused about this issue. From my understanding, for the normal and t distributions we look at the left tail for a one-tailed test, while for the F and chi-squared distributions we look at the right tail.
Does the qf function in R calculate the area to the right or to the left (like a normal distribution)?
CC BY-SA 4.0
null
2023-05-21T21:28:41.353
2023-05-22T20:47:06.870
null
null
388504
[ "r", "distributions" ]
616517
2
null
616457
1
null
There's both a statistical and a biological aspect to this question. The statistical aspect is reasonably straightforward. You want to find some way to evaluate whether an association of a set of predictors with outcome might just be due to chance. "Significance" is evaluated as whether results are adequately different from what might be expected by chance under a "null hypothesis" of no true association. Start with a single gene set. It doesn't matter how it was identified. If a set of 300 genes seems to be associated with outcome in a statistical model, how do you compare that association against a "null hypothesis" that the association with outcome is just due to chance? There are several ways to do this.* Some ways have a theoretical basis. For example, you can evaluate the "significance" of a logistic-regression model by tests based on the underlying theory of maximum-likelihood estimation. The problem is that when you have a large number of predictors relative to the number of cases you can [overfit](https://stats.stackexchange.com/tags/overfitting/info) the current data in a way that will appear "significant" but won't generalize well to new data. One way the authors dealt with that was to use a type of "penalized" logistic regression, [ridge regression](https://stats.stackexchange.com/tags/ridge-regression/info), to minimize the risk of overfitting. That, however, removes the simple theoretical basis for estimating "significance." The authors chose a different way to evaluate what you might find under the null hypothesis of no association of the gene set with outcome. Any random set of 300 genes (out of ~20,000 total) is unlikely to be truly associated with outcome, but there will be variability among such random sets of genes in their apparent associations with outcome. Multiple sets of 300 randomly chosen genes (these authors used 1000 sets) thus provide the variability of associations you might find under the null hypothesis. The "significance" test is then how unusual it is for a randomly chosen set of 300 genes to be as strongly associated with outcome as the set that you've identified. In your example, only 4 out of 1000 random sets of 300 genes were that strongly associated with outcome, for $p=0.004$. Thus there's a very low probability that the association of the identified set of genes with outcome was simply due to chance. Ranking drugs by the p-values of their altered gene sets is perhaps a bit more problematic, as I suspect that different numbers of genes in the gene sets (and thus in the sizes of the corresponding random gene sets) might lead to extra variability in the p-value estimates. It seems like a reasonable heuristic, however. You might have more confidence in a drug whose gene set that has only 4/1000 chance of a random association with outcome than one that has a 40/1000 chance. There might be some quibbles about details, but the fundamental approach is reasonable. The biological aspect is that having sets of genes or rankings of drugs, as DRIAD provides, is only a first step in investigation. All you can say from DRIAD itself is that sets of genes whose expression levels are affected by some drugs in vitro are also associated with the severity of Alzheimer's, and that some drugs affect expression of genes that are more strongly associated with severity than others. In fairness, the authors don't claim that any drug is itself associated with the disease; the quote that you include near the end of the question is pretty cautious. 
In the rest of the report, the authors used the DRIAD drug ranks as the basis of further pharmacologic investigation. The drugs were all designed to be inhibitors of [kinases](https://en.wikipedia.org/wiki/Kinase) (enzymes that phosphorylate molecules, typically proteins). Inhibiting kinases is presumably what led to the drugs affecting the expression of their gene sets. Kinase inhibitors, however, typically affect multiple related kinases. So the authors went on to identify the affinities of the top-ranked drugs for multiple kinases (Figure 4) and to examine which specific kinases tended to be inhibited by multiple high-ranking drugs (Figure 5). That led to identifying 10 top kinases that might be most closely associated with the biological pathways that define the difference between early- and late-stage Alzheimer's (Table 1). You are correct to be skeptical of any report that is solely based on gene sets or drug rankings. In this case, however, the gene sets and associated drug rankings by DRIAD just provided a start for further detailed pharmacologic analysis. --- *This has nothing to do with the "predictive power score" that you cite at the end of the question. That's a way to use a very flexible fit to see if there is any association between two variables, an association that might be highly nonlinear and thus missed by a linear correlation or linear regression. The logistic regression used by the authors of this study is just a fit of the log-odds of the binary severity outcome against a strictly linear combination of (log-transformed) gene-expression values.
null
CC BY-SA 4.0
null
2023-05-21T21:31:33.323
2023-05-22T12:31:42.157
2023-05-22T12:31:42.157
28500
28500
null
616518
2
null
616486
5
null
I can think of different ways to do this. I think the most straightforward however is logistic regression, given your requirements (you need which items are most important and at the same time make class assignments for future datapoints). You could fit a logistic regression model to predict whether an individual is a member of group A or of group B, using the number of purchases of each item as features. (I'm assuming you have a wealth of individual-level data here). You can then calculate an odds ratio for each feature (i.e. how much each additional purchase of item X increases the odds of being Group B) and a 95% CI on that odds ratio. This functionality is standard in most software packages you can use to do logistic regression (e.g. in R or in statsmodels or scikit-learn in Python, see this [StackOverflow for example](https://stackoverflow.com/questions/37647396/statsmodels-logistic-regression-odds-ratio)). This therefore simultaneously answers the question posed by your end users (which features are most important) and constructs a predictive model you can use on new data. Since your dataset is imbalanced (significantly more class A than class B datapoints), you'll need to use class weighting. You might also need to use L1 (promotes sparsity) or L2 (promotes small weight values) regularization. L1 regularization can be particularly useful if you want a sparse solution (i.e. want most of the weights to be zero so that only a small subset of the features are used, which I think is what you are after). A regularization technique that may be especially useful for your case is [elastic net](https://en.wikipedia.org/wiki/Elastic_net_regularization), which is a combination of L1 and L2 regularization. I would do some experiments (e.g. [5x cross validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))) on a small subset of your training data to see if regularization is beneficial and if so what hyperparameter setting you need. Alternatively, you might want to know which items offer the best differentiation between categories and then use only those as inputs to your logistic regression model, in other words, do feature selection before fitting. In this case, you could use a [chi-square test](https://en.wikipedia.org/wiki/Chi-squared_test) at the level of each item (i.e. how many in group A purchased this item or did not and how many in group B purchased this item or did not) or [mutual information](https://en.wikipedia.org/wiki/Mutual_information) to determine which items provide the best distinction between groups. Both of these are implemented as [feature selection methods in the scikit-learn package](https://scikit-learn.org/stable/modules/feature_selection.html) in Python, and statsmodels of course [has the chi-square test](https://www.statsmodels.org/stable/generated/statsmodels.stats.proportion.proportions_chisquare.html). (I guarantee all this can be done in R as well, although I use R less frequently and am therefore less familiar with all of the useful packages). I would be aware of course that if using the chi-square test, be careful about how you interpret the p-value since you are of course performing many statistical tests (since you have thousands of items) and therefore if you decide to use a p-value cutoff as some measure of significance, if you do not perform a [multiple-test correction](https://en.wikipedia.org/wiki/Multiple_comparisons_problem) you have an increased chance of false positives. 
Also, if you are running cross-validation experiments when fitting your model, or if you assess your model on some validation or test set, you should perform the feature selection on the training data only to avoid over-estimating the performance of your model. There are other ways to model this data, but I think these are probably the simplest given what I think you're trying to do. EDIT: Another possible problem you may encounter involves highly correlated predictors. It is possible that some of the items you're tracking are highly correlated, in other words, people who buy tofu nearly always buy Impossible Burgers and almost never buy chicken, etc. There are a variety of ways you can deal with this. One approach is to do [principal component analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) if the data is number of items purchased (if it is instead one-hot encoded yes or no purchase, you can use [MCA instead](https://pypi.org/project/mca/), although it sounds like this is number of items purchased); there are a lot of tutorials on PCA online, it's implemented in scikit-learn and any other common statistical package. In a nutshell, PCA uses the eigenvectors of the covariance matrix as a new basis set for the data. The share of the variance explained by a given principal component (eigenvector) is its associated eigenvalue divided by the sum of all the eigenvalues; so, the principal components can be sorted by eigenvalue and we can use the new features associated with the n largest eigenvalues as input to our model while discarding the rest. This will eliminate the multicollinearity problem (if present), but it's not ideal for your application, because now the odds ratios from logistic regression will be odds ratios for the principal components, each of which is a linear combination of the original items. This will somewhat complicate your analysis when you're trying to explain your insights to stakeholders. (Although maybe you can find a way to use this to give them an explanation that's sufficient for their needs). So, I think what I would suggest instead as a place to start is doing feature selection, e.g. using mutual information, which should give you a greatly reduced number of features. At that point, you can check for correlations among those features (e.g. using the [Spearman's-r correlation coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient) if the data is the number of an item purchased by a customer). In any case where multiple items are very strongly correlated, you can keep one and eliminate the others as inputs to your logistic regression model. (This doesn't completely eliminate the possibility of multicollinearity but should significantly reduce the likelihood that it is a problem.) This way, when providing an interpretation to stakeholders, you can say "this is the odds ratio for tofu, we find that purchases of impossible meat are strongly correlated with tofu, i.e. customers who purchase the one nearly always purchase the other". Also, elastic net regularization (combined L1 / L2) is likely to be especially helpful if you have multiple correlated features. L1 regularization alone will help with this but it will tend to keep one of the correlated features and drop the others, whereas elastic net can avoid this problem. Again some experiments on a subset of your data might be helpful here.
null
CC BY-SA 4.0
null
2023-05-21T22:25:28.850
2023-05-23T00:41:42.297
2023-05-23T00:41:42.297
250956
250956
null
616519
1
null
null
0
47
I have a variable with range (0;1). I want to do a transformation so that later I can use this variable as a dependent variable in my multiple linear model. For example, can I use a logit transformation? And if I do, how do I interpret the linear model coefficients under that transformation? I want to transform because later I will use this variable as a time series object in an ARIMA model, and if I use the original variable I get a 95% confidence interval that extends to negative values, even though I know this variable always lies between 0 and 1 because it is a frequency. I am using R software. I am not sure, but I have heard something about "scorse space", and I have no idea what it is. The reason WHY I want to use linear regression is that later I want to assess how the independent variables affect the response Y, and I want to do that quantitatively: not just the sign, but also the coefficient value. Later I divide the response Y into two dependent variables Y_1 and Y_2 and see which one is affected more by the independent variables (X1, X2). You can provide code with any random values from 0 to 1. Thank you for any ideas and thoughts. My data is in the range (0;1), but the interval is not closed: the minimum value is above 0 and the maximum is below 1.
Variable (frequency) transformation in R from (0;1) range
CC BY-SA 4.0
null
2023-05-21T22:28:17.693
2023-05-28T23:20:57.557
2023-05-28T23:20:57.557
11887
388508
[ "r", "data-transformation" ]
616521
1
null
null
1
13
There is an interesting post about the connection between lagged exogenous variables and autoregressive time series models: [Forecasting - Lags vs. AR terms for Exogenous Variables](https://stats.stackexchange.com/questions/462149/forecasting-lags-vs-ar-terms-for-exogenous-variables) Consequently, by using an ARIMA model (I assume this holds not only for AR models) that includes a non-lagged exogenous variable, you are able to capture the decaying effects of all lagged versions. That sounds quite nice in my opinion. My question now is whether this also holds for a regression with ARIMA errors. I would say no, because in those models there is only an autoregressive part for the error term and none for the dependent time series variable that is modeled with the help of exogenous variables. Thank you in advance.
Interpreting lagged exogenous variables in ARMAX and regression with ARMA errors
CC BY-SA 4.0
null
2023-05-21T23:08:37.063
2023-05-22T11:08:18.580
2023-05-22T11:08:18.580
53690
386258
[ "time-series", "forecasting", "arima", "interpretation", "lags" ]
616522
1
null
null
1
63
I'm looking for the optimal time at which a process should be cancelled before it results in financial losses. Say $M_n = X_n Y_n - c(n)$ for $n = 1, \dots, 12$, the number of hours the process runs, where $c(n)$ is the cost of the process in the $n$-th hour. $c(n)$ is deterministic in nature, as opposed to $X_n$, a positive real number, and $Y_n$, a count random variable. I have no background in time series analysis, but maybe I could try to apply a linear regression model with time as the independent variable and see on average where, and if, $M_n$ becomes equal to zero. What do you think? Maybe I should treat time as continuous and analyse $M_t$ as a Poisson process?
Average time in which a product random variable becomes zero
CC BY-SA 4.0
null
2023-05-21T23:19:09.437
2023-05-22T00:13:18.627
2023-05-22T00:13:18.627
388424
388424
[ "regression", "loss-functions", "poisson-process", "discrete-time", "optimal-stopping" ]
616526
2
null
616498
0
null
In this context, it appears you are doing two separate one-way ANOVAs (each with a single factor with three levels). Thus, the standard/conventional protocol would be to do the post hoc adjustment within each triple of pairwise comparisons (if the corresponding omnibus ANOVA clears the significance threshold). One additional note: "adjusting all at once" is not actually the best way to describe this process for some protocols (like the Holm-Bonferroni procedure, which adjusts successive p-values based on their ordered magnitude). While I'm happy to share more on this, I don't believe this is the heart of your question here.
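With the p-values from the table in the question, and adjusting within each outcome's set of three pairwise comparisons, a base-R sketch of that protocol:

```r
# Post hoc p-values for the three pairwise comparisons within each outcome
p_cholesterol <- c(A1_A2 = 0.10, A1_A3 = 0.03, A2_A3 = 0.02)
p.adjust(p_cholesterol, method = "holm")   # adjust within the cholesterol family

p_height <- c(A1_A2 = 0.01, A1_A3 = 0.03, A2_A3 = 0.10)
p.adjust(p_height, method = "holm")        # adjust within the height family
```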
null
CC BY-SA 4.0
null
2023-05-22T00:59:21.280
2023-05-22T00:59:21.280
null
null
199063
null
616527
2
null
584522
0
null
For a meta-analysis of this sort, the idea is to combine the effect size information from your disparate samples into one measure for the effect size. The conventional protocol here would be to estimate the confidence interval for your effect. This would be done separately for each measure. While not ideal, you could then observe how near or far apart your confidence intervals are from each other. Also, as a word of advice, be sure to carefully check the order of subtraction used to calculate the effect sizes (as it is not uncommon to see effect sizes always reported as positive values...where the direction of the difference is meant to be inferred from the context of the problem or journal article).
null
CC BY-SA 4.0
null
2023-05-22T01:05:52.097
2023-05-22T01:05:52.097
null
null
199063
null
616528
2
null
616501
20
null
Yes, for the content of your question, and no, for the title, in general. Yes: Your $a_1,\dots, a_n$ are just some constant numbers. Then the independence of $X_1,\dots, X_n$ implies that the $a_iX_i$ are also independent. In fact, for any functions $g_1,\dots, g_n$ you would find that $g_1(X_1)$, $g_2(X_2),\dots, g_n(X_n)$ are independent. From independence it follows that $E\prod g_i(X_i) = \prod E g_i(X_i)$. Your calculations are correct. No: Now look at $Y = \sum_{i=1}^n a_iX_i$ and $Z = \sum_{i=1}^n b_iX_i$. The linear combinations $Y$ and $Z$ of the $X_i$ are in general not independent. They are independent when $b_i=0$ for all $i$ where $a_i\neq 0$, and $a_j=0$ for all $j$ with $b_j\neq 0$. The simplest case where this is not fulfilled is $a_i=b_i$ for all $i$: then $Y=Z$. ###### Normally distributed variables $X_i$ When the $X_i$ are normally distributed, the condition "$a_i\neq 0 \implies b_i=0$" can be relaxed. In that case the linear combinations $Y$ and $Z$ are independent whenever $\sum a_ib_i = 0$, i.e., when the vectors $a=(a_1,\dots, a_n)$ and $b=(b_1,\dots,b_n)$ are orthogonal. This is a fact that contributes to the popularity of the normal distribution for modelling. One of the consequences of this fact is that the estimates $\bar x$ of the mean and $s^2$ of the variance of normally distributed r.v.s are independent :-)
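A small simulation sketch of the normal-case statement: orthogonal coefficient vectors give full independence when the $X_i$ are normal, but in general only uncorrelatedness.

```r
set.seed(1)
X <- matrix(rnorm(4 * 1e5), ncol = 4)       # iid standard normal columns
a <- c(1, 1, 1, 1); b <- c(1, -1, 1, -1)    # orthogonal: sum(a * b) == 0
Y <- X %*% a; Z <- X %*% b
cor(Y, Z)            # ~ 0
cor(Y^2, Z^2)        # also ~ 0, consistent with full independence

# Non-normal counterexample: centered exponentials instead of normals
U  <- matrix(rexp(4 * 1e5) - 1, ncol = 4)
Y2 <- U %*% a; Z2 <- U %*% b
cor(Y2, Z2)          # ~ 0 (orthogonality still removes the correlation)
cor(Y2^2, Z2^2)      # clearly non-zero, revealing the remaining dependence
```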
null
CC BY-SA 4.0
null
2023-05-22T01:14:25.037
2023-05-23T10:52:24.037
2023-05-23T10:52:24.037
237561
237561
null
616529
1
null
null
-1
22
"Roughly equal" is ambiguous. My professor told me to use the cut off ratio of 1.5 to determine if my two sample sizes are roughly equal (i.e., n1 divided by n2) but I cannot find the citation. Can someone please tell who stated this? thanks in advance.
What does "roughly equal sample sizes" mean?
CC BY-SA 4.0
null
2023-05-22T01:32:44.630
2023-05-22T02:04:56.843
null
null
388465
[ "sample-size" ]
616530
2
null
616501
8
null
If $X_1,...,X_n$ are mutually independent then $a_1 X_1,...,a_n X_n$ are also mutually independent (i.e., using scalar multiples does not get rid of independence). However, the quantity $Y_n = \sum a_i X_i$ is typically going to be dependent on $X_1,...,X_n$. This means that you can indeed express the MGF of $Y_n$ the way you write it in your post, but if you were to look at the relationship between $Y_n$ and any of the $X_i$ values, you would have to deal with the dependence between these quantities.
null
CC BY-SA 4.0
null
2023-05-22T02:03:49.707
2023-05-22T02:03:49.707
null
null
173082
null
616531
2
null
616529
3
null
It’s just a colloquial expression. There’s no mathematical rigor to be found here.
null
CC BY-SA 4.0
null
2023-05-22T02:04:56.843
2023-05-22T02:04:56.843
null
null
22311
null
616532
1
null
null
1
17
I'm building a model using the `auto.arima` function in R, and it gives the following output: ``` arima(x = revtrend, order = c(2, 1, 2)) Coefficients: ar1 ar2 ma1 ma2 1.1894 -0.1975 0.4387 -0.5613 ``` Can someone help me write this as an actual mathematical model? I believe I'm getting a bit lost in the notation. Thank you!
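For reference, and assuming the usual sign convention of R's `arima`/`auto.arima` output (MA coefficients enter with plus signs), one way to write this ARIMA(2,1,2) fit in backshift form is

$$(1 - 1.1894B + 0.1975B^{2})(1 - B)\,y_t = (1 + 0.4387B - 0.5613B^{2})\,\varepsilon_t,$$

or, writing $w_t = y_t - y_{t-1}$ for the differenced series,

$$w_t = 1.1894\,w_{t-1} - 0.1975\,w_{t-2} + \varepsilon_t + 0.4387\,\varepsilon_{t-1} - 0.5613\,\varepsilon_{t-2},$$

where $\varepsilon_t$ is white noise with the variance reported by the fit.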
How to convert R output from auto.arima function into mathematical model?
CC BY-SA 4.0
null
2023-05-22T02:08:50.390
2023-05-23T04:00:15.100
2023-05-22T10:55:41.437
53690
388516
[ "r", "time-series", "arima" ]
616533
1
616534
null
0
39
Consider a random variable $V$ with variance $\sigma_V^2$. Since the covariance between a random variable and a constant is zero, I think that, if $\sigma_V^2=0$, the covariance between $V$ and another random variable, say $U$, should be zero. That is, $\sigma_V^2=0 \implies \sigma_{VU}=0$ for any random variable $U$. I am fairly sure, but I want to check this once more.
Does zero variance imply zero covariance?
CC BY-SA 4.0
null
2023-05-22T02:16:50.343
2023-05-22T02:51:45.170
null
null
375224
[ "variance", "random-variable", "covariance" ]
616534
2
null
616533
2
null
Yes, this is the consequence of the [Cauchy-Schwarz inequality](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality#Probability_theory): \begin{align} \operatorname{Cov}(U, V)^2 \leq \operatorname{Var}(U)\operatorname{Var}(V). \end{align}
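Spelling out the remaining step: if $\operatorname{Var}(V)=0$, then $$\operatorname{Cov}(U, V)^2 \leq \operatorname{Var}(U)\cdot\operatorname{Var}(V) = \operatorname{Var}(U)\cdot 0 = 0,$$ and since a square is non-negative this forces $\operatorname{Cov}(U,V)=0$ for every random variable $U$ with finite variance.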
null
CC BY-SA 4.0
null
2023-05-22T02:34:39.940
2023-05-22T02:51:45.170
2023-05-22T02:51:45.170
20519
20519
null
616535
1
null
null
1
19
[](https://i.stack.imgur.com/3Fst5.png) Sorry my calculus knowledge is extremely rusty - what is the reason that we can flip the bounds from $-\infty\to 0$ to $0\to\infty$ and then also flip the $x$ value to $-x$?
Changing Bounds and Multiply by -1
CC BY-SA 4.0
null
2023-05-22T02:58:32.947
2023-05-22T08:51:26.260
null
null
367762
[ "calculus" ]
616536
1
null
null
1
17
I am trying to find out whether age and sex influence headache during heatwaves. I tried to use ``` mclogit::mclogit(Headache ~ Age+Sex, data=Headache) ``` The levels for Headache are "No", "Yes, inside" and "Yes, outside". I recoded those to 0 for "No", 1 for "Yes, inside" and -1 for "Yes, outside". The error I get is "Error in h(simpleError(msg, call)) : error in evaluating the argument 'object' in selecting a method for function 'summary': object of type 'symbol' is not subsettable" How can I solve this? Thank you!
Error message while trying to apply "mclogit"
CC BY-SA 4.0
null
2023-05-22T03:49:06.220
2023-05-22T04:49:47.537
2023-05-22T04:49:47.537
116195
388523
[ "mixed-model" ]
616538
1
null
null
0
27
I want to know an appropriate way to conduct a post-hoc power analysis for several glmer models built in R using the lme4 package. A reviewer asked for this. What I did: - I used the R package simr which conducts power analyses for mixed-effects models using simulation. However, the resulting post-hoc power was around 98%, which seems inflated/inaccurate. Question: Could you suggest a way to accurately run a post-hoc power analysis for glmer models?
power analysis for glmer mixed models
CC BY-SA 4.0
null
2023-05-22T04:32:22.360
2023-05-22T04:32:22.360
null
null
348111
[ "r", "lme4-nlme", "statistical-power" ]
616539
2
null
616516
5
null
Rather than giving a man a fish, maybe it's better to teach a man to fish: Suppose we don't know if the `qf` gives the quantiles measured from the left or right tail, and we can't figure it out from the documentation. One test we might use is to call the function over a range of probability values and see if the quantiles returned are increasing or decreasing. ``` #Call outputs of the function qf PROBS <- seq(from = 0, to = 1, by = 0.05) qf(PROBS, df1 = 4, df2 = 6) [1] 0.0000000 0.1622552 0.2493921 0.3278960 0.4043197 0.4815638 0.5615268 0.6458107 [9] 0.7360243 0.8339843 0.9419133 1.0626959 1.2002700 1.3602830 1.5512907 1.7871545 [17] 2.0924149 2.5164101 3.1807629 4.5336770 Inf ``` Looking at the output, we see that the quantiles are increasing with respect to the probability values, which tells us that the quantile outputs are with respect to the lower tail area. Sure enough, if we read the [documentation for the qf function](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Fdist.html) we see that there is a logical parameter `lower.tail` that controls this. To round out our checks, let's look at the output when we change the default value of this parameter: ``` #Call outputs of the function qf with lower.tail = FALSE qf(PROBS, df1 = 4, df2 = 6, lower.tail = FALSE) [1] Inf 4.5336770 3.1807629 2.5164101 2.0924149 1.7871545 1.5512907 1.3602830 [9] 1.2002700 1.0626959 0.9419133 0.8339843 0.7360243 0.6458107 0.5615268 0.4815638 [17] 0.4043197 0.3278960 0.2493921 0.1622552 0.0000000 ``` Now we see that the quantiles are decreasing with respect to the probability values, which tells us that the quantile outputs are now with respect to the upper tail area.
null
CC BY-SA 4.0
null
2023-05-22T04:49:18.790
2023-05-22T04:49:18.790
null
null
173082
null
616540
1
null
null
0
7
Suppose three annotators, A, B, and C, label 180 items; in the middle of the labeling process C is replaced with D, and A, B, and D label the remaining 200 items. Can Fleiss' kappa be computed and averaged over the whole set of 380 items as if there were three annotators throughout? How many annotators should the rating matrix contain in order to calculate kappa for all 380 items in this situation?
Calculate Fleiss Kappa with annotator replacement
CC BY-SA 4.0
null
2023-05-22T06:15:38.197
2023-05-22T06:15:38.197
null
null
388529
[ "mathematical-statistics", "agreement-statistics", "cohens-kappa" ]
616541
2
null
534510
0
null
Sorry, it seems that I am about 2 years late to this post; however, I figured I would give my opinion now in case someone runs into the same question later (the same way I did today). The short answer is: option 1 is best, because you risk losing information with option 2. Now the long answer: The first thing that must be noted is that you are doing a mean-removal type of normalization (by subtracting the mean out of your sample). This would likely break any time-dependent trends that you may have, because now the data at any timestep is centered around zero (without regard to any other timestep). For example, let's assume that at your first timestep the mean sensor reading of all your samples is 10, then at the second it is 20, and so on following your example array. When we normalize across each time step, your example array will now be just a zero vector. If instead we normalize across the entire feature, then your new array preserves the rising trend in the data. In the response to the StackExchange post that you linked, the responder explains that you should use the first option in general cases; however, in cases where you may want to use option 2, you need to make sure that you have another feature that contains information about your scale (mean and variance) at that timestep. This is where your human judgement must come to the rescue. What information do you think is valuable to the model? If the rising trend of the time series data is not important, then perhaps you should not be using a sequence model at all. If the variance of the data at each timestep is significantly different, then perhaps you should make this its own separate feature (as seems to be the case for the Heston model in the linked StackExchange post). Feature engineering is often a very case-specific problem. Therefore you need to try to understand your data very well, and understand the different factors that affect its trend. Finally, if you are still unsure, it doesn't hurt to train a model for each option for 25 epochs or so. That should be enough training to see significant differences between each option.
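As a minimal sketch of the two options in R (a toy 5-sample, 4-timestep array following the rising 10-20-30-40 pattern; nothing here is taken from the original poster's actual setup):

```r
# Toy data: 5 samples (rows) x 4 timesteps (columns), all following the same rising trend
X <- matrix(rep(c(10, 20, 30, 40), each = 5), nrow = 5)

# Option 2: mean-removal per timestep (column-wise) -- the shared trend disappears
X_per_step <- scale(X, center = TRUE, scale = FALSE)
X_per_step              # all zeros for this example

# Option 1: mean-removal over the whole feature -- the rising trend is preserved
X_whole <- X - mean(X)
X_whole                 # columns still rise from -15 to +15
```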
null
CC BY-SA 4.0
null
2023-05-22T06:46:37.770
2023-05-22T06:46:37.770
null
null
388530
null
616542
1
null
null
0
13
[](https://i.stack.imgur.com/ek0Xf.png) For the final inequality, could someone explain the intuition behind $M-(N-K)\leq x$? I am struggling to find an intuitive argument for this relationship. For example, if we have $N=20$ balls and $M=15$ are red, then the difference between the number of red balls and the number of balls that remain after choosing $K$ must be at most the number of reds that we chose.
Hypergeometric Simplification Intuition
CC BY-SA 4.0
null
2023-05-22T06:47:32.613
2023-05-22T06:47:32.613
null
null
367762
[ "probability", "mathematical-statistics" ]
616543
2
null
561810
1
null
It's not obvious the usual argument won't go through with two points and no error in $Y$, and the result would be interesting either way. So, let's actually calculate. Two points, say (-1,-1) and (1,1). Each can be moved left or right by 1/2, with equal probability of each direction. There are four possible slopes, given as y difference/x difference - LL: (2/2) - LR: (2/3) - RL: (2/1) - RR: (2/2) That's interesting: the average slope over those four possibilities is 1.16667, which is greater than the true slope of 1. The denominator (the x-difference) averages to 2, but the non-linearity from taking the reciprocal makes the slope averages to more than 2/2 (by Jensen's inequality) -- not less, and not equal to. Is it something special about having just two points? We can try with lots of pairs of points (note: this is lots of pairs, it isn't a sample with equal probability on $\pm 1$) ``` > LOTS<-1000 > x<-rep(c(-1,1), each=LOTS) > z<-x+rbinom(2*LOTS, 1, .5)-1/2 > y<-x > lm(y~x) #check Call: lm(formula = y ~ x) Coefficients: (Intercept) x 2.329e-31 1.000e+00 > lm(y~z) Call: lm(formula = y ~ z) Coefficients: (Intercept) z -0.001197 0.797857 ``` Ok, so with lots of pairs of points we get the expected dilution, so it is something about the number of points. The next step is to try different numbers of pairs: each point on the graph is an average of 1000 replicates. [](https://i.stack.imgur.com/79XrW.png) There is dilution as soon as you have more than one pair, and it settles down pretty fast to the usual value. I think this has something to do with the fixed x values; if $x$ were randomly drawn with 50% probability on $\pm 1$ things would be different. You could then have two $x$ values that were the same and became different only with error, so you'd have a possible horizontal line. However, it's pretty clear that you won't understand the general problem by working with two points this way, because the results are indeed different. This has also been an example of calculations beating handwaving.
null
CC BY-SA 4.0
null
2023-05-22T06:52:25.880
2023-05-22T07:00:46.100
2023-05-22T07:00:46.100
249135
249135
null
616544
1
null
null
1
46
I have a study. I have 150 AI-generated images and 150 human-generated images. I have created a survey where respondents are given 10 images randomized from these 300 images. For each picture, they are asked if they think it's human-generated or AI-generated. I want to see if there is a statistically significant difference in the respondents' ability to distinguish between AI-generated and human-generated memes. How should I go about calculating how many respondents I approximately need for this? Specifically, I wonder if the fact that the respondents are shown only a subset of the entire image set affects the sample size I need. I understand that the size then depends on desired significance level, power level, and effect size.
Sample population for AI Study
CC BY-SA 4.0
null
2023-05-22T07:34:31.853
2023-05-22T07:34:31.853
null
null
350290
[ "hypothesis-testing", "sample-size", "statistical-power" ]
616545
1
null
null
0
10
When calculating the long-run multiplier in a dynamic (ARDL) model (no cointegration was found, so not an ECM), should the coefficients that are not statistically significant also be included?
calculate long run dynamic multiplier
CC BY-SA 4.0
null
2023-05-22T07:54:43.577
2023-05-22T07:54:43.577
null
null
383188
[ "dynamic-regression", "ardl" ]
616546
1
616559
null
1
15
I'm not able to find any website, textbook or scientific paper with a detailed derivation of the multiplicative and additive Holt-Winters triple exponential smoothing forecasting variance... Any help/reference would be greatly appreciated.
Detailed derivation of multiplicative and additive Holt-Winters triple exponential smoothing forecasting variance
CC BY-SA 4.0
null
2023-05-22T08:13:19.050
2023-05-22T11:11:07.247
2023-05-22T08:32:56.740
1352
77096
[ "mathematical-statistics", "forecasting", "references", "econometrics", "exponential-smoothing" ]
616547
1
616611
null
3
29
Hi, I would like to match a group of treated patients with an untreated group. I have about a million patients in the treatment group and ten times that in the control group. Conventional matching methods and tools can't do this. I'm thinking of methods such as sparse matrix matching. I have seen packages that allow matching on larger databases, such as `bigmatch` and `rcbalance`, but I didn't find enough documentation on how to implement them. If those are not workable, can someone suggest methods to handle one million treated vs. 10 million controls? As a solution, I thought of making several strata based on sex and age and then matching on comorbidity score and BMI within them. Is there a problem with this approach if it allows me to achieve balance? Or would writing my own matching algorithm allow me to match this large number of patients?
What are the possible solutions to do matching on very large dataset
CC BY-SA 4.0
null
2023-05-22T08:29:31.997
2023-05-22T19:49:50.153
2023-05-22T08:35:19.207
269691
269691
[ "r", "matching" ]
616548
2
null
464387
1
null
[@PAF's answer](https://stats.stackexchange.com/a/465884/388535) is good, but I would argue that the ellipse's semiaxes correspond to the square roots of the eigenvalues. Indeed, the contours of a Gaussian distribution with zero mean can be defined by the equation $x^\top \Sigma^{-1} x = \alpha,\, \alpha \in \mathbb{R}$. Let us pick $\alpha=1$. As @PAF pointed out, we can decompose $\Sigma=R\Lambda R^\top\Rightarrow \Sigma^{-1}=R\Lambda^{-1} R^\top$, where $R$ is a rotation matrix. Coordinates in the rotated system $\tilde{x}$ relate to the original coordinates as $x=R\tilde{x}$, and the equation $x^\top \Sigma^{-1} x = 1$ in the new coordinates reads as $$ \tilde{x}^\top \Lambda^{-1} \tilde{x} = \frac{\tilde{x}_1^2}{\lambda_1}+\frac{\tilde{x}_2^2}{\lambda_2} = 1. $$ Compare that to the equation of an ellipse with semiaxes $a$ and $b$, $\frac{\tilde{x}_1^2}{a^2}+\frac{\tilde{x}_2^2}{b^2} = 1$, and we conclude $a=\sqrt{\lambda_1},\; b=\sqrt{\lambda_2}$. An additional observation is that the eigenvectors $\nu_1, \nu_2$ of $\Sigma$ (i.e., the columns of $R$) will be the basis vectors of the rotated system. [](https://i.stack.imgur.com/My52i.png)
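A small numerical sketch of this in R (the covariance matrix is just an arbitrary example): the semiaxis lengths are the square roots of the eigenvalues of $\Sigma$, the eigenvectors give their directions, and the traced-out curve indeed lies on the contour $x^\top\Sigma^{-1}x=1$.

```r
Sigma <- matrix(c(3, 1,
                  1, 2), nrow = 2)         # example covariance matrix
e <- eigen(Sigma)

semi_axes <- sqrt(e$values)                # a = sqrt(lambda_1), b = sqrt(lambda_2)
R <- e$vectors                             # columns = directions of the semiaxes

# trace out the contour x' Sigma^{-1} x = 1 and check it really satisfies the equation
t <- seq(0, 2 * pi, length.out = 200)
ellipse <- t(R %*% rbind(semi_axes[1] * cos(t), semi_axes[2] * sin(t)))
range(rowSums((ellipse %*% solve(Sigma)) * ellipse))   # all values equal 1
```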
null
CC BY-SA 4.0
null
2023-05-22T08:31:26.900
2023-05-22T08:31:26.900
null
null
388535
null
616549
2
null
616501
1
null
I appreciate the answers here. If we assume that the random variables $X_1,X_2,\ldots X_n$ are independent, then the CDF $F_{X_1,X_2,\ldots X_n}(x_1,x_2\ldots x_n)$ factors into $F_{X_1}(x_1)F_{X_2}(x_2)\ldots F_{X_n}(x_n)$. The CDF of $Y_n=\sum_{j=1}^{n}a_jX_j$ is nothing but $F_{Y_n}(y_n)=F_{a_1X_1,a_2X_2,\ldots a_nX_n}(a_1x_1,a_2x_2,\ldots a_nx_n)=\mathbb{P}(a_1X_1\leq c_1,a_2X_2\leq c_2,\ldots a_nX_n\leq c_n)=\mathbb{P}(X_1\leq \frac{c_1}{a_1},X_2\leq \frac{c_2}{a_2},\ldots X_n\leq \frac{c_n}{a_n})=F_{X_1}(\frac{c_1}{a_1})F_{X_2}(\frac{c_2}{a_2})\ldots F_{X_n}(\frac{c_n}{a_n})$ by our independence assumption (and assuming each $a_j > 0$, so that the inequalities keep their direction). So linear combinations of random variables are random variables as well.
null
CC BY-SA 4.0
null
2023-05-22T08:34:31.163
2023-05-22T08:34:31.163
null
null
386534
null
616550
1
616558
null
0
24
I am working on a statsmodels VAR model to forecast some values and want to analyze the created model. In the examples and in some books I read about calculating the autocorrelation of the residuals to see whether the assumptions made are valid or if information is missing. I used the `plot_acorr` function of the `VARResults` class but noticed that it produces a $k\times k$ plot of autocorrelations given $k$ variables. Since autocorrelation is the correlation of a signal with lagged values of itself, I would assume that only $k$ autocorrelations are possible. What exactly are the other autocorrelation graphs?
Statsmodels VAR plot_acorr() amount of plots
CC BY-SA 4.0
null
2023-05-22T08:41:04.223
2023-05-22T10:46:26.740
2023-05-22T10:46:26.740
53690
387556
[ "autocorrelation", "vector-autoregression", "statsmodels", "diagnostic" ]
616551
1
null
null
1
65
I have used three questionnaires in a study that all measure musical training. Each of these three questionnaires (MT1/MT2/MT3) consists of various Likert scales that are averaged to calculate the score of the questionnaire. Now I am interested in how musical training is correlated with another variable S (sight-reading ability) in three different conditions. In general my linear mixed model would look like the following, if I had only one variable for musical training: S ~ cond * MusicalTraining + (1|participant) The scores of the questionnaires are, not surprisingly, correlated: ``` [,1] [,2] [,3] [1,] 1.0000000 0.3615148 0.7092172 [2,] 0.3615148 1.0000000 0.5723699 [3,] 0.7092172 0.5723699 1.0000000 ``` EDIT Moreover, when calculating the reliability across all items from the three questionnaires (MT1/MT2/MT3) assessing musical training, they show a Cronbach's alpha of alpha = .86, 95% CI [.8, .91]. EDIT END It seems to me to be too complicated to use all three variables for musical training in the regression, and I am also not too sure how to interpret the results of such a model, e.g. S ~ cond * MT1 * MT2 * MT3 + (1|participant). I tried to run a PCA and use the first principal component in the regression. However, I think a PCA is difficult to interpret if then used in a regression. Is there any other way to deal with this issue? Could I just average the different measurements of musical training for each participant and use them as a single variable?
Multiple predictors that measure the same concept in regression
CC BY-SA 4.0
null
2023-05-22T08:42:35.107
2023-05-23T12:41:33.090
2023-05-23T12:41:33.090
309425
309425
[ "r", "regression", "mixed-model", "multivariate-analysis", "linear-model" ]
616552
2
null
616535
0
null
It's easier to see if you use another variable for the substitution, say $y=-x$. In the integral $\int_{-\infty}^0 e^{\epsilon x} f_X(x)dx$, you need to: - replace $x$ with $-y$ in the integrand; - change the bounds from $(-\infty,0)$ to $(\infty, 0)$, because $y\rightarrow \infty$ when $x \rightarrow -\infty$, and $y=0$ when $x=0$; - set $dx=-dy$, because $\frac{dy}{dx}=-1$. Putting everything together gives $$ \int_{-\infty}^0 e^{\epsilon x} f_X(x)dx = -\int_{\infty}^0 e^{-\epsilon y} f_X(-y)dy = \int_0^{\infty} e^{-\epsilon y} f_X(-y)dy $$ Also, the very last integral in your expression should be $\int_{-\infty}^\infty e^{-\epsilon x}f_X(-x)dx$.
null
CC BY-SA 4.0
null
2023-05-22T08:51:26.260
2023-05-22T08:51:26.260
null
null
238285
null
616553
2
null
616494
0
null
So far so good. To obtain an expression for the unconditional variance, you need to work out $\mathbb{E}(\epsilon_t^2) =\mathbb{E}(h_t)$. Call this $\sigma^2_t$. Substituting in $h_t$ as you suggested gives $$ \sigma^2_t = \mathbb{E}(\omega + \alpha \epsilon_{t-1}^2 + \beta h_{t-1}) = \omega + (\alpha + \beta) \sigma^2_{t-1} $$ Now substitute in $\sigma^2_{t-1}$. Can you see what's going to happen?
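For reference, this is where the repeated substitution leads (under the usual condition $\alpha+\beta<1$): $$\sigma^2_t = \omega\sum_{k=0}^{m-1}(\alpha+\beta)^k + (\alpha+\beta)^m\,\sigma^2_{t-m},$$ and since the tail term vanishes as $m\to\infty$, the unconditional variance settles at $$\sigma^2 = \frac{\omega}{1-\alpha-\beta},$$ which is the same value obtained by setting $\sigma^2_t=\sigma^2_{t-1}=\sigma^2$ in the recursion and solving.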
null
CC BY-SA 4.0
null
2023-05-22T09:12:15.280
2023-05-22T09:12:15.280
null
null
238285
null
616554
1
null
null
0
14
I want to investigate whether financial news has an influence on the volatility prediction of asset returns (daily data) when included in the variance model/mean model. I have fit GARCH/EGARCH/GJR-GARCH(1,1) models with 3 distributions (normal, Student's t and GED) for 7 individual assets, once without exogenous variables and once including exogenous variables (a) in the mean equation and (b) in the variance equation. Mean: [](https://i.stack.imgur.com/0I2hk.png) [](https://i.stack.imgur.com/q5lZN.png) Variance: [](https://i.stack.imgur.com/KJgQL.png) [](https://i.stack.imgur.com/ynGpB.png) The exogenous variables are positive news (amount of news multiplied by a sentiment score), negative news, and the lagged total amount of news. Including them in the mean is most of the time insignificant; the external regressors in the conditional variance models are most of the time insignificant; the regressors in the mean models are sometimes significant. [](https://i.stack.imgur.com/OAvYl.png) Does including the external regressors in the mean model turn the constant mean into a conditional mean? Second, I want to know whether including the external regressors actually makes the volatility forecast better. For that, I would compare the MSE of the out-of-sample forecast with what? Because, obviously, there is no observed conditional variance in the real data, so I am not sure against what the MSE is actually computed (see code). Do I only compare the estimates of, e.g., the GARCH(1,1) including exogenous variables in the mean model with the forecasted estimates of the same model? (That would imply that I need to have a very well fitted model, otherwise this comparison doesn't make sense.) My MSE assumption: [](https://i.stack.imgur.com/X6yxz.png) [](https://i.stack.imgur.com/LZlYZ.png) [](https://i.stack.imgur.com/4bVAT.png) I use the rugarch package and RStudio. Thank you!
How to compare the performance of a volatility forecast like GARCH (1,1) with exogenous variables (MSE?)
CC BY-SA 4.0
null
2023-05-22T09:30:53.633
2023-05-22T11:17:30.907
2023-05-22T11:17:30.907
53690
388543
[ "model-evaluation", "garch", "model-comparison", "mse", "volatility" ]
616555
1
null
null
2
16
I am trying to test a hypothesis that companies with more heterogeneous boards are more compliant and less likely to attempt fraud than companies with homogeneous boards. So in effect I have a dataset containing hundreds of thousands of companies and details about the people behind each and every one of them (such as age, sex, income level, etc.). Now I need to find a way to quantify similarity. The one approach I have tried is to calculate the cosine distance between each pair of board members within a single company and then take the average of these distances. I am, however, not particularly satisfied with this approach. I have tried to scour the internet for different measures for this kind of problem but haven't come up with any. Clustering indexes don't seem to apply here. Any idea on what to try? I would need the measure to be standardized to a certain range and preferably to be as robust as possible against differing board sizes.
Measuring group homogeneity
CC BY-SA 4.0
null
2023-05-22T10:17:35.713
2023-05-22T10:17:35.713
null
null
348461
[ "variance" ]
616556
1
616560
null
0
22
The section "Bonferroni Comparisons" in the book Applied Statistics for Engineers and Scientists, Petruccelli at el. (1999), the authors wrote: > Bonferroni intervals are ideal for making a small number of pre-specified comparisons (i.e., comparisons decided on before looking at the data). I don't understand how the comparisons can be made without looking into the data (= it means data snooping?)? I mean without data how one can even construct the interval?
Pre-specified comparisons procedure: How comparisons can be made without looking at the data
CC BY-SA 4.0
null
2023-05-22T10:26:28.557
2023-05-22T11:25:51.787
null
null
383728
[ "multiple-comparisons", "bonferroni" ]
616557
2
null
616551
2
null
If you can reasonably assume that the three questionnaires measure the same dimension/latent variable representing musical training (MT), you could aggregate the three scores and use a single composite variable representing musical training. Alternatively, you could use the three scores as separate indicators of an MT factor (latent variable) in a structural equation model. Before doing that though it may be a good idea to run item-level factor analyses with all MT items to see how many factors you would get at the item level (i.e., is there a strong single factor/dimension of MT or multiple factors at the item level?)
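To make the two suggestions concrete, here is a hedged R sketch (the data frame `dat` and the column names `MT1`, `MT2`, `MT3` are hypothetical placeholders for the three questionnaire scores):

```r
# Option A: a simple composite of the (standardized) questionnaire scores
dat$MT_composite <- rowMeans(scale(dat[, c("MT1", "MT2", "MT3")]))

# Option B: a single-factor measurement model with the lavaan package
library(lavaan)
model <- ' MT =~ MT1 + MT2 + MT3 '   # one latent musical-training factor
fit <- cfa(model, data = dat)
summary(fit, standardized = TRUE, fit.measures = TRUE)
```
The composite in option A could then replace MusicalTraining in the mixed model, while option B would be embedded in a full structural equation model.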
null
CC BY-SA 4.0
null
2023-05-22T10:30:53.767
2023-05-22T10:30:53.767
null
null
388334
null
616558
2
null
616550
0
null
I do not use Python, but here is what I think may be going on. If the VAR model is indeed a statistically adequate model for the conditional mean of a vector-valued time series, conditioning on its past, then the errors should not only have zero autocorrelation for each individual component time series but also zero lagged cross correlations across the components. That is why you would inspect a $k\times k$ matrix of auto- and lagged cross-correlation functions rather than just the $k$ autocorrelation functions.
null
CC BY-SA 4.0
null
2023-05-22T10:46:15.407
2023-05-22T10:46:15.407
null
null
53690
null
616559
2
null
616546
0
null
It's in my 2008 Springer textbook ([https://robjhyndman.com/expsmooth/](https://robjhyndman.com/expsmooth/)), and in this paper: [https://robjhyndman.com/publications/predint/](https://robjhyndman.com/publications/predint/)
null
CC BY-SA 4.0
null
2023-05-22T11:11:07.247
2023-05-22T11:11:07.247
null
null
159
null
616560
2
null
616556
1
null
Yes, prespecified comparisons can easily be misunderstood. What is meant here should be seen in a situation where you can make many tests with the same data. The most common case is: you have two samples, and many variables measured on them, and you want to test if the samples come from the same population. Then you could make a separate test on each of the variables. For example, you measure 100 variables like height, weight, BMI, age, ... on people from town A and town B and want to check the null hypothesis that it does not matter whether you come from town A or B. Then it is not correct to make 100 tests and say: if one of them is significant, then I conclude that A and B are different. For example, if you set the significance level to 5%, you would on average in such a situation have 5 significant tests, even if A and B are in fact the same population. The Bonferroni correction takes care of that by adjusting the significance level for each of the $m=100$ tests. It requires that you have at least one test with a p-value of at most $\alpha/m$ in order to be allowed to say that A $\neq$ B. In our example, one of the 100 tests would need to reach $p \leq 0.0005$! To avoid needing one test with a very low p-value, it is a good idea to work with fewer tests. Selecting these tests in advance is what is meant by "prespecified comparisons". One should typically look at the scientific question and decide for which of the 100 variables you would expect a difference at all, if the populations of A and B are not the same. In practice, people often look at the data in advance, but this is not correct methodology. Then the comparisons are no longer prespecified.
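In R, the adjustment can be done either on the threshold or on the p-values themselves; a minimal sketch with made-up p-values:

```r
m <- 100
alpha <- 0.05
alpha / m                              # per-test threshold: 0.0005

set.seed(1)
p <- runif(m)                          # hypothetical raw p-values from the 100 tests
any(p <= alpha / m)                    # Bonferroni decision via the adjusted threshold ...
any(p.adjust(p, method = "bonferroni") <= alpha)   # ... or via adjusted p-values
```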
null
CC BY-SA 4.0
null
2023-05-22T11:25:51.787
2023-05-22T11:25:51.787
null
null
237561
null
616561
1
null
null
0
45
I'm calculating the distribution of the sum of the squares of the components of the MLE $\hat{\beta}$ in linear regression with normal errors. We are assuming that $\beta = 0$. The distribution of the MLE is therefore $\hat{\beta} \sim N(0, \sigma^2 (X^TX)^{-1})$ where $X$ is the design matrix of the model and $\sigma^2$ is the variance of the normally distributed errors. I want to show that the distribution of $\frac{||\hat{\beta}||^2}{\sigma^2}$ is $\sum_{i=1}^{n-p}\lambda_i^{-1}W_i$, where the $W_i$ are $\chi_1^2$ random variables, and the $\lambda_i$ are the eigenvalues of $X$ (in this problem we are given that $X$ is full rank and has an eigendecomposition into $X = UDV^T$). I've already calculated up to $\hat{\beta} \sim N(0, \sigma^2(X^TX)^{-1}) = N(0, \sigma^2 VD^{-2}V^T)$ but I don't know how to proceed to find the distribution of $||\hat{\beta}||^2 / \sigma^2$. Could anyone help with this?
Calculating the distribution of the sum of the squares of the predictors in linear regression
CC BY-SA 4.0
null
2023-05-22T11:30:01.683
2023-05-22T15:57:54.493
null
null
331303
[ "regression", "distributions", "linear-model", "linear", "multivariate-normal-distribution" ]
616562
1
null
null
1
4
I am performing a study in which I matched patients with treatment A 1:1 to patients with treatment B. Patients were then contacted for follow-up; since not all patients responded, the sample sizes are not the same (34 in group A and 36 in group B). I planned to use paired t-tests, McNemar tests, and Wilcoxon signed rank tests to compare the characteristics and outcomes between the two groups. However, since the sample sizes are not equal, I cannot use paired tests. Which tests should I use? Is it acceptable to use unpaired tests here, or is there another solution?
Which test to use in a matched control study with unequal sample sizes?
CC BY-SA 4.0
null
2023-05-22T11:37:16.113
2023-05-22T11:37:16.113
null
null
388549
[ "paired-data" ]
616563
1
null
null
0
10
I'm running the following mixed model to explore the relationship between `Condition` (3 levels; `Predicted`, `Plausible`, `Unpred/Plaus`) and `HighLow` (2 levels; `High`, `Low`) on `RT` (reaction time): ``` RT_lme<- lmer(RT ~ Condition*HighLow+(Condition|Pt_ID) + (Condition|SentNumb), data = testing_data.df,control=lmerControl(optimizer="bobyqa")) ``` I am using dummy coding for `HighLow` and sum coding for `Condition` to explore the following: Predicted,High-Implausible,High Predicted,High- Plaus/unpred,high Low-High (Implausible,high - Predicted,high) - (Implausible,low - Predicted,low) (plaus/unpred,high - Predicted,high) - (plaus/unpred,low - predicted, low) The contrasts for this are ``` high=0 low=1 Predicted 1 1/3 1/3 Implausible 1 -2/3 1/3 Plaus/Unpred 1 1/3 -2/3 ``` However, I am also interested in exploring Predicted,low-Implausible,low Predicted,low- Plaus/unpred,low predicted,high-predicted,low Can I run the same mixed model with different contrasts to achieve this, or is it only appropriate to conduct post-hoc comparisons?
Running the same mixed model with different contrasts to explore different hypotheses
CC BY-SA 4.0
null
2023-05-22T11:41:27.507
2023-05-22T11:46:31.250
2023-05-22T11:46:31.250
379020
379020
[ "r", "mixed-model", "lme4-nlme", "post-hoc", "contrasts" ]
616564
1
null
null
2
26
I have a dataset of weighted polynomials, i.e. each data point is a polynomial (of variable size/degree) together with a weight vector (of fixed size). Each data point has an integer label that ranges from 0 to 200. My question is: which ML algorithms can I use to try to learn the integer label associated with each (poly, weight) pair, considering that len(poly) is variable and len(weight) is fixed? I tried a simple MLP using only the weights as features and achieved an $R^2\approx65\%$. Edit: The (polynomial, weight) pairs are solutions of a combinatorial system. They represent Calabi-Yau manifolds in a certain weighted projective space. It is worth mentioning that the polynomials are defined in 5 variables $(x_0, \dots x_4)$, and the length of the coefficient vectors goes up to ~1000.
How to learn from a dataset of weighted polynomials
CC BY-SA 4.0
null
2023-05-22T11:49:45.940
2023-05-22T18:04:43.893
2023-05-22T18:04:43.893
388550
388550
[ "machine-learning", "dataset", "polynomial" ]
616565
2
null
614383
1
null
This is actually not an error - it is possible for the effective sample size to be larger than the actual sample size. This means that your MCMC samples provide more information about the parameter, than if you had obtained the same number of independent samples directly from the target (posterior) distribution. To get some intuition about what this means: Imagine that we are doing a Bayesian analysis of a problem where the true posterior distribution for some parameter, $\theta$, (unbeknownst to us) is a normal distribution with $\mu=172$ and $\sigma=12$. If an omniscient statistician were to estimate the posterior mean, $\mu$, for that parameter by directly drawing $N=100$ samples from the posterior $N(172, 12)$ distribution and taking the average, then the standard error of the mean for that estimate would be: $$ \textrm{SE} = \frac{\sigma}{\sqrt{N}} = \frac{12}{\sqrt{100}} = 1.2 $$ This means that if the omniscient statistician were to repeat this experiment many times (drawing $100$ random samples from the $N(172,12)$ distribution and each time computing the sample mean), then the resulting values would be distributed around the true value, $\mu=172$, with a standard deviation of approximately $1.2$. If we instead use MCMC to estimate $\mu$ using $N=100$ iterations (post warmup), and learn that $\textrm{ESS}>N$ then this means that our uncertainty about $\mu$ is less than $1.2$: if we were to run the MCMC for 100 iterations multiple times, then the posterior means from each run would be distributed around the true value with a standard deviation that was less than $1.2$. This can happen when consecutive MCMC samples are negatively correlated: ESS is often defined as: $$ \textrm{ESS} = \frac{N}{1 + 2 \sum_{n=1}^{\infty}\rho_n} $$ where $\rho_n$ is the lag-n autocorrelation (i.e., the correlation between the samples and the samples shifted n steps). If there is negative autocorrelation for odd lags then the denominator can be less than 1 leading to the situation where $\textrm{ESS}>N$. --- Stan reference manual: [Effective Sample Size](https://mc-stan.org/docs/2_20/reference-manual/effective-sample-size-section.html) Andrew Gelman blog: [Simple example of anticorrelated samples](https://statmodeling.stat.columbia.edu/2018/01/18/measuring-speed-stan-incorrectly-faster-thought-cases-due-antithetical-sampling/) Wikipedia: [Antithetic variates](https://en.wikipedia.org/wiki/Antithetic_variates#:%7E:text=In%20statistics%2C%20the%20antithetic%20variates,to%20obtain%20an%20accurate%20result.) Aki Vehtari GitHub: [Comparison of MCMC effective sample size estimators](https://avehtari.github.io/rhat_ess/ess_comparison.html)
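As a small numerical illustration in R (a sketch: a negatively autocorrelated AR(1) series stands in for an MCMC chain with antithetic behaviour, and the autocorrelation sum in the ESS formula is truncated at lag 50):

```r
set.seed(1)
N <- 1e4
x <- as.vector(arima.sim(list(ar = -0.5), n = N))   # lag-1 autocorrelation about -0.5

rho <- acf(x, lag.max = 50, plot = FALSE)$acf[-1]   # drop the lag-0 term
ess <- N / (1 + 2 * sum(rho))
ess / N     # roughly 3: each draw is "worth" about three independent draws
```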
null
CC BY-SA 4.0
null
2023-05-22T12:00:40.017
2023-05-22T12:20:20.770
2023-05-22T12:20:20.770
11877
11877
null
616566
1
null
null
0
19
I have survey data for multiple countries, with respondents nested in countries. The survey was conducted in different waves, but not all countries participated in all waves. Thus, there are countries that participated in Wave 1-7, but also countries that only participated in Wave 2, 3, and 4 etc. I now want to visualize the changing value of one question (dummy variable: Yes/No) over time; i.e. compare the average in the first wave of participation to the last wave of participation per country. Given the unequal time periods, does it make sense to standardize the data by using Z-scores and then plot the change (Z-score of average value from first participated wave - Z-score of average value from last participated wave)? Additional info: The data is right skewed. How else would I account for the different time periods, if I want to visualize the change of answers to the question per country? Thanks in advance!
Z-Score to compare data with different time periods?
CC BY-SA 4.0
null
2023-05-22T12:05:27.793
2023-05-22T12:05:27.793
null
null
388554
[ "survey", "standardization", "z-score" ]
616567
1
null
null
0
30
I am interested in measuring the volatility of a variable in the financial statements. Previous studies measured it using the rolling standard deviation of the item I am interested in, divided by average total assets over the last four quarters. I don't understand how to calculate this measure. Also, in some articles they use quarters while in others they use years; my question is how I can choose the right period (quarters or years). Another question: what is meant by the standard deviation of a time series? Could anyone please help? Thanks
standard deviation of the time-series
CC BY-SA 4.0
null
2023-05-22T12:05:55.580
2023-05-22T13:43:36.407
null
null
388553
[ "time-series" ]
616568
2
null
616561
0
null
To be accurate, "$X = UDV^T$" is not the "eigendecomposition", but "singular value decomposition (SVD)" of $X$. And with this SVD of $X$, it must be made clear that the size of $D$ is $n \times p$ (assuming $X$ is $n \times p$), and $U$ and $V$ are order $n$ and order $p$ orthogonal matrix respectively. So the notation "$D^{-2}$" does not make sense (unless you are using the "[Thin SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition#Thin_SVD)" in which $U$ is $n \times p$ and $D$ is $p \times p$ as opposed to the common "[Full SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition#Intuitive_interpretations)" as above). The correct way is expressing $D$ as $D = \begin{bmatrix}\Lambda \\ 0\end{bmatrix}$, where $\Lambda = \operatorname{diag}(\sigma_1, \ldots, \sigma_p)$ is an order $p$ diagonal matrix with $\sigma_1 \geq \cdots \geq \sigma_p > 0$ (which are called singular values of $X$. The relationship between the eigenvalues $\lambda_j$, $1 \leq j \leq p$ of the matrix $X^TX$ and $\sigma_j$ are given by $\lambda_j = \sigma_j^2$). Therefore, the true distribution of $\hat{\beta}$ is \begin{align} \hat{\beta} \sim N_p(0, \sigma^2V\Lambda^{-2}V^T), \end{align} which implies that $\sigma^{-1}\hat{\beta} \overset{d}{=} A\xi$, where $\xi \sim N_p(0, I_{(p)})$ and $A$ satisfies $A^2 = V\Lambda^{-2}V^T$. In addition, since $V$ is an order $p$ orthogonal matrix, $V^T\xi := W \sim N_p(0, I_{(p)})$. It then follows that \begin{align} \sigma^{-2}\|\hat{\beta}\|^2 \overset{d}{=} \xi^TA^2\xi = \xi^TV\Lambda^{-2}V^T\xi = W^T\Lambda^{-2}W = \sum_{j = 1}^p\sigma_j^{-2}W_j^2 = \sum_{j = 1}^p\lambda_j^{-1}W_j^2. \end{align} Therefore, the result you presented is incorrect in that: - There are $p$ summands, not $n - p$ summands. To see this more clearly, just consider the simplest $X = \begin{bmatrix}I_{(p)} \\ 0\end{bmatrix}$, for which case $\hat{\beta} \sim N_p(0, \sigma^2I_{(p)})$. - $\lambda_j$ is the eigenvalue of $X^TX$, not of $X$. In fact, as a rectangular matrix, it is impossible for $X$ to have eigenvalues.
null
CC BY-SA 4.0
null
2023-05-22T12:48:58.063
2023-05-22T14:09:40.480
2023-05-22T14:09:40.480
20519
20519
null
616569
1
null
null
1
31
The data I'm woking with consists of 3 types of data: 1- binary features: those features are either 0 or 1. I have about 6 or 7 columns. 2- cells: the values here range from 0 to 0.8 at max. Here I have 38 columns. 3- genes: genes expression. I've picked 14 genes and added them. The values here are very much different from the rest, as they range from from to 400 and even more. What I'm currently doing, is that I split the data to train and test before running the ML pipeline, and then I scale only the genes features. I scale the test set according to the train set. Like this: ``` gene_cols_train = grep("^gene_", names(train_set)) gene_cols_test = grep("^gene_", names(test_set)) scaled_gene_cols_train = scale(train_set[,gene_cols_train]) scaled_gene_cols_train = round(scaled_gene_cols_train*100)/100 train_set[,gene_cols_train] = scaled_gene_cols_train scaled_gene_cols_test = scale(test_set[, gene_cols_test], center = attr(scaled_gene_cols_train, "scaled:center"), scale = attr(scaled_gene_cols_train, "scaled:scale")) scaled_gene_cols_test = round(scaled_gene_cols_test*100)/100 test_set[, gene_cols_test] = scaled_gene_cols_test ``` My question: is scaling only the genes a good approach? combining different data sources into one is kinda new to me, and I wonder how should I scale it sense the range of values differs so much between them. thank you!
How should I scale data that has been assembled from different data sources?
CC BY-SA 4.0
null
2023-05-22T12:51:08.120
2023-05-27T18:05:23.590
null
null
362803
[ "r", "machine-learning", "normalization", "standardization" ]
616570
1
null
null
2
39
I have a question on possible approaches in modeling and predicting refugee flows. Task My task is to predict the number of refugees for a country origin-destination pair for a given year. For example, given data up to 2016, we have to predict the number of refugees that arrived in Colombia from Venezuela in 2017. Data Summary Aside from historical refugee flows, we are also collecting data on: - historical migration data - LDA topic modeled newspaper text - indices related to conflict and fragility - historical google trends data of relevant search terms (ex. people in Venezuela searching for 'passport' or 'Colombia') - socioeconomic stats about countries. Note that we have both country-specific data (e.x. the historical level of conflict), but also country pair-specific data that can either be directed (e.x. people in Venezuela searching for 'Colombia') or undirected (e.x. whether or not countries share a common language) How to model this/shape the data? One of the main challenges is that the data can be shaped multiple ways, including: - Pair forecasting- What is the number of people going from Syria to Turkey in 2015? The output is a scalar value denoting the number of people from the origin country to destination country at time t. The input is origin country characteristics, destination country characteristics, and characteristics specific to the origin-destination pair/relationship. This is the approach that seems to be common in the literature- predicting each origin-destination pair individually. An implicit assumption in this approach is that each corridor's refugee traffic is independent of other corridors. For example, knowing information about the increased traffic on google of people in Venezuela searching for 'Peru' isn't relevant in predicting the amount of people traveling from Venezuela to Colombia. - Single origin, multi-destination forecasting- What is the number of people leaving South Sudan in 2016 and where are they all going? The output is a vectory representing the number of people from the origin country arriving to each destination country at time t. The input is origin country characteristics, characteristics of all destination countries, and characteristics specific to each origin-destination pair/relationship. This is the approach that makes sense to me, as I think destination is secondary to one's desire to get out of a dangerous place. I have thought about this as a 2 stage model though it doesn't have to be- first predicting simply the total number of refugees leaving a country, and then allocating them to different destination countries. Another consideration is that we are dealing with mixed frequency data- while our y variable is yearly, some of our X's (e.g. google trends) are monthly. This could be aggregated to avoid complexity, but there would be a loss of information. I will try a RandomForest pair-forecasting approach as a baseline, but I am also very interested in modeling the data the second way, where origin country characteristics, characteristics of all destination countries, and characteristics specific to each origin-destination pair/relationship are used to predict the total volume of people and where they are going to go. The challenge is that common regression models aren't designed for this. How would one design a model where country pair-specific data both informs the total origin-country outflow and informs the pair-specific forecast? 
I think modeling the data like this would either require using Graph Neural Networks or designing a neural network architecture specific to this challenge, but I am unsure where to start. Any advice or places to start are greatly appreciated, thank you in advance!
Advice on multi-output regression task: Forecasting Refugee Flows
CC BY-SA 4.0
null
2023-05-22T12:59:16.440
2023-05-24T15:16:35.647
2023-05-22T13:08:03.063
267382
267382
[ "time-series", "multiple-regression", "forecasting", "predictive-models", "graph-neural-network" ]
616571
2
null
614935
4
null
A fast [svd](/questions/tagged/svd) implementation suits your needs. SVD is the "gold-standard" of rank-revealing matrix factorizations (Golub & van Loan, Matrix Computations). The first place to start is to try [scipy.sparse.linalg.svds](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html). If it's not fast enough for your needs, then you'll need to explore other packages & SVD algorithms. Some alternative SVD algorithms are mentioned here [What fast algorithms exist for computing truncated SVD?](https://stats.stackexchange.com/questions/159325/what-fast-algorithms-exist-for-computing-truncated-svd) but you might have to do some digging to turn up a python implementation that suits your needs. I've read that implicitly restarted lanczos bidiagonalization methods can compute SVD for much, much larger matrices than the one you have, such as the Netflix prize dataset, but I haven't found a widely used python implementation. Alternatively, you could simply use your rank-deficient matrix as-is and estimate a penalized regression instead. Options include [ridge-regression](/questions/tagged/ridge-regression), [lasso](/questions/tagged/lasso) and [elastic-net](/questions/tagged/elastic-net).
null
CC BY-SA 4.0
null
2023-05-22T13:04:52.437
2023-05-22T14:30:47.127
2023-05-22T14:30:47.127
22311
22311
null
616572
2
null
72117
0
null
Probability: you are given a beaker full of liquid. There's a 1% chance the liquid is acid, 99% chance it's beer. Fuzzy logic: you are given a beaker full of liquid. The liquid has 99% of the characteristics of beer and 1% of the characteristics of acid. You want beer. Which beaker do you drink? - courtesy of Prof. Jerry Mendel, University of Southern California, retired.
null
CC BY-SA 4.0
null
2023-05-22T13:25:10.650
2023-05-22T13:25:10.650
null
null
388558
null
616575
1
null
null
0
22
I have just finished Chatfield and Xing's The Analysis of Time Series, 7th edition and I'm beginning to play around with some simple analysis. I have this graph representing the grand mean value selections from a bunch of participants over 30 game rounds. [](https://i.stack.imgur.com/RkPis.png) By visual inspection, there looks to be a curious cyclic pattern of higher followed by lower selected values. This pattern makes some sense theoretically (given the field and particular data, though I will not get into it). However, when I run `acf(blastovertime$meanround)`, I get non-significant values: [](https://i.stack.imgur.com/iKSCh.png) I would expect a significant autocorrelation at lag 1... Is my experiment simply underpowered? TYIA
Autocorrelation returning non-significant values despite an apparent pattern
CC BY-SA 4.0
null
2023-05-22T13:38:55.467
2023-05-22T13:38:55.467
null
null
388561
[ "r", "autocorrelation" ]
616576
2
null
616567
0
null
The calculation part depends on the medium you use, and the exact syntax differs across programs and programming languages. Randomness in a phenomenon manifests as variability or dispersion in the observations. There are many statistical objects you can use to model this variability, which come with their own advantages and disadvantages, the standard deviation being a common one of them. The standard deviation measures how spread out your observations are from the mean. Under the independent, identically distributed assumption (all your data points are independent and come from the same data-generating process), the standard deviation of the whole dataset can be quite informative. However, the statistical dispersion of observations in the case of a time series changes over time, depending on the underlying process. You can compute the standard deviation of the total series (what I think you are implying in your question), but most likely it will not be as informative as moving standard deviations. Therefore, it ends up being a modeling choice. If you are interested in monthly variabilities, you can compute the standard deviation in each month; there is no right or wrong way of doing this, it depends on the question in your mind.
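To make the rolling calculation concrete, here is a hedged base-R sketch with made-up quarterly numbers (the series names and values are purely illustrative, not taken from the question):

```r
# Hypothetical quarterly series: the item of interest and total assets
item   <- c(5.2, 6.1, 4.8, 7.0, 6.5, 5.9, 7.3, 6.8)
assets <- c(100, 105, 103, 110, 112, 115, 118, 120)

w <- 4   # rolling window of four quarters
roll_vol <- sapply(w:length(item), function(i) {
  idx <- (i - w + 1):i
  sd(item[idx]) / mean(assets[idx])    # rolling SD scaled by average total assets
})
roll_vol   # one volatility value per quarter once four quarters of history exist
```
Using a window of four quarters versus several years is exactly the modeling choice described above; the code only changes through `w` and the sampling frequency of the inputs.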
null
CC BY-SA 4.0
null
2023-05-22T13:43:36.407
2023-05-22T13:43:36.407
null
null
282477
null
616579
1
null
null
0
5
I have a dataset where there are measurements taken over time. I have no assumptions whatsoever regarding the exact nature of the process, i.e. whether the measurements are iid or belong to a time series. I do know that I can test for autocorrelations (e.g. using Ljung-Box test) or plot autocorrelation plots as a sort of an eyeball test, but as far as I know these statistics are mostly looking at linear relationships. If there are no linear relationships, we take it as independence of consecutive data points, but this is not necessarily true for distributions other than the normal distribution. In general, is the lack of linear relationships between lagged values an indicator of independence in time series? If not, is there a measure (such as mutual information for non-time series datasets) to check serial dependence? Thanks in advance.
How to test whether a given set of data possesses serial dependence or not?
CC BY-SA 4.0
null
2023-05-22T13:59:05.047
2023-05-22T13:59:05.047
null
null
282477
[ "time-series", "iid" ]
616580
2
null
616470
2
null
Another option is to estimate the minimum-width credible interval of the posterior probability of observing none of the three events in a subsequent sample. This can be done quickly using sampling. After observing $k_i$ occurrences of event $i$ out of $n$ samples, with event $i$ occurring independently with probability $1-\theta_i$, the posterior distribution of $\theta_i$ is: $$p(\theta_i)\propto\text{Binomial}(n-k_i;n,\theta_i)\pi(\theta_i)$$ Where $\pi(\theta)$ is the prior distribution of $\theta_i$. Using, e.g., Jeffreys' prior on $\theta_i$, this becomes: $$\theta_i\sim{\text{Beta}(n-k_i+1/2,k_i+1/2)}$$ The probability of observing none of the $i$ events is $$\theta_0=\prod_{i=1}^3\theta_i$$ By sampling $\theta_0$, we can get an estimate for the credible interval of $\theta_0$. It takes a fraction of a second to produce $10^6$ samples in R: ``` k <- c(12, 16, 6) # example number of occurrences n <- 100 # example number of samples m <- length(k) # sample theta0 theta0 <- sort(Rfast::colprods(matrix(rbeta(m*1e6, n - k + 0.5, k + 0.5), m))) # get the intervals that contain 95% of the samples int <- rbind(theta0[1:50001], theta0[950000:1e6]) # find the minimum-width interval int[,which.min(diff(int))] #> [1] 0.6011458 0.7701781 # compare to the equal-tailed interval int[,25001] #> [1] 0.5983814 0.7676191 ```
null
CC BY-SA 4.0
null
2023-05-22T14:00:29.917
2023-05-22T14:00:29.917
null
null
214015
null
616581
1
null
null
0
4
[](https://i.stack.imgur.com/0iCko.png) Screenshot of a paper, from a section about dynamic interpretations. Source: [https://doi.org/10.3200/JECE.36.1.77-92](https://doi.org/10.3200/JECE.36.1.77-92) Would anyone care to explain this to me? Are they stating that the negative coefficient of $x_{t-1}$ does not imply a negative relationship? If so, would it be correct to interpret a positive $x_{t-1}$ effect as being due to the long-run impact of $x$ being greater than the short-run impact?
interpretation coefficient sign dynamic model
CC BY-SA 4.0
null
2023-05-22T14:01:16.217
2023-05-22T14:01:16.217
null
null
383188
[ "autoregressive", "dynamic-regression" ]
616582
1
616621
null
2
64
I recently heard an interesting comment from a gentleman on YouTube and it made sense instantly. To paraphrase, he explained that "fine-tuning" an LM is not necessarily adding knowledge to a model; rather, it is teaching that model to do or suggest things in the style of the fine-tuning data. So then my question is: what if one does not want to change the model too much, but just wants to make a model aware of a new set of data, i.e. a company's internal domain knowledge? Is there a recommended strategy? Edit: My thought is some sort of distillation where you train a smaller (or the same) model against itself, plus the additional data.
How do you add knowledge to LLMs
CC BY-SA 4.0
null
2023-05-22T14:16:17.790
2023-05-22T21:35:01.940
2023-05-22T21:25:23.997
87106
87106
[ "neural-networks", "language-models", "chatgpt" ]
616584
2
null
604021
0
null
The paper [LIMA: Less Is More for Alignment](https://arxiv.org/abs/2305.11206) uploaded a few days ago to arXiv shows that fine-tuning with the standard supervised loss without any reinforcement learning works fine: > Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.
null
CC BY-SA 4.0
null
2023-05-22T14:34:52.950
2023-05-22T14:34:52.950
null
null
12359
null
616585
1
616701
null
0
10
I've built a very well-performing survival model (a Weibull proportional hazards model for interval-censored data, fitted with `IcenReg`) with many covariates, some interaction terms and some splines. The model's coefficients are very hard to interpret, if not impossible; there's a lot of collinearity between the variables too. I would like to show how the model makes decisions, e.g. how to split the data to get good separation of predicted risk. My initial thought was decision trees, and they work to an extent: I get 4 sensible levels out of the tree if I play around with `rpart::rpart.control`, but it only splits according to three variables... What can I do to show how the model makes its predictions?
Feature importance/model exploration in a large survival model
CC BY-SA 4.0
null
2023-05-22T14:39:11.593
2023-05-23T15:02:55.347
null
null
265390
[ "r", "survival", "modeling", "interpretation", "explainable-ai" ]
616586
1
null
null
2
37
Let's say my model is: $y=\beta_0+\beta_1x_1+\beta_2x_2+\beta_3x_3$. Now let's say I know for sure that $\beta_2 = 4$. My teacher said I should create $y' = y-4x_2$ and run ordinary least squares (OLS) on that, and after all estimates are done, add the same value back again to get to the true value of $y$. I asked: why not keep the same $y$ and run OLS with $\beta_2$ fixed at 4 as an input?
Known coefficient in multiple linear model
CC BY-SA 4.0
null
2023-05-22T14:44:14.743
2023-05-23T06:10:19.387
2023-05-23T06:10:19.387
377784
377784
[ "regression", "multiple-regression", "least-squares", "regression-coefficients" ]
616588
2
null
616586
0
null
These are identical. Either way, you are solving the same optimization problem to optimize three parameters: $\beta_0$, $\beta_1$, and $\beta_3$. However, this is a bit of a specialized problem, and software is not necessarily readily available to solve this problem the way such software is available for OLS. Thus, the options are either to write your own specialized optimization code, find software that does it, or do this little subtraction workaround and use standard OLS software.
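For concreteness, a hedged R sketch of the subtraction workaround alongside R's `offset()` mechanism, which is one piece of standard software that handles a fixed coefficient directly (the data frame `dat` and the variable names are hypothetical):

```r
# Both routes constrain the coefficient of x2 to 4 and estimate only b0, b1, b3
fit_a <- lm(I(y - 4 * x2) ~ x1 + x3, data = dat)       # subtract the known term first
fit_b <- lm(y ~ x1 + x3 + offset(4 * x2), data = dat)  # or keep y and use an offset
coef(fit_a)
coef(fit_b)                                            # identical estimates of b0, b1, b3
```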
null
CC BY-SA 4.0
null
2023-05-22T14:53:52.473
2023-05-22T16:09:09.067
2023-05-22T16:09:09.067
247274
247274
null
616589
1
null
null
0
42
## Question Say I have a dataset $D$ with $N$ features that are trying to predict a target $y$. I would like to build a model from $D$ and part of that process is removing correlated columns to reduce redundancy. If $D$ remains constant, would changing the target $y$ ever change the method I use to check for correlation within the dataset $D$? ## Redundancy For an example of what I mean by redundancy see: [https://arxiv.org/abs/1908.05376](https://arxiv.org/abs/1908.05376). I'm not interested in the relevancy part of the paper. ## Example Say I'm using dataset $D$ to train a classification model. As part of preprocessing I check for correlations using method $M$, which could be any type of correlation algorithm, provided $M$ is unsupervised. I choose one column from each correlated group at random. In other words, I select columns in an unsupervised fashion. Should I ever change $M$ if I switch from a classification to a regression model, changing $y$ in the process? ## Pre-empting XY This is intended as a general question, which will lead to a specific question. The content of the specific question will depend on the answer to this question. Therefore, I believe it is not XY.
Is the correlation method used within your dataset problem dependent?
CC BY-SA 4.0
null
2023-05-22T15:14:42.600
2023-05-22T18:48:16.523
2023-05-22T18:48:16.523
363857
363857
[ "correlation", "feature-selection", "dimensionality-reduction", "data-preprocessing" ]
616590
1
null
null
0
15
Exercise 5.14 of [Wainwright](https://www.cambridge.org/core/books/highdimensional-statistics/8A91ECEEC38F46DAB53E9FF8757C7A4E) gives a way to bound the maximum singular value of a Gaussian random matrix using the one-step discretization bound and a Gaussian comparison inequality. Can the Dudley entropy integral be used to obtain such a bound instead? Intuitively I expect it to work, but I have not managed to carry it out: [Wainwright](https://www.cambridge.org/core/books/highdimensional-statistics/8A91ECEEC38F46DAB53E9FF8757C7A4E) contains few worked examples of the Dudley integral, and I found it hard to control the resulting bound.
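For what it's worth, here is a rough sketch of how the Dudley route is usually set up (constants not tracked, details unchecked). Write the largest singular value of an $m \times n$ Gaussian matrix $W$ variationally,
$$
\sigma_{\max}(W) = \sup_{u \in S^{m-1},\, v \in S^{n-1}} \langle u, Wv\rangle, \qquad Z_{u,v} := \langle u, Wv\rangle .
$$
The increments satisfy $\operatorname{sd}(Z_{u,v} - Z_{u',v'}) = \|uv^\top - u'v'^\top\|_F \le \|u-u'\|_2 + \|v-v'\|_2 =: \rho\big((u,v),(u',v')\big)$, and the covering numbers of the index set under $\rho$ obey $N(\varepsilon) \le (1+4/\varepsilon)^{m+n}$, so Dudley's entropy integral gives
$$
\mathbb{E}\,\sigma_{\max}(W) \;\lesssim\; \int_0^{4} \sqrt{(m+n)\,\log\!\left(1+\tfrac{4}{\varepsilon}\right)}\, d\varepsilon \;\lesssim\; \sqrt{m}+\sqrt{n},
$$
which recovers the right order but, unlike the Gaussian-comparison (Slepian/Gordon) argument, not the sharp constant.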
Using Dudley Integral to estimate maximum singular value of Gaussian random matrices
CC BY-SA 4.0
null
2023-05-22T15:15:16.990
2023-05-22T15:15:16.990
null
null
383159
[ "eigenvalues", "high-dimensional", "bounds", "random-matrix" ]
616591
1
616650
null
2
114
I am trying to use linear least squares regression to extract the coefficients of a model. Specifically, I am looking at a model with two independent predictor variables $x_1$ and $x_2$, and an output response variable $y$, with coefficients $\beta_0,\beta_1$ and $\beta_2$. (I believe this is a case of [multiple linear regression](https://en.wikipedia.org/wiki/Linear_regression#Simple_and_multiple_linear_regression).) The model is of the form $$ y_i = \beta_0 + \beta_1x_{i1} + \beta_2 x_{i2} + \varepsilon_i \tag{1} $$ where $i$ denotes the observation number. Or in matrix form with $N$ observations $$ \begin{align} \mathbf{Y} &= \mathbf{A}\boldsymbol{\beta} + \boldsymbol{\varepsilon} \\ \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} &= \begin{pmatrix} 1 & x_{11} & x_{12} \\ 1 & x_{21} & x_{22}\\ \vdots & \vdots & \vdots \\ 1 & x_{N1} & x_{N2} \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2\\ \vdots \\ \varepsilon_N \end{pmatrix} \end{align} \tag{2} $$ I should be able to solve this using the least squares condition $$\boldsymbol{\beta} = (\mathbf{A}^\textrm{T}\mathbf{A})^{-1}\mathbf{A}^\textrm{T}\mathbf{Y}. \tag{3}$$ I made a simple (Matlab) script to see if I could recover the correct beta coefficients on a test case, given below. I used three methods: 1) directly solving Eq.(3) using Matlab's [backslash operator](https://uk.mathworks.com/help/matlab/ref/mldivide.html) 2) the same thing, except neglecting the transpose in Eq.(3), since [it coincides](https://stats.stackexchange.com/a/616336/290032) with the least squares result. 3) solving Eq.(3) using a QR decomposition instead.
```
x1_data = linspace(-1,1,20).'; % First independent variable
x2_data = linspace(-0.3,0.6,20).'; % Second independent variable
beta0 = -0.2; % Intercept
beta1 = 0.4; % coefficient of x1 data
beta2 = 1.2; % coefficient of x2 data
y_data = beta0 + beta1*x1_data + beta2*x2_data; % Create test response data
A = [ ones(length(x1_data),1) x1_data x2_data ]; % Design matrix
Y = y_data;
beta_fit1 = (A.'*A) \ (A.'*Y); % Method #1
beta_fit2 = A \ Y; % Method #2
[Q,R] = qr(A);
beta_fit3 = R \ (Q'*Y); % Method #3
```
which outputs:
```
beta_fit1 = [-254.4 -762.3 1696.0]
beta_fit2 = [-0.02 0.94 0]
beta_fit= [-0.02 0.94 0]
```
None of these match the original input beta coefficients. Can anyone tell me what might be going wrong here? Thanks

---

Edit: an example to show that it works when fitting a 2D parabolic function with no noise (i.e. with deterministic data, as mentioned by @Gregg H):
```
x1_data = linspace(-1,1,20).'; % First predictor variable
x2_data = linspace(-0.3,0.6,20).'; % Second predictor variable
beta0_true = -0.2; % Intercept
beta1_true = 0.4; % coefficient of x1 data
beta2_true = 1.2; % coefficient of x2 data
y_data = beta0_true + beta1_true*x1_data.^2 + beta2_true*x2_data.^2 ; % Create test response data
A = [ ones(length(x1_data),1) x1_data.^2 x2_data.^2 ]; % Design matrix
Y = y_data;
beta_fit2 = A \ Y;
```
which outputs as expected:
```
beta_fit = [-0.2 0.4 1.2]
```
The issue is when trying to add a linear term (such as `beta4_true * x_data1` ), i.e. a tilt to the parabola.
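One diagnostic that may be relevant to this particular test case: $x_2$ is an exact affine function of $x_1$ (both are equally spaced over the same 20 points, so $x_2 = 0.15 + 0.45\,x_1$), which makes the three-column design matrix rank 2, so the individual coefficients are not separately identifiable. A minimal check of that, sketched in R rather than Matlab:

```
# Re-creation of the test design to check its rank (illustrative sketch only).
x1 <- seq(-1, 1, length.out = 20)
x2 <- seq(-0.3, 0.6, length.out = 20)
A  <- cbind(1, x1, x2)
qr(A)$rank                        # 2, not 3: the columns are linearly dependent
all.equal(x2, 0.15 + 0.45 * x1)   # TRUE: x2 is an exact affine function of x1
```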
Why does this multiple linear regression fail to recover the true coefficients?
CC BY-SA 4.0
null
2023-05-22T15:15:41.083
2023-05-23T09:17:12.057
2023-05-23T09:17:12.057
290032
290032
[ "regression", "multiple-regression", "least-squares" ]
616592
2
null
616586
0
null
You could do what you suggest (fix β2 to 4 in the model and estimate the remaining coefficients) in programs for structural equation modeling such as lavaan in R (other programs are Mplus and AMOS). The program will then estimate the remaining coefficients as free parameters using maximum likelihood estimation. The program will also give you a chi-square test that allows you to test the null hypothesis that β2 = 4 in the population.
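A minimal lavaan sketch of this idea (the data frame `dat` with columns `y`, `x1`, `x2`, `x3` is hypothetical): pre-multiplying a predictor by a number fixes its coefficient, and the model chi-square then tests that single restriction.

```
library(lavaan)
model <- '
  y ~ b1*x1 + 4*x2 + b3*x3   # the numeric pre-multiplier fixes beta2 at 4
'
fit <- sem(model, data = dat)       # dat is a hypothetical data frame
summary(fit, fit.measures = TRUE)   # the chi-square (1 df) tests H0: beta2 = 4
```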
null
CC BY-SA 4.0
null
2023-05-22T15:18:47.087
2023-05-22T15:18:47.087
null
null
388334
null
616593
1
null
null
0
47
My colleagues and I have been using a hierarchical mixed model that so far has provided good predictions. As time goes on, however, it's clear that not including time as a factor in the model degrades predictions. For clusters within the data it is reasonable to assume that the distribution of the outcome variable will change over time. In its current, time-oblivious form, the model accounts for these changes fairly slowly, since past (sometimes quite old) observations paint a different picture than more recent observations. How can such a model "put more weight" on more recent observations? I'll briefly sketch out the problem I'm specifically working on, but I hope to keep the question general enough that it can be relevant to others. # Problem background I'm modeling sales events for limited edition products. Each observation in my data set is one such event where a single product is sold, with features describing the event itself (e.g. the region it took place in, the date of the event) and the product (e.g. product group, product model within group, price, etc.).    The model is built to predict the expected consumer demand (i.e. how many people will show interest) for a future event given the above features about the event and product. This demand is observed for historical observations.  There are hierarchical structures in the data that the model intends to capture: - a product group (e.g. footwear) can include various product models. Each product model within the same group shares similarities (use-case, potential materials, price range, etc. ) with other models in its group.  - there are multiple products of the same product model, these share things like a general design theme, product features etc. but each represent a different 'take' on the model by different designers. - the regions where these events take place differ in unobserved social, economic and cultural factors, but likely share some global effects. # Time oblivious model I currently have a hierarchical mixed effects model that includes a hierarchical intercept $L$ with three levels: - $\mathrm{L1}$ is the intercept for observations in different regions ($a$) - $\mathrm{L2}$ is the intercept for observations of different product groups ($b$) per region ($a$) - $\mathrm{L3}$ is the intercept for observations of different product models ($c$) of different product groups ($b$) per region ($a$) The idea behind this structure is the following: it could be that a product model has not been sold in a particular region before, in which case the intercept should be informed by events for products from the same product group in that region. $$ \alpha \sim \mathcal{N}(5, 1)\\ \mathrm{L1}_{a_i} \sim \mathcal{N}(\alpha, 1)\\ \mathrm{L2}_{a_i, b_i} \sim \mathcal{N}(\mathrm{L1}_{a_i}, 1)\\ \mathrm{L3}_{a_i, b_i, c_i} \sim \mathcal{N}(\mathrm{L2}_{a_i, b_i}, 1)\\ \mu_i = \mathrm{L3}_{a_i, b_i, c_i} + \beta_1 \mathbf{X}_{i,1} + \cdots +\beta_k \mathbf{X}_{i,k} \\ \cdots \\ \sigma \sim Exp(1)\\ y_i \sim \mathrm{StudentT}(\mu_i, \sigma, \nu)\\ $$ where - $a_i$ is the region of the i-th event in the data - $b_i$ and $c_i$ are the product group and product model of the i-th observation, respectively - $\mathbf{X}_{i,1}$ to $\mathbf{X}_{i,k}$ are other features related to the i-th observation There are also some varying slope terms in the model that vary by some of the groups mentioned here, but I've excluded them to not make things confusing. As you can see, this model does not include time at all. 
The intercept for a product model $c_3$, for example, is equally informed by an event where a product of this model was sold 4 years ago and by one that happened just yesterday. My experience tells me that this is problematically ignorant, so I want to change that.

# Approaches I considered

- Time as a fixed or varying effect: I could capture time as a continuous variable $t$ (0 for the first observation, 1 for the last historic observation, >1 for future events) and include a fixed or varying effect for it in the linear model for $\mu$ (a rough sketch of this option follows after the list).
- Interaction with time: as an extension of the above point, interactions between $t$ and other terms in the model could be introduced.
- GP prior for the intercept: I'm not really familiar with Gaussian processes, but I'd assume a GP over $t$ could be used to model the intercept.
- Intercept level for units of time: an additional level could be added to the intercept to model the average demand in a particular unit of time, for example a given year. Since this would increase the already large dimensionality of the intercept even more, I'm worried it would be problematic.

Am I missing any approaches? How can I determine which approach I should use?
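A rough sketch of the first option ("time as a varying effect") in brms-style formula notation; the column names (`demand`, `t`, `price`, `region`, `group`, `model`) and the default priors are placeholders, and this is not an exact translation of the hand-written hierarchical priors above:

```
# Sketch only: t enters as a population-level slope plus group-varying slopes,
# so more recent events shift the group-specific expectations over time.
library(brms)
fit <- brm(
  demand ~ t + price +
    (1 + t | region) +              # region-level intercept and time trend
    (1 + t | region:group) +        # product group within region
    (1 + t | region:group:model),   # product model within group within region
  family = student(),
  data = events                     # hypothetical data frame of historical events
)
```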
Ways to include time in hierarchical mixed effects model
CC BY-SA 4.0
null
2023-05-22T15:22:08.610
2023-05-22T15:22:08.610
null
null
20221
[ "bayesian", "mixed-model", "hierarchical-bayesian", "time-varying-covariate" ]
616594
2
null
614595
0
null
## How can we obtain the noise variable values without knowing the structural equations?

The short answer is that we cannot obtain them, at least not without further assumptions.

- The paper on counterfactual explanations gives structural equations for their experiments (section 6), and they do seem to assume additive noise, as you suggest.
- The VACA paper tries to tackle exactly your question empirically. They design their VAE so that the latent space should, hopefully, behave like the latent noise variables, and they give empirical evidence to support this. It is not theoretical work; instead, they test their method on a handful of synthetic datasets. Although their approach seems sophisticated, their results can hardly be seen as conclusive evidence on the question.

In summary, I'd say your question is an open one. There may be suitable assumptions under which we can determine, or at least bound, the noise variables, and there may be ways to achieve good empirical results on important real-world datasets. Time will tell, but for now, computing counterfactuals given the structure but without the equations is not really an established technique as far as I can tell.
null
CC BY-SA 4.0
null
2023-05-22T15:22:58.527
2023-05-23T08:20:26.727
2023-05-23T08:20:26.727
250702
250702
null
616595
1
null
null
0
10
I have two large samples of users who check out from our product website and generate sales. Each sample uses a different price: r1 = \$20 for the first group and r2 = \$30 for the second. The purchase probability is of course different for each group. Say the first group has a purchase probability p1 of 0.4, based on a large historical sample with n >> 10,000. For the second group we have far fewer observations, on the order of 100-500. I'm looking for the right statistical test to determine whether revenue per visit is higher in the second group. A classical chi-squared test of whether the two underlying Bernoulli probabilities differ is not enough, because that test will simply show that the probabilities are indeed different. I think I need to determine whether p2 is greater than p1 scaled down by the relative price difference: with r1 = \$20, r2 = \$30 and p1 = 0.4, I want to know whether p2 is confidently greater than $p_1/(r_2/r_1) = 0.4/(30/20) \approx 0.267$. What is the right statistical test for this case? To make it more concrete, the hypothetical data looks roughly like this:

|group |visits |conversions |total_sales |
|-----|------|-----------|-----------|
|s1 |775 |310 |6,200 |
|s2 |240 |72 |2,160 |

Another way of framing it: what test determines which population has a higher `total_sales per visit`?
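One direct way to encode that framing (treating p1 = 0.4 as effectively known, which the very large historical sample arguably justifies) is a one-sided test of p2 against the price-adjusted threshold; a sketch in R, using the s2 row above, and not necessarily the best possible test:

```
# One-sided test of H0: p2 <= p1/(r2/r1) against H1: p2 > p1/(r2/r1),
# treating p1 = 0.4 as known because of the very large historical sample.
p1 <- 0.4; r1 <- 20; r2 <- 30
threshold <- p1 / (r2 / r1)     # ~0.267: break-even conversion rate at the higher price
prop.test(x = 72, n = 240, p = threshold, alternative = "greater")
# or the exact binomial version:
binom.test(x = 72, n = 240, p = threshold, alternative = "greater")
```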
Hypothesis testing to compare 2 bernoulli processes that generate purchase data
CC BY-SA 4.0
null
2023-05-22T15:25:49.193
2023-05-22T15:25:49.193
null
null
67301
[ "hypothesis-testing", "statistical-significance", "bernoulli-distribution" ]
616596
1
null
null
2
56
I found this [paper](https://academic.oup.com/njaf/article/18/3/87/4788527?login=false) very useful for my research; however, I'm not familiar with non-linear regression and I'm finding it tricky to replicate. Using the first model, for example:
```
HT = 1.3 + a(1-e^-b*DBH)^c
```
This is what I have so far, with some help:
```
cr.fit = nls(y~a*(1-exp(-b*x))^c,start=list(a=a,b=b,c=c))
summary(cr.fit)
```
Now, when trying to predict, I'm doing this:
```
TH <- 1.3+(a*(1-exp(-b*10.9))^c)
```
So far I have been testing the predictions, and sometimes they are spot on, other times they are not. I believe this has to do with the starting values I'm using. I'm fairly new both to this statistical tool and to forestry research, so I have neither the technical nor the domain knowledge for this. My question is: what is a good empirical way to get starting values for this type of analysis? From my own reading I understand this may be difficult to answer, but if you were in my shoes, how would you go about it? I'm currently building a methodology for my research, so I'm exploring whether this works for me; if I get good results, I would of course seek further knowledge on determining good starting values for the species and location I'm researching. I have sample data for this [here](https://github.com/brian-o-mars/height-diameter-sim). The relevant columns are HT and DIA.
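One common heuristic for starting values with this kind of asymptotic model (sketch only; `height_diameter.csv` is a placeholder for however the sample data is loaded, and the 1.05 inflation factor is an arbitrary choice): treat `a` as the height asymptote above breast height, temporarily fix `c = 1`, and linearise to get `b` from a simple regression.

```
# Heuristic starting values for HT = 1.3 + a*(1 - exp(-b*DIA))^c  (sketch only)
dat <- read.csv("height_diameter.csv")    # placeholder for the sample data
a0 <- 1.05 * (max(dat$HT) - 1.3)          # asymptote: a bit above the tallest tree
# with c fixed at 1 the model linearises: log(1 - (HT - 1.3)/a) = -b * DIA
z  <- log(pmax(1 - (dat$HT - 1.3) / a0, 1e-3))
b0 <- -coef(lm(z ~ 0 + DIA, data = dat))[[1]]
c0 <- 1
cr.fit <- nls(HT ~ 1.3 + a * (1 - exp(-b * DIA))^c,
              data = dat, start = list(a = a0, b = b0, c = c0))
summary(cr.fit)
```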
Non-linear regression. Need some help implementing a model from a paper
CC BY-SA 4.0
null
2023-05-22T15:27:16.757
2023-05-23T07:32:36.110
null
null
388571
[ "r", "regression", "nonlinear-regression", "starting-values" ]
616597
1
null
null
1
23
I have a random effects model with two grouping factors. $$ y_i = \alpha_{j[i]} + \gamma_{k[i]}+\epsilon_i $$ where $j[i]$ and $k[i]$ denote the group memberships of individual $i$ for the two factors. In R, I can estimate $\sigma^2_j$ and $\sigma^2_k$ using `lme` from the `nlme` package or `lmer` from `lme4`. However, I'm unable to estimate the covariance $\sigma_{k,j}$ between the two sets of random effects. My understanding is that `lmer` simply assumes this covariance is zero. Is that correct? Using `nlme` I get a strange error. I, perhaps naively, thought that the following would work: ``` library(nlme) res <- lme(mpg ~ 1, random = list(cyl = ~ 1, am = ~ 1), data = mtcars) getVarCov(res) ``` However, I get the error ``` Error in getVarCov.lme(res) : not implemented for multiple levels of nesting ``` I can't tell why this error appears. Have I specified the model in lme() incorrectly? In sum, how can I estimate $\sigma_{k, j}$?
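For reference, the crossed (non-nested) specification in `lme4` looks like the sketch below; as noted above, it contains no covariance parameter linking the two factors, only the two separate variances (mtcars is used purely because it appears above; its grouping factors have very few levels, so this is illustrative only):

```
library(lme4)
fit <- lmer(mpg ~ 1 + (1 | cyl) + (1 | am), data = mtcars)  # crossed random effects
VarCorr(fit)   # reports the cyl and am variances; no cross-factor covariance is estimated
```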
covariance terms for random effects model
CC BY-SA 4.0
null
2023-05-22T15:31:50.457
2023-05-22T15:31:50.457
null
null
388573
[ "model", "intercept", "effects" ]
616598
2
null
616382
4
null
The description of the problem implies that cases have an inherent unknown "difficulty", which impacts the probability of a case being correctly classified by a rater, and that raters have a "skill" level with a similar effect. Modeling this therefore amounts to choosing some function that relates the success probability to those parameters. There are many possible ways of choosing such a function, a simple one might be: $$P(\text{success}|\theta_n,\phi_m) = \frac{1+\theta_n\phi_m}{2}.$$ Here, $0<\theta_n<1$ is the difficulty level, $\theta_n=0$ being the most difficult, such that the best any rater can achieve is just a random choice ($P=1/2$). $0<\phi_m<1$ is similarly the skill level, such that the most skillful rater ($\phi_m=1$) can correctly classify the most easy cases with a 100% probability. With the data $x_{nm}$ as described in the question ($x_{nm}=1$ for correct classification, $x_{nm}=-1$ for incorrect classification and $x_{nm}=0$ for no classification) the log-likelihood is (up to an irrelevant constant): $$ \log \mathcal L = \sum_{n,m} \log(1 + x_{nm}\theta_n \phi_m)$$ and this can be maximized numerically (e.g. using gradient descent) in order to find the maximum likelihood estimators. Notice that if we can assume that $\theta_n \phi_m \ll 1$, then the likelihood simplifies to $$ \log \mathcal L \approx \sum_{n,m} x_{nm}\theta_n \phi_m \equiv \mathbf \theta^T X \mathbf \phi .$$ For vectors $\theta,\phi$ with fixed norms, this is maximized by the [singular vectors](https://en.wikipedia.org/wiki/Singular_value_decomposition) that correspond to the largest singular value of the matrix $X$. So, an approximate solution can be achieved this way simply by performing a singular value decomposition of $X$. More generally, we can write the likelihood as $$ \log \mathcal L = \sum_{n,m} \log\left(\frac{1-x_{nm}}{2} + x_{nm}P_{nm}\right)$$ Where $P_{nm}$ is the success probability. (Notice that the expression inside the logarithm is equal to $P_{nm}$ when $x_{nm}=1$, $(1-P_{nm})$ when $x_{nm}=-1$, and a constant when $x_{nm}=0$). For a model that allow the probability to be zero, we can consider $P_{nm}=\theta_n \phi_m$ or, using unconstrained parameters, $P_{nm}=\text{Sigmoid}(w_n+v_m)=(1+e^{-(w_n+v_m)})^{-1}$. The corresponding modifications to the Python code are: ``` return -torch.log((1-X)/2 + teta * X * phi.T).sum() ``` or ``` return -torch.log((1-X)/2 + X * torch.sigmoid(w + v.T)).sum() ``` Comparing the maximum likelihood values of the two options with the provided data, we get a small but significant preference ($\Delta \log \mathcal L = 345.8$) for the second model. (The comparison is meaningful because the models have the same number of parameters). The MLE values for $v_m$ are: (converted to probabilities for case difficulty $w=0$) ``` print(torch.sigmoid(v).T) tensor([[0.8846, 0.8338, 0.8891, 0.8149, 0.9040, 0.9188, 0.8654, 0.7761, 0.8773, 0.8976, 0.9249, 0.8826, 0.8549, 0.9055, 0.9154, 0.9490, 0.7566, 0.8934, 0.9529, 0.8855, 0.7359, 0.8765, 0.9106, 0.8981, 0.9141, 0.9172, 0.8148, 0.9212, 0.8592, 0.8356, 0.8787, 0.9118, 0.8324, 0.9138, 0.8882, 0.9095, 0.8305, 0.8393, 0.8893, 0.8384, 0.8906, 0.9201, 0.7997, 0.9011, 0.8896, 0.9143, 0.9078, 0.9124, 0.9278, 0.8945, 0.7991, 0.7544, 0.9058, 0.8708, 0.7664, 0.8078, 0.9104, 0.8673, 0.8346, 0.8012, 0.8408, 0.7689, 0.9018, 0.8283, 0.8363, 0.8732]], grad_fn=<PermuteBackward0>) ``` Notice that, since the data is limited, not all differences between MLE values may be statistically significant. 
To determine that, further analysis of statistical significance is required. --- Maximum likelihood implementation using PyTorch: ``` import torch N, M = 3000, 70 #gererate random true parameters: teta = torch.rand(N,1) phi = torch.rand(M,1) #generate random data matrix X: P = (1 + teta*phi.T)/2 X = 2*torch.bernoulli(P) - 1 #set about 70% of X entries to zero X[torch.rand(N,M) > 0.3] = 0 #unconstrained optimization parameters, #such that teta = sigmoid(w) and phi=sigmoid(v) w = torch.randn(N,1,requires_grad=True) v = torch.randn(M,1,requires_grad=True) #likelihood loss function def loss(w,v,X): teta = torch.sigmoid(w) phi = torch.sigmoid(v) return -torch.log(1 + teta * X * phi.T).sum() optimizer = torch.optim.Adam([w,v],lr=0.05) #gradient descent iterations for iter in range(1000): optimizer.zero_grad() L = loss(w,v,X) L.backward() optimizer.step() print('true phi:') print(phi.T) print('MLE:') print(torch.sigmoid(v).T) ``` output: ``` true phi: tensor([[0.1621, 0.4545, 0.1322, 0.9994, 0.4517, 0.4210, 0.5734, 0.9993, 0.8364, 0.7181, 0.5347, 0.8723, 0.9996, 0.0760, 0.9831, 0.7306, 0.1410, 0.8279, 0.2426, 0.5312, 0.6453, 0.3937, 0.9949, 0.1446, 0.1909, 0.2223, 0.0982, 0.9986, 0.7534, 0.7858, 0.0199, 0.2229, 0.1371, 0.5162, 0.1269, 0.4183, 0.7202, 0.9995, 0.0551, 0.7844, 0.0248, 0.0022, 0.2925, 0.9989, 0.0608, 0.9984, 0.6897, 0.8560, 0.9992, 0.1023, 0.8478, 0.8740, 0.5565, 0.2198, 0.9949, 0.5519, 0.8228, 0.3855, 0.3905, 0.4450, 0.3783, 0.3177, 0.8440, 0.4595, 0.4974, 0.3771, 0.7208, 0.8438, 0.9292, 0.0892]], grad_fn=<PermuteBackward0>) MLE: tensor([[0.1528, 0.4522, 0.1803, 0.9994, 0.5803, 0.4260, 0.5195, 0.9988, 0.9986, 0.6797, 0.4950, 0.9968, 0.9997, 0.0402, 0.9998, 0.7741, 0.1542, 0.9965, 0.1423, 0.4255, 0.6306, 0.3968, 0.9996, 0.1225, 0.1858, 0.1826, 0.0672, 0.9973, 0.9987, 0.9986, 0.0558, 0.2186, 0.1733, 0.4893, 0.0905, 0.4301, 0.7361, 0.9985, 0.0657, 0.9991, 0.0501, 0.0018, 0.2379, 0.9986, 0.0548, 0.9984, 0.6548, 0.9981, 0.9991, 0.1299, 0.7725, 0.9949, 0.5826, 0.2785, 0.9995, 0.6145, 0.8397, 0.3643, 0.3875, 0.4685, 0.3267, 0.2936, 0.9969, 0.5136, 0.5735, 0.3453, 0.6652, 0.9958, 0.9984, 0.1792]], grad_fn=<PermuteBackward0>) ```
null
CC BY-SA 4.0
null
2023-05-22T15:57:00.557
2023-05-31T10:56:05.727
2023-05-31T10:56:05.727
348492
348492
null
616599
2
null
616561
0
null
If you know that β has mean μ = 0 and covariance matrix Σ (and, for the distributional results below, that β is multivariate normal), then you can at least compute the expected value and variance of β'β. The expected value is tr(Σ) and the variance is 2 tr(Σ ⋅ Σ): [https://en.wikipedia.org/wiki/Quadratic_form_(statistics)?wprov=sfti1](https://en.wikipedia.org/wiki/Quadratic_form_(statistics)?wprov=sfti1) The full distribution of β'β is a generalized chi-squared distribution: [https://en.wikipedia.org/wiki/Generalized_chi-squared_distribution?wprov=sfti1](https://en.wikipedia.org/wiki/Generalized_chi-squared_distribution?wprov=sfti1) It may also be worth considering whether you really need the plain sum of squares β'β, or whether β'Σ⁻¹β would serve just as well. The latter is much easier to work with: it follows an ordinary chi-squared distribution with degrees of freedom equal to the dimension of β.
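A quick Monte Carlo check of these facts (sketch in R; the covariance matrix Σ is an arbitrary made-up example):

```
# Simulate beta ~ N(0, Sigma) and check E[beta'beta] = tr(Sigma),
# Var[beta'beta] = 2 tr(Sigma %*% Sigma), and beta' Sigma^{-1} beta ~ chi^2 with k df.
library(MASS)
set.seed(1)
k     <- 3
Sigma <- crossprod(matrix(rnorm(k * k), k))     # arbitrary positive-definite Sigma
beta  <- mvrnorm(1e5, mu = rep(0, k), Sigma = Sigma)

ss <- rowSums(beta^2)                           # beta' beta for each draw
c(mean(ss), sum(diag(Sigma)))                   # approximately equal
c(var(ss), 2 * sum(diag(Sigma %*% Sigma)))      # approximately equal

m <- rowSums((beta %*% solve(Sigma)) * beta)    # beta' Sigma^{-1} beta
ks.test(m, pchisq, df = k)                      # consistent with chi^2, k df
```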
null
CC BY-SA 4.0
null
2023-05-22T15:57:54.493
2023-05-22T15:57:54.493
null
null
388575
null
616600
2
null
616470
2
null
- You have estimates $\hat{q}_a$, $\hat{q}_b$ and $\hat{q}_c$, which are (presumably) approximately independent estimates of the probabilities of the independent events 'no A', 'no B' and 'no C'.
- You have the related standard errors for these estimates (which can be derived from the confidence intervals).
- You want to compute an estimate for the product $q = q_aq_bq_c$, the probability that none of A, B or C occurs, assuming a model where they are independent.

You can estimate this by $$\hat{q} = \hat{q}_a\hat{q}_b\hat{q}_c$$ For the standard deviation, and the associated confidence interval, you can use as an error-propagation approximation the formula for the [variance of independent variables when they are multiplied](https://stats.stackexchange.com/questions/52646/variance-of-product-of-multiple-independent-random-variables). $$\sigma_{XYZ}^2 = \mu_{X}^2 \mu_{Y}^2 \sigma_{Z}^2 + \mu_{X}^2 \sigma_{Y}^2 \mu_{Z}^2 + \sigma_{X}^2 \mu_{Y}^2 \mu_{Z}^2 + \mu_{X}^2 \sigma_{Y}^2 \sigma_{Z}^2 + \sigma_{X}^2 \mu_{Y}^2 \sigma_{Z}^2 + \sigma_{X}^2 \sigma_{Y}^2 \mu_{Z}^2 + \sigma_{X}^2 \sigma_{Y}^2 \sigma_{Z}^2$$

---

### Simulation

I did a simulation with $n=100$ and $p_a=p_b=p_c=0.5$, and interestingly, computing $\hat{q}$ indirectly via $\hat{q}_a\hat{q}_b\hat{q}_c$ leads to a smaller variance of the estimate than using the raw data directly (counting the cases with no A, no B and no C). This is because we are effectively using more data: 300 data points instead of 100.

[](https://i.stack.imgur.com/GCrVK.png)

So the indirect estimate using the product $\hat{q} = \hat{q}_a\hat{q}_b\hat{q}_c$ has less variance than using counts of the events directly. But it might potentially be biased when the events A, B and C are not truly independent.
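A plug-in version of this calculation might look like the sketch below (the numbers are made up; the variance line is just the seven-term formula above written compactly as a difference of products):

```
# Illustrative only: hypothetical point estimates and standard errors.
q  <- c(a = 0.80, b = 0.90, c = 0.70)   # estimated P(no A), P(no B), P(no C)
se <- c(a = 0.04, b = 0.03, c = 0.05)   # their standard errors
q_hat <- prod(q)
# variance of a product of independent estimators:
# prod(mu^2 + sigma^2) - prod(mu^2), which expands to the formula above
var_q <- prod(q^2 + se^2) - prod(q^2)
c(estimate = q_hat, se = sqrt(var_q))
```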
null
CC BY-SA 4.0
null
2023-05-22T15:59:40.503
2023-05-23T06:49:55.323
2023-05-23T06:49:55.323
164061
164061
null
616601
1
null
null
0
14
I have doubts about how to correctly interpret my dependent variable in the following regression model: c_l = α_0 + α_1·log(# Twitter followers) + α_2·(Age Group 70) + ⋯ + ε, where c_l = log(# hate tweets received) / log(# total tweets received). Because the dependent variable is not a single log transformation (in which case I could use the usual log-linear interpretation) but a ratio of two logs, I am unsure how to interpret my significant coefficients.

```
            coef   std err        t    P>|t|   [0.025   0.975]
fclog     0.8874     0.054   16.366    0.000    0.781    0.994
age_le70 -0.0919     0.033   -2.765    0.006   -0.157   -0.027
```

For the variable "number of followers" (fclog), I would tentatively read the positive coefficient as: the more followers, the larger the share of hate tweets relative to the total tweets received. But I am unsure on what scale, and in which units, this effect should be stated. The variable "Age group 70" (age_le70) is measured relative to the reference category "Age group 30", but here too I am unsure how best to interpret the coefficient. Thanks a lot in advance for any ideas.
log/log-linear regression output interpretation
CC BY-SA 4.0
null
2023-05-22T16:00:41.253
2023-05-22T16:00:41.253
null
null
388574
[ "interpretation", "regression-coefficients" ]