Dataset columns (each record below lists its fields in this order): Id (string, 1–6 chars), PostTypeId (string, 7 classes), AcceptedAnswerId (string, 1–6 chars), ParentId (string, 1–6 chars), Score (string, 1–4 chars), ViewCount (string, 1–7 chars), Body (string, 0–38.7k chars), Title (string, 15–150 chars), ContentLicense (string, 3 classes), FavoriteCount (string, 3 classes), CreationDate (string, 23 chars), LastActivityDate (string, 23 chars), LastEditDate (string, 23 chars), LastEditorUserId (string, 1–6 chars), OwnerUserId (string, 1–6 chars), Tags (list)
616363
2
null
616339
0
null
In brief, the answer is yes...this protocol would work. Here are a couple of variations you might consider. - For each participant, sample the original means (control and treatment from the actual 10 values) and randomly assign these as treatment and control; calculate the test statistic; repeat n times and find the p-value. This will ignore the variability within the different 10 trials, but will be more like the "conventional" bootstrap for a matched-pairs design. - Use the strategy you indicated here, but sample without replacement...so 10 of the measurements are assigned control and the remaining 10 are assigned treatment; calculate the test statistic; repeat n times and find the p-value. This will incorporate the within variability, but it forces (some of) the marginal means to match the original data set. - Use the strategy you indicated here...and you can either sample the 20 with or without replacement at each stage. It can be argued that either of these strategies would best capture the null hypothesis that the distributions are the same at control and treatment. Hope this helps.
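A minimal R sketch of the first variation, assuming hypothetical matrices `ctrl` and `trt` (participants in rows, the 10 trials in columns) standing in for the actual data:
```
set.seed(1)
ctrl <- matrix(rnorm(8 * 10, mean = 0), nrow = 8)   # placeholder control trials
trt  <- matrix(rnorm(8 * 10, mean = 1), nrow = 8)   # placeholder treatment trials

m_ctrl <- rowMeans(ctrl)                  # per-participant control means
m_trt  <- rowMeans(trt)                   # per-participant treatment means
obs    <- mean(m_trt - m_ctrl)            # observed test statistic

perm <- replicate(5000, {
  swap <- sample(c(TRUE, FALSE), length(m_ctrl), replace = TRUE)  # randomly relabel within participant
  mean(ifelse(swap, m_ctrl - m_trt, m_trt - m_ctrl))
})
mean(abs(perm) >= abs(obs))               # two-sided permutation p-value
```
The second and third variations would instead resample the 20 raw measurements within each participant (without or with replacement) before averaging.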
null
CC BY-SA 4.0
null
2023-05-19T18:54:17.523
2023-05-19T18:54:17.523
null
null
199063
null
616366
2
null
616357
-1
null
The "ANCOVA model" usually refers to a model in which the slopes for the covariates are assumed equal across groups. The first post is using an unusual form of ANCOVA. Allowing the sloeps to vary across groups is sometimes called "the full model in every cell". See the following quote from [Muller and Fetterman (2003, Ch 16)](https://www.wiley.com/en-us/Regression+and+ANOVA%3A+An+Integrated+Approach+Using+SAS+Software-p-9780471469438): > The full model in every cell is a generalization of ANCOVA that allows interactions between continuous and categorical predictors. [...] The traditional analysis of covariance (ANCOVA) model is a special case of the full model in every cell. Adding the restriction of equal slopes reduces the full model to ANCOVA. The blog also describes an invalid methodology of testing whether the slopes are different and then fitting a model assuming they are the same if nonsignificant. The absence of statistical significance doesn't mean an interaction is absent, and building a model by backward selection invalidates p-values computed in the final step. What happens if the true data-generating model has different slopes for the two groups and you fit a model that assumest hey have the same slope? This is described in detail by [Słoczyński (2022)](https://doi.org/10.1162/rest_a_00953) and [Chattopadhyay and Zubizirreta (2022)](https://doi.org/10.1093/biomet/asac058). Basically, the effect of the grouping variable will be estimated for a population other than the one under study, and the papers describe the exact nature of that population, which tends to resemble the smaller group. In contrast, allowing the slopes to differ ensures the estimated effect generalizes to the population from which the sample was drawn. [Schafer and Kang (2008)](https://doi.org/10.1037/a0014268) also describe why ANCOVA is problematic and why you should allow the slopes to vary. In the context of experiments, [Lin (2013)](https://doi.org/10.1214/12-AOAS583) explains why you not only need to allow the slopes to vary, you need to use a special "robust" standard error to correctly calculate the p-values. I disagree with Gregg H that the choice should depend on what hypothesis you are testing. Even if you are testing the simple hypothesis of whether the groups differ after adjusting for covariates and have no substantive interest in the interaction, you should still include the interaction between the grouping variable and the centered covariates. You must center the covariates at their mean in the sample in order to interpret the coefficient on the grouping variable as the adjusted difference in means. You should not use a hypothesis test of whether an interaction is present to decide how to fit the model; just fit a single model and interpret it. There is very little cost to including an interaction when there is none in the data-generating model.
null
CC BY-SA 4.0
null
2023-05-19T19:09:59.810
2023-05-19T19:09:59.810
null
null
116195
null
616368
2
null
616351
2
null
In the parametric survival form $$\log T \sim \alpha + \sigma W $$ there are two different major types of sources of error in estimating event times. Even if you know $\alpha$ and $\sigma$ exactly, there will be error due to the random sampling from the distribution $W$ (whether standard minimum extreme value for exponential/Weibull, generalized minimum extreme value for Gamma, standard normal for log-normal, standard logistic for log-logistic...). That's the type of error that you evaluate by sampling with a function like your `simFx`: sampling from the distribution $W$. You seem, however, to be trying to incorporate that source of error into a different type of error: the modeling error in the estimate for $\alpha$ (associated with `(Intercept)`). That really doesn't make sense. If you want realistic estimates of what the errors might be, there's no reason to impose other random structures that aren't related to the data or the model type. The modeling error is best represented by sampling from the multivariate normal `vcov()` around the point estimates for `(Intercept)` and `log(scale)`, then recognizing that (in the above parametric form) the `(Intercept)` is $\alpha$ and `exp(log(scale))` is $\sigma$. See [this page](https://stats.stackexchange.com/a/552229/28500). Restrict your use of `simFx` to the further sampling from $W$, to get a distribution of individual survival times for a particular combination of (properly randomized) $\alpha$ and $\sigma$. In most realistic situations, that error associated with sampling from $W$ will overwhelm that associated with joint sampling from $\alpha$ and $\sigma$, as [illustrated for an exponential model](https://stats.stackexchange.com/a/615697/28500), a special case of Weibull.
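A minimal R sketch of that two-stage simulation, assuming an intercept-only Weibull fit from `survival::survreg()` whose `vcov()` includes the `Log(scale)` term; the `lung` data are used only as a placeholder:
```
library(survival)
library(MASS)

fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "weibull")

## joint draws of (Intercept) = alpha and Log(scale) from the fitted vcov()
theta_hat <- c(coef(fit), log(fit$scale))
draws <- mvrnorm(1000, mu = theta_hat, Sigma = vcov(fit))

## for each draw, also sample W (standard minimum extreme value for Weibull)
sim_times <- apply(draws, 1, function(th) {
  alpha <- th[1]
  sigma <- exp(th[2])
  w <- log(rexp(1))            # exp(W) ~ Exp(1) for the minimum extreme value distribution
  exp(alpha + sigma * w)       # log T = alpha + sigma * W
})
quantile(sim_times, c(0.025, 0.5, 0.975))
```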
null
CC BY-SA 4.0
null
2023-05-19T19:16:09.857
2023-05-19T19:16:09.857
null
null
28500
null
616369
1
null
null
1
33
I want to make a correlation analysis of two nominal columns, the "advocates" column and the "company" column; the advocates in this case are processing the companies. The data looks like this |Advocate |Company | |--------|-------| |Adv 1 |Comp A | |Adv 1 |Comp A | |Adv 2 |Comp C | |Adv 3 |Comp B | |Adv 3 |Comp B | |Adv 2 |Comp D | |Adv 3 |Comp E | |Adv 1 |Comp A | So, I want to make an analysis based on a calculation that shows whether there's a strong correlation between advocate X and company Y, for every pair. I tried to use Cramer's V method but I couldn't make it work properly. The result I want to achieve is something similar to a correlation matrix of advocate vs company. Thanks for any help!
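A minimal R sketch of Cramér's V plus a per-pair view via standardized residuals, using the small example table above (the counts here are far too small for the chi-squared approximation to be reliable; this only shows the mechanics):
```
df <- data.frame(
  advocate = c("Adv 1", "Adv 1", "Adv 2", "Adv 3", "Adv 3", "Adv 2", "Adv 3", "Adv 1"),
  company  = c("Comp A", "Comp A", "Comp C", "Comp B", "Comp B", "Comp D", "Comp E", "Comp A")
)

tab <- table(df$advocate, df$company)          # contingency table
chi <- suppressWarnings(chisq.test(tab))       # chi-squared statistic (expected counts are tiny here)
n   <- sum(tab)
cramers_v <- sqrt(chi$statistic / (n * (min(dim(tab)) - 1)))
cramers_v

## per-pair view: standardized residuals show which advocate/company pairs
## co-occur more (or less) often than independence would predict
round(chi$stdres, 2)
```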
How to make a correlation analysis in Python or R between two strings?
CC BY-SA 4.0
null
2023-05-19T19:19:52.587
2023-05-20T13:46:49.230
2023-05-19T20:01:06.627
56940
388366
[ "r", "mathematical-statistics", "correlation", "python", "cramers-v" ]
616370
2
null
616079
0
null
Let's imagine a randomized experiment for the effect of treatment $A$ on outcome $Y$ with measured covariate $X$. (You can also imagine a sample that has already been matched or weighted so that it resembles a randomized experiment based on the adjusted-for covariates; everything I say here applies.) We'll define the conditional ATE (CATE) at $X=x$ as $$ E[Y|A=1,X=x] - E[Y|A=0,X=x] $$ and the marginal ATE (ATE) as $$ E_X[E[Y|A=1,X]] - E_X[E[Y|A=0,X]] $$ If you fit the linear model $$ E[Y|A,X] = \beta_0 + \tau A + \beta_1 X $$ then $\hat\tau$ is an estimate of the CATEs, which are assumed to be equal no matter what value $X$ takes. It also happens to be equal to the estimate of the ATE. If the true CATEs differ across $X$, then your estimates of the CATE will be biased. However, your estimate of the ATE will be unbiased. If you instead fit the linear model $$ E[Y|A,X] = \beta_0 + \tau A + \beta_1 X + \beta_2 AX $$ then the estimated CATEs are allowed to vary as $\text{CATE}(x) = \hat\tau + \hat\beta_2x$ and the ATE is the average of the CATEs across the sample, which is equal to $\hat\tau + \hat\beta_2\bar X$. In this case, it is possible to estimate CATEs and the ATE when adjusting for covariates. If the true model is linear and the CATEs vary across $X$, then your estimates of the CATEs and ATE will be unbiased. What if the true data-generating model is not linear, but instead has a curvilinear shape that is not captured by your linear model, and the true CATEs differ across $X$? We already know that your estimate of the CATEs will be biased if you fit the first model when the true CATEs vary across $X$. If we fit the second model, the estimates of the CATEs will still be biased. You have to get the form of the model right to validly estimate CATEs. However, no matter what form the true outcome model takes, the estimates of the ATE will be unbiased regardless of which model you use; the model exists solely to increase the precision of the estimate. This is the reason many causal inference-focused statisticians prefer to estimate the ATE over CATEs. Given the conservative assumptions that CATEs vary across levels of covariates and that the relationship between the covariates and outcome is nonlinear, many believe it is impossible to validly estimate CATEs without making extreme modeling assumptions, and one's estimates of the CATEs will vary largely based on the modeling choices you make. However, none of that is so with ATEs. In a randomized experiment (or a sample that has been adequately matched or weighted in such a way that the simple difference in outcome means is unbiased for the causal effect), no matter what assumptions you make about the true outcome model, your estimate of the ATE will be unbiased. Frank Harrell's criticism of this outlook as described in the comments to OP is that the ATE is always estimated with reference to some population, but in the absence of a probability sample, that population is ambiguous or nonexistent, in which case it doesn't matter that your ATE is unbiased for the population in which it is estimated. In contrast, if you somehow get the true outcome model correct and can estimate the CATEs, they will be valid no matter what the source of the sample is or whether it is representative. In addition, CATEs provide more useful information about the prognosis for individual patients, which is more useful in medical decision making.
My impression is that researchers who prefer the ATE would argue that the assumption that the study population is representative of some meaningful population is more tenable than the assumption that the outcome model is correctly specified, which is necessary for valid interpretation of the CATEs.
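A minimal R sketch of the interaction model on simulated data, showing that the coefficient on $A$ with a centered covariate equals the g-computation average of the fitted CATEs (all numbers are made up):
```
set.seed(1)
n <- 500
X <- rnorm(n)
A <- rbinom(n, 1, 0.5)                       # randomized treatment
Y <- 1 + 0.5 * A + X + 0.3 * A * X + rnorm(n)

Xc  <- X - mean(X)                           # center X at its sample mean
fit <- lm(Y ~ A * Xc)
coef(fit)["A"]                               # tau-hat + beta2-hat * mean(X): the ATE estimate

## equivalently, average the fitted CATEs over the sample (g-computation)
fit2 <- lm(Y ~ A * X)
mean(predict(fit2, data.frame(A = 1, X = X)) -
     predict(fit2, data.frame(A = 0, X = X)))
```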
null
CC BY-SA 4.0
null
2023-05-19T19:46:20.777
2023-05-19T19:46:20.777
null
null
116195
null
616371
1
null
null
0
17
I'm having trouble finding references to (and the name of) an obvious correction procedure that surely must have been used extensively in time series forecasting, but my Google searches keep turning up references to anomaly detection (which isn't what I'm looking for). In (univariate) time series forecasting, we're always using past data to predict future data. But suppose there is a brief interval of data that is known to be somewhat inaccurate or different from what it would be normally. For example, perhaps a sensor was miscalibrated for a couple of days and has since been repaired, or a temporary change in corporate policy that was later reversed affected the data collected while the policy was in effect (or COVID caused a temporary inflection, a street outside the store was under construction (which decreased customer traffic), an older device filled in for a newer one during a maintenance window, etc.). If we use this period of anomalous data, as is, to predict future data as we normally would, it will derail the forecast somewhat because it will propagate these anomalies forward. We expect the target forecast interval to be more similar to the distant past than to the recent past. But we also don't want to discard the recent data altogether. It still contains some useful predictive information. We just want to "dilute" the recent, anomalous data with a corresponding interval of more typical past data. I'd like to see how other people have been doing this. How to choose the ratio, whether to keep the ratio constant within an interval or ease in and out of it with a Gaussian, whether to trigger the dilution manually or automatically (e.g., when the forecast error exceeds a threshold), etc. I think I'm just using the wrong search terms. Anybody have any references to recommend?
Compensating for anomalies in training data by diluting them with older data?
CC BY-SA 4.0
null
2023-05-19T19:51:47.473
2023-05-19T19:55:23.083
2023-05-19T19:55:23.083
22311
135727
[ "time-series", "forecasting", "references", "data-imputation" ]
616374
2
null
616356
0
null
Plot it in order to gain more insight into the meanings of those statistical tests. The command `plot(cars)` will create a plot with points that lie more or less along a hypothetical line with a clear slope but a y-intercept not much different from zero. [](https://i.stack.imgur.com/FTH3b.png)
null
CC BY-SA 4.0
null
2023-05-19T20:20:33.577
2023-05-19T20:20:33.577
null
null
164061
null
616375
2
null
616356
1
null
[This page](https://stats.stackexchange.com/q/5135/28500) has a useful explanation of what this type of model summary shows. As @Axeman said in a comment, the coefficients are for different hypotheses. In particular, the "significance" of an Intercept is whether its value is "significantly" different from 0. Simple recoding of the same data in a way that doesn't fundamentally change the underlying model can change that "significance" substantially. For example, people sometimes "center" variables in regression to have mean values of 0, by subtracting the mean value of each variable from the individual values. That provides the same association between outcome and predictor (what's usually of primary interest), but makes the Intercept here completely "insignificant": ``` summary(lm(I(dist-mean(dist))~I(speed-mean(speed)),data=cars)) # # Call: # lm(formula = I(dist - mean(dist)) ~ I(speed - mean(speed)), data = cars) # # Residuals: # Min 1Q Median 3Q Max # -29.069 -9.525 -2.272 9.215 43.201 # # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 1.397e-14 2.175e+00 0.000 1 # I(speed - mean(speed)) 3.932e+00 4.155e-01 9.464 1.49e-12 # --- # # Residual standard error: 15.38 on 48 degrees of freedom # Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438 # F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12 ``` This gets even more confusing when there are interaction terms in a model, as then even the apparent "significance" of an individual predictor in a display like this can change when you [center another predictor with which it interacts](https://stats.stackexchange.com/q/65898/28500). Be very careful when interpreting individual coefficient p values from this type of model summary. This particular example happens to show an important reason for not immediately jumping to centering. Think about the data: they are for stopping distances (in feet) for cars that were originally going at different speeds (in miles per hour). The Intercept is not only "significantly" different from 0: it's negative. That means that, if you were going 0 miles per hour, you would go backward more than 17 feet when you applied the brake! There clearly is a problem with the way that the model represents reality. You might not have seen that if you just centered all the variables to start.
null
CC BY-SA 4.0
null
2023-05-19T20:31:34.643
2023-05-19T20:31:34.643
null
null
28500
null
616376
2
null
579095
0
null
Building on the excellent answer by @stats_model: If you have some idea of the range of values the treatment effect of $V$ on $Y$ could reasonably take, you could plug in those bounds and solve the equation in the @stats_model answer to get a range of treatment effects of $X$ on $Y$. YMMV. $$\frac{\frac{\Delta Y}{\Delta Z}-\text{[lower bound]}\cdot\frac{\Delta V}{\Delta Z}}{\frac{\Delta X}{\Delta Z}}\text{ and }\frac{\frac{\Delta Y}{\Delta Z}-\text{[upper bound]}\cdot\frac{\Delta V}{\Delta Z}}{\frac{\Delta X}{\Delta Z}}$$ Which one of those is higher would depend on the sign of $\frac{\Delta V}{\Delta Z}$.
null
CC BY-SA 4.0
null
2023-05-19T20:40:07.827
2023-05-19T20:41:17.353
2023-05-19T20:41:17.353
222259
222259
null
616377
1
616441
null
0
27
I have a continuous treatment variable which I dichotomize into a binary treatment dummy. Using MatchIt and the newly created dummy variable, I conducted genetic matching to create a dataset of treatment and control observations. My question is, can I replace that binary treatment variable with the original continuous treatment variable in a subsequent OLS regression analysis? If so, are there any additional steps I must take, such as adjusting standard errors or weighting?
Can I replace a binary treatment variable with a continuous treatment variable after matching
CC BY-SA 4.0
null
2023-05-19T20:40:38.593
2023-05-20T21:31:20.220
null
null
388370
[ "regression", "matching" ]
616378
1
null
null
0
23
I have two features. I need to perform a t-test for these two datasets but with the same model. ``` train_x, test_x, train_y, test_y = model_selection.train_test_split(X_pac, labels, test_size=0.2) train_x1, test_x1, train_y1, test_y1 = model_selection.train_test_split(author_post_new, labels, test_size=0.2) ``` I need to perform a t-test for the SVM model on these two datasets. How can I do that? ## Update: I need the t-test to check whether the Sentiment feature has a significant role in identifying gender from text. I have computed the TF-IDF feature and got `author_post_new`. I have applied the Sentiment feature to the dataset and got `X_pac`. Now I want to determine whether the Sentiment feature has a significant role in identifying gender.
Doing T-test across two data sets to validate null hypothesis
CC BY-SA 4.0
null
2023-05-19T20:41:03.643
2023-05-19T21:13:10.117
2023-05-19T21:13:10.117
307173
307173
[ "hypothesis-testing", "statistical-significance", "t-test", "spss" ]
616379
1
null
null
0
33
Suppose there are N measurements of a random variable x which has a Gaussian p.d.f. with unknown mean $\mu$ and variance $\sigma^2$. The classical textbook solution for estimating $\mu$ and $\sigma$ is to maximize the likelihood function: \begin{equation} L(\mu, \sigma) = \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x_i - \mu)^2}{2\sigma^2}} \end{equation} This method is explicitly based on the assumption that all data points $x_i$ are known without error. However, I want to construct an ML estimator for $\mu$ and $\sigma$ in the case where data points are measured with errors $x_i \pm \sigma_i$ and the errors are known. Measurements are independent and each data point has its own error $\sigma_i$. Assume also that the errors are not correlated and they are known to be Gaussian. I guess the likelihood function in this case will be the product of convolutions of the p.d.f. I'm trying to estimate with the Gaussian distribution of each individual measurement: \begin{equation} L(\mu, \sigma) = \prod_{i=1}^N\,\, \int \mathrm{d}x \frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(x_i - x)^2}{2\sigma_i^2}} \,\, \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x - \mu)^2}{2\sigma^2}} \end{equation} This expression is based on my intuition, but it has two desired properties: - When the errors tend to zero, the first exponential turns into a delta function and the expression reduces to the standard MLE - When the errors are huge, the first exponential can be considered constant and the integral goes only over the second exponential. So the integral equals a constant and, as expected, we lose any sensitivity to the parameters. Summing up, I have the following questions: - Is the above expression right? If not, what is the proper way to incorporate data errors into MLE estimation of $\mu$ and $\sigma$? - Can you provide refs to literature where I can read about this?
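For what it's worth, the inner integral is a convolution of two Gaussian densities and therefore evaluates to $N(x_i \mid \mu, \sigma^2 + \sigma_i^2)$, so the proposed likelihood can be maximized numerically; a minimal R sketch on simulated data (all numbers made up):
```
set.seed(1)
mu_true <- 2; sigma_true <- 1.5
n  <- 200
si <- runif(n, 0.2, 1)                        # known per-point measurement errors
x  <- rnorm(n, mu_true, sqrt(sigma_true^2 + si^2))

negloglik <- function(par) {
  mu <- par[1]; sigma <- exp(par[2])          # log-parameterize sigma to keep it positive
  -sum(dnorm(x, mean = mu, sd = sqrt(sigma^2 + si^2), log = TRUE))
}
fit <- optim(c(0, 0), negloglik, hessian = TRUE)
c(mu = fit$par[1], sigma = exp(fit$par[2]))
```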
Maximum-likelihood estimator for data points with errors
CC BY-SA 4.0
null
2023-05-19T21:41:43.057
2023-05-19T22:03:18.330
2023-05-19T22:03:18.330
388371
388371
[ "normal-distribution", "confidence-interval", "maximum-likelihood", "estimators" ]
616381
1
null
null
0
13
I would like to plot contour plots of covariance functions in R: - one for a fitted separable spatio-temporal covariance; - one for the empirical covariance function, similarly to Figure 4.4 in Wikle et al [Spatio-Temporal Statistics with R](https://spacetimewithr.org/book). I understand the theory behind these concepts, I just need help with implementing it to R. Many thanks!
Contour plots of a spatio-temporal covariance function
CC BY-SA 4.0
null
2023-05-19T21:56:50.090
2023-05-19T21:56:50.090
null
null
292427
[ "r", "time-series", "mathematical-statistics", "spatio-temporal", "spatial-correlation" ]
616382
1
616598
null
5
186
##### Ranking of Raters and Cases I am looking for a method for ranking a set of raters with partially overlapping binary assessments. I also want to rank the cases according to their difficulty. ##### Context I have a dataset with N cases, $\{c_n\}_{n=1}^N$, and M raters, $\{r_m\}_{m=1}^M$, where each rater $r_m$ has assessed some number of cases, and where each case $c_n$ has been assessed by some number of raters. ##### Assumptions: - The number of cases assessed varies between raters, and the number of raters varies between cases. - The distribution of the assessments, i.e. the rater-case pairs, is unknown, i.e. the cases have not (necessarily) been assigned to raters uniformly at random. - The graph, formed by the rater-case pairs, is connected, i.e. it has a single connected component. (Hence, there are no issues with "isolated clusters", which would make a global ranking impossible.) - (If necessary, you can assume that $N=3000$, $M=50$, that each rater has assessed at least 500 cases, and that each case has been assessed by at least 5 raters. However, this is primarily to give a better idea of the problem – a general answer is preferred.) ##### Illustration: The data would look something like this ``` case_1 case_2 case_3 case_4 ... case_N rater_1 -1 1 0 1 ... 1 rater_2 1 0 -1 1 ... 0 rater_3 0 -1 0 1 ... 1 rater_4 -1 1 1 1 ... -1 ... ... ... ... ... ... ... rater_M -1 0 0 1 ... 0 ``` where - The rater correctly classified the case: 1 - The rater incorrectly classified the case: -1 - The rater did not assess the case: 0 ##### Initial Thoughts Personally, I believe some form of maximum-likelihood estimation (MLE) algorithm can be used to solve this problem. ##### Clarification The accuracy of the raters and the difficulty of the cases are unknown and potentially subject to large variability. Therefore, simply aggregating over cases or raters will not yield accurate results. (Simply aggregating would work if the cases had been assigned to raters uniformly at random and each rater had assessed a high enough number of cases. However, since this is not true, the "skill" of the raters and the difficulty of cases have to be modelled (explicitly(?)).) ##### Suggestions, Guidance, and References I would like some guidance and suggested approaches, and to be pointed to relevant literature. (Please also state what assumptions are made for a suggested approach.) ## Update Data can be found [here](https://drive.google.com/file/d/1cJ9wF1nGWtXbp4iqu-O9Ohp8Ml5dwkR1/view?usp=sharing) (1.5 MB) and read with `torch.load('X.pt')`. The distribution of the accuracy of raters: [](https://i.stack.imgur.com/XNe0s.png) The distribution of the accuracy on cases: [](https://i.stack.imgur.com/TN7MR.png) 35% of cases are correctly classified by all raters.
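One concrete version of the MLE idea is a Rasch-style logistic model with a skill parameter per rater and a difficulty parameter per case; a minimal R sketch on small simulated data (not the linked file), just to show the mechanics:
```
set.seed(1)
M <- 10; N <- 80
skill      <- rnorm(M, 1, 0.7)                     # latent rater skill
difficulty <- rnorm(N, 0, 1)                       # latent case difficulty
pairs <- expand.grid(rater = 1:M, case = 1:N)
pairs <- pairs[rbinom(nrow(pairs), 1, 0.6) == 1, ] # partially overlapping assessments
p <- plogis(skill[pairs$rater] - difficulty[pairs$case])
pairs$correct <- rbinom(nrow(pairs), 1, p)

## MLE via logistic regression with rater and case factors
## (with real data, cases answered correctly by every rater cause separation;
##  a penalized or Bayesian fit handles that more gracefully)
fit <- glm(correct ~ factor(rater) + factor(case), family = binomial, data = pairs)

rater_eff <- c(0, coef(fit)[grep("^factor\\(rater\\)", names(coef(fit)))])
case_eff  <- c(0, coef(fit)[grep("^factor\\(case\\)",  names(coef(fit)))])
order(rater_eff, decreasing = TRUE)    # rater ranking (most skilled first)
order(case_eff)                        # case ranking (hardest first)
```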
Ranking of multiple raters with partially overlapping assessments
CC BY-SA 4.0
null
2023-05-19T22:04:52.343
2023-05-31T10:56:05.727
2023-05-30T18:28:56.667
204397
204397
[ "maximum-likelihood", "modeling", "references", "binary-data", "ranking" ]
616383
1
616385
null
0
39
I'm trying to calculate a good mean shrinkage parameter for a custom quadratic discriminant analysis (QDA), and I ran into a math problem. Suppose $X=(X_1, X_2, \ldots, X_k)^T\sim{\mathcal{N}(\textbf{0},\Sigma)}$, where $\Sigma_{ii} = 1$ for $i=1,2,\ldots, k$. Define $$ \text{mean}(X) = \frac{1}{k}\sum_{i = 1}^k X_i. $$ I would like to calculate $E[\text{mean}(X)|\ \text{mean}(X) > 0]$, if there's a closed form. I suspect there isn't and curve fitting will be necessary. If that's the case, is there a programmatic way of fitting a curve to results obtained via Monte Carlo simulation? I have been trying to fit a curve as a function of $k$ and $\mathbb{1}^T \tilde{\Sigma}\mathbb{1}$, where $\tilde{\Sigma} = \text{Var}(X|\ \text{mean}(X) > 0)$, since $\tilde{\Sigma}$ should be close to the covariance matrix I have for my labels in QDA. Unfortunately, I've had limited success. Neural networks and ML techniques are not an option due to speed concerns.
Conditional Expectation of Multivariate Normal
CC BY-SA 4.0
null
2023-05-19T22:05:15.720
2023-05-19T23:47:36.237
null
null
382612
[ "conditional-expectation", "multivariate-normal-distribution", "curve-fitting" ]
616384
1
616442
null
2
26
I'm trying to gain deeper understanding of the logic behind vanishing and exploding gradients. Most sources I've come across explain the problem by saying that when the weights become too small, the forward pass and back propagation essentially become a chain of multiplications of numbers $< 1$, resulting in increasingly small output values/gradients. This is obviously true for the forward pass, and also seems to make sense for the backpropagation: As illustrated through the example made in this [article](https://programmathically.com/understanding-the-exploding-and-vanishing-gradients-problem/), the partial derivative of the cost function w.r.t. the weight $w_{1}$ is given by: $\frac{\delta J}{\delta w_{1}} = \frac{\delta J}{\delta \hat{y}} \frac{\delta \hat{y}}{\delta z_{2}} \frac{\delta z_{2}}{\delta a_{1}} \frac{\delta a_{1}}{\delta z_{1}} \frac{\delta z_{1}}{\delta w_{1}}$ Assuming that the sigmoid is used as the activation function (i.e., $\hat{y} = \sigma(z_{2})$ / $a_{1} = \sigma(z_{1})$), whose derivative is given by $\sigma(x)(1-\sigma(x))$, then $\frac{\delta \hat{y}}{\delta z_{2}}$ and $\frac{\delta a_{1}}{\delta z_{1}}$ cannot be $> 1$ and will be close to zero if $z_{1}$ or $z_{2}$ are very big/small. Considering further that $z_{i} = w_{i}a_{i-1} + b_{i}$, it follows that $\frac{\delta z_{2}}{\delta a_{1}} = w_{2}$, and $\frac{\delta z_{1}}{\delta w_{1}} = x_{1}$, where $x_{1}$ is the input. So, indeed there are no values $> 1$ in this calculation (except maybe the input $x_{1}$), and I can understand how the gradient would vanish and how this could be mitigated by using a different activation function such as the ReLU. But even using the ReLU, whose derivative is simply 1, the gradient would still diminish when the weights are $< 1$, right? Just not as quickly, I'm assuming. I'm a bit confused about the exploding gradient problem. With very large weights, $\frac{\delta \hat{y}}{\delta z_{2}}$ and $\frac{\delta a_{1}}{\delta z_{1}}$ would still be $\leq 1$ and so only $\frac{\delta z_{2}}{\delta a_{1}}$ and $\frac{\delta z_{1}}{\delta w_{1}}$ could be $> 1$ (disregarding $\frac{\delta J}{\delta \hat{y}}$ for now). I see how in a deeper network, more weights $> 1$ would go into the gradient of early layers, but also the number of terms that would be $\leq 1$ would increase. Wouldn't the weights then have to be disproportionately big for the gradient to explode and can this actually happen? Also, wouldn't using the ReLU instead of the sigmoid increase the risk of running into this problem, rather than helping to avoid it? Sorry for the lengthy post and thanks a lot for any kind of input!
Exploding/Vanishing gradients deeper understanding
CC BY-SA 4.0
null
2023-05-19T22:13:22.707
2023-05-20T21:37:25.093
null
null
387314
[ "neural-networks", "gradient-descent", "gradient" ]
616385
2
null
616383
2
null
Under the stated assumptions, $\bar{X} \sim N(0, \sigma^2)$ with $\sigma^2 = \frac{1}{k^2}1'\Sigma 1$, so the problem can be reduced to finding $E[X|X > 0]$ given $X \sim N(0, \sigma^2)$, which by definition of conditional expectation (conditioning on an event) is \begin{align} E[X|X > 0] = \frac{E[XI_{[X > 0]}]}{P(X > 0)} = 2\int_0^\infty x\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}dx = \frac{\sqrt{2}\sigma}{\sqrt{\pi}}. \end{align}
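A quick Monte Carlo check of this closed form in R, using an arbitrary equicorrelated unit-variance $\Sigma$:
```
set.seed(1)
k   <- 5
Sig <- matrix(0.3, k, k); diag(Sig) <- 1          # unit-variance covariance matrix
X   <- MASS::mvrnorm(2e5, mu = rep(0, k), Sigma = Sig)
m   <- rowMeans(X)
sigma <- sqrt(sum(Sig)) / k                        # sd of mean(X) = sqrt(1' Sigma 1) / k
c(simulated = mean(m[m > 0]), closed_form = sqrt(2) * sigma / sqrt(pi))
```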
null
CC BY-SA 4.0
null
2023-05-19T22:28:41.100
2023-05-19T23:47:36.237
2023-05-19T23:47:36.237
20519
20519
null
616386
1
null
null
1
32
The exponential family form is $$f_X(x) = h(x)\exp(\eta(\theta)\cdot T(x) - A(\theta)).$$ I know that $$\operatorname{Cov}(T(x), \log(h(x))) = 0.$$ But how can I prove it?
Prove covariance between sufficient statistic and logarithm of base measure in exponential family is equal to zero
CC BY-SA 4.0
null
2023-05-19T22:51:49.777
2023-05-20T05:42:34.237
2023-05-20T05:42:34.237
362671
388375
[ "self-study", "expected-value", "covariance", "sufficient-statistics", "exponential-family" ]
616387
1
null
null
1
8
I'm in the middle of writing my Master's thesis on undersampling techniques in imbalanced datasets, and I wanted to refer to [this](https://sci2s.ugr.es/keel/dataset/includes/catImbFiles/1976-Tomek-IEEETSMC.pdf) paper explaining the Condensed Nearest-Neighbor algorithm and what kind of adjustments Tomek Links makes to it. What niggles me is the beginning of the article that explains the steps of the CNN algorithm. As far as I know, the algorithm ends when we have assigned all the examples either to a new prototype subset $E$ or to absorbed points. But in the following explanation, point h) says to go back to b), which essentially starts over the whole procedure by setting $E=\{x\}$, where $x$ is a random datapoint from $D$. Am I missing something there or is this simply wrong? [](https://i.stack.imgur.com/HUyhE.png)
Steps for condensed nearest-neighbor algorithm
CC BY-SA 4.0
null
2023-05-19T23:28:18.650
2023-05-19T23:29:09.960
2023-05-19T23:29:09.960
388376
388376
[ "unbalanced-classes", "k-nearest-neighbour", "under-sampling" ]
616388
1
null
null
0
13
Say that there's a horse race between the two horses Alice and Bob. People bet on which horse will win. The "house" is taking no money off the top, so basically it's a direct transfer where everyone who guesses wrong loses their money, and it all gets split between the winners (I'm pretty sure that's how it works). My question is: how do the actual payouts work, since the odds keep changing? For example, the odds probably start out 1-to-1. Then a bunch of people bet on Alice, so now the odds are maybe 100 to 1. Then someone bets on Bob. Will their payout be ~100x if Bob wins? What about the people who bet on Alice initially? Will their payout be 2x if they win (which is to be expected by taking 1-to-1 odds)? Or ~1.01x (which are the odds that came about well after they made their initial bet)? Basically, I just want to know how payouts work for such gambling scenarios where the odds are theoretically ever-changing as the bets come in. Sorry if this is the wrong stackexchange for this question, but I wasn't sure where to post it @.@.
How do payouts for bets change as the odds change?
CC BY-SA 4.0
null
2023-05-19T23:51:38.370
2023-05-19T23:51:38.370
null
null
106978
[ "finance", "games", "odds" ]
616389
2
null
616067
0
null
This response is based on my experience working with colleagues who have made similar analysis suggestions. The rationale behind their suggestions (and what I am assuming is the same for your advisor) is the idea of reducing the potential number of predictor variables in the model. And this is coupled with the idea that if variable A is related to B, then you can view the relationship as B being related to A. While this is not the best practice in doing a multiple regression analysis, the idea of trimming out variables that don't show a group difference at the bivariate level (using an ANOVA with the continuous predictor as the dependent variable in the ANOVA analysis and the grouping variable as the independent variable) is one that might be used to reduce the list of potential predictors (at least at the bivariate level). Note: I'm not necessarily supporting this approach, but I'm hoping to shed insight into why the suggestion may have been offered.
null
CC BY-SA 4.0
null
2023-05-19T23:55:42.637
2023-05-19T23:55:42.637
null
null
199063
null
616390
1
null
null
0
46
I am seeking assistance in analyzing whether there are any discernible trends, seasonality, or cycles present in the monthly data represented by the following figures: [](https://i.stack.imgur.com/8EQgp.jpg) [](https://i.stack.imgur.com/ZlUEx.jpg) I would greatly appreciate it if you could let me know how to set `seasonal.periods` in this code (12 or 6): ``` train.ts.msts <- msts(train.ts.df,seasonal.periods=12) ``` Should I ignore it and let the model automatically determine the seasonal periods based on the data as follows? ``` model1 <- hw(train.ts, seasonal="additive") ```
How to determine Seasonal periods in MSTS function?
CC BY-SA 4.0
null
2023-05-20T00:19:05.340
2023-05-20T14:57:00.263
2023-05-20T14:57:00.263
94909
94909
[ "r", "time-series", "forecasting", "seasonality", "trend" ]
616391
2
null
616352
0
null
Typically if we are more confident about certain data points than others, this is because there is more error in some of the measurements than in others. We can easily incorporate this into a linear model in at least one way. What you have done seems to be quite different. I will explain how this is different in a second. Most of this is pretty standard and you can find it in most textbooks. Let's start where I like to start, from Bayes' rule: $p(\theta | D) = \frac{p(D | \theta)p(\theta)}{p(D)} = \frac{likelihood\hspace{0.1cm}* \hspace{0.1cm}prior}{marginal\hspace{0.1cm}likelihood}$ where $\theta$ is the parameters and $D$ is our data. Very frequently, rather than finding the posterior, we just want to find the set of parameters for which $p(\theta | D)$ is at a maximum, the maximum of the posterior distribution -- the most probable set of parameters. This is called MAP (maximum a posteriori) estimation. If doing MAP estimation, you can disregard the denominator, which is a normalization constant for a given model, and just maximize the numerator. For ridge regression, we are doing MAP estimation with an independent and identically distributed (iid) Gaussian likelihood and a Gaussian prior. In other words, if each datapoint $i$ has an associated feature vector $x_i$ and an associated real-valued label $y_i$, and there is a vector of weights we want to find $w$, then we say that: $p(D | w) = \prod_{i=1}^{N} N(y_i\hspace{0.1cm}|\hspace{0.1cm}x_i^T w,\hspace{0.1cm}\sigma_i^2)$ $p(w) = N(w\hspace{0.1cm}|0,\hspace{0.1cm}\lambda^{-2})$ If we include a dummy feature that is always 1 as the last element of the input vector for each datapoint, then the y-intercept is just the last element of the weight vector. Our prior here expresses a "prior belief" that the weights will be normally distributed and for the most part close to zero. $\lambda$ in some sense quantifies the strength of our prior belief. Notice $\sigma_i$, which is the standard deviation of the likelihood for a given measurement. As we'll see in a second, datapoints with a large associated standard deviation will end up being downweighted, so we will tolerate a larger absolute value in the error for those datapoints. If your "score" is the inverse of the variance in each measurement, i.e. if the larger the score, the smaller the measurement error, you could use $1/score$ as the standard deviation for a given datapoint, and this would downweight the datapoints with larger variance (smaller score) as we'll see shortly. We can rewrite the likelihood as: $p(D | w) = \prod_{i=1}^{N} N(y_i\hspace{0.1cm}|\hspace{0.1cm}x_i^T w,\hspace{0.1cm}\sigma_i^2) = N(Y\hspace{0.1cm}|\hspace{0.1cm}Xw,\hspace{0.1cm}\Sigma)$ if X is a matrix where each row $i$ is vector $x_i$ and $Y$ is a vector containing all the Y values from $1...N$, and $\Sigma$ is a diagonal matrix where $\Sigma_{ii}=\sigma_i^2$ (this is where your confidence scores would go). If we decide to minimize the negative log of the likelihood times the prior (equivalent to maximizing the likelihood times the prior, but easier), we find that we want to find $w$ such that: $argmin_w\hspace{0.2cm} constant + \frac{1}{2}(Xw - Y)^T\Sigma^{-1}(Xw - Y) + \frac{1}{2}\lambda^2w^Tw$ Notice that the last term -- the ridge regression penalty -- comes from the prior. Because of the way we set up the prior, it basically acts to penalize large weight values. Thus what I am calling $\lambda$ and you are calling $K$ measures how strongly we want to penalize large weight values.
Interestingly, you then divide $\lambda$ (or $K$) by the score, which doesn't really make sense -- I'm not sure how you got from there to your last equation. It *is* possible to set up a prior with a diagonal covariance matrix (or even a dense covariance matrix), in which case our prior basically indicates that we think certain features are "more important" or more likely to have larger associated weight values than others, but that's not what you're trying to do. It looks like you've started out by modifying your prior for the weights with the score values which should actually reflect your confidence for the datapoints, so that may be where you went wrong. But ok. Back to our derivation. Dropping the constant, eliminating the 1/2 and rearranging, we find: $argmin_w\hspace{0.2cm} w^TX^T\Sigma^{-1}Xw - 2w^TX^T\Sigma^{-1}Y + Y^T \Sigma^{-1}Y + \lambda^2 w^T w$ Now differentiate w/r/t $w$ and set equal to zero (to find the minimum), which gives: $0 = 2X^T\Sigma^{-1}Xw - 2X^T\Sigma^{-1}Y + 2\lambda^2w$ which is easily rearranged to yield: $w = (X^T\Sigma^{-1}X + \lambda^2I)^{-1}X^T\Sigma^{-1}Y$ where I is the identity matrix. This I think is the expression you're actually looking for; the confidence value for each data point goes on the diagonal of $\Sigma$. Just be careful as pointed out in the comments about interpretation if the score really corresponds to something quite different from the variance of the corresponding measurement. Pretty much any statistical software you're using, whether statsmodels in Python or R or something else, will have an implementation of weighted linear regression like this. Hope that helps...
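A minimal R sketch of that closed-form solution on simulated data, treating $1/\text{score}$ as the per-point standard deviation as described above (all numbers are made up):
```
set.seed(1)
n <- 100
X <- cbind(matrix(rnorm(n * 3), n, 3), 1)           # last column = dummy intercept feature
w_true <- c(1.5, -2, 0.5, 0.3)
score  <- runif(n, 0.5, 2)                          # higher score = more confidence
sigma  <- 1 / score                                 # 1/score as the per-point standard deviation
Y <- X %*% w_true + rnorm(n, sd = sigma)

lambda    <- 1
Sigma_inv <- diag(1 / sigma^2)                      # Sigma^{-1}, diagonal
w_hat <- solve(t(X) %*% Sigma_inv %*% X + lambda^2 * diag(ncol(X)),
               t(X) %*% Sigma_inv %*% Y)
drop(w_hat)                                         # compare to w_true
```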
null
CC BY-SA 4.0
null
2023-05-20T00:20:35.087
2023-05-20T17:19:28.820
2023-05-20T17:19:28.820
250956
250956
null
616392
2
null
616332
2
null
Without further restrictions, you can't say much about the likelihood function as $n \rightarrow \infty$. Illustrative example: suppose you have a Uniform(0, $\theta$) distribution. If $\theta_{true} < 1$, then for any $\theta \in [\theta_{true}, 1)$, the likelihood function will approach $\infty$. On the other hand, if $\theta_{true} > 1$, then the likelihood will approach 0 for any $\theta$, including $\theta_{true}$.
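A quick numerical illustration of the first case ($\theta_{true} = 0.8 < 1$) in R:
```
set.seed(1)
theta_true <- 0.8
lik <- function(theta, x) if (max(x) <= theta) theta^(-length(x)) else 0
for (n in c(10, 100, 1000)) {
  x <- runif(n, 0, theta_true)
  cat("n =", n, " L(theta_true) =", lik(theta_true, x),
      " L(0.9) =", lik(0.9, x), "\n")   # both blow up as n grows
}
```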
null
CC BY-SA 4.0
null
2023-05-20T01:16:20.813
2023-05-20T01:16:20.813
null
null
76981
null
616393
1
616399
null
1
54
Let $F$ be a distribution function, and let $g \colon \mathbb{R} \to \mathbb{R}$ be a real function. I want to prove $\int_{\mathbb{R}} g(x) dF^n(x) = n \int_{\mathbb{R}} g(x) F^{n-1}(x) dF(x)$, where $dF$ means taking the integral with respect to the distribution measure generated by the distribution function $F$, and $F^n$ refers to the pointwise product $F \times \ldots \times F$. We may assume that $F$ is continuous and strictly increasing. If $F$ admits a density $f$, then the question is easy since $F^n$ has density $nF(x)^{n-1} f(x)$. However, $F$ may not have a density, in which case I'm not sure how to approach the problem. Thank you.
How to prove $\int_{\mathbb{R}} g(x) dF^n(x) = n \int_{\mathbb{R}} g(x) F^{n-1}(x) dF(x)$
CC BY-SA 4.0
null
2023-05-20T01:53:56.600
2023-05-20T05:40:31.000
2023-05-20T05:14:24.713
20519
260660
[ "probability", "distributions", "measure-theory" ]
616394
1
null
null
2
38
It's surprisingly difficult to find a clear answer to this online. Nearly every online source claims that error terms must follow a normal distribution for statistical inference but stops short of proving it. There are a million "data science" and "statistics" articles out there that rehash the same assertions and simple arguments but neither show any math nor link to any sources. What concerns me is that I don't see this assumption being used at all in statistical papers, such as "A Practitioner's Guide to Cluster-Robust Inference" by Cameron and Miller ([link](https://cameron.econ.ucdavis.edu/research/Cameron_Miller_JHR_2015_February.pdf)). In fact, reading through this briefly, I feel increasingly convinced that normality is completely unimportant, except possibly for small sample sizes. As far as I can tell, all that's necessary is for the sampling distribution of the estimated parameter to be approximately normal, and this should be true due to the central limit theorem. Once that is known, then the estimate for the parameter's variance can be used for inference, and this only depends on the variance of the error term and not on its distribution's shape. The problem is that I'm not a statistician, and I don't have complete confidence in my reasoning. For example, I don't know for sure that CLT results in the parameter's sampling distribution being approximately normal, and I don't know for sure that tests using p-values computed this way don't reject the null hypothesis too often.
Are normal errors required for OLS with a large sample size?
CC BY-SA 4.0
null
2023-05-20T02:10:51.630
2023-05-20T03:47:45.350
null
null
383890
[ "regression", "hypothesis-testing", "least-squares" ]
616395
2
null
26300
0
null
I add a less statistically technical answer here for the less statistically inclined audience: One variable (let's say, X) can positively influence another variable (let's say, $Y$), while not being associated with $Y$, or even being negatively associated with $Y$, if there are confounding factors that distort the association between $X$ and $Y$. For example, suppose that the very best doctors are put in wards with the highest needs patients. While the quality of doctors itself has a positive influence on reducing death rates of patients, the quality of doctors could actually be negatively correlated with death rates, because of the confounding variable of the needs of the patients.
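A small R simulation of the doctors example, with made-up numbers, showing the marginal association and the adjusted effect pointing in opposite directions:
```
set.seed(1)
n <- 10000
severity <- rnorm(n)                                  # patient needs (confounder)
quality  <- severity + 0.5 * rnorm(n)                 # best doctors assigned to the sickest patients
p_death  <- plogis(-1 + 1.5 * severity - 0.5 * quality)
death    <- rbinom(n, 1, p_death)

cor(quality, death)                                   # marginal association: positive
coef(glm(death ~ quality + severity, family = binomial))["quality"]   # adjusted effect: negative
```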
null
CC BY-SA 4.0
null
2023-05-20T02:52:30.463
2023-05-20T02:52:30.463
null
null
365319
null
616396
2
null
326446
1
null
You may wish to take a look at the spatial statistics literature where different classes of massively scalable Gaussian processes have witnessed significant development with the number of space or space-time coordinates (inputs into the Gaussian processes) into several millions. The idea is not to rely solely upon computations but build different classes of Gaussian processes that can scale up to tens of millions of locations. Most of these methods are agnostic to any specific algorithm and can be adapted to full inference using MCMC or faster approximate Bayesian algorithms such as Variational Bayes or INLA. Some recent pointers are available here: Sparsity-inducing Gaussian processes including Nearest Neighbor Gaussian Processes (NNGPs) based upon an approximation due to Vecchia (arranged chronologically): Vecchia's original paper [https://doi.org/10.1111%2Fj.2517-6161.1988.tb01729.x](https://doi.org/10.1111%2Fj.2517-6161.1988.tb01729.x) While the paper was published in 1988, interest in this approach was rather lukewarm until the following NNGP paper was published in 2016: [https://doi.org/10.1080/01621459.2015.1044091](https://doi.org/10.1080/01621459.2015.1044091) (accompanying R package: [https://cran.r-project.org/web/packages/spNNGP/index.html](https://cran.r-project.org/web/packages/spNNGP/index.html)) For dynamic NNGP with space-time inputs (where neighbors of inputs are estimated rather than fixed): [https://doi.org/10.1214/16-AOAS931](https://doi.org/10.1214/16-AOAS931) [https://doi.org/10.1080/10618600.2018.1537924](https://doi.org/10.1080/10618600.2018.1537924) (focusing on efficient Bayesian algorithms for NNGP models) [https://doi.org/10.1214%2F19-STS755](https://doi.org/10.1214%2F19-STS755) (accompanying R package: [https://cran.r-project.org/web/packages/GPvecchia/index.html](https://cran.r-project.org/web/packages/GPvecchia/index.html)) Meshed Gaussian processes [https://doi.org/10.1080/01621459.2020.1833889](https://doi.org/10.1080/01621459.2020.1833889) (accompanying R package: [https://cran.r-project.org/web/packages/meshed/index.html](https://cran.r-project.org/web/packages/meshed/index.html)) Apart from this, there is ExaGeoStat here: [https://cemse.kaust.edu.sa/stsds/exageostat](https://cemse.kaust.edu.sa/stsds/exageostat) Here is some recent work on Variational NNGPs: [https://proceedings.mlr.press/v162/wu22h/wu22h.pdf](https://proceedings.mlr.press/v162/wu22h/wu22h.pdf) In terms of pure scalability, these approaches have been applied to data sets with number of inputs exceeding millions. Review articles include: [https://doi.org/10.1214/17-BA1056R](https://doi.org/10.1214/17-BA1056R) (only Bayesian) [https://link.springer.com/article/10.1007/s13253-018-00348-w](https://link.springer.com/article/10.1007/s13253-018-00348-w) [https://marcgenton.github.io/2021.HASLKG.JABES.pdf](https://marcgenton.github.io/2021.HASLKG.JABES.pdf)
null
CC BY-SA 4.0
null
2023-05-20T02:58:32.140
2023-05-20T02:58:32.140
null
null
347363
null
616397
2
null
616394
1
null
BRIEFLY, the Gaussian errors are what guarantee the theoretical sampling distributions, but there is often (but not always) good robustness to violations of this Gaussian ideal. When you have $iid$ Gaussian error terms, you are guaranteed to have the proper sampling distribution. When you do not, you do not have the proper sampling distribution. In that regard, you do need Gaussian errors. However, OLS is fairly robust to violations of the Gaussian ideal, and appealing to convergence theorems starts to explain why (more to it than CLT, since the variance is usually unknown, but CLT is a starting point). In that regard, you do not need the Gaussian ideal, since the robustness, especially with large sample sizes, gets you pretty close to the theoretical distribution. There are, however, error distributions so awful that this robustness is not enough. In that regard, you might not get close to the theoretical distributions without the Gaussian ideal. Further, all of this talk about being “close” lacks the exactness present in so much of mathematics, and what is close for one scientist might not be for another.
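A quick R simulation of the robustness point: strongly skewed (shifted exponential) errors with $n = 200$, checking how often the nominal 5% test on the slope rejects a true null of zero slope:
```
set.seed(1)
rej <- replicate(2000, {
  x <- rnorm(200)
  y <- 1 + 0 * x + (rexp(200) - 1)       # true slope 0, heavily non-Gaussian errors
  summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"] < 0.05
})
mean(rej)                                 # close to the nominal 0.05
```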
null
CC BY-SA 4.0
null
2023-05-20T03:47:45.350
2023-05-20T03:47:45.350
null
null
247274
null
616398
2
null
615290
1
null
Contrary to standard feature importance, calculating SHAP values on a hold-out set does not make a big difference because ultimately we expect our model to have similar behaviour for instances in the training set, the test set or the validation set. We do not evaluate generalisation; taking an edge case of using a simple linear model, the coefficients $\beta_j$ associated with the feature $x_j$ will have the same impact on the $i$-th instance irrespective of where $\beta_j$ is learned. (This does not mean that the realised effect of feature $x_j$ is exactly the same for all instances; especially for non-linear models we will have different local behaviour and the effect will be conditional to the values of other features.) As such, using the training set for something like SHAP will allow a post-hoc explainer like SHAP to use a "larger set" and therefore have a more faithful representation. It goes without saying that performance metrics have to be evaluated on a hold-out set. This is the reason that most tutorials, even in the official [SHAP](https://shap.readthedocs.io/en/latest/index.html) package the section [Introduction to Shapley values](https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html), use the whole dataset. Ideally, we would use a hold-out set just to ensure no data reuse and potential over-fitting but realistically the SHAP methodology provides in-sample explanations anyway. A fun paper on how this can backfire (in a rather artificially adversarial setting) is [Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods](https://arxiv.org/abs/1911.02508) (2020) by Slack et al. It shows how in some edge cases SHAP can be misled; even there though we need a specifically trained test-set generator to "muddle" the SHAP predictions, rather than use a standard "sample split".
null
CC BY-SA 4.0
null
2023-05-20T04:10:05.990
2023-05-26T03:02:33.723
2023-05-26T03:02:33.723
11852
11852
null
616399
2
null
616393
4
null
Denote the probability measure associated with $F$ by $\mu$ so that $\mu(a, b] = F(b) - F(a)$ for $a < b$. Then the probability measure $\nu$ associated with $F^n$ is given by $\nu(A) = \int_A dF^n(x)$ for $A \in \mathscr{R}^1$. The goal is to prove that for any $g$ that is non-negative or integrable with respect to $\nu$ (a natural condition on the integrand that you didn't clearly state), under the assumption that $F$ is strictly increasing and continuous, it holds that \begin{align} \int_\mathbb{R}g(x)\nu(dx) = n\int_\mathbb{R}g(x)F^{n - 1}(x)\mu(dx). \tag{1} \end{align} By Theorem 16.11 in Probability and Measure (3rd edition) by Patrick Billingsley, to show $(1)$, it suffices to show that $\nu$ has density $nF^{n - 1}(x)$ with respect to $\mu$, that is, to show that for any $A \in \mathscr{R}^1$, it holds that \begin{align} \nu(A) = \int_AnF^{n - 1}(x)\mu(dx). \tag{2} \end{align} Because the linear Borel $\sigma$-field $\mathscr{R}^1$ is generated by the class $\mathscr{I}$ consisting of intervals $(a, b]$, which is a $\pi$-system, if we can show that $\nu(a, b] = \int_a^bnF^{n - 1}(x)\mu(dx)$ holds for any $a < b$, then $(2)$ holds by Theorem 10.3 in Probability and Measure. Since $\nu(a, b] = \int_a^b dF^n(x) = F(b)^n - F(a)^n$, we have eventually reduced the original problem to proving \begin{align} F(b)^n - F(a)^n = \int_a^b nF^{n - 1}(x)\mu(dx) = \int_a^bnF^{n - 1}(x)dF(x). \tag{3} \end{align} $(3)$ is almost trivial under the condition that $F$ is continuous and strictly increasing, which allows us to make the variable substitution $u = F(x)$ in the right-hand side of $(3)$, which gives \begin{align} \int_a^b nF^{n - 1}(x)dF(x) = \int_{F(a)}^{F(b)}nu^{n - 1}du = F(b)^n - F(a)^n. \end{align} This completes the proof. It should be pointed out that this equality does not hold for a general distribution function $F$, for example, when $F$ is the distribution function of $X \sim B(1, 1/2)$. For this $F$ and $n = 2$, \begin{align} \nu(\mathbb{R}^1) = 1 \neq 2\int_\mathbb{R}F(x)dF(x) = 2 \times (0.5 \times 0.5 + 1 \times 0.5) = \frac{3}{2}. \end{align}
null
CC BY-SA 4.0
null
2023-05-20T05:13:39.790
2023-05-20T05:40:31.000
2023-05-20T05:40:31.000
20519
20519
null
616400
1
616503
null
0
27
I am new to Mahalanobis distance matching and trying to find information on how I can assess whether the matches I got using Mahalanobis matching are good matches. ``` gen long matchD3 = . mahapick hhh_age55older hhh_edu nfemale1859 nmale1859 nmale05 nmale615 nmale61up nfemale05 nfemale615 nfemale61up x1_17, idvar(a01) treated(program) pickids(matchD3) /// genfile(matchD3) matchon(hhh_sex wealth provincename) score ``` After I executed the above Stata commands, I merged and stacked all the necessary files together, and now I would like to evaluate whether my treated observations and all their matches are sound. Can someone please guide me on how to do this? I know for PSM we have to check common support, balance, etc. to make sure it meets the PSM assumptions. For Mahalanobis, I am unsure of which post-matching steps to take in order to check the quality of my resulting matches. Any guidance you may have will be helpful. Thank you!
Evaluating the quality of matches after Mahalanobis distance matching on stata
CC BY-SA 4.0
null
2023-05-20T05:14:07.717
2023-05-21T18:29:10.843
null
null
347906
[ "stata", "matching", "mahalanobis" ]
616401
1
null
null
0
12
Theorem 1 [here](https://sites.stat.washington.edu/jaw/JAW-papers/NR/jaw-gaenssler.ess.83.pdf) states $\lim_{n\to\infty} \sup_x |\mathbb{F}_n(x) - F(x)| = 0$ with probability $1$, where $\mathbb{F}_n$ is the empirical distribution function of the first $n$ $X$s, which are i.i.d., and $F$ is the common distribution function of the $X$s. Is it implied that $\mathbb{F}_n$ and $F(x)$ are random variables and so $|\mathbb{F}_n(x) - F(x)| = 0$ is an event?
Random variables implicit in Glivenko-Cantelli theorem
CC BY-SA 4.0
null
2023-05-20T05:25:20.410
2023-05-20T05:25:20.410
null
null
364080
[ "probability", "mathematical-statistics", "measure-theory", "empirical-cumulative-distr-fn" ]
616402
2
null
616386
1
null
The claim is false. As a counterexample, let $f_X(x) = xe^{-x}, x > 0$ (i.e., Gamma(2, 1) distribution) so that $T(X) = X$ and $h(X) = X$. Simulation shows that the covariance of $X$ and $\log(X)$ is around $1$: ``` > x <- rgamma(100000, shape = 2, rate = 1) > cov(log(x), x) [1] 1.007776 ```
null
CC BY-SA 4.0
null
2023-05-20T05:27:50.867
2023-05-20T05:27:50.867
null
null
20519
null
616403
1
616409
null
0
57
I'm using the tool in the following website to compare correlation coefficients: [http://vassarstats.net/rdiff.html](http://vassarstats.net/rdiff.html) The tool uses Fisher's r to z transformation to ultimately compare two correlation coefficients. Let's say I want to compare a correlation coefficient that was calculated with Pearson's correlation and a correlation coefficient that was calculated using Spearman's correlation. Is that ok to do? Or do both need to be calculated using the same correlation procedure (i.e., both Pearson's or both Spearman's?). Thanks.
Can I compare Pearson's r and Spearman's rho with Fisher's r to z transformation?
CC BY-SA 4.0
null
2023-05-20T05:43:36.490
2023-05-20T09:48:24.403
null
null
128883
[ "correlation", "pearson-r", "spearman-rho", "fisher-transform" ]
616404
1
616452
null
5
76
Suppose $X_i\overset{\text{iid}}{\sim} N(0,1)$, and define the random vector $\mathbf{X}=(X_1,\ldots,X_n)$. Then the normalized vector $\mathbf{Z}:=\frac{\mathbf{X}}{\|\mathbf{X}\|_2}$ is uniformly distributed on the sphere $\mathbb{S}^{n-1}$. Several properties of the distribution of $\mathbf{Z}$ are known, e.g., - All odd moments are $\mathbf{0}\in\mathbb{R}^{n}$ - Cross-term/second moments: $\mathbb{E}[Z_iZ_j]=n^{-1}\mathbb{1}(i=j)$ (More properties of higher order moments are listed in Chapter 9 of Mardia and Jupp's Directional Statistics [textbook](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316979).) My question is: what happens if we define $X_i\overset{\text{ind}}{\sim}N(0,\sigma_i^2)$, more generally? One could rely on symmetry arguments to show that Property 1 above still holds for the normalized vector $\mathbf{Z}$, but is there a straightforward way to obtain second and higher order moments of $\mathbf{Z}$? I am most keen to be pointed to any references or techniques (recurrences or recursions perhaps?) for deriving these quantities. Thanks in advance!
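A quick Monte Carlo look (in R) at the second moments when the variances differ; the $\sigma_i$ values are arbitrary:
```
set.seed(1)
sig <- c(1, 2, 4)                                 # arbitrary unequal standard deviations
X <- sapply(sig, function(s) rnorm(2e5, 0, s))    # columns X_1, X_2, X_3
Z <- X / sqrt(rowSums(X^2))                       # normalize onto the sphere
round(crossprod(Z) / nrow(Z), 4)                  # E[Z_i Z_j]: off-diagonals ~0, diagonals unequal but summing to 1
```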
Non-Uniform Spherical Distributions
CC BY-SA 4.0
null
2023-05-20T05:58:18.573
2023-05-21T07:21:17.300
2023-05-20T23:10:45.813
5176
266513
[ "distributions", "multivariate-analysis", "multivariate-normal-distribution", "circular-statistics" ]
616405
1
616423
null
0
40
One result of Slutsky's theorem is that when $X_{n}$ is a sequence of random variables converging to a random variable $X$ and $Y_{n}$ a sequence of random variables converging to a constant $c$ then it holds that in the limit $n \rightarrow \infty$ we have that $$X_{n}Y_{n}\ {\xrightarrow {d}}\ cX$$. But apparently this doesn't hold if the sequence $Y_{n}$ converges to a non-constant random variable $Y$, i.e. generally it does not hold that $$X_{n}Y_{n}\ {\xrightarrow {d}}\ XY.$$ Does anybody have a good intuitive counter-example for this case? Edit: Both $X_{n}$ and $Y_{n}$ converge in distribution to their respective limits.
Example of product of sequence of random variables that does not converge in distribution to the product of the limits
CC BY-SA 4.0
null
2023-05-20T06:30:51.087
2023-05-21T12:36:34.410
2023-05-21T12:36:34.410
388389
388389
[ "probability", "random-variable", "convergence", "slutsky-theorem" ]
616406
1
null
null
1
24
I have read 2 papers ([this](https://arxiv.org/pdf/2003.02395.pdf) and [this](https://openaccess.thecvf.com/content_CVPR_2019/papers/Zou_A_Sufficient_Condition_for_Convergences_of_Adam_and_RMSProp_CVPR_2019_paper.pdf)) about the convergence of the Adam optimizer. One of the assumptions is the smoothness of the loss function, meaning that the gradient of the loss function is Lipschitz continuous. Let's consider a neural network $f(\theta)$ with a loss function $L$. If I want to prove the smoothness of the loss function, does that mean I have to derive the gradient of the loss function w.r.t. every weight (i.e., do the backpropagation) and prove that the Lipschitz inequality holds? Also, does this assumption depend on the network architecture?
How to prove the smoothness assumption of the loss function in the convergence of the Adam optimizer?
CC BY-SA 4.0
null
2023-05-20T08:16:58.023
2023-05-20T08:16:58.023
null
null
387019
[ "neural-networks", "adam" ]
616407
1
null
null
0
9
Can I compare the forecasting performance of two models, VECM and VAR, on the same dataset, if in the case of the VECM some of the variables are I(1) in levels and others are stationary?
Can I compare the forecasting performance of two models VECM and VAR
CC BY-SA 4.0
null
2023-05-20T08:28:03.440
2023-05-20T08:28:03.440
null
null
361080
[ "time-series", "vector-autoregression", "vector-error-correction-model" ]
616408
1
617191
null
1
19
The equation of a margin in SVM is $\frac{2}{\lVert \mathbf{w} \rVert}$, which I completely understand. It is also rewritten as $\frac{1}{2} \lVert \mathbf{w} \rVert^2$ for the sake of mathematical convenience or "convention". However, I do not see how these two equations are equivalent. I understand it works but I just don't see it and it's bothering me. Could anyone please help me with the mathematical proof that they're equivalent?
SVM margin equations
CC BY-SA 4.0
null
2023-05-20T09:40:38.417
2023-05-29T07:53:09.303
null
null
388398
[ "mathematical-statistics", "svm" ]
616409
2
null
616403
1
null
These two coefficients don't measure the same thing, so your comparison will be more appropriate if you use the same correlation in both cases. If you're worried about model assumptions, use Spearman. However, keep in mind that model assumptions are always idealisations, so it is wrong to say that, for example, you can only use the Pearson correlation if data are normal. Even if you want a formal comparison, I'd look at plots first to see whether there is anything that makes the Pearson correlation really misleading in at least one of your cases, such as domination by an outlier. Note also that the tool you cite is about significance testing and will give you a misleading comparison at least if the sample sizes are different. It also doesn't solve the problem that the two correlations do not measure the same thing.
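For reference, what such a tool computes is essentially the classical Fisher $r$-to-$z$ comparison for two independent samples; a minimal R sketch (the inputs `r1`, `n1`, `r2`, `n2` are placeholders) makes it clear that the sample sizes enter the test directly:
```
compare_correlations <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)                          # Fisher transform of the first correlation
  z2 <- atanh(r2)                          # Fisher transform of the second correlation
  se <- sqrt(1/(n1 - 3) + 1/(n2 - 3))      # approximate standard error of z1 - z2
  z  <- (z1 - z2)/se
  c(z = z, p = 2 * pnorm(-abs(z)))         # two-sided p-value
}
compare_correlations(r1 = 0.6, n1 = 50, r2 = 0.4, n2 = 200)
```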
null
CC BY-SA 4.0
null
2023-05-20T09:48:24.403
2023-05-20T09:48:24.403
null
null
247165
null
616410
1
null
null
2
42
Hello, I am modeling a regression for my school project, and one of the questions I got from my professor was how I would model this with polynomial regression. I have quite good random forest and kNN regression models; however, it is desired that I improve my polynomial regression model. My question is: is polynomial regression any good for identifying multiple curves? Should I raise the degree of my polynomial regression so it's more "wobbly", or should I try to make 2 different models, as it is fairly easy to identify which curve each data point belongs to? I included a picture of the graph: [](https://i.stack.imgur.com/7xDEP.png)
Use of polynomial regression when I can identify more curves
CC BY-SA 4.0
null
2023-05-20T10:12:20.537
2023-05-20T16:27:18.727
null
null
388401
[ "regression", "machine-learning", "polynomial" ]
616411
2
null
616338
1
null
As you said, in problems where RNNs are used, we assume that there exist a function $f: (\mathcal X\times \mathcal X)\to \mathcal X$ and a function $g: \mathcal X \to \mathcal Y$ such that for all $T\in\mathbb N $, the correspondence between a sequence $x_1,\ldots,x_T\in\mathcal X$ and an output $y_T\in\mathcal Y$ is given by $$y_T = g( h_T)$$ Where the sequence $(h_t)_{1\le t\le T}$ is recursively defined as $$h_1 := f(x_1,x_0),\quad h_{t+1} = f(h_t,x_t) $$ for $x_0$ some element of $\mathcal X$. Here, the function $g$ is not too important (it doesn't depend on time, so it is efficiently learnable by a standard Feedforward Neural Network up to very mild assumptions), so we will ignore it (that is, we will pretend it is known) and focus on $f$. In the vanilla RNN, given an input sequence $x_1,\ldots,x_T$, we approximate the output $y_T$ by $$ \hat y_T = g(\hat h_T),\quad \hat h_1 := f_\theta (x_1,x_0),\quad \hat h_{t+1}=f_\theta(\hat h_t, x_t) $$ Where, crucially, the function $f_\theta$ is of the form $$f_\theta(h,x) := \sigma(Ah + Bx + v) \tag1$$ Where $\sigma$ is some activation function (tanh, sigmoid... or even the identity !) and $A,B,v$ are matrices/vectors of appropriate dimensions (these are the parameters we want to optimize over). Now, something is very clear : if the target function $f$ is not expressible as in expression $(1)$, RNNs will never be able to learn the true input/output mapping, since even the best $f_\theta^*$ can remain arbitrarily far from $f$. Therefore, if we want any approximation guarantees on vanilla RNNs, we have to assume that $f$ is indeed (close to) expressible as in $(1)$. This is indeed the approach taken in the literature : see the recent [A Brief Survey on the Approximation Theory for Sequence Modelling](https://arxiv.org/abs/2302.13752) or chapter 8 of [this older review](https://arxiv.org/abs/2007.04759). This may seem underwhelming, since this seems like an overly restrictive assumption in practice (outside of dynamical systems). Indeed, this is an issue and it is partially allievated by using more sophisticated units $f_\theta$ (like GRUs) which improve the expressivity of the neural networks, but from an approximation perspective, the improvements are marginal at best. --- So now you ask : why not take $f_\theta$ to be a good old Deep Feedforward Neural Network ? We know that these will be able to represent almost any functions, right ? That is perfectly true : given any mildly regular function, you can find a Neural Network that approximates this function well in basically any norm you want (Sobolev norm, $L^2$ norm, uniform norm...). The issue however is that we need to learn that function. That is, you are free to pick a Neural Network with one billion parameters as your class of functions $\{f_\theta\}$, in which case you're sure that for some parameters $\theta $, you will have a good approximation of $f$ (and thus the output $y_T$), but the issue is that you don't know what these parameters are, you have to find them ! For these kind of models, the state-of-the-art way to find this "best neural network" is by performing gradient descent/backpropagation on the loss with respect to the parameters. 
Here is the bad news: [RNNs highly suffer from the vanishing/exploding gradient problem](https://stats.stackexchange.com/questions/140537/why-do-rnns-have-a-tendency-to-suffer-from-vanishing-exploding-gradient), that is, even if you pick the simplest function $f_\theta$ given in $(1)$, the norm of the gradient will roughly scale like $\|A\|^T$, which vanishes/explodes exponentially fast in $T$, leading to unstable training and making it very hard to find the best possible RNN in the hypothesis class (this is the reason why people have considered alternatives like LSTMs, bi-directional RNNs etc. in the first place: they alleviate this phenomenon by quite a lot). If you replace $f_\theta$ from the vanilla expression $(1)$ with a Deep Neural Network ([which also suffers from the vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem)), you will basically compound the effect, leading to a gradient norm roughly scaling like $\|A\|^{LT}$, where $L$ is the depth of the Neural Network you picked. This is untrainable in practice, and thus finding the best Neural Network $f^*_\theta$ would simply be impossible. (I encourage you to try on a non-trivial problem and report your results here!) In conclusion: if we replaced the function $f_\theta$ from a linear combination of $h$ and $x$ with a Deep Network, we would gain a lot of expressivity, but at the cost of having unlearnable, and thus unusable, models. Also note that it may seem restrictive to ask for $f$ to be of the form $(1)$, but in practice, RNNs (or some variants) have been successfully used on numerous highly complex NLP tasks, so it seems like the assumption is actually not that unreasonable. My intuition for that would be that due to the time dependency, even for a very simple function $f$, the relationship between $y_T$ and $x_1,\ldots,x_T$ might be highly complex. This is essentially the main idea of [dynamical systems](https://en.wikipedia.org/wiki/Dynamical_systems_theory) and [chaos theory](https://en.wikipedia.org/wiki/Chaos_theory) (though to be honest I know nothing about these topics). Hope that helps!
null
CC BY-SA 4.0
null
2023-05-20T10:16:17.213
2023-05-20T10:16:17.213
null
null
305654
null
616412
1
null
null
0
21
[](https://i.stack.imgur.com/IG7P1.jpg) I have a time series dataset that records the number of passengers per day. Based on this data, I need to make predictions for the next three months at the monthly level. The provided plot illustrates the number of passengers in different areas of the city since 2019. Each line represents the passenger count observed in a specific area. The data also exhibits weekly seasonality. However, I am currently facing the following issues: - What models can be used to predict values over the long run? - How can I predict the values for the new areas (present since Nov-2022) in the next three months, considering the limited available data? - Should I treat each area as a separate time series? - As evident from the plot, there is noise and a downward trend between certain dates; how can I smooth out this noise without affecting the chosen prediction model? I am new to time series forecasting.
Time series forecasting for the next three months
CC BY-SA 4.0
null
2023-05-20T11:27:54.320
2023-05-20T16:01:10.480
2023-05-20T16:01:10.480
53690
388402
[ "time-series", "forecasting", "predictive-models", "seasonality" ]
616414
2
null
616193
0
null
Main point: use a standard RCT methodology but adjust your $\alpha$ level accordingly to account for multiple comparisons. More details: Irrespective of how many arms we have and their respective sizes, it is still the case that the smaller sample sizes dominate the power of the study - smaller sample sizes inflate the occurrence of Type II errors. That means we need to run our experiment longer (to get more data) and/or make the different arms of the study have "more dramatic" changes (to get larger expected effect sizes). On top of that, we need to apply a correction for multiple comparisons to our $\alpha$ too, because we now have two comparisons; i.e., using a [family-wise error rate](https://en.wikipedia.org/wiki/Family-wise_error_rate) correction with an adjusted level of $\frac{\alpha}{k}$ is usually fine ($k=2$ here). Notice that this latter point is counter to the usual idea of increasing $\alpha$ to avoid Type II errors. If one wants to explore this behaviour a bit further using R, the function [stats::p.adjust](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html) is immediately available in every R installation and offers a good place to start.
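For instance, a minimal sketch with two hypothetical p-values:
```
p_raw <- c(0.030, 0.041)                 # made-up p-values for the two comparisons
p.adjust(p_raw, method = "bonferroni")   # each p multiplied by k = 2: 0.060, 0.082
p.adjust(p_raw, method = "holm")         # a slightly less conservative alternative
```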
null
CC BY-SA 4.0
null
2023-05-20T11:56:21.863
2023-05-20T11:56:21.863
null
null
11852
null
616415
1
null
null
0
13
When we over-difference a time series, we introduce unit roots into its moving average component, hence obtaining an IMA. My question is: an IMA is an integrated time series, hence it has unit roots, and in order to remove them I should take differences; why doesn't it work this way, and why, by taking differences of an IMA, do I introduce other unit roots instead of removing them? It would be great if you could provide some references, because in the book that I'm studying this part isn't so clear...
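To make concrete what I am asking about: take white noise $X_t=e_t$ (already stationary, no unit roots anywhere). Then $$\nabla X_t=(1-B)e_t,\qquad \nabla^2 X_t=(1-B)^2 e_t,$$ so each difference appears to multiply the moving average polynomial by another factor of $(1-B)$ instead of cancelling one; this is the step I would like to understand, ideally with a reference.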
What happens if you difference a IMA?
CC BY-SA 4.0
null
2023-05-20T12:19:40.417
2023-05-20T16:03:30.187
2023-05-20T16:03:30.187
53690
388411
[ "time-series", "arima", "unit-root", "differencing" ]
616416
1
null
null
0
18
I use monthly log returns for some stock portfolios and reject the null of the ADF test for both. Hereafter I use AIC to select the best-fitting models using auto.arima in R. The selected models are ARIMA(0,0,0) with zero mean and ARIMA(0,0,0) with a drift. Now I have a couple of questions. - Since I use log returns, which is equivalent to having differenced the log prices, would it then be more appropriate to denote the number of differencings d as 1, so the models are ARIMA(0,1,0)? I have read others using a similar approach who denote d as 0, therefore I'm in doubt. Besides this, working with returns generally seems more appropriate for financial data than the original prices. - Since I reject the null of the ADF test, there is evidence for the series being stationary, and therefore not a random walk. Yet the models selected by AIC are equivalent to either white noise or a random walk with a drift?
Rejection of ADF-test for log returns and AIC selected ARIMA(0,0,0) and ARIMA (0,0,0) with a drift?
CC BY-SA 4.0
null
2023-05-20T12:20:22.757
2023-05-20T12:20:22.757
null
null
388412
[ "arima", "stationarity", "aic", "augmented-dickey-fuller", "random-walk" ]
616417
1
null
null
1
9
How can I use heteroscedasticity tests in a model with ARIMA errors?
How to apply heteroscedasticity tests in a model with ARIMA errors?
CC BY-SA 4.0
null
2023-05-20T12:39:19.777
2023-05-20T16:07:02.050
2023-05-20T16:07:02.050
53690
361080
[ "time-series", "hypothesis-testing", "arima", "heteroscedasticity" ]
616419
2
null
419830
1
null
No, not only does cross-validation make no sense here, but no analysis makes any sense at all. You cannot validly analyze 250 features on a sample size of 16. Cross-validation cannot validly help you with that. Let's go to the raw, absolute minimum: suppose you had only one predictor and you want to estimate an outcome with only 16 samples. That alone would be almost definitely an insufficient sample size for any valid conclusions. A sample is supposedly a representative sample from some larger population with the hope that you would want to extrapolate your findings to the larger population. To what kind of larger population could you validly extrapolate 16 cases? I do not know of any statistical test (not to talk of machine learning algorithm) for which a sample of 16 would be valid--and I'm still talking about the minimum case of just one predictor. 250 predictors make your sample 250 times as insufficient as for one predictor, so you can only imagine the magnitude of the problem. (That is why I am not even bothering to mention the possibility of dimension reduction--which you should probably do with 250 predictors--because that would not solve the problem even if you could reduce all 250 predictors into just one dimension.) CV would make the problem only worse by reducing your insufficient sample from 16 to something smaller. Even if you carried out leave-one-out cross-validation, each of the 16 training samples of size 15 would be invalid. 15 wrongs do not magically make a right. Sorry; this is probably not what you want to hear, but I think your best solution is to collect a truly large sample. Remember that as far as the computer is concerned, machine learning is just playing with numbers. If you give it data, it will try to give you a response according to some mathematical rules. The computer cannot tell you whether its response is good or completely worthless--only a proper study design can assure that.
null
CC BY-SA 4.0
null
2023-05-20T13:26:29.007
2023-05-20T13:26:29.007
null
null
81392
null
616420
2
null
616362
1
null
It's effective degrees of freedom for the smooth that is used in the test of the null hypothesis $\text{H}_0: f_j(x_{ji}) = \mathbf{0}$ (that the function is a constant 0 function). There are at least three definitions of the effective degrees of freedom for a smooth term (see `$edf`, `$edf1`, and `$edf2` components in a fitted GAM object). These represent different views on the complexity of the smooth, with the reference EDF being one view that is useful in tests (I believe this is `$edf1`). `$edf2` is an EDF that is corrected for having estimated the smoothing parameters, and `$edf` is the standard definition of EDF.
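A minimal sketch of where these components live in a fitted `mgcv` object (the simulated data and model here are purely illustrative):
```
library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200, verbose = FALSE)   # built-in simulated example data
b   <- gam(y ~ s(x2), data = dat, method = "REML")
b$edf                 # per-coefficient EDF; summing a smooth's entries gives its EDF
b$edf1                # alternative EDF definition (related to the reference EDF above)
b$edf2                # EDF corrected for smoothing-parameter estimation (REML/ML fits)
summary(b)$s.table    # the "Ref.df" column used in the test of the smooth term
```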
null
CC BY-SA 4.0
null
2023-05-20T13:39:28.690
2023-05-20T13:39:28.690
null
null
1390
null
616421
2
null
616369
2
null
Turn it into a contingency table: $$\begin{array}{c|cccccc} & \text{Comp A} & \text{Comp B} & \text{Comp C} & \text{Comp D} & \text{Comp E} \\ \hline \text{Adv 1}&3&0&0&0&0\\ \text{Adv 2}&0&0&1&1&0\\ \text{Adv 3}&0&2&0&0&1\\ \end{array}$$ And use the typical treatment of contingency tables (e.g. [Most appropriate statistical test for count data (2x2 contingency)](https://stats.stackexchange.com/questions/610610/)). This requires several considerations about how the data is generated.
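For instance, treating the table entries as raw counts, a minimal R sketch of that treatment would be:
```
tab <- matrix(c(3, 0, 0, 0, 0,
                0, 0, 1, 1, 0,
                0, 2, 0, 0, 1),
              nrow = 3, byrow = TRUE,
              dimnames = list(paste("Adv", 1:3), paste("Comp", LETTERS[1:5])))
fisher.test(tab)                            # exact test, reasonable with sparse counts
chisq.test(tab, simulate.p.value = TRUE)    # chi-squared with a simulated p-value
```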
null
CC BY-SA 4.0
null
2023-05-20T13:46:49.230
2023-05-20T13:46:49.230
null
null
164061
null
616422
1
null
null
0
26
This is probably a stupid question that I'm overthinking, and apologies if it's super obvious, but - Say you have some regression model `y ~ f(x1, x2...)` that you fit to data. Call this the 'main model' for now. You get your regression from the main model, and you use a `predict` type function to yield in-sample predicted `y` values (let's say `y*`) for all the observed `x1, x2...` in your data. Now, `y*` are predicted values for `y`, so let's say we now run a simple linear regression with the model `y ~ y*` - call this the 'simple model'. My question is, should various goodness-of-fit statistics (e.g. RMSE, R^2) for the main model and the simple model be identical? Why or why not? (Unless 'why' is "because they're literally the same thing", in which case feel free to just say that!) Edit: tested with following code: ``` > test_data # A tibble: 20 × 3 y x1 x2 <dbl> <dbl> <dbl> 1 -0.0149 21.8 0.487 2 -0.0170 23.8 0.131 3 0.0128 20.8 0.733 4 -0.0204 24.3 0.327 5 -0.0401 23.1 0.404 6 0.0930 21.2 0.926 7 -0.0513 22.0 0.529 8 -0.129 30.3 -0.715 9 -0.0170 23.8 0.131 10 -0.0409 22.3 0.442 11 -0.137 21.3 0.446 12 -0.0338 25.9 0.316 13 -0.00822 22.3 0.648 14 0.0345 20.8 0.643 15 0.0471 24.3 0.200 16 -0.00650 21.5 0.547 17 -0.0486 22.1 0.901 18 -0.0737 28.5 -0.297 19 0.0234 26.8 0.118 20 -0.00890 24.0 0.546 main <- lm("y ~ x1 + x2", test_data) test_data$y_pred <- predict(main) # Add predictions from main model simple <- lm("y ~ y_pred", test_data) # Regress observed against predicted ``` Here are the `summary` results for the main and simple models: ``` > summary(main) Call: lm(formula = "y ~ x1 + x2", data = test_data) Residuals: Min 1Q Median 3Q Max -0.106462 -0.019905 -0.003817 0.030844 0.083208 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.240293 0.245335 -0.979 0.3411 x1 0.007453 0.009491 0.785 0.4431 x2 0.115037 0.064026 1.797 0.0902 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.04826 on 17 degrees of freedom Multiple R-squared: 0.2788, Adjusted R-squared: 0.1939 F-statistic: 3.286 on 2 and 17 DF, p-value: 0.06216 > summary(simple) Call: lm(formula = "y ~ y_pred", data = test_data) Residuals: Min 1Q Median 3Q Max -0.106462 -0.019905 -0.003817 0.030844 0.083208 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -6.517e-17 1.336e-02 0.000 1.0000 y_pred 1.000e+00 3.791e-01 2.638 0.0167 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.0469 on 18 degrees of freedom Multiple R-squared: 0.2788, Adjusted R-squared: 0.2387 F-statistic: 6.958 on 1 and 18 DF, p-value: 0.01671 ``` So it looks like the multiple R-squared is the same, but the adjusted R-squared, F, and p-value are different, which I assume is because the degrees of freedom are slightly different for each model? 
The RMSE looks like it's the same though: ``` > sqrt(mean(simple$residuals ^2)) [1] 0.04449425 > sqrt(mean(main$residuals ^2)) [1] 0.04449425 ``` --- Test data here: ``` structure(list(y = c(-0.0149054927871505, -0.0170137466591455, 0.0128318420089774, -0.0204478017687445, -0.0401227514812484, 0.0929525887669447, -0.0513457972324178, -0.129018618587365, -0.0170137466591455, -0.040857954317206, -0.136977183779344, -0.0337755478138668, -0.0082243454229032, 0.0344788355710609, 0.0471334903603776, -0.00649550644694302, -0.0485828575527817, -0.0737301275807424, 0.0233928795747054, -0.00889576418458499 ), x1 = c(21.7906349206349, 23.79425, 20.8316981132075, 24.3160526315789, 23.1439393939394, 21.1842857142857, 21.995956284153, 30.2811392405063, 23.79425, 22.2793373493976, 21.2571212121212, 25.9298611111111, 22.3015584415584, 20.8416, 24.3116129032258, 21.505, 22.1379824561403, 28.5379207920792, 26.7947887323944, 23.9788148148148), x2 = c(0.487459583729331, 0.131205294570451, 0.733106660831697, 0.326525914159468, 0.404294159066793, 0.92635727520522, 0.529406871129557, -0.715495784602749, 0.131205294570451, 0.442094223229665, 0.446343589682511, 0.316249126379862, 0.647954821964668, 0.64301642106813, 0.200120016662466, 0.546638394449764, 0.900781869154798, -0.296739694460581, 0.118212601580376, 0.546327126903104), y_pred = c(`1` = -0.021808691493989, `2` = -0.0478577256820188, `3` = -0.000697387783133552, `4` = -0.0214996272484705, `5` = -0.0212893728657664, `6` = 0.0241613984725961, `7` = -0.0154529257119745, `8` = -0.0969115624454971, `9` = -0.0478577256820188, `10` = -0.0233849941968201, `11` = -0.0305148963963352, `12` = -0.0106538699900019, `13` = 0.000462129290600535, `14` = -0.0109872636341906, `15` = -0.0360740236786066, `16` = -0.0171298427121645, `17` = 0.0283273396363096, `18` = -0.0617317563447944, `19` = -0.0269888819695119, `20` = 0.0012720744442658)), row.names = c(NA, -20L), class = c("tbl_df", "tbl", "data.frame")) ```
Goodness of fit for in-sample predictions
CC BY-SA 4.0
null
2023-05-20T14:05:37.657
2023-05-20T18:03:58.360
2023-05-20T18:03:58.360
361155
361155
[ "regression", "goodness-of-fit", "r-squared" ]
616423
2
null
616405
2
null
I am assuming you are looking for a counterexample of $X_n \overset{d}{\to} X$, $Y_n \overset{d}{\to} Y$ but $X_nY_n \overset{d}{\not\to} XY$. If so, let $X, Y$ be i.i.d. $N(0, 1)$ random variables, and let $X_n \equiv Y_n = X$. Then the conditions $X_n \overset{d}{\to} X$, $Y_n \overset{d}{\to} Y$ are clearly satisfied. However, $X_nY_n = X^2 \sim \chi^2_1$ obviously does not converge in distribution to $XY$. One way to see this is by noting $P[X^2 \leq 0] = 0 \not\to P[XY \leq 0] = \frac{1}{2}$.
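A quick simulation illustrating the last display (purely to visualize the counterexample):
```
set.seed(42)
x <- rnorm(1e5)       # plays the role of X, with X_n = Y_n = X
y <- rnorm(1e5)       # an independent copy, playing the role of Y
mean(x * x <= 0)      # P[X_n Y_n <= 0] = P[X^2 <= 0] = 0
mean(x * y <= 0)      # P[XY <= 0], close to 1/2
```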
null
CC BY-SA 4.0
null
2023-05-20T14:11:04.327
2023-05-20T14:11:04.327
null
null
20519
null
616424
1
null
null
0
23
I am working with what I believe is quite a big dataset, with $17$ columns (each corresponds to one variable) and $41714$ rows (each corresponds to one observation). My goal is to implement clustering algorithms with $16$ of these variables and then check if the clusters created coincide somehow with the categories that are given by the categorical variable I am leaving out of the cluster formation. I was able to perform the $k$-means algorithm using only the quantitative variables and also performed the $k$-means algorithm using every type of variable (in this case, I coded the qualitative variables accordingly). The problem: issues started coming up when I tried to implement different methods of clustering (initial results using $k$-means weren't satisfactory): when trying to compute, for example, the $k$-medoids method, I am given the following error (by RStudio): > Cannot allocate vector of size $6.5$ Gb. After some research, I figured this is a RAM memory issue, and so my idea to deal with this is to remove from my dataset some variables which are less correlated with my left-out categorical variable. How should I do this? The variable left out is "City" and it is a factor with $9$ levels ($9$ city names). The other $16$ variables are mixed-type variables (some are logical/boolean, others are numeric or integers). How can I determine which variables to remove from the dataset? Is a chi-squared test a good approach for making these decisions? Thanks for any help in advance.
How to deal with a large dataset when implementing clustering?
CC BY-SA 4.0
null
2023-05-20T14:27:00.310
2023-05-20T14:27:00.310
null
null
383130
[ "r", "machine-learning", "correlation", "dataset", "mixed-type-data" ]
616425
1
616427
null
2
54
Let $X_1,X_2,\ldots$ be iid random variables with Cauchy distribution and $S_n=X_1+X_2+\cdots+X_n$, find $P(S_n>an)$, $a>0$. This is exercise 8.44 of the intro book of Grimmett and Welsh. We cannot use the large deviation theorem, because the $X_i$ have no defined mean. The only thing that came to mind was that $S_n$ must also be a random variable with Cauchy distribution, hence: \begin{align} & P(S_n>an)=1-P(S_n<an) \\[6pt] = {} & 1-\int_{-\infty}^{an} \frac{1}{\pi(1+x^2)}\, dx =\frac{1}{2}-\arctan(an)\pi^{-1}.\end{align} The correct answer is $\frac{1}{2}-\arctan(a)\pi^{-1}$. My method is clearly wrong. I also could not use the central limit theorem, because the mean is not defined.
Let $X_1,X_2,\ldots$ be iid random variables with Cauchy distribution and $S_n=X_1+X_2+\cdots+X_n$, find $P(S_n>an)$, $a>0$
CC BY-SA 4.0
null
2023-05-20T14:53:39.153
2023-05-21T14:03:17.340
2023-05-21T14:03:17.340
20519
386534
[ "random-variable", "independence", "cauchy-distribution" ]
616427
2
null
616425
4
null
If $X_1, \ldots, X_n$ are i.i.d. standard Cauchy distribution $C(0, 1)$, then since Cauchy distribution is closed under independent summation (see [the second item of this link](https://en.wikipedia.org/wiki/Cauchy_distribution#Transformation_properties)), the distribution of $S_n = X_1 + \cdots + X_n$ is still Cauchy, with location parameter $0$, and scale parameter $n$ (i.e., $C(0, n)$ with density function $f(x) = \frac{1}{\pi n(1 + x^2/n^2)}$), it then follows that \begin{align} P(S_n > an) = \int_{an}^\infty\frac{1}{\pi n(1 + x^2/n^2)}dx = \int_a^\infty \frac{1}{\pi (1 + u^2)}du = \frac{1}{\pi} \left(\frac{\pi}{2} - \arctan a\right), \end{align} matching the answer key.
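A quick sanity check by simulation (the values of $n$ and $a$ are arbitrary):
```
set.seed(1)
n <- 50; a <- 1
S_n <- replicate(1e5, sum(rcauchy(n)))   # sums of n standard Cauchy draws
mean(S_n > a * n)                        # simulated P(S_n > an)
1/2 - atan(a)/pi                         # theoretical value, 0.25 for a = 1
```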
null
CC BY-SA 4.0
null
2023-05-20T15:30:45.430
2023-05-20T22:13:46.263
2023-05-20T22:13:46.263
20519
20519
null
616429
1
null
null
1
11
I've estimated a VECM model on four variables. My results are: - Unit root tests (ADF): all variables are I(1) - The Johansen rank test suggests estimating a VECM with a trend and 1 cointegrating vector - Estimation of the VECM gives very large error correction terms (estimated via the cajorls function in R, and then summary of model$rlm): from -7 to 10 - Impulse response functions also do not converge My question is: how should I interpret these results? Do they indicate stationarity in my variables? I would be very happy if someone could point me in the direction of some literature that handles this case.
Interpretation of large error correction terms in VECM
CC BY-SA 4.0
null
2023-05-20T15:35:48.823
2023-05-20T15:41:45.007
2023-05-20T15:41:45.007
388423
388423
[ "econometrics", "vector-error-correction-model" ]
616430
2
null
616410
1
null
As a comment suggests, you can certainly combine a polynomial fit for a continuous predictor along with its interactions with other predictors in your model. Those interactions could be with categorical predictors or other continuous predictors. As that comment also notes, it seems that you have already categorized another continuous predictor, `ZPO`. That's [not a good idea](https://stats.stackexchange.com/q/230750/28500) in general. You will probably be better off keeping all your continuous predictors as continuous. You probably don't want to go to a higher-degree polynomial that spans the entire data range. [This page](https://stats.stackexchange.com/q/549012/28500) both points out the limitations of such polynomial fits (particularly of high degree) and points out a better general solution: regression splines. A regression spline fits polynomials within several partitions of the range of the data, while ensuring continuity and smoothness of the overall fit. Section 2.4 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/genreg.html#sec-relax.linear) is one good introduction. Regression splines are a particular type of "generalized additive model" (not to be confused with "generalized linear model" or "generalized least squares" or other "generalized" methods in statistics). Section 7.7 of [An Introduction to Statistical Learning](https://www.statlearning.com) outlines the principles; earlier sections of Chapter 7 discuss different types of splines.
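As a minimal illustration of the spline alternative (the names `dat`, `y`, `x`, and `group` are placeholders, not the variables in your data):
```
library(splines)
fit_poly   <- lm(y ~ poly(x, 6), data = dat)      # one global high-degree polynomial
fit_spline <- lm(y ~ ns(x, df = 4), data = dat)   # natural cubic spline, 4 df
# if the two visible curves correspond to an observable grouping, let the fit differ by group
fit_group  <- lm(y ~ ns(x, df = 4) * group, data = dat)
```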
null
CC BY-SA 4.0
null
2023-05-20T16:27:18.727
2023-05-20T16:27:18.727
null
null
28500
null
616432
1
null
null
1
10
Recently, I read a paper ([this paper](https://arxiv.org/pdf/2206.13290.pdf)) that provides a convergence analysis of the Adam optimizer without assuming the smoothness condition on the loss function. Suppose I have a neural network $f(\theta)$ with a loss function $L$, and I want to verify whether my network satisfies the assumptions (S1)-(S3) and (A1)-(A2). I am confused about how to prove that a specific network $f(\theta)$ and a specific loss function $L$ satisfy those assumptions. How should I get started? Also, do these assumptions depend on the network architecture? Note: Maybe this question seems a bit silly because I'm still relatively new to neural network analysis.
How to prove that my neural network satisfies convergence assumptions of the Adam optimizer?
CC BY-SA 4.0
null
2023-05-20T17:59:06.523
2023-05-20T17:59:06.523
null
null
387019
[ "neural-networks", "adam" ]
616433
1
616444
null
2
42
For GLRT, the ratio is: $$ \Lambda^* = \frac{\max_{\theta \in \omega_0} L(\theta)}{\max_{\theta \in \omega_1}L(\theta)} $$ but we instead use: $$ \Lambda = \frac{\max_{\theta \in \omega_0}L(\theta)}{\max_{\theta \in \omega_0 \cup \omega_1}L(\theta)} $$ I'm not understanding what the rationale behind this is and would appreciate any pointers.
Generalized Likelihood Ratio Test - Why is the denominator a union
CC BY-SA 4.0
null
2023-05-20T18:13:23.477
2023-05-20T23:08:18.100
2023-05-20T21:59:00.020
5176
191452
[ "hypothesis-testing", "mathematical-statistics", "likelihood-ratio" ]
616434
1
null
null
0
18
I am learning time series analysis and I am studying the augmented Dickey-Fuller test. My understanding of the null hypotheses is clashing with all the explanations I find online and in books. I refer to the notation of the picture attached. For a series I am working on, all $\tau$- and $\phi$-nulls are rejected, for all three options "none", "drift" and "trend" in R. In many places online I see that if $\phi_2$ is rejected they conclude "trend stationarity", even though $\tau_{\tau}$ also rejects. But if $\gamma \neq 0$, as tested in all the $\tau$-nulls, my thought is that all the other hypotheses will be rejected as well. My understanding of the null $\phi_2$ is that $a_0=0$ AND $a_2=0$ AND $\gamma=0$, and if just 1 of them is different from 0 the null hypothesis is rejected. Hence if $\tau_1$ is rejected, all the other nulls will also be rejected. So I can't see how I can conclude anything more than "no unit root" if the $\tau$ null is rejected. The only case where I see I can conclude a trend is if a unit root is present AND a time trend is present, because then all $\tau$ tests would fail to reject, but $\phi_2$ would reject. What am I missing here? So the main questions: How do I conclude trend stationarity if there is no unit root? How do the results of the 3 different ADF tests relate to each other? [](https://i.stack.imgur.com/1OenG.png)
Can an ADF-test distinguish between stationarity and trend stationarity?
CC BY-SA 4.0
null
2023-05-20T18:22:23.693
2023-05-20T18:22:23.693
null
null
388425
[ "r", "time-series", "hypothesis-testing", "stationarity", "augmented-dickey-fuller" ]
616435
1
null
null
2
27
I have 3 different groups based on their level of resilience: low (n=60), moderate (n=35), high (n=10). After a qualitative thematic analysis of their responses, different subjects emerged, like "being super funny" for example. => I would like to know if there is a significant difference between those 3 groups in how many people said that they are funny. I initially did chi-squared tests with a 3x2 contingency table (said that they are funny/didn't say that they are funny), but I now think that the Kruskal-Wallis test might be more appropriate; what do you think? However, when I tried to do the Kruskal-Wallis test in JASP, it said that the number of observations is <2 for some data (e.g., 11 people in the low resilience group said that they are funny, 5 in the moderate resilience group and 0 in the high resilience group...). Do you know how I can still conduct my test, please? Thank you!!
Chi2 or Kruskal Wallis?
CC BY-SA 4.0
null
2023-05-20T18:30:08.753
2023-05-20T18:42:00.487
2023-05-20T18:31:25.407
388429
388429
[ "mathematical-statistics", "chi-squared-test", "kruskal-wallis-test", "psychometrics" ]
616436
2
null
616435
2
null
It sounds like [Fisher's exact test](https://en.wikipedia.org/wiki/Fisher%27s_exact_test) would be suitable for your analysis. It's similar to a chi-squared test but works at lower sample sizes. You have a contingency table by the sounds of it with three levels of resilience and two levels of funny (not funny and funny).
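With the counts given in the question (11 of 60, 5 of 35 and 0 of 10 mentioning being funny), a minimal R sketch would be:
```
funny <- matrix(c(11, 60 - 11,
                   5, 35 - 5,
                   0, 10 - 0),
               nrow = 3, byrow = TRUE,
               dimnames = list(c("low", "moderate", "high"),
                               c("funny", "not funny")))
fisher.test(funny)   # exact test of association between resilience level and the theme
```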
null
CC BY-SA 4.0
null
2023-05-20T18:42:00.487
2023-05-20T18:42:00.487
null
null
318475
null
616437
1
null
null
0
12
When conducting and reporting a statistical analysis, is it more correct to state: - "The associations between x and y " - "The associations of x with y"? Or are both equally correct? 1) seems more logical, but can be awkward to word when several predictors are applied.
"The associations between x and y " vs "The associations of x with y"
CC BY-SA 4.0
null
2023-05-20T19:18:20.987
2023-05-20T19:18:20.987
null
null
350675
[ "association-measure" ]
616438
1
null
null
0
28
I have an analysis question that I need some help on. I received an analysis request to identify events that have statistically different frequencies between two groups. For example, assume I have group A and group B, and in each group there are events that are either shared or unique to each group. Each event is found to be happening at a certain frequency, as denoted in parentheses below: Group A: e1 (0.1), e2 (0.03), e3 (0.9), e6 (0.01), e10 (0.02), e11 (0.95) Group B: e1 (0.2), e2 (0.02), e7 (0.01), e8 (0.02), e12 (0.03) So the end goal is to identify, in this hypothetical case, that event e3 is the significantly different event (occurring at a high frequency in Group A and not at all in Group B) with a p-value, and that all other events are not significantly different between Group A and Group B. For example, e2 is occurring in both groups but at a similar frequency. A friend suggested that a binomial model might work, but I couldn't figure out how to implement it in R. Could anyone help me come up with a solution? Thanks!
Comparing frequency of events between two groups?
CC BY-SA 4.0
null
2023-05-20T19:25:17.240
2023-05-20T19:25:17.240
null
null
30475
[ "r", "hypothesis-testing" ]
616439
1
null
null
1
18
I have some analysis I'm working on and I'm having a hard time nailing down the correct approach to take. I am modeling the dynamics of frog choruses by looking at what predicts the outcome of calling interactions between different chorus members. Interactions can result in call overlap or not, so I have coded each interaction as 1 or 0 and am using a GLMM to model what predicts whether an interaction will result in overlap or not. Predictor vars are size of both males in the interaction, and acoustic properties of their calls etc. I have 6 choruses total, each with 6 males, so I'm including IDs of both males involved in each interaction as random effects, nested within chorus ID. I have data on many interactions (~3000 total in the dataset, ~500 per chorus). I feel confident enough about this step. But, since I am including quite a few variables that may influence probability of overlap, I want to use multimodel inference to determine which features are most predictive of overlap, and then produce an averaged model from the top model set. I'd like to test how generalizable the results of this model are by doing some sort of cross validation. I figure I'll leave 10% of the data out to test the average model with after it has been trained on the training set. My big question is: how do I take into account the random effects present in my model when choosing what 10% of the data to leave out for testing? Is a completely random 10% fine? Or do I need to structure the test data by accounting for the levels of the random effects in some way? e.g., by ensuring the test set contains at least one example from all pairings of males or something? Another idea I had was to train the model on data from 5 of the choruses and then test it with data from the 6th. Would that work? My concern with this option is that the chorus I'd be testing on has a very small sample of males (6), meaning that by chance their idiosyncrasies could give them random effects that fall somewhere weird on the distributions of population-level random effects, which could make the model predict them very poorly even if it could produce good predictions at a population level. Thank you for any information anyone can provide! I've been having a hard time finding much on this.
Cross validation with GLMMs; best way to partition train and test data with regard to random effects?
CC BY-SA 4.0
null
2023-05-20T19:56:09.230
2023-05-20T19:56:09.230
null
null
154902
[ "mixed-model", "cross-validation", "glmm", "model-averaging" ]
616440
2
null
615260
0
null
After much hunting, this appears to be known as, imaginatively (/s), the Max Bandit Problem. (This is annoyingly difficult to search for, as 'max' appears in almost all descriptions of the 'standard' k-armed bandit problem.) I have implemented the algorithm described in [Kikkawa & Ohno, 2022](https://arxiv.org/pdf/2212.08225.pdf), and it appears to suffice for my usecase.
null
CC BY-SA 4.0
null
2023-05-20T21:13:46.690
2023-05-20T21:13:46.690
null
null
100205
null
616441
2
null
616377
1
null
You should never dichotomize a continuous treatment, especially when methods exist for adjusting for confounding with continuous treatments. Dichotomizing and then matching will fail to make treatment independent from the covariates, which is the whole point of matching. Just use a method that is well-suited for continuous treatments. `WeightIt` offers many options for weighting for continuous treatments. The `cobalt` [vignette](https://ngreifer.github.io/cobalt/articles/cobalt.html#using-cobalt-with-continuous-treatments) provides guidance on assessing balance for continuous treatments, and the `WeightIt` [vignette](https://ngreifer.github.io/WeightIt/articles/estimating-effects.html#continuous-treatments) on estimating effects provides guidance on estimating the average dose-response function of a continuous treatment. Weighting methods for continuous treatments aim to make the treatment independent of the covariates in the weighted sample. There are versions that involve modeling the treatment and using methodology similar to propensity score weighting, but colleagues and I recently developed a powerful method that doesn't require modeling the treatment, called distance covariance optimal weights, which you can read about in [Huling et al. (2023)](https://doi.org/10.1080/01621459.2023.2213485) and which is implemented in `WeightIt` by using `weightit()` with `method = "energy"`.
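A minimal sketch of that workflow (the dataset `d`, treatment `dose`, and covariates are placeholders):
```
library(WeightIt)
library(cobalt)
# distance covariance optimal ("energy") weights for a continuous treatment
W <- weightit(dose ~ x1 + x2 + x3, data = d, method = "energy")
summary(W)    # inspect the distribution of the estimated weights
bal.tab(W)    # treatment-covariate correlations in the weighted sample
```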
null
CC BY-SA 4.0
null
2023-05-20T21:31:20.220
2023-05-20T21:31:20.220
null
null
116195
null
616442
2
null
616384
1
null
ReLU does not address the exploding gradient problem, and it could even increase the chances of it for the very reason you explained. Therefore, weights should be initialized small to mitigate the risk. Gradient clipping is another way to address this problem. As for the vanishing gradient aspect: yes, the gradients may still get smaller, but they do not diminish as quickly as with the sigmoid.
null
CC BY-SA 4.0
null
2023-05-20T21:37:25.093
2023-05-20T21:37:25.093
null
null
204068
null
616444
2
null
616433
1
null
I'm assuming you mean there is a null hypothesis stating that the parameter $\theta$ is in $\omega_0$ and $\omega_1$ is just the complement of $\omega_0$. The unconstrained MLE for $\theta$ is $$\widehat{\theta\,} = \operatorname*{argmax}_{\theta\,\in\,\omega_0\, \cup\,\omega_1} L(\theta).$$ If $\widehat{\theta\,} \in\omega_0$ then, according to what you say "we instead use", the ratio $$ \Lambda = \frac{\max_{\theta\,\in\,\omega_0} L(\theta)}{\max_{\theta\,\in\,\omega_0\,\cup\,\omega_1} L(\theta)} $$ is equal to $1.$ Thus it is only when there is some evidence against the null hypothesis that $\Lambda<1.$ The stronger the evidence against the null hypothesis, the smaller $\Lambda$ is.
null
CC BY-SA 4.0
null
2023-05-20T23:08:18.100
2023-05-20T23:08:18.100
null
null
5176
null
616445
2
null
321367
0
null
I could not find a proof in the paper. Instead, Smith (1995), which was published much earlier, seems to include a different formula for obtaining cumulants from moments in the multivariate setting. In contrast to the method in the paper you have posted, which is one-step, Smith (1995) suggests a two-step method for obtaining multivariate moments. I suggest checking out the paper. Smith, P. J. (1995). A Recursive Formulation of the Old Problem of Obtaining Moments from Cumulants and Vice Versa. The American Statistician, 49(2), 217–218. doi:10.1080/00031305.1995.10476146
null
CC BY-SA 4.0
null
2023-05-20T23:32:02.387
2023-05-20T23:34:42.170
2023-05-20T23:34:42.170
388438
388438
null
616446
1
null
null
0
31
I am running logistic regression. The odds ratio estimate I got is positive but the lower bound of my 95% CI is negative (close to -1) and the upper bound is greater than 1. Would this mean that my result is not statistically significant? In a hypothetical case that the odds ratio 95% CI is something like (-1, .5), would this also mean it is not statistically significant? The odds ratio in this hypothetical situation is not 0 or negative. I used stargazer in R to get the 95% CIs, and these are the optional arguments I put in stargazer: `apply.coef = exp,ci = TRUE, ci.level = .95`
Can the confidence interval of an odds ratio be negative?
CC BY-SA 4.0
null
2023-05-21T01:51:22.160
2023-05-21T02:27:24.187
2023-05-21T02:27:24.187
328519
328519
[ "r", "logistic", "confidence-interval", "odds-ratio" ]
616447
1
null
null
0
17
For my thesis I have to analyze a pre-post intervention control design. Participants who received a certain treatment were measured prior to the treatment (pre-test) and measured again after the treatment (post-test). To analyze this statistically, I have made a dummy variable with the intervention group coded as 1 and the control group as 0. There is also another variable with 2 levels: high and low IQ. And I have the pre- and post-test scores as variables. My hypothesis is that after the intervention the state of anxiety will be reduced. I used an ANCOVA, but unfortunately there is a significant interaction between my group variable and the anxiety variable.
what statistical test in a pre-post design while expecting a positive effect
CC BY-SA 4.0
null
2023-05-21T02:16:36.207
2023-05-26T16:43:20.480
2023-05-26T16:43:20.480
388441
388441
[ "regression", "categorical-data", "spss", "pre-post-comparison", "control-group" ]
616448
2
null
616446
0
null
Odds ratios cannot be negative. An odds ratio of 1.0 is the no-difference null hypothesis value. Values between 0 and 1 denote an effect in one direction. Values greater than 1 denote effects in the other direction. Negative values are not possible, so the values you are looking at are not odds ratios. Maybe they are logarithms of odds ratios?
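For comparison, a valid interval for an odds ratio is usually obtained by computing the interval on the log-odds scale and exponentiating its endpoints, which keeps both limits strictly positive; a toy R sketch (simulated data, Wald intervals):
```
set.seed(1)
d <- data.frame(x = rnorm(200))
d$y <- rbinom(200, 1, plogis(-0.5 + 0.8 * d$x))
m <- glm(y ~ x, family = binomial, data = d)
exp(cbind(OR = coef(m), confint.default(m)))   # exponentiated log-odds CI: always > 0
```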
null
CC BY-SA 4.0
null
2023-05-21T02:24:11.987
2023-05-21T02:24:11.987
null
null
25
null
616449
1
null
null
0
7
I have proved that my function of two variables is concave. I am looking for the minimum of the function. Since the function is concave and continuous over a convex set, the minimum should occur on the boundary of that set. How can I find the minimum efficiently, given this information?
Find the minimum of a concave function
CC BY-SA 4.0
null
2023-05-21T02:57:18.720
2023-05-21T02:57:18.720
null
null
388442
[ "mathematical-statistics", "optimization", "convex" ]
616451
1
616455
null
2
56
I'm minimizing distances between two 6 dimensional vectors. I have been using manhattan distance so far and it works ok but my problem would benefit from discriminating between the following two cases ``` case 1. v1 = [1, 1, 1, 1, 1, 1] v2 = [1, 1, 1, 1, 1, 4] manhattan(v1, v2) = 3 case 2. v1 = [1, 1, 1, 1, 1, 1] v2 = [1.5, 1.5, 1.5, 1.5, 1.5, 1.5] manhattan(v1, v2) = 3 ``` I want to have a distance measure that prefers (smaller distance) case 2 because it ensures that `v1 - v2` is small in all dimensions
Minimizing a distance metric where all dimensions are small
CC BY-SA 4.0
null
2023-05-21T03:10:28.900
2023-05-21T06:20:19.450
null
null
388395
[ "distributions", "optimization", "distance" ]
616452
2
null
616404
1
null
In two dimensions, the second moments $E[Z_i^2]$ have a simple formula, which implies that those moments are proportional to the standard deviations of the variables, and which follows from a complicated formula for an integral. Suppose $X_i$ and $X_j$ have standard deviations $s$ and $t$. By considering these as dilations of two standard normals, $$(X_i,X_j)\sim (Rs\cos\theta,Rt\sin\theta)$$ where $\theta$ is uniformly distributed on $[0,2\pi]$, and the distribution of $R$ (a chi distribution with two degrees of freedom, i.e. Rayleigh) is irrelevant here. By letting $u=t/s$, $$Z_i^2=\frac{(Rs\cos\theta)^2}{(Rs\cos\theta)^2 + (Rt\sin\theta)^2}$$ \begin{align} E[Z_i^2] &= \frac{1}{2\pi}\int_0^{2\pi}\! \frac{\cos^2\theta\,d\theta}{\cos^2\theta + u^2\sin^2\theta}\\ &=\frac{2}{2\pi} \frac{u\arctan(u \tan \theta)-\theta}{u^2-1}\Big|_{-\pi/2}^{\pi/2}\\ &= \frac{1}{u+1} =\frac{s}{s+t} \end{align} Similarly $E[Z_j^2]=t/(s+t)$. In the special case mentioned in the post of identical standard deviations, this gives the expected value of $1/2$.
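A quick Monte Carlo check of that two-dimensional formula (with arbitrarily chosen $s$ and $t$):
```
set.seed(1)
s <- 1; t <- 3
xi <- rnorm(1e6, sd = s); xj <- rnorm(1e6, sd = t)
mean(xi^2 / (xi^2 + xj^2))   # simulated E[Z_i^2]
s / (s + t)                  # formula: 0.25
```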
null
CC BY-SA 4.0
null
2023-05-21T03:12:05.163
2023-05-21T07:21:17.300
2023-05-21T07:21:17.300
225256
225256
null
616453
1
null
null
1
20
In 3.5.2 of [Machine Learning Methods for Estimating Heterogeneous Causal Effects by Susan Athey and Guido Imbens](https://www.gsb.stanford.edu/gsb-box/route-download/406621) it is stated: "If the models include an intercept, as they usually do, most estimation methods would ensure that the average of $$(Y_i^{tr,obs}-\hat{\mu}(X_i^{tr})).\hat{\mu}(X_i^{tr})$$ would be equal to zero," I have the following questions: - The paper uses trees for regressions in which case I am not sure what is meant by an intercept. The paper does state "Although we focus in the current paper mostly on regression tree methods (Breiman, Friedman, Olshen, and Stone, 1984), the methods extend to other approaches such as Lasso (Tibshirani, 1996), and support vector machines (Vapnik, 1998, 2010)." Update: My answer to (1) is that the intercept for any regressor is just the mean response when all predictors are zero. - In trying to attempt to prove the above expression is zero I make the assumption that the residuals are independent of the fitted values. However if I understand correctly this is another check applied to linear regression so I am not sure if this can be applied to trees as well. My version of the proof based on the assumption stated in this point is: $$E[(Y_i^{tr,obs}-\hat{\mu}(X_i^{tr})).\hat{\mu}(X_i^{tr})]=E[Y_i^{tr,obs}-\hat{\mu}(X_i^{tr})].E[\hat{\mu}(X_i^{tr})]$$ If the estimator is unbiased and samples are drawn i.i.d then using linearity of expectations: $$E[Y_i^{tr,obs}-\hat{\mu}(X_i^{tr})] = E[Y_i^{tr,obs}] - E[\hat{\mu}(X_i^{tr})]=E[Y^{tr,obs}] - E[Y^{tr,obs}] = 0$$ therefore $$E[Y_i^{tr,obs}-\hat{\mu}(X_i^{tr})]=0 \implies E[(Y_i^{tr,obs}-\hat{\mu}(X_i^{tr})).\hat{\mu}(X_i^{tr})]=0$$ Is this reasoning correct and can we assume fitted values are independent of the residuals? - This part is also not clear to me: " $$-\frac{1}{N^{tr}}\sum^{N^{tr}}_{i=0}((Y_i^{tr,obs})^2-\hat{\mu}^2(X_i^{tr})).$$ To interpret this, because the first component does not depend on the estimator being used, a model fits better according to this criteria if it yields higher variance predictions." If the above expression in this point is used as goodness of fit and the aim is to maximize the goodness of fit then this translates to maximizing $$\sum^{N^{tr}}_{i=0}\hat{\mu}^2(X_i^{tr})$$ but is $$\frac{1}{N^{tr}}\sum^{N^{tr}}_{i=0}\hat{\mu}^2(X_i^{tr})$$ the same as $$var(\hat{\mu}(X_i^{tr}))$$
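A possibly relevant note on point (2), added for my own clarity: for OLS with an intercept the orthogonality appears to be an exact in-sample identity coming from the normal equations, not something that needs independence of residuals and fitted values: $$X^\top\big(y-X\hat\beta\big)=0 \;\Longrightarrow\; \hat y^\top\big(y-\hat y\big)=\hat\beta^\top X^\top\big(y-X\hat\beta\big)=0,$$ and the intercept column of $X$ also forces $\sum_i\big(y_i-\hat y_i\big)=0$. For a regression tree the analogous identity seems to hold because $\hat{\mu}$ is constant within each leaf and equals the leaf mean of $Y$, so the residuals sum to zero leaf by leaf.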
An Alternative In-sample-goodness-of-fit Measure in Machine Learning Methods for Estimating Heterogeneous Causal Effects Paper
CC BY-SA 4.0
null
2023-05-21T03:55:11.877
2023-05-23T01:35:35.590
2023-05-23T01:35:35.590
269745
269745
[ "machine-learning", "expected-value", "causality", "goodness-of-fit" ]
616455
2
null
616451
3
null
There is an entire family of norms, the [$p$-norms](https://en.wikipedia.org/wiki/Norm_(mathematics)#p-norm): $$ ||x||_p := \big(\sum_{k=1}^n|x_k|^p\big)^{1/p}, $$ which give rise to a distance through $d_p(x,y):= ||x-y||_p$. The Manhattan distance is the case $p=1$. The Euclidean distance is the case $p=2$. You may be looking for the maximum norm and its associated distance metric, which you get by letting $p\to\infty$: $$||x||_{\max} := \max \{|x_k|, k=1, \dots n\}, \quad d_{\max}(x,y)=\max\{|x_k-y_k|, k=1, \dots, n\}.$$ This will be small if all entries of your two vectors are close together.
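For instance, with the two cases from the question (a tiny R sketch):
```
v1  <- rep(1, 6)
v2a <- c(1, 1, 1, 1, 1, 4)          # case 1
v2b <- rep(1.5, 6)                  # case 2
d_max <- function(x, y) max(abs(x - y))
d_max(v1, v2a)    # 3
d_max(v1, v2b)    # 0.5, so case 2 is preferred, as desired
```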
null
CC BY-SA 4.0
null
2023-05-21T06:20:19.450
2023-05-21T06:20:19.450
null
null
1352
null
616456
1
null
null
0
34
For a problem I am working on I need to compute the conditional expectation of a continuous random variable $T$ given a discrete random variable $K$. I already derived a formula for the joint distribution function $P(T\leq t, K = k)$, as well as the pdf of $T$ and the probability mass function of $K$. Is there some formula to compute $\mathbb{E}\left[T|K=k \right] $? (And just out of interest, is there a formula for $\mathbb{E}\left[K|T=t \right] $?)
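My tentative attempt, assuming the joint distribution is regular enough for the derivative below to exist, would be $$\mathbb{E}\left[T\mid K=k\right]=\int t\,f_{T\mid K=k}(t)\,dt,\qquad f_{T\mid K=k}(t)=\frac{1}{P(K=k)}\,\frac{\partial}{\partial t}P(T\le t,\,K=k),$$ and in the other direction $$\mathbb{E}\left[K\mid T=t\right]=\sum_k k\,\frac{f_{T,K}(t,k)}{\sum_j f_{T,K}(t,j)},\qquad f_{T,K}(t,k):=\frac{\partial}{\partial t}P(T\le t,\,K=k),$$ but I am not sure whether this is the standard way to write it.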
Conditional expectation of a continuous random variable given a discrete random variable
CC BY-SA 4.0
null
2023-05-21T06:25:05.130
2023-05-21T15:30:49.057
2023-05-21T09:26:16.027
304809
304809
[ "probability", "self-study", "conditional-probability", "conditional-expectation", "discrete-distributions" ]
616457
1
616517
null
3
29
I've been reading a [paper](https://www.nature.com/articles/s41467-021-21330-0) (Machine learning identifies candidates for drug repurposing in Alzheimer’s disease), but having a hard time understanding its core idea. Basically, the researchers - Obtained a dataset of postmortem brain tissues that includes the expression values of ~20k genes as features and disease stage (early, intermediate, late) as the target. - Applied 80 drugs to cell cultures and recorded the differentially expressed (either upregulated or downregulated vs control) genes for each drug. - For each drug, they limited the feature space (all ~20k genes from the brain tissue data) to only the drug associated gene list; then trained and validated "predictors" or "classifiers". - Tried different algorithms for the ML work (logistic regression, SVM, boosted random forest, neural network) and decided on logistic regression based on its AUC. - Produced empirical p-values for each drug by comparing the AUC of the predictor of the drug and 1000 size-matched gene lists. For example, if ruxolitinib has an empirical p-value of 0.004, and caused the perturbation of 300 genes, then that means the logistic regression model fit by genes expressed after ruxolitinib application predicts the disease stage better than (has a higher AUC than) 996 randomly selected lists of 300 genes. - Ranked drugs based on their empirical p-values and claimed that "If a classifier trained on the expression of genes associated with a particular drug is substantially more accurate than equivalent classifiers trained on the expression of any arbitrarily chosen genes, then such a result suggests that the drug-associated genes carry at least some disease-related signal." Now what I don't understand is, how can they go from this line of reasoning to claiming that the pharmacological mechanism of action of high ranking drugs have a higher possibility of overlapping with the pathological mechanisms of Alzheimer's disease? How can merely the list of genes perturbed by a certain drug predicting the disease stage better compared to random lists provide evidence for the drug being associated with the disease? From the article > We present DRIAD (Drug Repurposing In AD), a machine learning framework that quantifies potential associations between the pathology of AD severity (the Braak stage) and molecular mechanisms as encoded in lists of gene names. Could it actually be the approach some statistics blogs & textbooks recommend [2](https://machinelearningknowledge.ai/predictive-power-score-vs-correlation-with-python-implementation/) [3](https://towardsdatascience.com/rip-correlation-introducing-the-predictive-power-score-3d90808b9598) when they talk about using predictive statistics (predictive power score, to be precise) instead of merely testing for correlation?
The relationship between prediction performance and the degree of association between variables?
CC BY-SA 4.0
null
2023-05-21T06:28:32.453
2023-05-22T12:31:42.157
null
null
388459
[ "machine-learning", "logistic", "bioinformatics" ]
616458
1
616471
null
0
52
I have a dataset with 11 observations and 11 features. I want to use linear regression to estimate the coefficients using the OLS method. I know it is not advisable to use linear regression with so many features compared with the number of records. I have about 6 features that are highly correlated (correlation > 0.88). But I don't want to delete these columns, as I could lose information. So can I use PCA on these 6 highly correlated features to transform them into one column? Then, I want to combine this column with the other features and use linear regression.
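For concreteness, the mechanics I have in mind look like the sketch below (column names are placeholders; whether this is sensible with only 11 observations is part of my question):
```
corr_vars <- c("x1", "x2", "x3", "x4", "x5", "x6")   # the 6 highly correlated features
pc <- prcomp(dat[, corr_vars], center = TRUE, scale. = TRUE)
summary(pc)                                          # variance explained by PC1
dat$pc1 <- pc$x[, 1]                                 # keep only the first component
fit <- lm(y ~ pc1 + x7 + x8, data = dat)             # combine with the other features
```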
Can I apply PCA to combine correlated variables into one variable?
CC BY-SA 4.0
null
2023-05-21T06:35:12.850
2023-05-21T10:36:06.423
null
null
388460
[ "regression", "correlation", "pca", "least-squares", "high-dimensional" ]
616459
1
null
null
0
7
I'm new to MCMC methods, and as I learn, it seems there is a lot of art in choosing the proper priors especially when there are many parameters (which is why we use MCMC in the first place). Are there any strategies that go into which distributions are suitable for which scenarios or parameter types? Coming from physics systems, the naive way I use is truncated normals or uniform distributions, however, the results I'm getting are not great. Maybe there are more clever distributions that I can use for better results? Maybe different distributions for parameters that are powers? As for samplers, I was wondering about that too. Metropolis-Hastings is a go-to, but it seems like NUTS gets the same results but in less computational time. Are there other options to consider in relation to ODE parameters? What methods can one use to improve the sampling process? I know these very broad questions, but I couldn't find too many (beginner-level) practical examples when it comes to Bayesian parameter estimation for ODE systems, so I guess this post can be beneficial for many people who are stepping into this world.
General strategies for choosing priors and samplers to estimate parameters of nonlinear ODE system?
CC BY-SA 4.0
null
2023-05-21T07:11:48.263
2023-05-21T07:11:48.263
null
null
314855
[ "bayesian", "sampling" ]
616461
2
null
55598
0
null
You could predict a variable that is a transformation of the class size instead of the class sizes directly. In the case of students per class, a variable that is restricted to values between 0 and 30, you can consider [ordinal regression](https://en.m.wikipedia.org/wiki/Ordinal_regression), which models a latent variable for the probabilities of the class sizes rather than the class sizes themselves. --- Other common transformations are a log transformation, which maps a positive variable to the entire real line, or a logit transformation, which maps a number between 0 and 1 to the entire real line.
null
CC BY-SA 4.0
null
2023-05-21T07:49:06.433
2023-05-21T07:49:06.433
null
null
164061
null
616464
1
null
null
0
13
Are there any ways to assign a metric to a time series that measures its distance from white noise? By white noise I mean a time series sampled from $N(0,\sigma^2)$ for some $\sigma$. This metric should assign non-negative numbers to time series and, at a given confidence level, somehow measure the similarity between a given time series and one sampled from white noise. When the metric is zero, it should imply that the time series is consistent with having been sampled from white noise at that confidence level.
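For context, one obvious partial candidate is the Ljung–Box statistic: it is non-negative and large values indicate departure from white noise, but it only checks autocorrelation (not Gaussianity or a particular $\sigma$), so it is not quite what I am after. A quick R sketch with simulated data:

```
set.seed(1)
x <- rnorm(200)                       # a series that really is Gaussian white noise
y <- arima.sim(list(ar = 0.6), 200)   # an autocorrelated series for comparison

# Ljung-Box statistic: non-negative, large values indicate departure from white noise
# (it only looks at autocorrelation, not at normality or a specific sigma)
Box.test(x, lag = 10, type = "Ljung-Box")
Box.test(y, lag = 10, type = "Ljung-Box")
```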
Distance of a time series from the white noise
CC BY-SA 4.0
null
2023-05-21T08:41:48.960
2023-05-21T09:57:06.067
2023-05-21T09:57:06.067
274933
274933
[ "time-series", "metric", "white-noise" ]
616465
1
null
null
0
30
I've read multiple posts/papers citing Tabachnick and Fidell's cut off of +/- 1.5 as the acceptable range for skewness and kurtosis to determine normality; however, I cannot find it in their book. Can someone please tell me where Tabachnick and Fidell have stated this cut off? Book and page number please.
Cut off value of +/- 1.5 for Skewness and Kurtosis (Tabachnick & Fidell)
CC BY-SA 4.0
null
2023-05-21T08:52:18.353
2023-05-21T09:13:45.103
2023-05-21T09:13:45.103
22047
388465
[ "normal-distribution", "skewness", "kurtosis" ]
616467
2
null
55598
0
null
> I'd like to construct a confidence interval using a Student's t-distribution. For the sake of argument, we'll assume that all assumptions and conditions for this are met.

In my opinion, this is where the problem is. IF there is a hard boundary, a defining assumption of the t-distribution is violated. It is very common to ignore this: when the mean is “far enough” from the boundaries, the inaccuracy is negligible. In your case, it is clearly not negligible. In addition, you don’t have a continuous variable; there are no “fractional students” in a classroom. Again, ignoring this is something that we can often get away with.

I have little experience with this type of data. When I ran into it, I got away with converting to proportions and using inference methods for that (in this case that would be [number in class]/30). You may try this, but I suspect that the step size of 1/30 may be too coarse for this to work. It is therefore probably best to read up on inference methods for discrete distributions. I personally never got around to that, and never had an application for it in my daily work. So I can’t do more than point you to Ecosia. (Google will probably also work.)
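As a rough sketch of the proportion idea (purely illustrative numbers: say 22 students were observed in a class with a maximum of 30):

```
# hedged sketch: treat the observed class size as 22 "successes" out of a maximum of 30
ci_prop <- binom.test(x = 22, n = 30)$conf.int  # exact (Clopper-Pearson) CI for the proportion
ci_prop          # interval for the proportion of the room that is filled
ci_prop * 30     # scaled back to the class-size scale, always within [0, 30]
```

Whether the binomial model is a sensible description of your class sizes is a separate question; the sketch only shows that the resulting interval respects the boundaries.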
null
CC BY-SA 4.0
null
2023-05-21T08:54:12.797
2023-05-21T08:54:12.797
null
null
356008
null
616468
1
616492
null
3
93
The `cox.zph` function of the survival package computes the Schoenfeld residuals for each variable and tests the proportionality assumption with a score test, but I don't understand how it deals with factors with more than 2 levels: when the option `terms` is set to TRUE (the default), it computes one set of residuals per factor, not one per dummy variable corresponding to the factor. I was wondering how those residuals are computed in this case. The same question applies to continuous variables with polynomial or spline terms. I didn't find an explanation of this in the package vignettes. An example with spline terms:

```
library(survival)
library(splines)
data(pbc)
fit <- coxph(Surv(time, status == 1) ~ ns(age, df = 3), data = pbc)
prop <- cox.zph(fit, terms = FALSE)
prop
```

With `terms` = FALSE, cox.zph computes 3 sets of residuals, one for each spline basis term.

```
prop$y
    ns(age, df = 3)1 ns(age, df = 3)2 ns(age, df = 3)3
533         9.758356        80.023447       146.527736
617        -9.315727        16.220166        15.753548
732        -9.120447        16.132956        14.925629
737        -9.797900        15.518557        17.384126
837        -9.039351        16.062516        14.563635
877        -7.277408        -9.441608        -5.152168
```

However, when `terms` is set to TRUE, it computes only one set of residuals, as if there were only one coefficient.

```
prop <- cox.zph(fit, terms = TRUE)
head(prop$y)
    ns(age, df = 3)
533       -5.078828
617        1.817389
732        1.780094
737        1.928395
837        1.765340
877        1.949620
plot(prop)
```

[](https://i.stack.imgur.com/JBXcg.png)

In this case, how is this single set of residuals computed?
Schoenfeld residuals for factors with cox.zph
CC BY-SA 4.0
null
2023-05-21T09:35:39.267
2023-05-22T12:42:32.080
2023-05-21T17:01:58.847
388471
388471
[ "r", "survival", "cox-model", "proportional-hazards" ]
616469
1
null
null
1
37
In an RCT, I want to find out whether the treatment (treatment vs control) has an effect on the uptake of aftercare (yes/no + time). I have five measurement points, which are not equidistant (i.e., baseline, 3 weeks, 6 weeks, 12 weeks and 24 weeks after randomization). There are some questions here regarding the difference between Cox regression and discrete-time survival analysis, but I still wonder whether discrete-time survival analysis would be better in my case. From what I have read, there are two advantages of discrete-time over Cox regression: 1) handling of ties and 2) no proportionality of hazards. Regarding 1): the handling of ties should be no problem in interval-censored Cox regression. Regarding 2): I would normally assume PH, but since the time periods between my five measurement points are not equal in length, discrete-time survival analysis would be favored. Is that true? EDIT: I am only interested in the first uptake of aftercare (or, conversely, no uptake at all until the end of the measurement period), so I don't care whether aftercare was continued or not after uptake at later measurement timepoints (the outcome can't change over time).
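To make the question concrete, by discrete-time survival analysis I mean a person-period model roughly like the sketch below (simulated data and made-up names; the cloglog link is often recommended when the underlying process is continuous):

```
# hedged sketch of a discrete-time survival model on person-period data (simulated, not my real data)
set.seed(1)
n <- 300
treat <- rbinom(n, 1, 0.5)
intervals <- c("0-3wk", "3-6wk", "6-12wk", "12-24wk")   # unequal interval lengths

# one row per person per interval until first uptake (or censoring at 24 weeks)
pp <- do.call(rbind, lapply(seq_len(n), function(i) {
  haz <- plogis(-2 + 0.5 * treat[i])   # made-up per-interval hazard of uptake
  rows <- NULL
  for (j in seq_along(intervals)) {
    event <- rbinom(1, 1, haz)
    rows <- rbind(rows, data.frame(id = i,
                                   interval = factor(intervals[j], levels = intervals),
                                   treat = treat[i], event = event))
    if (event == 1) break
  }
  rows
}))

# discrete-time hazard model: one baseline-hazard parameter per interval, plus the treatment effect
fit <- glm(event ~ interval + treat, family = binomial(link = "cloglog"), data = pp)
summary(fit)
```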
Discrete time survival analysis or Cox regression?
CC BY-SA 4.0
null
2023-05-21T09:37:02.560
2023-05-23T12:49:09.040
2023-05-22T08:45:54.973
379768
379768
[ "survival", "cox-model", "discrete-data", "discrete-time" ]
616470
1
616600
null
5
81
In a binomial experiment, I have an estimate for the probability of 3 independent events A, B & C, each with a 95% confidence interval. (Trivial example values)

`P(A) = .12 (.05, .29)`
`P(B) = .16 (.08, .25)`
`P(C) = .06 (.02, .14)`

I need to calculate `P (no event) = P (no A) * P (no B) * P (no C)`, which is `(1 - P(A)) * (1 - P(B)) * (1 - P(C))`, or `(1 - .12) * (1 - .16) * (1 - .06)`. Now, my question arises when I do the same calculation using the lower and upper bounds of the confidence intervals to calculate a CI around `P (no event)`. It seems logical to do it, but I know that in some circumstances you can't just add or subtract lower or upper bounds of C.I.'s without affecting the width, or rather the confidence level, of your newly calculated interval. (Adding two 95% C.I.'s would lead to close to a 98% C.I., I've read somewhere recently.) I'm just not sure if this is one of those circumstances, and if it is, how do I find / calculate the proper confidence level (85%? 90%?) to use in the first step in order to end up with a truly 95% C.I. at the end?

EDIT: This is an epidemiological study. Sample proportions for A, B, and C were obtained from the same sampled individuals; however, the three events are assumed independent (finding A does not impact the chance of finding B in the same individual).
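To show exactly what I mean, here is the naive calculation in R with the trivial example values above (whether the last line is a genuine 95% interval is exactly my question):

```
p  <- c(A = 0.12, B = 0.16, C = 0.06)
lo <- c(A = 0.05, B = 0.08, C = 0.02)
hi <- c(A = 0.29, B = 0.25, C = 0.14)

p_none    <- prod(1 - p)    # point estimate of P(no event)
p_none_hi <- prod(1 - lo)   # plugging in the lower bounds gives the upper bound for P(no event)
p_none_lo <- prod(1 - hi)   # plugging in the upper bounds gives the lower bound
c(estimate = p_none, lower = p_none_lo, upper = p_none_hi)
```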
Confidence intervals calculated from other confidence intervals (binomial problem)?
CC BY-SA 4.0
null
2023-05-21T09:52:05.390
2023-05-23T07:19:52.663
2023-05-23T00:04:14.970
4754
4754
[ "probability", "confidence-interval", "binomial-distribution" ]
616471
2
null
616458
1
null
Technically you can use the first principal component of your predictors as the sole predictor (I don't think multiple regression with more than one predictor variable makes sense with 11 observations). However, the interpretation of this is then determined by the data, which determine the principal component. One thing I recommend in such cases is to think hard about how to create a meaningful index for what is relevant to you from your variables rather than using PCA to do it automatically, as the meaningful index will normally give you a better interpretation. If, however, you think you need more than one predictor for what you want to achieve (like interpreting the relative importance of the different variables), you are pretty much lost with 11 observations. Generally, in regression problems where the number of dimensions is critically high compared with the number of observations, there are various techniques for dimension reduction and variable selection. PCA is one of them, but this is typically done in situations in which interpretation of the impact of single predictor variables is not a central issue. Also, you are much better off in terms of information in the data with, say, 500 observations and 600 variables than with 11 and 11. Statistics is not magic. If you plot your data you should already get a feeling for how difficult it is to precisely locate the line with one predictor and 11 observations (unless the residual variance is very small). Making statements about how several predictors work together to explain the observations, taking into account their correlation, is near impossible. Also, any diagnosis of the model assumptions will be extremely unreliable, if possible at all.
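If you do decide to go the PCA route, the mechanics are simple; here is a sketch with simulated stand-in data (since I don't have yours):

```
# sketch with simulated stand-in data: 11 observations, 6 correlated features, plus the outcome
set.seed(1)
n  <- 11
z  <- rnorm(n)
X6 <- sapply(1:6, function(j) z + rnorm(n, sd = 0.3))  # 6 highly correlated columns
y  <- 2 * z + rnorm(n)

pc1 <- prcomp(X6, scale. = TRUE)$x[, 1]  # first principal component of the 6 columns

summary(lm(y ~ pc1))   # single-predictor regression on PC1
plot(pc1, y)           # with 11 points, plotting is at least as informative as the fit
```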
null
CC BY-SA 4.0
null
2023-05-21T10:12:31.420
2023-05-21T10:36:06.423
2023-05-21T10:36:06.423
22047
247165
null
616472
1
null
null
0
19
I'm conducting a Difference-in-Difference (DiD) analysis in which I included clusters at the state level. To assess the crucial common trend assumption of DiD, I examined the pre-treatment period to see whether the trends are parallel. I observed that they are not. However, I believe that the non-parallelism arises not from omitted variable bias but rather from coincidental developments at the state level. Given that my outcome variable is heavily influenced by state-level variables, this seems to be a plausible explanation. My question is: Do these coincidental developments invalidate the common trend assumption and lead to biased results? Alternatively, does this non-parallelism just imply a decrease in the precision of the estimate?
Interpreting Non-Parallel Pre-Treatment Trends in Difference-in-Difference Analysis: Coincidental Failure of the Common Trend Assumption?
CC BY-SA 4.0
null
2023-05-21T10:16:09.037
2023-05-21T10:16:09.037
null
null
388474
[ "causality", "difference-in-difference", "identifiability" ]
616473
1
null
null
0
15
I'll illustrate what I want to do with a Poisson GLM first. I have a GLM with only factor covariates. To bootstrap this GLM, what I can do is, e.g., take a single random observation without the response, predict the response using my Poisson GLM, and get a value $\lambda$. Since this is Poisson, the predicted expected value (response) is exactly the parameter I need for the distribution: I can sample a number from uniform $[0, 1]$ and apply the $POI(\lambda)$ quantile function to it. Now, is it possible to do this with the Gamma GLM (log link)? The expected value of a gamma distribution is $\alpha\beta$ or $\frac{\alpha}{\beta}$, depending on your definition, so the predicted mean alone does not pin down both parameters. I'm also open to other ideas.
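In code, the Poisson version described above would look roughly like this (sketch with simulated data and made-up names):

```
# sketch of the Poisson version described above (simulated data, made-up names)
set.seed(1)
d   <- data.frame(g = factor(sample(c("a", "b", "c"), 100, replace = TRUE)))
d$y <- rpois(100, lambda = c(2, 5, 9)[as.integer(d$g)])

fit <- glm(y ~ g, family = poisson(link = "log"), data = d)

obs    <- d[sample(nrow(d), 1), , drop = FALSE]           # one random observation (its response is not used)
lambda <- predict(fit, newdata = obs, type = "response")  # predicted mean = Poisson parameter
y_star <- qpois(runif(1), lambda = lambda)                # inverse-CDF draw, as described
y_star
```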
Finding appropriate parameters for Gamma when bootstrapping a GLM
CC BY-SA 4.0
null
2023-05-21T10:44:58.210
2023-05-21T10:44:58.210
null
null
342779
[ "generalized-linear-model", "bootstrap", "gamma-distribution" ]
616474
2
null
610093
1
null
This is not a full answer, but it addresses one issue in the original question: I misunderstood the definition of "locally unbiased", and using the correct definition the trivial constant estimator doesn't qualify. The question of how to derive the locally unbiased estimator built via the Fisher still stands, though. --- As pointed out in the comments, I seem to have misunderstood what the authors mean with "locally unbiased". I thought it just meant to have an estimator that is unbiased when the true parameter has a specific value. However, the paper also requires a stronger condition, which I understand as asking the estimator to also be "well-behaved" around the parameter value where it's unbiased (also, I couldn't find this definition of "locally unbiased" in other recent sources, so it might not be that common in the statistical literature). More precisely, we're asking the two conditions, spelled out in Eqs. (4) and (5) in the [above linked paper](https://arxiv.org/abs/2001.11742): $$\sum_m p_{\theta_0}(m)\hat\theta(m)=\theta_0, \qquad \sum_m \Big[\partial_\theta p_\theta(m)\Big]_{\theta=\theta_0} \hat\theta(m)=1.$$ The paper reports these in the multiparameter case, while I rewrote them for simplicity in the case of a single parameter. Here, $p_\theta(m)$ is the probability of the $m$-th outcome when the true parameter is $\theta$ (we're assuming discrete distributions), $\hat\theta$ is the estimator, and $\theta_0$ is the value of the true parameter wrt which the estimator is locally unbiased. With these definitions in mind, the trivial constant estimator $\hat\theta(m)=\theta_0$ is not locally unbiased: while it obviously satisfies the first unbiasedness requirement, it doesn't satisfy the one with the derivative, as $$\sum_m \big[\partial_\theta p_\theta(m)\big]_{\theta=\theta_0} \hat\theta(m) = \theta_0\sum_m \big[\partial_\theta p_\theta(m)\big]_{\theta=\theta_0} = \theta_0 \bigg[\partial_\theta \sum_m p_\theta(m)\bigg]_{\theta=\theta_0}=0.$$
null
CC BY-SA 4.0
null
2023-05-21T11:18:41.090
2023-05-21T11:18:41.090
null
null
82418
null
616475
1
616481
null
5
168
I understand Lemma 8 in Chapter 1 from Lehmann's [Testing Statistical Hypotheses](https://amzn.to/42TQbbl) [or Lemma 2.7.2 in Lehmann and Romano] as follows: if the pdf of an exponential family is
$$p_{\theta}(x)=\exp\bigg\{\big(\eta_1^\top(\theta),\eta_2^\top(\theta)\big)\big(T_1(x),T_2(x)\big)^\top-\xi(\theta)\bigg\}h(x),$$
then the pdf of $T_1$ is presumably
$$g_{\theta}(t)=\exp\{\eta_1^\top(\theta)t-\xi(\theta)\}.$$
However, when I take the joint distribution of $n$ iid rv's from a normal distribution $N(\mu, \sigma^2)$ as an example, where
$$\eta(\theta)= \left( \frac{\mu}{\sigma^2},\frac{1}{2\sigma^2} \right)^\top,\quad T(x)= \left( \sum_{k=1}^n X_k,-\sum_{k=1}^n X_k^2 \right)^\top,\quad \xi(\theta)=\frac{n\mu^2}{2\sigma^2}+\frac{n}{2}\log\sigma^2,\quad h(x)=(2\pi)^{-\frac{n}{2}},$$
the lemma yields that the pdf of $T_1=\sum_{k=1}^n X_k$ should be
\begin{align} g_\theta(t) & =\exp\left\{\frac{\mu}{\sigma^2}t-\frac{n\mu^2}{2\sigma^2}-\frac{n}{2}\log\sigma^2\right\} \\[6pt] & = \exp\left\{-\frac{(t-n\mu)^2}{2n\sigma^2}+\frac{t^2}{2n\sigma^2}-\frac{n}{2}\log\sigma^2\right\} \end{align}
However, since the $X_k$'s are iid normal, their sum should also be normally distributed, with mean $n\mu$ and variance $n\sigma^2$, that is:
$$g_\theta(t)=\exp\left\{-\frac{(t-n\mu)^2}{2n\sigma^2}-\frac{1}{2} \log(2\pi n\sigma^2)\right\}$$
The two expressions differ by the $\frac{t^2}{2n\sigma^2}$ term and the normalizing constant. Why do the two results contradict each other? Did I misunderstand the lemma?
[](https://i.stack.imgur.com/bkqRf.png)
A lemma concerning the distribution of a sufficient statistic from an exponential family
CC BY-SA 4.0
null
2023-05-21T11:39:27.700
2023-05-21T18:58:25.377
2023-05-21T18:58:25.377
7224
388472
[ "distributions", "normal-distribution", "sufficient-statistics", "exponential-family" ]
616478
1
null
null
1
42
I'm going over past exam papers and there's a question on probabilistic clustering algorithms that I'm not really sure how to approach. It goes as follows:

A probabilistic clustering algorithm based on a mixture of Student's t distributions has been trained on a labelled dataset consisting of pairs of the form (x, c), where x is a real number and c ∈ {1,2} is a class label. The result is a distribution:

$$ f(x)=p_1 f(x \mid c=1)+p_2 f(x \mid c=2) $$

where

$$ f(x \mid c=j)=\frac{\Gamma\left(\frac{1+\kappa_j}{2}\right)}{\sqrt{\kappa_j \pi} \Gamma\left(\frac{\kappa_j}{2}\right)}\left(1+\frac{\left(x-\mu_j\right)^2}{\kappa_j}\right)^{-\frac{1+\kappa_j}{2}} $$

Given that the parameters are:

|j |$p_j$ |$\kappa_j$ |$\mu_j$ |
|-|-----|-----|-----|
|1 |0.2 |1 |0 |
|2 |0.8 |2.5 |3 |

Compute the probability that an unclassified point with x = 1 belongs to class c = 2.

I'm having trouble even beginning to approach this problem and would like some pointers (or preferably a full solution) if possible. Thanks
Can I Have Some Insight Into This Probabilistic Clustering Algorithm?
CC BY-SA 4.0
null
2023-05-21T12:57:32.900
2023-05-21T13:20:09.930
2023-05-21T13:20:09.930
296197
388482
[ "probability", "self-study", "classification", "algorithms" ]
616479
2
null
94349
0
null
Since your goal is simply to identify the best/fastest user overall, I would start with a mixed/hierarchical model of the form:

```
lmer(time_spent_on_task ~ task_category + (1|user_id))
```

This models variation in the time spent on individual tasks as a function of the task categories and user identity. User identity is modelled with a random intercept, [taking advantage of partial pooling](https://stats.stackexchange.com/a/151800/121522). Examining the random intercepts would offer one answer to your question: the user with the most negative random intercept would be the estimated fastest user after accounting for variation in time spent due to differences between categories.

That said, this is not a perfect solution and there isn't likely to be one. This is because it's extremely likely that users are not universally faster or slower but differ in how much faster or slower they are based on the type of task (maybe they choose their tasks based on this as well, but you've stipulated that the task choice is pseudo-random and so I will ignore that). In the model, we could account for this by including a random slope that varies with task category. But this does not provide the simpler answer you are looking for - it would mean that the best/fastest user depends on the task category.
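For completeness, the random-slope variant mentioned in the last paragraph would look something like this (same made-up variable names as above):

```
lmer(time_spent_on_task ~ task_category + (task_category | user_id))
```

Here each user gets their own deviation from the average time in every task category, so "fastest user" becomes a category-specific statement rather than a single overall ranking.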
null
CC BY-SA 4.0
null
2023-05-21T13:10:19.907
2023-05-21T13:10:19.907
null
null
121522
null
616480
1
null
null
0
20
Does scaling the instance weights impact XGBoost model training? The instance weights determine the importance of each data point during the training process. XGBoost uses the instance weights to multiply the gradients in the learning process. I am reviewing the literature and cannot find any relevant paper.
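If scaling had no effect, then training the same model with weights `w` and with `100 * w` should produce identical predictions. A rough empirical check with the R xgboost package and simulated data (a sketch, not a proof):

```
library(xgboost)

set.seed(1)
n <- 500
X <- matrix(rnorm(n * 5), n, 5)
y <- rbinom(n, 1, plogis(X[, 1]))
w <- runif(n, 0.5, 2)

params <- list(objective = "binary:logistic", max_depth = 3, eta = 0.1)

d1 <- xgb.DMatrix(X, label = y, weight = w)        # original weights
d2 <- xgb.DMatrix(X, label = y, weight = 100 * w)  # weights rescaled by a constant

m1 <- xgb.train(params, d1, nrounds = 50)
m2 <- xgb.train(params, d2, nrounds = 50)

# if scaling had no effect, these differences would all be zero
summary(predict(m1, d1) - predict(m2, d2))
```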
XGBoost Algorithm: Impact of Scaling Instance Weights
CC BY-SA 4.0
null
2023-05-21T13:32:55.387
2023-05-21T13:32:55.387
null
null
388456
[ "machine-learning", "boosting", "gradient-descent" ]
616481
2
null
616475
4
null
The lemma states that you can express the probability in that way with respect to a specific measure. You do not have

$$dP(t)=\exp\{\eta_1^\top(\theta)t-\xi(\theta)\} \, d t$$

but instead you need to use

$$dP(t)=\exp\{\eta_1^\top(\theta)t-\xi(\theta)\} \, d \nu_\theta( t)$$

When you use $\nu(t) = \int e^{-ct^2} \, dt = \frac{\sqrt{\pi} \operatorname{erf}(\sqrt{c}t)}{2\sqrt{c}}$, then $d \nu_\theta( t) = e^{-ct^2} \, dt$ and

$$dP(t)=\exp\{\eta_1^\top(\theta)t-ct^2-\xi(\theta)\} \,dt$$

You will have to figure out the constant $c$, but in this way you have a form that relates to the normal distribution.
null
CC BY-SA 4.0
null
2023-05-21T14:16:02.000
2023-05-21T17:17:11.013
2023-05-21T17:17:11.013
5176
164061
null
616483
2
null
612308
1
null
> Conversely, if I were to condition on C, the pathway Treatment -> C -> Outcome would be biased because C is on the front door path between the Treatment and the Outcome, so C should be left out from a regression model OR else B would also need to be conditioned on to close the formed back door path. I don't think this is right. Controlling for `B` would not solve all the problems you would introduce by controlling for `C`. While it would help you close the back-door path `Treatment -> C <- B -> Outcome`, the front-door path `Treatment -> C -> Outcome` would still be closed. > what are the implications, benefits or drawbacks of including B-type variables in my regression models? Would I not gain any precision or explanatory power in the model by including it as a control, rather than optionally leaving it out? Yes, that's right. Controlling for `A` allows you to identify the total causal effect of Treatment on Outcome unbiasedly. Additionally controlling for `B` should yield a more precise estimator.
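If it helps, you can check this kind of reasoning mechanically with the dagitty package. The sketch below encodes my reading of your graph (`A` as a Treatment–Outcome confounder, `B` affecting both `C` and Outcome, `C` a mediator, plus a direct Treatment -> Outcome edge), so adjust the edges if I have misread it:

```
library(dagitty)

# my reconstruction of the DAG described in the question -- edit if the edges differ
g <- dagitty("dag {
  A -> Treatment
  A -> Outcome
  B -> C
  B -> Outcome
  Treatment -> C
  C -> Outcome
  Treatment -> Outcome
}")

# valid adjustment sets for the total effect of Treatment on Outcome
adjustmentSets(g, exposure = "Treatment", outcome = "Outcome")
```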
null
CC BY-SA 4.0
null
2023-05-21T14:22:48.533
2023-05-21T14:22:48.533
null
null
333765
null