Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
613677
1
null
null
1
62
The data consist of a time series with daily frequency. It is basically temperature data, and I can see a clear seasonal pattern throughout the year. I would like to model this time series using the statsmodels SARIMAX method. Since I have daily data and a yearly seasonal pattern, I chose s=365 in the seasonal order parameter. My understanding of the other seasonality parameters is as follows: in (P,D,Q,s) I first determined s as above. Then, since s is positive, I chose P=1 and Q=0. My time series is stationary, so I chose D=0. This leads to the following code:

```
order = (3, 0, 1)
seasonal_order = (1, 1, 0, 365)
model = sm.tsa.statespace.SARIMAX(
    endog=NG_train,
    exog=HDD_train,
    order=order,
    seasonal_order=seasonal_order,
    time_varying_regression=True,
    mle_regression=False,
    measurement_error=True,
)
```

This causes two problems. First, the method takes far too long to run, and I never get any results. Second, even if I did get results, I suspect the model would be heavily overfitted, since to my understanding this adds 364 extra parameters. I am struggling with the method's documentation, hence why I am asking here. Is there any way to overcome this problem? I would be happy to include monthly seasonality, but simply setting s=12 doesn't work, and I don't want to aggregate my data to monthly data.
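A common workaround for long seasonal periods such as s = 365 is to drop the seasonal ARIMA terms and instead feed a small number of Fourier (sine/cosine) terms as exogenous regressors, which keeps the parameter count low. The sketch below only illustrates that idea and is not the asker's setup: the toy series, its `DatetimeIndex`, and the choice of 3 harmonics are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fourier_terms(index, period=365.25, K=3):
    """Build 2*K sine/cosine columns for annual seasonality on a DatetimeIndex."""
    t = np.arange(len(index))
    cols = {}
    for k in range(1, K + 1):
        cols[f"sin_{k}"] = np.sin(2 * np.pi * k * t / period)
        cols[f"cos_{k}"] = np.cos(2 * np.pi * k * t / period)
    return pd.DataFrame(cols, index=index)

# Hypothetical daily series; replace with the real endog/exog.
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=1000, freq="D")
y = pd.Series(10 + 5 * np.sin(2 * np.pi * np.arange(1000) / 365.25)
              + rng.normal(0, 1, 1000), index=idx)

X = fourier_terms(idx, K=3)                    # annual cycle handled by regressors
model = sm.tsa.statespace.SARIMAX(y, exog=X, order=(3, 0, 1))  # no seasonal_order
res = model.fit(disp=False)
print(res.summary())
```

With this approach the annual cycle costs only 2K regression coefficients instead of a 365-lag seasonal polynomial, which is why it tends to be far faster to estimate.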
Using SARIMAX for daily data with yearly seasonal pattern
CC BY-SA 4.0
null
2023-04-21T11:26:19.057
2023-04-21T11:26:19.057
null
null
384768
[ "time-series", "python", "seasonality", "statsmodels" ]
613678
1
null
null
1
14
I have two different models that give me different estimates on my data. The difference is not huge, but one model significantly shrinks the estimates towards zero. However, when I run leave-one-out out-of-sample predictive accuracy, the models perform roughly the same. Furthermore, when I calculate the mean absolute error on some synthetic data, one of the models significantly outperforms the other. What are the possible reasons why the two models have similar out-of-sample predictive accuracy despite giving different estimates for my data?
Different Estimates but same Out-of-sample predictive accuracy
CC BY-SA 4.0
null
2023-04-21T11:33:49.083
2023-04-21T11:33:49.083
null
null
219593
[ "cross-validation", "predictive-models", "accuracy", "out-of-sample" ]
613679
2
null
140716
0
null
In a simple linear regression with just one predictor variable, the visualization is easy to do with a scatterplot. When you have two predictors, such a graph has to move to three dimensions and creates challenges when it comes to visualization, but there are good software packages for doing 3D plots. Then there are serious issues when it comes to visualization when you have many predictor variables, so some alternative must be developed. While it might be viable to plot your points in three dimensions to examine for patterns and strength of a relationship, even that can be challenging compared to the 2D case. However, a plot of interest to you might be a scatterplot of the true and predicted outcomes. This exists in just two dimensions. However, the predictions contain information about how all of the predictors work together to make a prediction. Despite the fact that correlation between true and predicted values has some serious issues, such as those I discuss [here](https://datascience.stackexchange.com/a/114457/73930) and demonstrate visually [here](https://stats.stackexchange.com/a/584562/247274), seeing a strong relationship between the true and predicted value is a good sign that you have captured something about the relationship (subject to all the usual caveats in predictive modeling related to overfitting). For instance, in the graph below, it is easy to tell which model captures a stronger relationship between the predictor variables and the observed outcomes. If you modify my code, you can do this same kind of plot with hundreds (or more) of predictor variables, and you still would get a visualization in two dimensions. [](https://i.stack.imgur.com/NrY9h.png) ``` library(ggplot2) set.seed(2023) N <- 500 x1 <- runif(N) x2 <- runif(N) y <- x1 + x2 + 7*x1*x2 L1 <- lm(y ~ x1 + x2) L2 <- lm(y ~ x1 + x2 + x1:x2) d1 <- data.frame( predictions = predict(L1), observations = y, Model = "No Interaction Term" ) d2 <- data.frame( predictions = predict(L2), observations = y, Model = "With Interaction Term" ) d <- rbind(d1, d2) ggplot(d, aes(x = predictions, y = observations, col = Model)) + geom_point() ```
null
CC BY-SA 4.0
null
2023-04-21T11:34:30.943
2023-04-21T11:47:24.673
2023-04-21T11:47:24.673
247274
247274
null
613680
1
null
null
0
21
Suppose I have a time series that I model with double exponential smoothing as implemented in the R package `forecast` via the `ets` function. The formulas given for double exponential smoothing, i.e., $$ a_t = \alpha x_t + (1-\alpha) (a_{t-1} + b_{t-1}) $$ $$ b_t = \beta (a_t - a_{t-1}) +(1-\beta)b_{t-1} $$ illustrate that the constant and the trend parameter can change over time. I would like to analyse that change of the trend parameter over time, i.e., I want to analyse whether an observed change in the trend or in the level is statistically significant or not (as opposed to resulting from random disturbances given the variance of the parameters in the overall time series); or, at what point in time there is a significant change in the trend or the level parameters. Note that this change may occur over the course of a few observations and is not necessarily instantaneous. At first, I thought that the correct way to do that was structural break analysis, i.e., to test whether segments of the time series are best described by different models. But I have found out that most of the structural break analysis literature deals with linear models such as ARIMA instead of exponential smoothing models. While those are, to some degree, equivalent, there are substantial differences between them. I found it quite puzzling that the huge host of structural break packages does not offer a way to analyse `ets` models. But perhaps this is not relevant here, as before as well as after the possible break point, the time series is assumed to be described by the same double exponential smoothing model, so there is no break point in the sense of a different model explaining the data, but rather a non-random sequence of observational errors that accumulate to changes in $a$ and $b$ respectively. Here we have to note that the observations are linked to the double exponential smoothing formulas above by $$ X_t = a + b t + \epsilon $$ where the parameters of the model may change over time, and with $\epsilon$ normally distributed. So perhaps, instead of analysing break points, should I analyse statistical properties of the observational error term? And if so, what methods should I use to do that?
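To make the recursions concrete, here is a minimal sketch that simply runs the level/trend updates from the formulas above so the path of $b_t$ can be inspected over time. It uses plain NumPy rather than the `forecast`/`ets` machinery the question refers to, and the smoothing constants, initialization, and toy series are arbitrary assumptions.

```python
import numpy as np

def double_exponential_smoothing(x, alpha=0.3, beta=0.1):
    """Return the level (a_t) and trend (b_t) paths for the recursions above."""
    a = np.empty_like(x, dtype=float)
    b = np.empty_like(x, dtype=float)
    a[0] = x[0]
    b[0] = x[1] - x[0]          # simple initialization (an assumption)
    for t in range(1, len(x)):
        a[t] = alpha * x[t] + (1 - alpha) * (a[t - 1] + b[t - 1])
        b[t] = beta * (a[t] - a[t - 1]) + (1 - beta) * b[t - 1]
    return a, b

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.2, 1.0, 200))     # toy series with drift
level, trend = double_exponential_smoothing(x)
# `trend` now holds b_t for every t; its variability over time is what the
# question wants to assess for "significant" changes.
```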
Check exponential smoothing forecasts for significant changes
CC BY-SA 4.0
null
2023-04-21T11:51:21.900
2023-04-21T13:18:53.157
2023-04-21T13:18:53.157
53690
5191
[ "forecasting", "change-point", "exponential-smoothing" ]
613683
2
null
613670
0
null
Support of $ S:=X-Y$ is $[-n, m].$ So, the pgf would be \begin{align}\mathsf P_S(t) &=\sum_{i=-n}^m p_it^i\\&= \frac{p_{-n}}{t^{n}}+\cdots+\frac{p_{-1}}{t}+p_0+p_1t+\cdots+p_mt^m\tag1\label 1\end{align} In order to generate the probability sequence from the pgf, you can adhere to the general result: Theorem: If a random variable $X$ takes a finite number of real values, say $\langle \alpha_i\rangle_{i=-n}^m,$ and the sequence is monotonically increasing, then $$p_{-n}=\lim_{t\to 0^+}(t^{-\alpha_{-n}}\mathsf P_X(t) ).$$ This is easy to see: From $\mathsf P_X(t) =\sum_{i=-n}^m p_it^{\alpha_i},$ we get $$t^{-\alpha_{-n}}\mathsf P_X(t) =p_{-n}+\sum_{i=-n+1}^m p_it^{\alpha_i-\alpha_{-n}}.\tag 2\label 2$$ Then take the limit. To get $p_{-n+1},$ we now need to differentiate $\eqref 2$ w.r.t. $t$ $\lfloor \alpha_{-n+1}-\alpha_{-n}\rfloor$ times, divide both sides by $t^{\{\alpha_{-n+1}-\alpha_{-n}\}},$ and check the limiting value as $t\to 0^+.$ The rest of the coefficients follow similarly. Coming to the present problem, from $\eqref 1,$ $$p_{-n}=\lim_{t\to 0^+} (t^n\mathsf P_S(t)) . $$ From that very relation, we can see $$ t^n\mathsf P_S(t) = p_{-n} +p_{-n+1}t^{-n+1+n} +\cdots.$$ Differentiate this and see what happens as $t$ tends to zero. You can then investigate the others. --- ## Reference: $\rm [I]$ M. L. Esquível, Probability generating functions for discrete real-valued random variables, Teor. Veroyatnost. i Primenen., $2007,$ Volume $52,$ Issue 1, $129–149.$ DOI: [https://doi.org/10.4213/tvp8](https://doi.org/10.4213/tvp8)
null
CC BY-SA 4.0
null
2023-04-21T12:29:18.927
2023-04-21T19:35:44.653
2023-04-21T19:35:44.653
362671
362671
null
613685
2
null
613674
1
null
When you transform just one variable at a time like that, all you’re doing is subtracting the mean and then dividing by the standard deviation. This is the usual standardization procedure and is unrelated to autocorrelation. To see this, you might consider having your function output the `mean_data`, `cov_matrix`, and `inv_sqrt_cov` to see that they are numbers instead of a vector and two matrices, respectively.
null
CC BY-SA 4.0
null
2023-04-21T12:50:35.307
2023-04-21T12:50:35.307
null
null
247274
null
613687
1
null
null
0
15
My datasets consist of 5 regional spot electricity prices and 2 fixed electricity prices. I also have a time variable going week to week from 2010 to the end of 2021. Is there any way to use the data I have available to apply the Mahalanobis transformation and have the autocorrelation removed from my variables? My goal is to be able to compare the regional prices for, for example, region 1 against the fixed-price contracts using uncorrelated data. I have also been wondering whether it's possible to use the transformation on, for example, my first variable against the same variable with 1 lag. Would that be possible, or is there another way? I can add my code below in case you are familiar with Python, but it's not my main question, just a bonus:

```
def mahalanobis_transform(data):
    # Calculate the mean of the data
    mean_data = np.mean(data, axis=0)
    # Calculate the covariance matrix
    cov_matrix = np.cov(data, rowvar=False)
    # Calculate the inverse square root of the covariance matrix
    inv_sqrt_cov = 1 / np.sqrt(cov_matrix)
    # Perform the Mahalanobis transformation for a scalar covariance matrix
    transformed_data = (data - mean_data) * inv_sqrt_cov
    return transformed_data
```

All answers are greatly appreciated!
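For reference, a full multivariate Mahalanobis (whitening) transform uses a matrix inverse square root of the covariance rather than an elementwise reciprocal. The sketch below, with a made-up price matrix, is only an illustration of that variant and not a verdict on the question's code.

```python
import numpy as np

def whiten(data):
    """Mahalanobis/whitening transform using the matrix inverse square root."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # symmetric PSD covariance
    inv_sqrt_cov = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return centered @ inv_sqrt_cov

rng = np.random.default_rng(0)
prices = rng.normal(size=(600, 7))                      # 7 hypothetical price series
white = whiten(prices)
print(np.round(np.cov(white, rowvar=False), 2))         # approximately the identity
```

Note that this removes contemporaneous correlation across the series; it does not, by itself, remove autocorrelation over time within each series.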
Is it possible to use Mahalanobis Transformation using the data I have available?
CC BY-SA 4.0
null
2023-04-21T13:14:42.897
2023-04-21T13:14:42.897
null
null
385918
[ "time-series", "mathematical-statistics", "python", "autocorrelation", "mahalanobis" ]
613689
1
null
null
3
245
I have two variables $Y1$ and $Y2$ and the sample size is $240$. I computed Pearson's correlation coefficient in Python with `pearsonr` from `scipy.stats`. The following are my results: correlation coefficient = 0.0434, p-value = 0.4687. The correlation coefficient indicates that there is no correlation between $Y1$ and $Y2$, but I am unable to correctly interpret what the p-value means here. Is the p-value large because the correlation coefficient is low? From my understanding, since the p-value > 0.05, we cannot reject the null hypothesis ("There is no correlation between the variables"). In this case, are both the p-value and the correlation coefficient saying the same thing, i.e., that there is no correlation? For sufficiently large data sets, will high Pearson correlation coefficients also generally have low p-values?
Why is the p-value of Pearson's correlation test large even when the sample size is sufficient?
CC BY-SA 4.0
null
2023-04-21T13:29:39.727
2023-05-01T21:44:20.973
2023-05-01T21:44:20.973
247274
384812
[ "hypothesis-testing", "correlation", "p-value", "pearson-r" ]
613691
2
null
613689
7
null
The p-value is a function of both the effect size and the sample size. If you have a gigantic sample size but a tiny (or zero) effect size, then you still will not necessarily wind up with small p-values. In the case where the null hypothesis really is true and there is no correlation, the p-value should be $\le0.05$ only $5\%$ of the time. If the null is only slightly incorrect, then even a small sample size might result in only $6\%$ or $12\%$ of the p-values being $\le0.05$.
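A small simulation (not part of the original answer; the effect sizes and sample sizes here are arbitrary choices) illustrates both points: under a true null about 5% of p-values fall below 0.05 regardless of sample size, while a weak true correlation needs a large sample before low p-values become common.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

def rejection_rate(rho, n, n_sims=2000, alpha=0.05):
    """Fraction of simulations in which pearsonr rejects at the given alpha."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
        if pearsonr(x, y)[1] <= alpha:
            hits += 1
    return hits / n_sims

for rho in (0.0, 0.05, 0.3):
    for n in (240, 2400):
        print(f"rho={rho:4.2f}, n={n:5d}: reject {rejection_rate(rho, n):.3f}")
```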
null
CC BY-SA 4.0
null
2023-04-21T13:33:52.067
2023-04-21T13:33:52.067
null
null
247274
null
613692
1
null
null
1
18
If I use the Benjamini-Krieger-Yekutieli (BKY) procedure for an FDR correction, is it possible for the critical P-value returned to be greater than the desired false discovery rate? I just tried applying the BKY procedure using [this Matlab code](https://github.com/dmgroppe/Mass_Univariate_ERP_Toolbox/blob/master/fdr_bky.m) from the mass univariate ERP toolbox. I set the desired FDR to 0.05 and it gave me a critical P-value of 0.055 ... so that's more generous than if I hadn't corrected at all ... Can the output be correct?
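For context, my understanding of BKY is the two-stage adaptive Benjamini-Hochberg method: the first stage estimates the number of true nulls, and the second stage reruns BH at an inflated level, so the resulting critical p-value can indeed exceed the nominal FDR when most hypotheses appear non-null. The sketch below is a plain re-implementation of that two-stage logic on made-up p-values, not the Matlab toolbox's code, so treat the details as my reading of the procedure.

```python
import numpy as np

def bh_threshold(pvals, q):
    """Largest BH critical p-value: max p_(k) with p_(k) <= k*q/m (0 if none)."""
    p = np.sort(pvals)
    m = len(p)
    ok = p <= np.arange(1, m + 1) * q / m
    return p[ok].max() if ok.any() else 0.0

def bky_two_stage(pvals, q=0.05):
    """Two-stage adaptive BH (Benjamini, Krieger & Yekutieli, 2006), as I understand it."""
    m = len(pvals)
    q1 = q / (1 + q)
    r1 = np.sum(pvals <= bh_threshold(pvals, q1))    # stage-1 rejections
    if r1 == 0 or r1 == m:
        return bh_threshold(pvals, q1)
    m0_hat = m - r1                                   # estimated number of true nulls
    return bh_threshold(pvals, q1 * m / m0_hat)       # stage 2 at an inflated level

# Toy example where most hypotheses look non-null, so m0_hat is small
p = np.concatenate([np.full(95, 0.001), np.full(5, 0.30)])
print(bky_two_stage(p, q=0.05))   # prints 0.3, i.e. a critical p-value well above q
```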
Can the critical P-value returned by the Benjamini-Krieger-Yekutieli (BKY) procedure be greater than the false discovery rate?
CC BY-SA 4.0
null
2023-04-21T13:41:43.443
2023-04-21T13:41:43.443
null
null
332782
[ "statistical-significance", "inference", "p-value", "multiple-comparisons", "false-discovery-rate" ]
613693
1
null
null
0
19
I want to design an RCT (randomised controlled trial) with interim analyses using the O'Brien-Fleming interim stopping criteria to control the Type I error. I want to have 3 interim analyses and then the final one; at each analysis I need the following p-values to reject $H_0$:

- 0.00005
- 0.0039
- 0.0184
- 0.0412

[ref](https://online.stat.psu.edu/stat509/lesson/9/9.5) I know these work fine if I use the Wald or Z-statistic $$ Z = \frac{ \delta_j }{ \text{se}(\delta) } = \frac{\hat{p}_2 - \hat{p}_1}{ \sqrt{ \frac{\hat{p}_1\hat{q}_1}{n_1} + \frac{\hat{p}_2\hat{q}_2}{n_2} } } $$ But I want to adjust for some covariates and use logistic regression, testing whether the $\beta$ coefficient of the group variable (no treatment/treatment) is significant. To test the significance of $\beta_1$ I will use the Wald test [ref](https://stats.stackexchange.com/questions/60074/wald-test-for-logistic-regression).

> Can I still use the group sequential boundaries for the interim analyses when I use logistic regression to assess the difference between the groups? If so, can someone give me a reference or an explanation of why? If not, what else can I do?

[This paper](https://arxiv.org/pdf/2201.12921.pdf) states that "the sequential test statistics need to have the independent increments covariance structure in order to control Type I error" and uses an information criterion to control the Type I error ($\alpha$). But I would rather use the O'Brien-Fleming boundaries if possible.
Logistic regression and group sequential design/interim analysis
CC BY-SA 4.0
null
2023-04-21T13:49:47.103
2023-04-21T13:49:47.103
null
null
257625
[ "regression", "logistic" ]
613695
1
null
null
0
16
My question is: I need to calculate the MLE for the transition probabilities of a Markov chain of order 2 ≤ k < ∞. I've calculated the MLE for a Markov chain of order 1 in a previous exercise, but I don't understand where to start for higher orders. If someone could help me, I'd be thankful.
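For what it's worth, the order-k case reduces to the same counting argument as order 1 once each length-k history is treated as a state: the MLE of P(next = j | history = h) is the count of transitions from h to j divided by the total count of h. A small sketch of that bookkeeping (the toy sequence and k = 2 are arbitrary choices):

```python
from collections import Counter, defaultdict

def transition_mle(sequence, k=2):
    """MLE of order-k transition probabilities via history/next-state counts."""
    counts = defaultdict(Counter)
    for i in range(len(sequence) - k):
        history = tuple(sequence[i:i + k])
        nxt = sequence[i + k]
        counts[history][nxt] += 1
    return {h: {s: n / sum(c.values()) for s, n in c.items()}
            for h, c in counts.items()}

seq = list("ABAABABBABAABBAABAB")
for history, probs in transition_mle(seq, k=2).items():
    print(history, probs)
```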
How to calculate the MLE for transition probabilities of markov chains of higher order?
CC BY-SA 4.0
null
2023-04-21T13:54:19.793
2023-04-21T13:54:19.793
null
null
385659
[ "maximum-likelihood", "markov-process", "transition-matrix" ]
613697
1
null
null
0
13
I have an experiment where 6 subjects performed 20 different lifting tasks `LT` with 3 different knee braces `KB`. The EMG activity `emg` of the rectus femoris is the dependent variable, and `LT` and `KB` are within-subject independent factors. I am only interested in the effect of `KB` on the EMG activity. Additionally, for some subjects there is quite a large amount of missing data. So I wanted to conduct a two-way repeated-measures ANOVA. However, when using all 20 `LT`, the p-values are NaN. I guess this is because there are not enough degrees of freedom, caused by the missing data. My next idea was to conduct a one-way repeated-measures ANCOVA with `LT` as the covariate. However, I am not sure if an ANCOVA is appropriate in this case. I should mention that I am not too familiar with ANCOVA and I only found detailed information about it for continuous covariates. I am doing my statistical analysis with R. The syntax I used is the following; however, I am also unsure whether the syntax for the ANCOVA is correct.

```
ANOVA:
anova_test(data = data, dv = emg, wid = PatientID, within = c(KB, LT), type = '3')

ANCOVA:
anova_test(data = data, dv = emg, wid = PatientID, within = KB, covariate = LT, type = '3')
```

I also tried the ez package with ezANOVA(), which gives me completely different results. Here I only got an effect for `KB`, whereas with anova_test() I also got one for `LT` and for the interaction `KB:LT`.

```
ezANCOVA:
ezANOVA(data, dv = emg, wid = PatientID, within = KB, within_covariates = LT, type = 3)
```

I would greatly appreciate your help in determining whether an ANCOVA is appropriate for my data, or if you could recommend sources with more information on performing an ANCOVA with categorical data and how to do it in R.
ANCOVA controlling for a categorical manipulated factor
CC BY-SA 4.0
null
2023-04-21T14:03:28.143
2023-04-21T14:04:55.570
2023-04-21T14:04:55.570
386169
386169
[ "r", "anova", "categorical-data", "repeated-measures", "ancova" ]
613698
1
null
null
0
17
Let the true model be $$ Y = X_1\beta_1 + X_2\beta_2 + u $$ but $X_2$ is omitted, so we estimate $$ Y = X_1\gamma_1 + e $$ by a valid instrument $Z$, so $\gamma_1$ is the 2SLS estimator. Assume now, $E[\pmb{z}_ie_i]=0$ and $E[e_i^2\pmb{z}_i\pmb{z}_i']=V \succcurlyeq 0$. What is an additional condition under which GMM cannot be more efficient than IV? And is it necessary or sufficient? I only know that GMM is efficient if we use a weighting matrix proportional to $V^{-1}$ and that Cramer-Rao bound is attained asymptotically, but I think this is only amongst other GMM like estimators and doesn't really say anything about IV comparison.
GMM efficiency vs IV efficiency condition
CC BY-SA 4.0
null
2023-04-21T14:04:27.310
2023-04-21T19:18:10.917
2023-04-21T19:18:10.917
53690
99530
[ "regression", "instrumental-variables", "generalized-moments" ]
613699
2
null
164261
0
null
SEM is a covariance-based model that gives priority to CFA. When working with SEM, it is a must to run CFA before proceeding to path analysis and checking model goodness of fit. The basic points to be clear about are the issues of validity (convergent validity/average variance extracted and discriminant validity) and reliability (composite reliability and internal consistency, i.e., the alpha value). Once you have checked these issues, you can directly move to path analysis and hypothesis testing. [https://drive.google.com/file/d/19v2nmhuM5G6OhclnilDEGFhVD6mEZtKJ/view](https://drive.google.com/file/d/19v2nmhuM5G6OhclnilDEGFhVD6mEZtKJ/view)
null
CC BY-SA 4.0
null
2023-04-21T14:11:35.983
2023-04-21T14:11:35.983
null
null
338647
null
613700
2
null
613457
2
null
@Lea_M, I took a look at it, and I'm still learning how to use these functions myself. The [help info](https://rdrr.io/cran/DHARMa/man/testSpatialAutocorrelation.html) says you can have residual spatial autocorrelation even if the model takes care of it. I added the argument `rotation = "estimated"` during the recalculation, and the autocorrelation is no longer statistically significant.

```
res2 <- recalculateResiduals(res, newData$site, rotation = "estimated")
testSpatialAutocorrelation(res2, x = groupLocations$X, y = groupLocations$Y)

data:  res2
observed = 0.072138, expected = -0.034483, sd = 0.056906, p-value = 0.06098
alternative hypothesis: Distance-based autocorrelation
```

That said, the residual plot doesn't look great [](https://i.stack.imgur.com/geHla.jpg) and I'm not certain why. Another (non-exclusive) method suggested is to condition the simulations on the random effects, but it states [here](https://rdrr.io/cran/glmmTMB/man/simulate.glmmTMB.html) that for glmmTMB it's not yet possible to do that.
null
CC BY-SA 4.0
null
2023-04-21T14:16:16.907
2023-04-21T16:20:58.230
2023-04-21T16:20:58.230
205125
205125
null
613702
2
null
613666
0
null
First Question For standard GLMs, the null distributions of the standardized coefficients are taken to be asymptotically $N(0,1)$ aka $t_{inf}$. Normally the significance of these linear models is computed from a $t_\nu$ distribution, but I suppose for consistent outputs, it makes sense to also just set the degrees of freedom for GLMs at infinity. Second Question You have specified the following model: $$ E[log(Y_i)]=\beta_0+\beta_{Year}x_{Year}+\beta_{Loc}x_{Loc} $$ Suppose you ask to compute the expected value at Location i, then the above reduces to this: $$ E[log(Y_i)|Loc=Loc_i]=\beta_0+\beta_{Year}x_{Year}+\beta_{Loc_i} $$ However, you want the marginalized means, so you need to compute the following, where you marginalize out over the remaining variables: $$ E[E[log(Y_i)|Loc=Loc_i]]=E[\beta_0]+\beta_{Year}E[x_{Year}]+\beta_{Loc_i} $$ So then your observed marginal mean should be $\beta_0+\beta_{Year}\hat{x}_{Year}+\beta_{Loc_i}$, where $\hat{x}_{Year}$ is the average value of the years. So, the marginal means yields an estimate at the average year value.
null
CC BY-SA 4.0
null
2023-04-21T14:45:15.953
2023-04-21T14:45:15.953
null
null
311086
null
613704
1
null
null
0
35
I am fine-tuning a CamemBERT model for text classification. I have a lot of domain-specific words and a small dataset (10k sentences with 70 labels), and when I added tokens it didn't help the model perform better (probably because of my small dataset). When adding tokens with the add_tokens function, can we initialize the embedding weights of the new tokens (e.g., with CBOW-style embeddings)?
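As far as I know, the Hugging Face transformers API does allow this: after `add_tokens` and `resize_token_embeddings`, the new rows of the input embedding matrix can be overwritten with any vectors (for example, ones trained with a CBOW/word2vec model on domain text). A rough sketch of that pattern; the new tokens and the random vectors standing in for real CBOW embeddings are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "camembert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=70)

new_tokens = ["monjargon1", "monjargon2"]     # hypothetical domain words
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Overwrite the freshly added rows with custom vectors (here random stand-ins;
# in practice these could come from a CBOW model trained on domain text).
embeddings = model.get_input_embeddings()
with torch.no_grad():
    for tok in new_tokens:
        tok_id = tokenizer.convert_tokens_to_ids(tok)
        embeddings.weight[tok_id] = torch.randn(embeddings.embedding_dim)
```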
Create weight for tokens with BERT models
CC BY-SA 4.0
null
2023-04-21T14:47:00.717
2023-04-21T14:47:00.717
null
null
297085
[ "transformers" ]
613706
2
null
613289
2
null
As you recognize, the multiple testing with an adaptive design poses a risk of false-positive results (Type I error). You certainly do not want to continue adding cases and testing the hypothesis repeatedly until you reach an apparently "significant" result. If you do not correct for multiple comparisons, then this is an extreme form of [p-hacking](https://stats.stackexchange.com/q/200745/28500). If you do correct for multiple comparisons, then in a Bonferroni-type correction "the denominator will increase as fast as the number of analysis," as you say, and you will lose power to detect a true treatment effect. There are some general strategies to adapt the sample size or otherwise reduce the number of individuals in a way that is statistically acceptable. First, you can use the data in a way that doesn't involve hypothesis tests at an interim analysis. The estimated variance among observations is often the most difficult thing to choose in power analysis. An early-stage evaluation of variance to refine the sample size, without a test of the treatment effect, poses little risk of inflating Type I error. See this [FDA guidance](https://www.fda.gov/media/78495/download) (a useful overview of broader issues in adaptive designs). Problems arise when you perform hypothesis tests at one or more interim stages of the study: for example, you do an interim analysis to estimate both the treatment effect and the variance to adjust the sample size. If that's done, then you have at least two general strategies to try to minimize the sample size, although they need to be chosen during study design to avoid inflating Type I error. One is to choose a p-value for the study's Type I error and design the study so that you "spend" a certain amount of that p value at each interim stage. If you pass a corresponding criterion at an early stage, you can stop the trial at that point and test fewer individuals than anticipated. This [Penn State web page](https://online.stat.psu.edu/stat509/lesson/9/9.5) outlines advantages and disadvantages of three ways to do that. If you adaptively change the sample size based on results at an interim stage, you can perform the hypothesis test on each stage of the study separately, then do a test on their combined p-values. If the null hypothesis holds, then the p-values of all the stages are distributed uniformly and independently over [0,1], whether or not subsequent stages were re-designed based on the results of earlier stages. That allows a combined test on the p-values of the single-stage hypothesis tests. [This paper](https://www.jstor.org/stable/2533441) discusses that approach. A z-test on a weighted sum of the z-scores corresponding to the p-values is one choice, with the weights chosen so that the sum of their squares equals 1. For example, with two stages you could choose weights of $1/\sqrt 2$. To avoid inflating Type I error, you need to choose those weights at the beginning of the study. Although you can control for false-positives with such designs, they can bias the estimates of the treatment effect. For example, if the interim analysis is "lucky" in finding a much stronger effect than the true value, the revised sample sizes will be smaller and the final estimate can put undue weight on the (now over-represented) "lucky" early cases. The additional problem in your situation is that, at the sample sizes that are usually feasible for a medical thesis, these strategies might not help much. 
To be helpful, these strategies can require dozens to hundreds of cases available to the study, even if not all cases are ultimately tested. The FDA guidance indicates that adaptive sample sizes can decrease case numbers by about 15%. Try simulations, based on reasonable estimates of the study results, to see what you might expect to gain from them.
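To make the combination step described above concrete, here is a minimal sketch of the weighted inverse-normal combination of two stage-wise p-values, using the pre-specified weights $1/\sqrt 2$ mentioned in the answer; the example p-values are made up.

```python
import numpy as np
from scipy.stats import norm

def combine_two_stage(p1, p2, w1=1/np.sqrt(2), w2=1/np.sqrt(2)):
    """Weighted inverse-normal combination; w1**2 + w2**2 must equal 1."""
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)   # isf(p) = z-score for a one-sided p
    return norm.sf(z)                            # combined one-sided p-value

print(combine_two_stage(0.04, 0.10))   # hypothetical stage-wise p-values
```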
null
CC BY-SA 4.0
null
2023-04-21T14:53:04.507
2023-04-21T14:53:04.507
null
null
28500
null
613709
1
null
null
0
13
I am running into difficulty determining the best way to analyze my data. So, as background, I have a dataset containing the variables MONTH, REGION, COUNT, and YEARS. MONTH, REGION, and YEAR are all factors with respective levels. My goal is to determine if COUNT significantly changes across the levels of YEARS. However, my confusion comes from the fact that COUNT is dependent on MONTH and REGION, and I am unsure how to incorporate that relationship into the analysis of COUNT on YEARS. I also believe at some point I should be using a Poisson regression, as COUNT is not continuous, but again I am struggling to know when to incorporate this. Below is a sample of the dataset for reference. [](https://i.stack.imgur.com/JNPSH.png)
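One way to fold MONTH and REGION into the comparison across years is a Poisson regression with all three as categorical predictors, so the YEAR effect is estimated after adjusting for month and region. The sketch below only illustrates that model form on a made-up data frame (statsmodels formula API); it is not a recommendation fitted to the actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "YEAR": rng.choice(["2019", "2020", "2021"], 300),
    "MONTH": rng.choice([f"{m:02d}" for m in range(1, 13)], 300),
    "REGION": rng.choice(["North", "South", "East"], 300),
})
df["COUNT"] = rng.poisson(5, len(df))          # placeholder counts

model = smf.glm("COUNT ~ C(YEAR) + C(MONTH) + C(REGION)",
                data=df, family=sm.families.Poisson())
result = model.fit()
print(result.summary())                         # C(YEAR) terms give the year effect
```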
Anova with response dependent on other variables that arent the predictor
CC BY-SA 4.0
null
2023-04-21T15:35:20.660
2023-04-21T15:35:20.660
null
null
385931
[ "anova", "poisson-regression", "contingency-tables", "manova" ]
613710
1
null
null
1
20
I have a dataset with different measures of plants (height, size of leaves, etc.) in 5 localities. I want to check if there is a relationship between those variables and temperature and precipitation. However, the only climate data that I have is in the form of rasters at a very coarse resolution, so each locality has only one value for those climatic variables. So if I want to check the relationship of plant height with those two climatic variables, I have the problem that, for example, one temperature value is repeated for all the plants in locality 1, another temperature value is repeated for all the plants in locality 2, etc. And the same for precipitation. So I don't know if directly performing a Spearman correlation or a linear model on that data would be correct. Maybe it would be more correct to perform an ANOVA/Kruskal-Wallis comparison? What analysis do you recommend? Thanks
Best analysis for checking the relationship between two variables when one of them has repeated values
CC BY-SA 4.0
null
2023-04-21T15:36:12.640
2023-04-23T18:47:34.870
null
null
269380
[ "regression", "anova", "biostatistics", "kruskal-wallis-test", "climate" ]
613712
2
null
559009
0
null
- Colormapping is nonlinear filtering. A color map is simply a transform; the breakup into three dimensions further interprets it as filtering and decomposition. turbo is preferable to jet for inspection (1 -- 2 -- 3) - which is to say, it's not arbitrary, and the human visual system favors it. In turbo (or jet), as one use case, we can quickly skim an image for peaks, which will be red, and we may wish to focus only on those - that's identical to the "R" channel.
- "Image" involves efficient (and nonlinear) compression. The standard approach to STFT compression is direct subsampling (i.e. hop_size), which aliases. An improvement is decimation, i.e. lowpass filtering + subsampling, which is a linear compression. If something so simple were effective, there'd be no need for all the sophistication of JPEG. In ML terms, we can view "save as JPEG" as a highly effective autoencoder, and also as effective dimensionality reduction.

There's more to say, but I'll just share the main points for now. Note that this is completely separate from using image-excelling NNs on STFT images. That [can be detrimental](https://dsp.stackexchange.com/a/80857/50076). Also, @Ghostpunk's answer is mistaken and misleading, as I commented. It may be due to the popular "windowed Fourier transform" interpretation of STFT. Spectrogram losses can also be measured.

Relevant posts:

- Equivalence between "windowed Fourier transform" and STFT as convolutions/filtering
- Role of window length and overlap in uncertainty principle?

### Note

I realized the question, and my answer, are ill-suited for this network, and I may not develop my answer further here. If I develop it elsewhere, I'll link it. In the meantime, refer to [my discussion with @SextusEmpiricus](https://i.stack.imgur.com/SDjbm.png). I am still self-accepting since, though elaboration is due, my answer can be understood with the right (mainly signal processing + feature engineering) background, and I believe it contains the most pertinent explanation.
null
CC BY-SA 4.0
null
2023-04-21T15:50:14.923
2023-05-12T01:28:02.737
2023-05-12T01:28:02.737
239063
239063
null
613713
1
null
null
1
57
I have been reading the book by Kimmel and Axelrod, Branching Processes in Biology, to understand how branching processes are used to study tumor growth and cell mutations in biology. I have a basic understanding of the Galton-Watson and Bellman-Harris processes and their derivation. From what I remember, the subcritical and critical classes of GW processes basically go to zero over time, while the supercritical class goes to infinity over time. So I am trying to understand how biologists/probabilists adjust or bound the supercritical GW process so that they can obtain more realistic results for the growth and evolution of cell types over time. That is, I am just trying to understand the mathematical method for bounding these processes in order to derive more realistic or empirically testable hypotheses. If these processes either go to extinction or blow up, then that makes them pretty hard to use. I imagine that the Kimmel and Axelrod book discusses this somewhere, but I could not find that discussion through all of the formal derivations. If anyone can give me some intuition, that would be helpful. Does the bounding happen in the definition of the probability generating function for each cell, in the sense that bounding the probabilities is what controls the supercritical phase? Or am I just missing the explanation entirely, which is the more likely case?
How to bound super-critical Galton-Watson & Bellman-Harris branching processes in biological applications like tumor evolution
CC BY-SA 4.0
null
2023-04-21T15:50:33.433
2023-04-21T16:27:36.177
2023-04-21T16:27:36.177
13429
13429
[ "stochastic-processes", "markov-process", "branching" ]
613715
1
null
null
0
6
My post is more about ideas than one specific approach. My problem is pretty simple: I have simulated two matrices of noisy data, basically following correlated negative binomial distributions, and my main question here is how to retrieve associations that I know are true. I am benchmarking methods for multi-omics data. Since my data are mainly sparse (i.e., there are few associations between the variables of my two datasets) and have strong within-correlations, I applied sparse PLS and regularized CCA in order to perform multidimensional shrinkage. Thus, I obtain loadings of the variables on the different components. If I plot these loadings on the first two components, I get something like this: [](https://i.stack.imgur.com/xjKwQ.png) From this, I would like to determine which variables (from X or Y) are associated with variables from the other dataset. I first thought of clustering methods, with the assumption that elements within the same cluster are considered associated. However, regardless of the method (EM clustering, K-Means, HClust, etc.), performance is bad. It is worth noting that for the sPLS, I am able to remove variables with loadings equal to 0 on both components, which is not the case for rCCA. So, I would be curious if you have any ideas about retrieving the associations. I am very open to general strategies, but different methods should be considered as well. My sole criterion is to have something systematic with a minimum of arbitrary choices. I would update the post with a replicable example if needed.
Strategies to find associations between subset of variables
CC BY-SA 4.0
null
2023-04-21T15:59:39.187
2023-04-21T15:59:39.187
null
null
223713
[ "clustering", "sparse", "multiple-response" ]
613716
1
null
null
0
22
I want to test my data (a distribution of saccade-amplitudes, N = 25092) for unimodality with the Hartigans Dip Test, more specific the dip.test() function in the diptest-package in R. A paper I read by Mergenthaler & Engbert (2010) used the Hartigans Dip Test for the exact same purpose, but they only gave the following information: "The bimodality was statistically significant based on Hartigans dip-test (sample = 5000, resulting P < 10^-8)". Because I am not an expert at statistics (actually quite far from it), I am very unsure about how to choose the arguments for the dip.test-function. The default is `dip.test(x, simulate.p.value = FALSE, B = 2000)`, which means the p-value is computed via linear interpolation. I do not know what B refers to in this case. I could also compute the p-value via a Monte Carlo Simulation (simulate.p.value = TRUE), in this case B would indicate the number of replicates used in the Monte Carlo test. What I essentially wanna know is if I should let the dip.test compute the p-value via linear interpolation or via Monte Carlo Simulation and which value I should choose for B, considering my pretty large sample size. I am also really interested in how you'd explain why you would choose the one or the other. I tried both options, computing my p-value via the default (simulate p.value =FALSE, B = 200) and one via Monte Carlo Simulation with B = 7500 replicates (because of a paper specifying that that's the number of replicates that produces robust results with the least amount of computing power needed). Both gave me the exact same results: D = 0.022962, p-value < 2.2e-16, so I know my distribution is at least bimodal and the null hypothesis can be rejected. Still, I really wanna know how to correctly apply the hartigans dip test to my data, respectively how to choose my arguments. This is my first time asking a question on here, so I am sorry in advance if it's not exactly appropriate for this community or the way I am asking the question is mildly confusing. Thank you in advance!
How do I adjust the arguments of the Hartigans Dip Test in R to fit my specific data?
CC BY-SA 4.0
null
2023-04-21T16:00:00.147
2023-04-21T16:00:00.147
null
null
386245
[ "r", "hypothesis-testing", "distributions", "bimodal" ]
613718
1
614047
null
11
250
Let's say an auction house holds $N$ identical copies of some valuable asset and holds an auction to sell them off. It knows that the attending public has some opinions about the value of this lot, which are distributed with a CDF $\Phi$. The auction house also decides to go for the following procedure: in each case $1\leq i \leq N$ they set some price $x_i$ and ask a random person from the public whether they want to buy it. If this offer is rejected, they again pick a person at random, and they do this until they find a buyer. Hence the probability of a deal happening within $t$ rounds is $1 - \Phi(x)^t$. My question is the following: suppose that I don't know $\Phi$ and instead I have only observations $(x_i, t_i)_{i=1}^N$, where $t_i$ is the number of rounds it took to find a buyer at a price $x_i$. How can I best estimate $\Phi$ from this data? What if I were only interested in estimating the median of $\Phi$? I know probability, but I am not very familiar with statistical methods, hence I'm not sure which tag to put. Please feel free to retag.
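Since each $t_i$ is geometric with success probability $1-\Phi(x_i)$, one natural route (not from the original post) is maximum likelihood under a parametric family for $\Phi$. The sketch below assumes, purely for illustration, that $\Phi$ is a normal CDF with unknown mean and scale, fits it to simulated $(x_i, t_i)$ pairs, and reads the median off the fitted mean.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, geom

rng = np.random.default_rng(0)

# Simulated data from a "true" value distribution N(10, 2): t_i ~ Geometric(1 - Phi(x_i))
x = rng.uniform(5, 14, size=200)
t = geom.rvs(1 - norm.cdf(x, loc=10, scale=2), random_state=rng)

def neg_log_lik(params):
    mu, log_sigma = params
    phi = norm.cdf(x, loc=mu, scale=np.exp(log_sigma))
    phi = np.clip(phi, 1e-12, 1 - 1e-12)              # numerical safety
    # P(T = t | x) = Phi(x)^(t-1) * (1 - Phi(x))
    return -np.sum((t - 1) * np.log(phi) + np.log1p(-phi))

fit = minimize(neg_log_lik, x0=np.array([np.mean(x), 0.0]))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)    # mu_hat is also the estimated median of Phi
```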
Auction problem
CC BY-SA 4.0
null
2023-04-21T16:40:31.193
2023-04-26T04:54:35.747
null
null
11363
[ "inference" ]
613719
1
null
null
0
22
I am doing my bachelor thesis and I want to analyse how capital structure affects profitability. I have panel data and I am unsure what kind of regression I should use. If I am only interested in the variables and their coefficients, could I use multiple linear regression? I have added a snippet of how my data look. [](https://i.stack.imgur.com/cFvVE.png)
Panel data and regression analysis
CC BY-SA 4.0
null
2023-04-21T17:07:21.327
2023-04-21T17:07:21.327
null
null
386250
[ "regression", "panel-data" ]
613720
1
null
null
0
7
I have several ranking distributions and would, for each one, like to fit a Zipf distribution and estimate the goodness of fit relative to some standard benchmark. With the Matlab code below, I tried to do a sanity check and see if a "textbook" Zipf rank distribution passes the statistical test. Clearly something is wrong, as it does not. If that doesn't, nothing will! Using the Kolmogorov-Smirnov test, or the Anderson-Darling test with a custom-built (non-normal) distribution, in place of the chi-squared test does not change this.

```
% Define some empirical frequency distribution
x = 1:10;
freq = randn(1,10); % textbook zipf!

% Define the Zipf distribution
alpha = 1.5; % Shape parameter, 1.5 is apparently a good all-round value to start with
N = sum(freq); % Total number of observations
k = 1:length(x); % Rank of each observation
zipf_dist = N ./ (k.^alpha); % Compute the Zipf distribution

% Plot our empirical frequency distribution alongside the Zipf distribution
figure;
bar(x, freq); % or freq\N
hold on;
plot(x, zipf_dist, 'r--');
xlabel('Rank');
ylabel('Frequency');
legend('Observed', 'Zipf');

% Compute the goodness of fit using the chi-squared test
expected_freq = zipf_dist .* N;
chi_squared = sum((freq - expected_freq).^2 ./ expected_freq);
dof = length(freq) - 1;
p_value = 1 - chi2cdf(chi_squared, dof);

% Display the results
fprintf('Chi-squared statistic = %.4f\n', chi_squared);
fprintf('p-value = %.4f\n', p_value);
if p_value < 0.05
    fprintf('Conclusion: The data is not from a Zipf distribution.\n');
else
    fprintf('Conclusion: The data is from a Zipf distribution.\n');
end
```
Testing the goodness of fit of a Zipf distribution
CC BY-SA 4.0
null
2023-04-21T17:30:17.863
2023-04-21T17:30:17.863
null
null
41307
[ "model", "matlab", "goodness-of-fit", "curve-fitting", "zipf" ]
613722
2
null
612829
2
null
It doesn't seem very useful to try and debate which of these two approaches is "right" in the abstract. Rather, different ways of analyzing race help answer different questions, and you should use whatever approach is answering the question you are trying to answer. The standard approach you lay out - leaving "white" as the reference category and including dummies for each non-white racial category - is useful if you want to answer the following types of questions: "Are Black respondents significantly different from white respondents?" "Are Hispanic respondents significantly different from white respondents?" etc. These are often really important questions. For example, if you wanted to study whether "white supremacy" is causing Black and Hispanic drivers to get pulled over more than white ones, then THIS is probably the approach you want - not because you are treating white as "normal" in some sort of moral sense, but because the theory of "white supremacy" argues that America treats "white" as normative in American society, and thus non-white Americans are oppressed/discriminated against relative to white Americans. Note, however, that this approach will not help you answer the question of whether (due to the particulars of anti-black racism) Black drivers are pulled over more than Hispanic drivers. If that was your research question (and it's a potentially interesting one!) then you would need to change the reference category to either Black or Hispanic so you can test if those two groups are different from each other. Thus, you shouldn't just treat the biggest group as the reference by default, but you should choose the reference category that allows you to answer the question you are trying to answer. If that happens to be the biggest category, then so be it. It doesn't reflect your views on the normativity of whiteness or anything like that. You are just trying to answer a particular question. The alternative approach - comparing each group to "the mean" - might be better if the question you were investigating involved looking at how each racial group differed from "the average across all racial groups." This is a trickier question (partly because the overall average depends on the relative prevalence of each group), and I'm having trouble coming up with a situation in which it's actually the thing we would care about (which is why I tend not to use that approach when analyzing race). But if someone can articulate a research question that does ask how different racial groups compare to the overall average, then this is the method you should use to try and answer it. In short - when deciding how to treat "race" in a regression model, the key question is not which approach reflects our moral view about the nature of race and racism, but which approach allows us to actually answer the question we're trying to answer. That all being said, none of this really matters very much if you are just treating race as a control variable. If you are interested in some other variable, then any of these different approaches will prevent race from confounding your results in the same way, so you are free to use whatever approach you find the most aesthetically or morally pleasing.
null
CC BY-SA 4.0
null
2023-04-21T17:55:57.953
2023-04-21T18:28:37.267
2023-04-21T18:28:37.267
291159
291159
null
613724
1
613751
null
2
52
Say I have three jointly Gaussian random vectors $X, Y, Z$ and I know $X \perp Z \mid Y$. I also know that $\mathrm{Var}[Y_i \mid Z] \leq \epsilon \; (\forall i)$ for some $\epsilon > 0$. Let's assume for simplicity that $X$ is scalar. Using the conditional independence it follows that $$\mathrm{Var}[X \mid Y] = \mathrm{Var}[X \mid Y, Z].$$ Can this term be lower bounded by $\mathrm{Var}[X \mid Z]$ (minus some function of $\epsilon$)? Intuitively, this would make sense to me as $Z$ "almost fully" explains $Y$ but I cannot quite put my finger to it.
Bounding the Variance of Conditional Gaussians
CC BY-SA 4.0
null
2023-04-21T18:42:15.080
2023-04-22T04:08:57.283
2023-04-21T21:52:05.870
382809
382809
[ "normal-distribution", "variance", "conditional-probability", "multivariate-normal-distribution" ]
613725
1
null
null
1
57
My question is: what is the appropriate way to apply a kernel density estimator (KDE) to a 2D dataset that has a rotational symmetry? Specifically, I have the points ($x_i$, $y_i$) and want the density $\rho(x,y)$. However, I know that for this system, $\mathrm{d}\rho(r, \theta)/\mathrm{d}\theta=0$ in polar coordinates. Intuitively, I should be able to use a 1D KDE to find only the radial density ($\rho(r)$) and get improved results over a 2D KDE. However, you can't just insert the radial component of the datapoints into a KDE since the 2D area scales with $r^2$ and the samples should be biased since they will be non-negative. Some guesses for how you could proceed are to weight the datapoints with $w=1/r$. I have done this with some success for histograms before. However, I have also gotten weird artifacts when $r$ gets too close to zero. You could also replicate the datapoints with $r\to-r$ to avoid issues with the weird domain. I couldn't find anything quickly with google'ing so it would be helpful if anyone could tell me if this has been studied before or recommend a solution / reading.
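One way to operationalize the reflection idea mentioned at the end of the question (this is a sketch of that approach under my own assumptions, not an established recipe I can fully vouch for): run a 1D KDE on the radii augmented with their negatives so the estimate is symmetric about 0 and has no boundary bias at $r=0$, then convert the radial density to a 2D density via $\rho(x,y)=f_R(r)/(2\pi r)$. The toy bivariate normal sample stands in for the real data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy rotationally symmetric sample (2D standard normal), standing in for the real data
xy = rng.normal(size=(5000, 2))
r = np.hypot(xy[:, 0], xy[:, 1])

# Reflect the radii about 0 so the 1D KDE is symmetric and unbiased near r = 0
kde = gaussian_kde(np.concatenate([r, -r]))

def density_2d(x, y):
    """rho(x, y) = f_R(r) / (2*pi*r), with f_R(r) ~= 2 * kde(r) for r >= 0."""
    rr = np.hypot(x, y)
    return 2.0 * kde(rr) / (2.0 * np.pi * rr)

print(density_2d(np.array([0.5]), np.array([0.0])))  # compare to exp(-0.125)/(2*pi) ~= 0.140
```

Note that the $1/r$ factor still blows up as $r\to 0$, so this does not by itself remove the near-zero artifacts the question mentions.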
kernel density estimation on 2D data with rotational symmetry
CC BY-SA 4.0
null
2023-04-21T18:56:44.253
2023-04-21T22:14:56.420
2023-04-21T19:58:43.497
233938
233938
[ "kernel-smoothing", "density-estimation" ]
613726
1
613731
null
5
137
I understand that the homoscedasticity assumption is one of the Gauss Markov assumptions to get a BLUE estimator. Why is homoscedasticity crucial for justifying the usual t and F statistics?
Relevance of the homoscedasticity assumption
CC BY-SA 4.0
null
2023-04-21T19:09:38.753
2023-04-25T13:30:02.987
2023-04-21T19:28:16.687
8013
379310
[ "regression", "hypothesis-testing", "inference", "econometrics", "heteroscedasticity" ]
613727
2
null
613122
0
null
A dummy variable coding strategy for ordinal data is to model the incremental increase from each category to the next. In this context, this can be accomplished with the following dummy variables: - d_m = 1 if good or medium (0 otherwise) - d_g = 1 if good (0 otherwise) If you run the regression with these two predictors, it will model the ordinal structure of the categorical variable. Hope this is useful.
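A tiny illustration of that coding (the column name and category labels are hypothetical; pandas is used only for convenience):

```python
import pandas as pd

df = pd.DataFrame({"quality": ["bad", "medium", "good", "medium", "good"]})
df["d_m"] = df["quality"].isin(["medium", "good"]).astype(int)  # 1 if medium or good
df["d_g"] = (df["quality"] == "good").astype(int)               # 1 if good
print(df)
```

With both dummies in the regression, the coefficient on `d_m` is the step from bad to medium and the coefficient on `d_g` is the step from medium to good.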
null
CC BY-SA 4.0
null
2023-04-21T19:22:47.343
2023-04-21T19:22:47.343
null
null
199063
null
613728
2
null
613726
0
null
Homoscedasticity is NOT an assumption of the Gauss-Markov theorem. The Gauss-Markov theorem establishes OLS as the BLUE estimator for $\beta$ when the error is homoscedastic. The GM theorem also establishes WLS as the BLUE estimator when the error is heteroscedastic. WLS is weighted least squares. The weight is specifically chosen to be the inverse of the variance-covariance matrix of the error vector. We don't even require that the data are independent. It's of course nearly impossible to know the actual variance structure in practice, but the theorem should be understood in the precise terms in which it is presented. The justification for the $t$ and $F$ tests comes from the asymptotic distribution of the Wald statistic $W = \frac{\hat{\beta} - \beta}{SE(\hat{\beta})}$, which has a limiting $N(0, I_p)$ distribution when the null hypothesis is true. The same test statistics and limiting distributions are obtained when, under heteroscedasticity with $V$ known, the estimator is replaced by $\hat{\beta} = (X^T V^{-1}X)^{-1} X^T V^{-1} Y$. [](https://i.stack.imgur.com/rMqXj.png)
null
CC BY-SA 4.0
null
2023-04-21T19:30:55.313
2023-04-25T13:30:02.987
2023-04-25T13:30:02.987
8013
8013
null
613729
2
null
613256
5
null
You got trapped in the thicket of multiple parameterizations of the [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution#Second_alternative). In what Wikipedia calls the "[standard parameterization](https://en.wikipedia.org/wiki/Weibull_distribution#Standard_parameterization)", the survival function with scale parameter $\lambda$ and shape parameter $\alpha$ is: $$S(t)=\exp(-(t/\lambda)^\alpha). $$ That's the parameterization used by `rweibull()`. In that parameterization, the median is $\lambda(\ln 2)^{(1/\alpha)}$, which for $\lambda= 3$ and $\alpha = 2$ gives: ``` 3*(log(2)^(1/2)) # [1] 2.497664 ``` presumably close to the value that you found by simulation. You seem to be using what Wikipedia calls the "[first alternative parameterization](https://en.wikipedia.org/wiki/Weibull_distribution#First_alternative)."
null
CC BY-SA 4.0
null
2023-04-21T19:49:41.230
2023-04-21T19:49:41.230
null
null
28500
null
613730
1
null
null
0
18
Chuck loves playing pool. He hangs out at a sleazy bar and spends all day playing. Different people drop by to play against him at different times. Everybody who beats Chuck thinks they're the best pool player in town. The problem is that they all work at different times and can't play directly against each other. How can we create a ranking comparing all the players? How sure can we be that the player with the best win record really is the best, that the player with the worst record really is the worst, etc.? Here's an example of my data in R.

```
record <- structure(list(
  name = c("Billy", "Danny", "Elsie", "Frank", "George", "Hector", "Cheryl"),
  wins = c(26L, 28L, 38L, 43L, 42L, 50L, 49L),
  losses = c(46, 42, 46, 43, 42, 37, 34),
  win_percent = c(0.361111111111111, 0.4, 0.452380952380952, 0.5, 0.5,
                  0.574712643678161, 0.590361445783133)
), row.names = c(NA, -7L), class = c("tbl_df", "tbl", "data.frame"))
```
How to rank competitors who all play the same opponent?
CC BY-SA 4.0
null
2023-04-21T19:55:54.110
2023-04-21T19:55:54.110
null
null
52328
[ "inference", "multiple-comparisons" ]
613731
2
null
613726
3
null
> Why is homoscedasticity crucial for justifying the usual t and F statistics?

Homoscedasticity is not crucial for the use of the t and F statistics, because they can be used, at least asymptotically, even in the presence of heteroscedasticity. Homoskedasticity is required for the usual OLS standard errors. However, even with heteroscedasticity, OLS can be retained for the point estimates, but using robust standard errors. Note that the t and F statistics still apply.
null
CC BY-SA 4.0
null
2023-04-21T20:05:52.267
2023-04-21T20:09:42.430
2023-04-21T20:09:42.430
362671
106229
null
613732
2
null
612822
1
null
Here $Y$ is the minimum of $n$ independent samples from a lognormal distribution $X$ with $\ln(X)\sim N(\mu,\sigma)$. Then $Y$ does not have a lognormal distribution, but one can still ask: What are the mean and standard deviation of $Y$? Using the Fisher-Tippett-Gnedenko [theorem](https://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko_theorem), there are some expressions which are accurate in the limit. These are complicated enough that I'll only give the mean, using the expressions for the lognormal from table 3.4.4 of Modelling Extremal Events by Embrechts, Klupperberg, and Mikosch (1997): For large $n$, the distribution of $Y$ will be close to a Gumbel distribution with mean $d_n + \gamma c_n$, or $$e^\mu\left(1+\frac{\gamma\sigma}{\sqrt{2\ln n}}\right)\left( \ln\frac{n^4\ln n}{4\pi}\right)^{\sigma/\sqrt{8 \ln n}}$$ where $\gamma$ is Euler's gamma.
null
CC BY-SA 4.0
null
2023-04-21T20:08:07.340
2023-04-21T20:08:07.340
null
null
225256
null
613733
1
null
null
1
34
I'm approaching for the first time GLS estimators. Suppose that $\operatorname{Var}(u|x)=\sigma^2 h(x)$, where $h(x)$ is some function of the explanatory variables that determines the heteroscedasticity. Since variances must be positive, $h(x)>0$ for all possible values of the independent variables. Let's say, we don't know $h(x)$. The book (Wooldridge, the introductory one) proposes the following way for modelling heteroscedasticity: $$\operatorname{Var}(u|x)=\sigma^2 {\rm e}^{\delta_0 +\delta_1 x_1 + \cdots + \delta_k x_k}\tag 1$$ where $\delta_j$ are unknown parameters. Thus, $h(x)={\rm e}^{\delta_0 +\delta_1 x_1 + ... + \delta_k x_k}$ Then, the book says, under assumption $[1]$ I can write $$u^2 =\sigma^2 ({\rm e}^{\delta_0 +\delta_1 x_1 + ... + \delta_k x_k}) \nu\tag 2$$ [](https://i.stack.imgur.com/bUis4.png) Where do equations $[2]$ and $[8.31]$ come from?
Feasible GLS estimator
CC BY-SA 4.0
null
2023-04-21T20:08:54.490
2023-04-21T21:39:02.407
2023-04-21T20:35:47.110
362671
379310
[ "econometrics", "heteroscedasticity", "weighted-regression", "generalized-least-squares" ]
613734
2
null
613725
0
null
I am not sure how to render this as a KDE task for practical purposes. Instead, I will propose an alternative regression type approach. In my example, I will use a bivariate normal point cloud to get at the rotational symmetry of your problem. $$ (x,y)\sim N(0,Diag(1)) $$ We then bin the data according to Euclidean coordinates and divide the x and y axis into bins with some equal spacing. For each bin, record the median $x,y,r,\theta$ values, and then the number of observations in each bin for a Poisson regression. In the case of the Multivariate normal I have presented above, we do not need to worry about $\theta$ because the polar representation of the the above is: $$ f(r,\theta)=\frac{1}{2\pi}r exp(-r^2/2) $$ This suggests in the log-linear regression, we need to enter $r^2$ as the relevant co-variate. Here is the implementation. ``` require(mvtnorm) require(dplyr) require(ggplot2) set.seed(1234) m=rep(0,2);s=diag(rep(0.1,2)) data=data.frame(rmvnorm(10000,mean = m,sigma = s)) colnames(data)<-c('x','y') data$r<-sqrt(data$x^2+data$y^2) data$theta<-atan(data$y/data$x) binLim<-round(max(abs(data[,c('x','y')]))) binsize=0.1 data$x_interval<-findInterval(data$x,seq(-binLim,binLim,binsize)) data$y_interval<-findInterval(data$y,seq(-binLim,binLim,binsize)) binnedData=data%>%group_by(x_interval,y_interval)%>%summarise( n=n(),x=median(x),y=median(y),r=median(r),theta=median(theta) ) binnedModel<-glm(n~I(r^2), data=binnedData, family=poisson) binnedData$fitted<-binnedModel$fitted.values plot(binnedData$fitted,binnedData$n) binnedData$density_est=binnedData$fitted/sum(binnedData$fitted) ``` For checking against the known density function in this case: take the median values we saved and compute the integral where $b$ is the binsize used for binning the data. They seem to be roughly correct. $$ \int_{x-b}^{x+b}\int_{y-b}^{y+b} N(0,\Sigma) $$ ``` binnedDensities<-apply(binnedData%>%data.frame,1,function(bin){ pmvnorm(lower = c(bin[['x']],bin[['y']])-binsize/2, upper = c(bin[['x']],bin[['y']])+binsize/2 ,mean = m, sigma = s) }) ggplot(binnedData,aes(x=x,y=y))+geom_point(aes(colour=binnedDensities),size=5)+ scale_color_viridis_b()+theme_bw()+ggtitle('Analytical Density') ggplot(binnedData,aes(x=x,y=y,colour=density_est))+geom_point(size=5)+ scale_color_viridis_b()+theme_bw()+ggtitle('Estimated Density') plot(binnedData$density_est,binnedDensities) ``` [](https://i.stack.imgur.com/vGw06.png)
null
CC BY-SA 4.0
null
2023-04-21T20:45:48.670
2023-04-21T22:14:56.420
2023-04-21T22:14:56.420
311086
311086
null
613736
2
null
613670
0
null
You can see the probability generating function, when applied to the binomial distribution, as a way to do the checks and balances in the different ways that the heads and tails can be distributed (heads and tails, as in flipping a coin $n$ times, where the probability of heads is $p$ and the probability of tails is $q$). The terms in the power represent the probabilities of getting heads or tails. And by writing out the product as a polynomial of $x$ we get the individual cases of the number of heads and tails. Let's use $n=3$ as an example. $$\begin{array}{rcl} (q+px)^3& =& (q+px)(q+px)(q+px) \\ &=& \overbrace{qqq + qqpx + qpxq + pxqq + qpxpx + pxqpx + pxpxq + pxpxpx}^{\substack{\text{8 terms for the 8 possible outcomes of heads and tails}\\\text{the number of $x$'s relates to the number of heads}}} \\ &=& \overbrace{(1 q^3) x^0 + (3 q^2p) x^1 + (3 q^1p^2) x^2 + (1 p^3) x^3 }^{\substack{\text{expression regrouped into groups with equal powers of $x$'s}}}\\ &=& \sum_{k=0}^3 a_k x^k = \sum_{k=0}^3 P(K=k) x^k \end{array}$$ It is these coefficients $a_k$ in the polynomial expansion that are of interest, as they relate to the probability of $k$ heads. Computing the polynomial is the same as writing down all the possible combinations of heads and tails. You can see the power terms $x^k$ as keeping track of how many times you had heads. --- Now if you would do a subtraction of two binomial terms, for instance with $m=n=3$, then you could use a product like $$\begin{array}{rcl}(q_1+p_1x)^3(q_2+p_2x^{-1})^3& =& (q_1+p_1x)(q_1+p_1x)(q_1+p_1x)(q_2+p_2x^{-1})(q_2+p_2x^{-1})(q_2+p_2x^{-1}) \\ &=&q_1q_1q_1q_2q_2q_2 + q_1q_1q_1q_2q_2p_2/x + \text{ 62 other terms} \\&=& q_1^3p_2^3 x^{-3} + (3p_1q_1^2p_2^3+3 q_1^3p_2^2q_2^1) x^{-2} + (9p_1^1q_1^2p_2^2q_2^1+3 p_1^2q_1^1p_2^3 + 3 q_1^3p_2^1q_2^2) x^{-1} + (9 p_1^2q_1^1p_2^2q_2^1 + 9 p_1^1q_1^2p_2^1q_2^2 + p_1^3p_2^3 + q_1^3q_2^3) x^0 + (9p_1^2q_1^1p_2^1q_2^2+3 p_1^1q_1^2q_2^3 + 3 p_1^3p_2^2q_2^1) x^{1} + (3p_1^2q_1q_2^3+3 p_1^3p_2^1q_2^2) x^{2} + p_1^3q_2^3 x^{3} \\ &=& \sum_{k=-3}^3 a_k x^k = \sum_{k=-3}^3 P(K_1-K_2=k) x^k \end{array}$$ Now the power in the polynomial (or Laurent series, since there are negative powers) represents the writing out of the product of the 6 terms as a sum of combinations of the 64 terms like $q_1q_1q_1q_2q_2p_2/x$, and keeps track of how often in those terms we had a factor $x^{1}$ (relating to adding a heads) or a factor $x^{-1}$ (relating to subtracting a heads). --- The coefficients of the series $a_k$ relate to the probabilities and are the values that you want to know. The formula with the derivatives is just a way to 'read out' those coefficients. The formula works when you do not have negative powers in the series. But you can solve this, as Whuber already commented, by multiplying the series with $x^m$. --- The use of the probability generating function for this difference of two binomial distributions works, but it is not exciting. It becomes more interesting when we can perform some manipulations that simplify the function or expressions. An example is in this dice problem that can be solved by using a generating function: [https://stats.stackexchange.com/a/492027/](https://stats.stackexchange.com/a/492027/) where two tricks are applied (one is to rewrite a sum as a simple fraction $P(x) = \sum_{k=0}^\infty (5x+5x^2+5x^3+5x^4)^k = \frac{1-x}{1-6x+5x^5}$, a second is finding a recursive relation for the coefficients of the power series representation)
null
CC BY-SA 4.0
null
2023-04-21T21:02:55.340
2023-04-21T21:46:36.263
2023-04-21T21:46:36.263
164061
164061
null
613738
1
null
null
1
20
I was reading example 3.2 on stratified sampling in Sampling: Design and Analysis by Lohr, where a stratified random sampling design is proposed and compared with a simple random sample. The goal is to estimate the total of the variable of interest. The simple random sample was about 10% of the population ($n=300$), and in the stratified random sample each stratum was sampled at the same rate. After calculating the estimated variance for each stratum and the estimated variance of the simple random sample, near the end of the example the author claims:

> the relative gain from stratification can be estimated by the ratio $$ \frac{\text{estimated variance from stratified sample, with } n =300}{\text{estimated variance from simple random sample, with } n=300 } = 0.75$$ If these figures were the population variances, we would expect that we would need only $300 \cdot 0.75 = 225$ observations with a stratified sample to obtain the same precision from an SRS of $300$ observations.

Initially I didn't understand why this simple calculation gave the correct result, but then I came up with a (hopefully sound) mathematical explanation. The estimated totals in the population for a variable $Y$ in each case are defined as $$\hat{t}_{SRS} = \frac{N}{n}\sum_{i=1}^n y_i$$ $$\hat{t}_{STR} = \sum_{h=1}^H \hat{t}_h = \sum_{h=1}^H \left(\frac{N_h}{n_h} \sum_{i=1}^{n_h} y_{ih}\right)$$ so the variances of the estimators of the totals, if we sample a proportion $\alpha$ in each stratum and in the simple random sample, are $$V(\hat{t}_{SRS}) = (1-\frac{n}{N}) N^2 \frac{S^2}{n} = \frac{1-\alpha}{\alpha} N S^2$$ $$V(\hat{t}_{STR}) = \sum _{h=1}^H (1-\frac{n_h}{N_h}) N_h^2 \frac{S_h^2}{n_h} = \frac{1-\alpha}{\alpha} \sum _{h=1}^H N_h S_h^2$$ So the ratio calculated by the author is equal to $$k = \frac{\sum _{h=1}^H N_h S_h^2}{N S^2} $$ under the hypothesis that the sample variances are instead population variances. So, given that the assumptions for the use of the central limit theorem are satisfied, we can say that to obtain the same precision (i.e. the same confidence interval) from a stratified sample as from a simple random sample with sampling proportion $\alpha$, we should find $\beta$ such that $$ \frac{1-\beta}{\beta} \sum _{h=1}^H N_h S_h^2 = \frac{1-\alpha}{\alpha} N S^2 \iff \beta = \frac{\alpha k}{1+\alpha(k -1)}$$ which also means that $$ n_{STR} = \beta N = \frac{k}{1+\alpha (k -1)} n_{SRS}$$ So when $\alpha \to 0$ or $ k \to 1 $ we can say that $n_{STR} \approx k \cdot n_{SRS}$. So what the author did was calculate this approximation; indeed, if I use the exact formula the result becomes $n_{STR} = \frac{300 \cdot 0.75}{1+0.1(0.75-1)} \approx 231$ instead of $n_{STR}=225$. Since there was no further explanation in the book about this, I'd like to ask: is this reasoning sound or is there something wrong? Thank you in advance
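For reference, a quick numeric check in R of the approximation versus the exact formula derived above (values from the example: $k=0.75$, $\alpha=0.1$, $n_{SRS}=300$):

```
k <- 0.75; alpha <- 0.1; n_srs <- 300
n_approx <- k * n_srs                          # the book's approximation: 225
n_exact  <- k / (1 + alpha * (k - 1)) * n_srs  # exact formula derived above: ~231
c(approx = n_approx, exact = n_exact)
```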
What should be the sample size in a stratified random sample to get the same precision of a simple random sample?
CC BY-SA 4.0
null
2023-04-21T21:15:14.717
2023-04-21T21:41:30.903
2023-04-21T21:41:30.903
283515
283515
[ "variance", "sampling", "sample-size", "central-limit-theorem" ]
613739
2
null
613390
0
null
If your goal is to minimize the number of genes to test so that you aren't hampered by too large a correction for multiple comparisons, then you have to be careful not to use knowledge about the association between mutations and case/control status. Otherwise, you've already started using the outcomes to choose the model, invalidating the assumptions underlying standard hypothesis tests. That, however, is what you seem to intend when you say in comments: "if I have 1 mutation in controls for my given sample size, I would need 6 in cases. If I had 2 mutations in controls, I would need 11 mutations in patients, etc.." That would be identifying genes to test based on the mutation distribution between cases and controls. Your simulations also seem to be intended to support that type of decision making. I don't think it would be helpful to figure out the problems with your code in that case (and coding-specific questions are off-topic on this site, anyway). You certainly can, however, use information about the predictors and their associations with each other in the entire data set to restrict tests to genes that can provide adequate power to find an association with outcome of at least a desired level. In your situation you have a fixed number of samples broken down into cases and controls, so it makes sense to consider the minimum magnitude of a log-odds ratio $\pm \beta_j^{\alpha}$ for predictor $j$ that can be detected at a given level of significance $\alpha$ and power $\gamma$. [This answer](https://stats.stackexchange.com/a/396681/28500) provides an approximate formula: $$ \pm \beta_j^{\alpha} = \frac{z_{1-\alpha/2}+z_\gamma}{\sigma_{x_j}\sqrt{np(1-p)(1-\rho_j^2)}}, $$ where the z's are the corresponding quantiles of the standard normal distribution, $\sigma_{x_j}$ is the standard deviation of predictor $j$ values, $n$ is the total sample size, $p$ is the proportion of cases in the sample, and $\rho_j$ is the multiple correlation of predictor $j$ with the other predictors. With a binary yes/no predictor for mutation status, $\sigma_{x_j} = \sqrt{f_j(1-f_j)}$, where $f_j$ is the fraction of total samples with the mutation. The following R function implements that: ``` beta_minMag <- function(alpha,power,fracMarker,N,fracCase,markerCorr) { (qnorm(1-alpha/2)+qnorm(power))/ (sqrt(fracMarker*(1-fracMarker))* sqrt(N*fracCase*(1-fracCase)*(1-markerCorr^2))) } ``` with `fracMarker` the fraction of samples with the mutation, `fracCase` the fraction of samples that are cases, and `markerCorr` the multiple correlation of mutation status with all the other predictors in the model. Once you've chosen the significance level and power, with a fixed sample size and known case fraction, all that's left to specify is `fracMarker` and `markerCorr`, which you can get gene by gene (again, without looking at outcomes). A few examples follow, based on 3000 cases, 5000 controls, `alpha` of 0.01, and 90% desired power. 
For the minimum detectable log-odds at 1% mutation prevalence and no correlation with other predictors: ``` beta_minMag(alpha=0.01, power=0.9, fracMarker=0.01, N=8000, fracCase=3000/8000, markerCorr=0) # [1] 0.8953118 ``` As above, with 0.8 multiple correlation with other predictors: ``` beta_minMag(alpha=0.05, power=0.9, fracMarker=0.01, N=8000, fracCase=3000/8000, markerCorr=0.8) # [1] 1.253945 ``` Same high correlation, but 10% mutation prevalence in the entire sample: ``` beta_minMag(alpha=0.05, power=0.9, fracMarker=0.1, N=8000, fracCase=3000/8000, markerCorr=0.8) # [1] 0.4158866 ``` No correlation, still 10% mutation prevalence: ``` beta_minMag(alpha=0.05, power=0.9, fracMarker=0.1, N=8000, fracCase=3000/8000, markerCorr=0) # [1] 0.249532 ``` No correlation, but 0.1% mutation prevalence: ``` beta_minMag(alpha=0.05, power=0.9, fracMarker=0.001, N=8000, fracCase=3000/8000, markerCorr=0) # [1] 2.368453 ``` It's certainly a good idea to consider simulation to check this approximation. But recognize that you can't decide what genes to exclude from testing based on outcome-associated considerations like "if I have 1 mutation in controls for my given sample size, I would need 6 in cases. If I had 2 mutations in controls, I would need 11 mutations in patients, etc.." In response to comments For this approach to work, you must use only the predictor values to decide on genes to omit from hypothesis testing. If you use the above approximate formula, all you need is the prevalence of mutations in the gene and the multiple correlation of the gene's mutation status with the other predictors in the model. There's no need to evaluate the "statistical significance" of those relationships among the predictors to get the estimate of the "minimum detectable" log-odds value for that gene's association with the disease-status outcome; this is just doing the best you can with the data that you have. What you don't want to do is to look at the associations of the predictors with the outcome when you choose which genes to include or omit in your hypothesis testing. The same principles apply if you simulate data instead of using the formula: choices of genes to omit must be based only on the predictor values, not on the associations with outcome. In terms of how to use things like these "minimum detectable" log-odds estimates, you make a tradeoff between running too many tests and potentially missing some very high associations with outcome. For example, you can choose a combination of significance level and power and simply not evaluate genes that can only then be found "significant" if they have a very large log-odds value. You might then decide to exclude genes from further analysis that have a "minimum detectable" log-odds value of some particular value, say 1, or greater. The risk is that you might thus exclude a low-prevalence gene that has a very high association with outcome. If you suspect that there are such genes, you could instead test all genes (ignoring these power issues) and use false-discovery rates for multiple-comparison correction. That might find low-prevalence high-association mutations, but at the risk of missing some higher-prevalence mutations with useful outcome associations. Making those tradeoffs requires applying your knowledge of the subject matter. This situation is very closely related to what's frequently in microarray or RNAseq gene-expression studies, when genes with low variance in expression among samples are removed from the study to start. 
In your situation, genes having low mutation prevalence have low variance among cases, and genes whose mutations have high correlations with other predictors can only be detected as outcome-related if they have very high associations with disease status.
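As a rough sketch of the kind of simulation check mentioned above (the intercept calibration and all values here are my own simplifying assumptions, for the case of no correlation with other predictors):

```
## Simulation check of the minimum-detectable log-odds approximation for one
## gene, ignoring correlation with other predictors (markerCorr = 0 case).
set.seed(1)
sim_power <- function(beta, fracMarker, N = 8000, fracCase = 3000/8000,
                      alpha = 0.01, nsim = 200) {
  hits <- replicate(nsim, {
    x <- rbinom(N, 1, fracMarker)              # mutation status
    ## rough intercept so the marginal case fraction is close to fracCase
    b0 <- qlogis(fracCase) - beta * fracMarker
    y <- rbinom(N, 1, plogis(b0 + beta * x))   # case/control outcome
    fit <- glm(y ~ x, family = binomial)
    summary(fit)$coefficients["x", "Pr(>|z|)"] < alpha
  })
  mean(hits)
}
## beta around 0.9 at 1% prevalence should give power in the ballpark of 90%
sim_power(beta = 0.9, fracMarker = 0.01)
```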
null
CC BY-SA 4.0
null
2023-04-21T21:18:43.380
2023-04-23T16:38:42.237
2023-04-23T16:38:42.237
28500
28500
null
613740
2
null
613733
2
null
The assumptions of the generalized linear regression model are $\mathbb E[\mathbf u\mid \mathbf X]=\mathbf 0$ and $\mathbb E[\mathbf u\mathbf u^\top\mid\mathbf X]=\sigma^2\mathbf\Omega.$ If $\bf\Omega$ is unknown owing to its dependence on an unknown parameter vector $\boldsymbol\gamma, $ then we seek a consistent estimator of $\boldsymbol\gamma$ in order eventually to get $\boldsymbol\Omega\left(\hat{\boldsymbol\gamma}\right). $ This is the essence of feasible GLS. Consider $$y_t=\mathbf X_t^\top\boldsymbol\beta+u_t, ~~~~\mathbb E[u_t^2]=\exp(\mathbf Z_t^\top\boldsymbol\gamma),$$ where $\mathbf Z_t$ can be a function of $\mathbf X_t$ but, more importantly, is based on all the exogenous variables of the information set on which the conditioning is being done. For a consistent estimator of $\boldsymbol\gamma, $ we calculate the OLS residuals $\hat{u}_t$ from the OLS estimate $\hat{\boldsymbol\beta}$ and then run the auxiliary linear regression $$\ln{\hat{u}^2_t}=\mathbf Z_t^\top\boldsymbol\gamma+v_t.$$ Wooldridge is following the same line of attack: take the expectation of both sides of $[2]$ conditional on $\mathbf x$ and you reach $(8.30), $ since $\mathbb E[\nu\mid\mathbf x]=1.$ And if $\nu$ is independent of $\mathbf x, $ the regression of $\ln u^2$ on $x_i$ follows from $[2].$

---

## Reference:

$\rm [I]$ Econometric Theory and Methods, Russell Davidson, James G. MacKinnon, Oxford University Press, $2021, $ sec. $7.4.$
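A bare-bones R sketch of the two-step FGLS recipe described above, on simulated data (the skedastic function, sample size and parameter values are purely illustrative):

```
set.seed(1)
n <- 500
x <- rnorm(n); z <- rnorm(n)
u <- rnorm(n, sd = exp(0.5 * (0.3 + 0.8 * z)))   # skedastic function exp(z'gamma)
y <- 1 + 2 * x + u
ols  <- lm(y ~ x)                                 # step 1: OLS residuals
aux  <- lm(log(resid(ols)^2) ~ z)                 # step 2: auxiliary regression
w    <- 1 / exp(fitted(aux))                      # estimated inverse variances
fgls <- lm(y ~ x, weights = w)                    # step 3: weighted (feasible) GLS
coef(summary(fgls))
```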
null
CC BY-SA 4.0
null
2023-04-21T21:39:02.407
2023-04-21T21:39:02.407
null
null
362671
null
613741
1
null
null
1
21
[](https://i.stack.imgur.com/DpiUp.png) I am trying to sample from the bivariate normal distribution given above. If the proposal distribution is $q(\theta^\prime \mid \theta)$ with $\theta^\prime = \theta + U$, where $U$ is uniformly distributed over $[a,b]$, what are the conditions on $a$ and $b$?
Metropolis Hastings Proposal Distribution
CC BY-SA 4.0
null
2023-04-21T21:45:52.163
2023-04-21T21:45:52.163
null
null
386257
[ "markov-chain-montecarlo", "metropolis-hastings" ]
613742
1
null
null
0
21
I am training an AI to learn to play a game through reinforcement learning. The procedure has produced a set of $N$ agents: $A_1, A_2, \ldots A_N$. I wish to estimate a skill rating $\beta_i$ for each agent $A_i$, according to the [Bradley-Terry model](https://en.wikipedia.org/wiki/Bradley%E2%80%93Terry_model). In the Bradley-Terry model, the probability of agent $A_i$ defeating $A_j$ in a match is given by: $$ \frac{e^{\beta_i}}{e^{\beta_i} + e^{\beta_j}} $$ If my compute budget were infinite, I could run a full round-robin tournament, playing $G$ games for each of the $\binom{N}{2}$ possible match-ups, and then estimating each $\beta_i$ by maximizing log likelihood. However, my compute budget is finite, so I would like to approximate this ideal approach with a smaller number of match-ups. My question is, what is a principled way to decide which match-ups to run? In other words, given a sequence of match-ups I have run thus far, together with their results, how can I decide on a next match-up to run, with the aim of accelerating convergence to the ratings I would have calculated in the ideal approach? Some relevant details: - When I run a match-up of $G$ games between two fixed agents, the computational cost is actually sublinear in $G$. This is due to some computational details (the ability to batch neural network inference on a GPU). In other words, we can assume that $G$ is a reasonably large constant, meaning that if we schedule one game between two agents, we can get many more "for free". - The iterative nature of the reinforcement learning procedure affords me certain expectations (but not guarantees) about the relative skill levels of the players. I can encode these expectations in an $N\times N$ bool matrix, $M$, where $M[i, j]$ is true if I expect that $\beta_i \leq \beta_j$. The transitive nature of skill in the Bradley-Terry model implies that this matrix can represent the edges of a DAG. I mention this in case this matrix can help with match-up selection.
Efficient sampling of tournament match-ups for Bradley-Terry modeling
CC BY-SA 4.0
null
2023-04-21T22:41:21.520
2023-04-21T22:41:21.520
null
null
2221
[ "bradley-terry-model" ]
613743
1
null
null
1
14
I have a model with the following formula:
```
percent_discoloration ~ cv * year + zone_line
```
where `percent_discoloration` is my response (a plant disease), `cv` is the cultivar (18 different levels), `year` (two years) and `zone_line` is a binary variable. I'm trying to compare the different levels of cultivar (`cv`) to check which ones have lower or higher predicted mean disease. I have all the `cv` levels replicated in each year, as you can see in the contingency table:
```
                  year
cv                 2012 2013
  CPLRC5007          40   30
  CPLRC5663          40   40
  DK4866             40   40
  DT97-4290          40   40
  Exp1_Stine39LA02   40   40
  Exp2_XC3810        40   40
  Jack               40   40
  JTN-4307           40   29
  JTN-5208           40   40
  JTN-5308           40   39
  K07-1544           40   40
  LS980358           40   39
  MorsoyRT5388N      40   40
  NKBrandS39-A3      40   39
  Osage              40   40
  Pharaoh            40   40
  R01581F            40   40
  Spencer            40   40
```
As `cv` is interacting with `year`, I created adjusted means split by `year` as follows:
```
emm <- emmeans(mod, specs = ~cv | year, type = 'response')
```
In this way, I have two tables: one for `year=2012` and another for `year=2013`. Here are the tables (most rows omitted):
```
year = 2012:
 cv        response     SE  df asymp.LCL asymp.UCL
 ...
 Spencer     0.2567 0.0253 Inf    0.2103    0.3093

year = 2013:
 cv        response     SE  df asymp.LCL asymp.UCL
 ...
 Spencer     0.0736 0.0116 Inf    0.0538    0.0999
```
So within each year I can compare which cultivars were better or worse. The question is: can I compare a cultivar between years? For example, is it valid to compare `cv=Spencer` between 2012 and 2013 and conclude that this cultivar had a higher predicted mean disease in 2012 than in 2013? I saw one answer ([here](https://stats.stackexchange.com/questions/425471/analysis-of-interaction-with-multiple-levels-in-each-factor-emmeans-in-mixed-mo)) and was confused about whether it is the same case as mine. Following the answer above, we have:
```
emm_int <- emmeans(mod, ~cv * year, type = 'response')
```
This produces the following table (most rows omitted):
```
 cv      year response     SE  df asymp.LCL asymp.UCL
 ...
 Spencer 2012   0.2567 0.0253 Inf    0.2103    0.3093
 ...
 Spencer 2013   0.0736 0.0116 Inf    0.0538    0.0999
```
In this way, I have the two estimates for `cv=Spencer`. Is that a valid approach to compare between years?
Compare the factor A between levels of factor B when an interaction exists, using emmeans
CC BY-SA 4.0
null
2023-04-21T22:48:42.500
2023-04-21T22:48:42.500
null
null
252638
[ "interaction", "multiple-comparisons", "lsmeans" ]
613746
1
null
null
1
8
I am working on a data analysis. I divided my samples into five different groups (groups A–E) using some variables. Because group A can be further reclassified into group A+ and group A-, I did that, so there were 7 groups in total. My research goal is to compare group A+ and group A-. Should I just use a Wilcoxon test to compare group A+ and group A-, or the Kruskal–Wallis method to compare group A+, group A-, and groups C–E, with post hoc analysis using Dunn's test? When I used the Wilcoxon test, some variables showed a difference between group A- and group A+, but when I used the Kruskal–Wallis method with post hoc analysis, there were no differences between the two groups. Could you please clarify and help me figure this out? Thank you so much, I really appreciate it.
Wilcoxon test for two groups or post hoc analysis of K-W test when I would like to compare two groups and observe the difference?
CC BY-SA 4.0
null
2023-04-22T01:32:43.117
2023-04-22T01:32:43.117
null
null
386265
[ "hypothesis-testing", "wilcoxon-mann-whitney-test", "post-hoc", "kruskal-wallis-test" ]
613748
1
null
null
0
9
I was wondering if there is a way to use a Bayesian framework where you favor more recent data over older data. Say you are doing work with 3-point shooting percentages, where you might use historical data as a prior at the start of the new season but want to add more "weight" to the newer data being introduced as the new season progresses.
Question about bayes framework relative to time
CC BY-SA 4.0
null
2023-04-22T03:00:03.193
2023-04-22T03:00:03.193
null
null
386266
[ "bayesian", "mathematical-statistics" ]
613749
1
null
null
0
15
Consider an experiment where randomly selected experts from a set are asked to attribute Likert-scale ratings to randomly selected objects from two groups A and B. Now, suppose I want to test whether there is a statistically significant difference between the distributions of ratings for objects in groups A and B. Rating distributions are strongly non-normal. My first thought would have been to use Mann-Whitney U but this experimental design violates multiple independence assumptions. Within a single group, multiple ratings may come from the same expert OR refer to the same object. Between groups, some experts may have rated objects both in group A and B. What is the statistically correct approach to perform this test? I can think of several ways to proceed: - Use a different approach which does not require independence (if so, which one? Permutation test?) - Perform a Mann-Whitney test on a subset of ratings (e.g., ensure that the set of experts for group A and B is disjoint, and each expert only rated one object per group) - Give up - Something else (mixed effect model, etc.)?
Comparing ratings generated by overlapping groups of experts
CC BY-SA 4.0
null
2023-04-22T03:15:17.747
2023-04-22T15:27:00.503
2023-04-22T15:27:00.503
369668
369668
[ "independence", "wilcoxon-mann-whitney-test", "kolmogorov-smirnov-test" ]
613750
1
null
null
0
80
I was reading this article on [Logistic Regression for Rare Events](https://jstor.org/stable/2336755). Over here, a modification ("Firth's Correction") to the classical likelihood function has been proposed in which a penalty term has been added based on the square root of the Fisher Information. As we know, the square root of the Fisher Information is closely related to the Jeffreys's Prior: $$\mathcal{L}(\theta) + \frac{1}{2} \log\left|\mathbf{I}(\beta)\right|$$ I am trying to understand the logic as to why a penalty term was chosen that was based on the Jeffreys's Prior and why exactly it is useful for correcting biases associated with rare events. For instance, when it comes to Penalized Regression, I have read about penalty terms based on the L1 Norm and L2 Norms (e.g. LASSO and Ridge). Visually, I can understand why such penalty terms might be useful. The following types of illustrations demonstrate how such penalty terms serve to "push" regression coefficient estimates towards 0 and thereby might be able to mitigate problems associated with overfitting: [](https://i.stack.imgur.com/nlssM.png) However, in the case of Firth's Correction, I am not sure as to how the square root of the Jeffreys's Prior is useful in correcting biases associated with rare events - mathematically speaking, how exactly is a penalty term based on the square root of the Jeffreys's Prior able to reduce biases associated with rare events? Currently, this choice of penalty seems somewhat arbitrary to me and I can't understand how it serves to reduce bias.
Why Is Jeffreys's Prior Used to Correct Biases?
CC BY-SA 4.0
null
2023-04-22T04:06:13.760
2023-04-24T06:52:51.267
2023-04-22T06:57:53.170
7224
77179
[ "regression", "bayesian", "regularization", "jeffreys-prior" ]
613751
2
null
613724
1
null
Because everything is jointly Gaussian: - It is no loss of generality to take the marginal variances equal to 1 - By the conditional independence assumption we can write $X=\gamma Y+ A$ and $Z=\delta Y+B$ where $A$ and $B$ are Gaussian and independent of each other and $Y$. - the variance of $A$ is $1-\gamma^2$ and the variance of $B$ is $1-\delta^2$ - The condition on $\mathrm{var}[Y|Z]$ constrains $\delta$ to be near 1, but does not constrain $\gamma$. This is like the classical errors-in-variables regression problem. The regression coefficient of $X$ on $Z$ is $$\beta=\frac{\mathrm{cov}[X,Z]}{\mathrm{var}[Z]}=\frac{\gamma\delta}{1}=\gamma\delta$$ Now $$\mathrm{var}[X|Z]= \mathrm{var}[X]-\beta^2\mathrm{var}[Z]=1-\gamma^2\delta^2$$ and $$\mathrm{var}[X|Y]=1-\gamma^2$$ So, there is a lower bound depending on how far $\delta^2$ is from 1, which is some function of $\epsilon$ that I haven't bothered to work out. This bound is trivial if $\gamma=\pm 1$ and is also useless if $\gamma=0$ so that $X\perp (Y,Z)$, but it could be useful for intermediate values. None of this would hold without joint normality, since the conditional expectations don't then have any useful relationship to linear projections.
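A quick numerical check of these identities on simulated data (with illustrative values $\gamma=0.6$, $\delta=0.9$):

```
set.seed(1)
g <- 0.6; d <- 0.9; n <- 1e5
Y <- rnorm(n)
X <- g * Y + rnorm(n, sd = sqrt(1 - g^2))
Z <- d * Y + rnorm(n, sd = sqrt(1 - d^2))
c(empirical = var(resid(lm(X ~ Z))), theory = 1 - g^2 * d^2)   # var[X|Z]
c(empirical = var(resid(lm(X ~ Y))), theory = 1 - g^2)         # var[X|Y]
```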
null
CC BY-SA 4.0
null
2023-04-22T04:08:57.283
2023-04-22T04:08:57.283
null
null
249135
null
613752
2
null
506298
1
null
A very hand-wavy, but potentially still powerful, way of visualizing this property is to note that: $$ \mu_x(t)=E[x(t) ]=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}x(t) dt=\widehat{\mu_x(t) }$$ for ergodic processes. And when the autocorrelation is concerned: $$R_{xx}(\tau) =E[x(t) x(t+\tau)]\implies \lim_{\tau\to\infty}E[x(t) x(t+\tau)]=E[x(t)]E[x(t+\tau)]=\left(\widehat{\mu_x(t) }\right)^2$$ As $\tau$ goes to infinity, $x(t)$ and $x(t+\tau)$ tend to become independent (again hand-wavingly), provided we are not talking about periodic processes. And that product simply becomes the square of the mean.
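A quick numerical illustration of this hand-waving with a stationary, ergodic AR(1) process (the parameter values are arbitrary):

```
set.seed(1)
x <- as.numeric(5 + arima.sim(list(ar = 0.7), n = 1e5))  # mean-5 AR(1) process
mean(x)                               # time average, close to the ensemble mean 5
tau <- 50
mean(head(x, -tau) * tail(x, -tau))   # E[x(t) x(t+tau)] for large tau ...
mean(x)^2                             # ... approaches the squared mean
```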
null
CC BY-SA 4.0
null
2023-04-22T04:32:18.730
2023-04-22T04:41:51.063
2023-04-22T04:41:51.063
362671
386271
null
613753
1
null
null
1
23
I have to perform a standard multiple linear regression (MLR), but my data set is very highly correlated. What are the best methods to address this issue? In my final regression equation I don't want more than 3 or 4 independent variables. I tried adding a reciprocal-transformed column, I tried range scaling, and I tried a log transformation. In all cases my regression fit does not improve beyond 0.3. Any other good suggestions?
Performing MLR when my data set is very highly correlated
CC BY-SA 4.0
null
2023-04-22T04:49:32.903
2023-04-22T04:49:32.903
null
null
325928
[ "multiple-regression" ]
613754
2
null
613544
0
null
In your model, you've specified that the effect of time on the outcome is a linear function of time. It isn't the case that the creators of emmeans have constrained the p-value to be the same; the results follow from the fact that you have specified a linear effect in time. I will outline this below. For illustrative purposes, it is sufficient to treat the estimated coefficients as having an approximately normal distribution under the null, since a $t_{30}$ is close to normal. $$ \beta\sim N(\mu,\sigma^2)\rightarrow a\beta\sim N(a\mu,a^2\sigma^2) $$ In your contrasts, you specify contrasts of times, which have the following distribution: $$ \beta_T \cdot 12 -\beta_T \cdot 1 = 11\beta_T\sim N(11\mu,11^2\sigma^2) $$ This then leads to another normal distribution, which is tested against a $N(0,1)$ null: $$ \Delta_{12,1}\sim N(11\mu,11^2\sigma^2)\rightarrow \frac{11\mu}{\sqrt{11^2\sigma^2}}\ \sim N(0,1) $$ In general, you are going to have: $$ \Delta_{T_F,T_O}\sim N(\Delta_T\mu,\Delta_T^2\sigma^2)\rightarrow \frac{\Delta_T\mu}{\Delta_T\sigma}\sim N(0,1) $$ From this it is clear that, first, the differences only depend on the gap between the time points and, second, the test statistic will always be $\mu/\sigma$, which is what was used to compute the p-value in the original regression. The standard errors will change by a multiple of the gap in time. This can all be observed in the $T_6-T_0$ and $T_{12}-T_6$ results. If it is the case that a linear form is better for the data, then so be it. You might not have enough data to estimate the differences between the time points themselves if you treated time as a categorical predictor. I would not focus on trying to obtain significant results by further breaking down the data; if you have substantive reason to believe that the linear relation does not hold between the time points of your data, then it might be worth pursuing your idea of computing regressions between each pair of groups.
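To see the algebra above in action, here is a tiny self-contained R illustration (made-up data and a plain lm rather than your emmeans model): the test statistic for the 12-vs-1 contrast of a numeric time effect is identical to the one for the time coefficient itself.

```
set.seed(1)
d <- data.frame(time = rep(0:12, each = 5))
d$y <- 1 + 0.2 * d$time + rnorm(nrow(d))
fit <- lm(y ~ time, data = d)
b  <- unname(coef(fit)["time"])
se <- sqrt(vcov(fit)["time", "time"])
c(stat_beta = b / se, stat_12_vs_1 = (11 * b) / (11 * se))  # identical statistics
```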
null
CC BY-SA 4.0
null
2023-04-22T04:50:30.053
2023-04-22T04:50:30.053
null
null
311086
null
613756
1
null
null
2
37
I have some survey data on a population described by `age, gender, and weight`. It’s quite skewed so I want to reweight it to a known target population (a larger study) using post-stratification. Problem: some bins are absent in the smaller study (e.g. `age 20-30, weight 120-150lbs, gender M`). So when I try to map the population weights from the larger study using inverse proportional fitting, there's a chunk of the population that cannot be mapped. So you end up with an imperfect fit, and even the marginals don't look right (in that example, my overall size of subjects in the age 20-25 bin is off). In addition there are some bins that have very low N so it's probably not advisable to map the full population weight of that cell onto, say, 1 person! Is there some way around this? I could imagine some technique where you have a sort of adaptive re-weighting. If the N is too small to split into both `gender M, age 20-25` and, say, `weight 120-150` and `weight 150-180`, just group those two cells together into one. But of course there are multiple directions where you could merge cells! Are there some better techniques?
Post-stratification with missing subpopulations in the survey
CC BY-SA 4.0
null
2023-04-22T05:16:51.190
2023-04-23T06:20:37.687
2023-04-22T14:57:42.317
59667
59667
[ "sampling", "survey", "survey-weights", "research-design", "poststratification" ]
613757
1
null
null
1
8
I would like to measure the effectiveness of video advertisements that have a certain length and are divided into parts. For example, suppose there is a video advertisement that can be divided into parts A, B, and C (each part is 5 minutes long), and viewers can start watching at any time and stop at any time. (Many viewers start watching in the middle of one part and stop watching in the middle of another part.) In this case, a viewer who watched part B for 3 minutes and part C for 2 minutes is quantified as [A,B,C]=[0,3,2]. Is it possible to create a regression model with A, B, and C as explanatory variables and whether or not a purchase was made as the response variable? Is there any problem with the above model in the first place? If you have a use case for a similar model, I would appreciate it if you could let me know.
Measure the effectiveness of video advertising that can be divided into some parts
CC BY-SA 4.0
null
2023-04-22T06:17:53.487
2023-04-22T06:17:53.487
null
null
386076
[ "mathematical-statistics", "forecasting", "predictive-models", "modeling" ]
613758
1
null
null
1
36
I am experimenting with the `ca.jo` function of the `urca` package and I am getting confused about the `ecdet` argument. In the documentation it is mentioned that `ecdet` has three possible values: `none` for no intercept in cointegration, `const` for constant term in cointegration and `trend` for trend variable in cointegration. I have three questions. - Does "in cointegration" mean "in the error correction term"? In the urca document it seems as if the deterministic terms were outside the error correction term. - When I run the trace test with ecdet = "none", the printout says Test type: trace statistic , with linear trend. Why? - When I use ecdet = "trend", the printout says Test type: trace statistic, with linear trend in cointegration. Constant and linear trend, or linear trend through the origin?
R ecdet in ca.jo()
CC BY-SA 4.0
null
2023-04-22T06:29:17.930
2023-04-22T08:51:43.883
2023-04-22T08:51:43.883
53690
386274
[ "r", "cointegration" ]
613760
1
null
null
0
17
I am reading the book: Shahjahan Khan. Meta-Analysis Methods for Health and Experimental Studies. Springer 2020. [https://link.springer.com/book/10.1007/978-981-15-5032-4](https://link.springer.com/book/10.1007/978-981-15-5032-4) On page 23, there are formulas for conversion of effect sizes for binary outcomes. [](https://i.stack.imgur.com/d9EDj.png) I am quite lost at this point. Where do all these formulas come from? Can anyone explain please?
Strange formulas for binary outcomes in meta-analysis
CC BY-SA 4.0
null
2023-04-22T07:26:38.757
2023-04-22T07:56:54.523
2023-04-22T07:56:54.523
80704
80704
[ "meta-analysis", "binary-data", "odds-ratio", "risk-difference" ]
613761
1
null
null
0
40
I am reading the book: Terri D. Pigott. Advances in Meta-Analysis. Springer 2012. [https://link.springer.com/book/10.1007/978-1-4614-2278-5](https://link.springer.com/book/10.1007/978-1-4614-2278-5) On page 10, there is a formula for standardized mean difference: [](https://i.stack.imgur.com/Ightt.png) Is the only error there in the denominator of (2.10) and the line just after the formula? How can one explain the formula (2.12)?
Formula for standardized mean difference in meta-analysis
CC BY-SA 4.0
null
2023-04-22T07:46:26.733
2023-04-24T14:26:55.080
2023-04-22T07:56:14.103
80704
80704
[ "meta-analysis", "standardized-mean-difference" ]
613762
1
null
null
0
10
- Posterior-predictive checks: a predictive model is built from a set of empirical observations $x$, so that $x$ may be compared against a set of predictions $\tilde{x} \sim p(\tilde{x} | x)$. Then, for posterior-predictive accuracy, the set of predictions $\tilde{x} \sim p(\tilde{x} | x)$ is compared against a set of out-of-sample observations $\tilde{\tilde{x}}$. Is this correct?
- I read somewhere that the posterior-predictive accuracy can be summarised by the log of the posterior-predictive density value. The posterior-predictive density value is defined as $P_{p}(.) = \int d\theta\, p(. | \theta) p(\theta | x)$, where $p(. | \theta)$ is the point-predictive density value. So, the posterior-predictive accuracy is $l_{p} = \log\left[ P_{p}(.)\right]$. Does anyone know of an explicit form for the point-predictive density value $p(. | \theta)$? What role does the point-predictive density value $p(. | \theta)$ play in the posterior-predictive density value $P_{p}(.)$? What would $P_{p}(.)$ tell me? Are there good examples of $p(. | \theta)$? What is the significance of the logarithm operator?
posterior predictive accuracy: the role of log and the point - predictive density value
CC BY-SA 4.0
null
2023-04-22T07:49:05.417
2023-04-22T07:55:33.420
2023-04-22T07:55:33.420
109101
109101
[ "self-study", "bayesian", "model" ]
613763
1
null
null
1
15
I want to estimate a bivariate restricted probit model in R. In particular, I want to restrict the correlation between the models to 0. Using the GJRM package & function, I was already able to estimate an unrestricted model, however, I have no clue how to implement restrictions there. The biprobit command from the mets package seems to support restrictions but the documentation is so bad that I am unable to follow how to implement the model in the first place. In essence, I want a command that replicates Stata's biprobit command and permits application of the constraint command :)
Bivariate restricted probit in R
CC BY-SA 4.0
null
2023-04-22T08:00:37.763
2023-04-22T08:00:37.763
null
null
261584
[ "r", "probit", "bivariate" ]
613764
1
null
null
1
47
There are multiple sources that answer the above question. For example, [here](https://jdmeducational.com/what-does-standard-deviation-tell-us-4-things-to-know/) it says

> Standard deviation tells us about the variability of values in a data set. It is a measure of dispersion, showing how spread out the data points are around the mean.

So far so good, but I would like a clearer explanation with an example. For example, I have a list of 65 values, and when I calculate their average and standard deviation I get
```
Average: 8.046153846153846
Std: 3.684557169927145
```
and I get the plot [](https://i.stack.imgur.com/lkFfH.png) By looking at the plot we can see that the value is almost always 10, but sometimes it varies. In this situation, what does the number 3.68 (the standard deviation) tell us about the data? And is there a better statistic for understanding what is happening in this set?
What does standard deviation mean in simple terms?
CC BY-SA 4.0
null
2023-04-22T08:01:16.867
2023-04-23T09:53:55.063
null
null
73999
[ "standard-deviation", "descriptive-statistics" ]
613765
1
null
null
1
21
I'm interested in using Bayesian models to measure uncertainty in edge weights within social networks. Specifically, I have been trying to replicate how this package works but using brms: [https://jordanhart.co.uk/bisonR/articles/getting_started.html](https://jordanhart.co.uk/bisonR/articles/getting_started.html) When I use bisonR, the output generates edge weights that seem to reflect the data... here are the top five dyads in terms of how often they interacted with one another [](https://i.stack.imgur.com/qb4tm.png) And here are the edge weights that bisonR produces: [](https://i.stack.imgur.com/CRcDN.png) However, for other periods where I want to quantify networks, I have a lot of zero-inflation, which bisonR doesn't currently handle (at least when I try to use the zero-inflation option). So, I've been trying to understand how bisonR works, so I can do something using different distributions if needed. The bisonR code is:
```
fit_edge <- bison_model((duration | obs_duration) ~ dyad(actor, receiver), data = b1)
```
When I try to replicate this in brms, I code:
```
brm(duration | trials(trunc(obs_duration)) ~ 1 + (1|actor+receiver),
    data = b1, family = binomial())
```
Then I extract edge weights by adding together the draws for "actor", "receiver" and the "intercept", and apply "plogis" to their sum. For clarity, here is the for loop I use for this:
```
for (i in 1:nrow(network_list)){

  # Extract relevant coefficients from posterior samples
  actor <- network_list$actors[i]
  receiver <- network_list$receivers[i]

  r_actor <- paste0("r_actor[", actor, ",Intercept]")
  r_actor <- posterior1[c(r_actor)]

  r_receiver <- paste0("r_receiver[", receiver, ",Intercept]")
  r_receiver <- posterior1[c(r_receiver)]

  b_intercept <- posterior1$b_Intercept

  # Compute expected edge weight based on posterior samples
  expected_weight <- cbind(r_actor, r_receiver, b_intercept)
  colnames(expected_weight) <- c("actor", "receiver", "b_intercept")
  expected_weight$edge <- plogis(expected_weight$actor + expected_weight$receiver + expected_weight$b_intercept)
  expected_weight$edge <- round(expected_weight$edge, 5)

  edge_weight <- expected_weight$edge
  edge_median <- median(edge_weight)
  edge_lower <- as.numeric(quantile(edge_weight, probs = 0.05))
  edge_upper <- as.numeric(quantile(edge_weight, probs = 0.95))

  # Store expected edge weight in the network_list
  network_list$median[i] <- edge_median
  network_list$lower[i] <- edge_lower
  network_list$upper[i] <- edge_upper
}
```
However, when I do this, although the top dyad is the same, the rest are less sensible. [](https://i.stack.imgur.com/KEz3s.png) For example, ROC and CAS never interact with each other, but the model, or the way I'm treating the outputs, is generating an edge weight between them. Any clues on where I'm going wrong?
Measuring uncertainty in edge weights using Bayesian modeling and brms in R
CC BY-SA 4.0
null
2023-04-22T08:46:35.107
2023-04-22T08:46:35.107
null
null
386278
[ "r", "bayesian", "uncertainty", "social-network", "brms" ]
613767
2
null
604519
2
null
Null hypothesis significance testing (NHST) expresses an observed effect in terms of a probabilistic comparison with the hypothesis of an absence of the effect. An observation is statistically significant if there is a clear distinction in the support of the data for absence versus presence of an effect. An example of performing NHST with Bayesian techniques could be the following: Imagine there is a lady who claims that she can taste whether the tea or the milk was first added to the cup. We would like to test that claim by having her perform a blind taste test. We test the ability of the lady by presenting her with 100 cups of tea, where she has to guess whether it was tea or milk first, and we record the number of correct guesses. Let's for simplicity assume that the probability of a correct guess is symmetric (independent of whether the cup was tea first or milk first). Say that, based on prior information, we know that there is a probability of 0.99 of having a person that can't taste anything (the null hypothesis), and a probability of 0.01 that a person can taste something, in which case the ability of this person will follow a uniform distribution. $$\begin{array}{rl} H_0:&p=0.5\\ H_a:&p\sim U(0.5,1) \end{array}$$ We have the following likelihoods as a function of the number of correct guesses $k$: $$\begin{array}{rcl} \mathcal{L}(H_0,k) &=& {n \choose k} 0.5^n\\ \mathcal{L}(H_a,k) &=& \int_{0.5}^1 2 {n \choose k} p^{k}(1-p)^{n-k} dp \end{array}$$ and a likelihood ratio (where we can compute the denominator as an incomplete beta function) $$ \Lambda = \frac{\mathcal{L}(H_0,k)}{ \mathcal{L}(H_a,k)} = \frac{1}{2^{n+1}\int_{0.5}^1 p^{k}(1-p)^{n-k} dp}$$ and posterior odds as a function of $k$ $$ \frac{P(H_0;k)}{P(H_a;k)} = \frac{P(H_0)}{P(H_a)} \frac{\mathcal{L}(H_0,k)}{ \mathcal{L}(H_a,k)} $$ which will look like [](https://i.stack.imgur.com/GevbY.png) If, for example, the lady guesses two thirds (67) of the tea cups correctly, then this indicates an effect (she guessed more than half correctly), but it is not significant. The null hypothesis is just as likely as the alternative hypothesis (the odds ratio is around one or even slightly above it).

---

The classical null hypothesis significance testing does not use these priors; it instead uses probability statements based on a fiducial distribution or p-value. Those statements are independent of a prior distribution (but not of prior information, e.g. assumptions about the model describing the likelihood function), and they only regard the likelihood of the null hypothesis and aim to make this a small value in order to declare a test statistically significant. In a way, NHST has an implicit Bayesian reasoning and assumes that data that does not support the null hypothesis is instead supporting some alternative, but unknown, hypothesis. Neyman and Pearson make this more explicit by defining the fiducial distribution or p-values (which can be computed in different ways) based on a specific alternative hypothesis.

---

Possibly a simpler way to regard statistical significance, and how I interpret Fisher's approach to it, is that the fiducial distribution has a probability density concentrated in a small region (and in a Bayesian analysis one could use the posterior distribution in place of the fiducial distribution). An effect is statistically significant if the highest density region (or some other region) covering a certain large amount of probability, say 95%, does not include the parameter value relating to a zero/null effect. 
Expressions of statistical significance are useful when people make point estimates. A point estimate could for instance be the maximum of the posterior distribution. But such point estimate alone does not give an indication of the entire posterior and of the difference of the estimate with other hypotheses. If we give a point estimate along with a region, then we can have a better idea about the information that the data contains about a particular parameter/hypothesis.
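To make the tea-tasting illustration above concrete, here is a small R sketch that reproduces the posterior-odds computation from the stated prior odds of 0.99/0.01 and the two likelihoods (it is only a sketch of the calculation behind the figure, not the original plotting code):

```
n <- 100
k <- 0:n
L0 <- dbinom(k, n, 0.5)                       # likelihood under H0: p = 0.5
La <- sapply(k, function(kk)                  # likelihood under Ha: p ~ U(0.5, 1)
  integrate(function(p) 2 * dbinom(kk, n, p), 0.5, 1)$value)
posterior_odds <- (0.99 / 0.01) * L0 / La     # prior odds times likelihood ratio
posterior_odds[k == 67]                       # around 1, as described above
plot(k, log10(posterior_odds), type = "l",
     xlab = "number of correct guesses", ylab = "log10 posterior odds")
```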
null
CC BY-SA 4.0
null
2023-04-22T09:10:05.337
2023-04-22T09:36:06.357
2023-04-22T09:36:06.357
164061
164061
null
613768
1
613770
null
10
499
Suppose there's a casino that has a game where you sample from a Cauchy(100, 1) distribution (the mode is 100). If the sample is positive, then the casino pays you that amount; otherwise you have to pay the casino. My question is: what would be a value for which it's "rational" to play this game? Standard approaches, like paying any value less than the expected value, don't work, since the Cauchy distribution has no mean. I know my question is kind of vague, because I'm not sure how to define "rational". On one hand, there definitely should be a way to value this game; for example, if it's free, I'm sure most people would agree to play it. Would a fair value be the mode at 100?
How much would you wager for a Cauchy distributed return?
CC BY-SA 4.0
null
2023-04-22T10:05:15.327
2023-04-23T10:57:08.557
null
null
351275
[ "expected-value", "decision-theory", "cauchy-distribution" ]
613769
2
null
328908
1
null
The density itself is clearly not a Gaussian one. On the other hand, the marginal density $f(x) = \int f(x,y) dy = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^{2}} + \frac{x}{2\pi}\int y e^{-\frac{1}{2}(x^{2} + y^{2})} e^{x^{2} + y^{2}-2}dy = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^{2}}$ is Gaussian (similarly, $f(y)$ is also a Gaussian density), because the second integral is zero as the integral of an odd function. Edit: the function $f(x,y)$ in the previous post is not a probability density, since it can be negative (e.g. $f(1,-5) < 0$). However, if we correct it to $f(x,y) := \frac{1}{2\pi} e^{-\frac{1}{2}(x^2 + y^2)}(1+xye^{-\frac{1}{2}(x^2 + y^2)})$, it is a non-normal probability density with normal marginal distributions (by the above argument).
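A numerical sanity check of the corrected density (it integrates to 1, and its $x$-marginal matches the standard normal density):

```
f <- function(x, y) exp(-(x^2 + y^2)/2) / (2*pi) * (1 + x*y*exp(-(x^2 + y^2)/2))
## total mass (should be 1)
integrate(Vectorize(function(x) integrate(function(y) f(x, y), -Inf, Inf)$value),
          -Inf, Inf)$value
## marginal at x = 1 versus the standard normal density
c(integrate(function(y) f(1, y), -Inf, Inf)$value, dnorm(1))
```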
null
CC BY-SA 4.0
null
2023-04-22T10:06:34.267
2023-04-28T20:53:43.587
2023-04-28T20:53:43.587
158841
158841
null
613770
2
null
613768
13
null
The Cauchy distribution has an infinite range. It is difficult to imagine how you would handle payouts of sizes that make either the player or the casino go bankrupt. The game has some decent odds for small payouts. E.g. the odds of you paying 100 versus the casino paying 100 are 1:40001. So it seems a good bet. However, the game has a devilish drawback, which is the risk that you have to pay all the wealth that you possess and even more, getting into debts that you will never be able to pay off. To estimate whether this game makes sense to play (for free) one could compute the probability of a [gambler's ruin](https://en.m.wikipedia.org/wiki/Gambler%27s_ruin). One could describe the random walk of the gains and losses after $n$ steps as a sum of Cauchy distributed variables $Y(n) = \sum_{i}^n X_i$ with an absorbing boundary at bankruptcy of the player and of the casino. In the case of a random walk with steps of only +1 and -1, you can regard this as a martingale, and the ratio of the probabilities of bankruptcy for player and casino is equal to the ratio of the amounts of money that they have at the start. E.g. $$\frac{P(\text{player bankrupt})}{P(\text{casino bankrupt})} = \frac{\text{starting money casino}}{\text{starting money player}}$$ One can imagine a similar ratio for the case of steps according to a Cauchy distribution. But at each step the odds of winning are larger than the odds of losing, because the Cauchy distribution has a location of 100. This makes it much more likely that the casino goes bankrupt than that the player goes bankrupt, in comparison to the simple random walk. Most bets will increase the profit by about 100, and this continues until the casino goes bankrupt. The number of steps before the casino goes bankrupt will be roughly around $M_{casino}/100$, where $M_{casino}$ is the total money of the casino. The danger is when during these $M_{casino}/100$ steps the player goes bankrupt. We can approximate this by multiplying the probabilities that the player survives each step, while assuming that the steps are 100 each time. $$P(\text{player bankrupt}) \approx 1-\prod_{i=0}^{M_{casino}/100-1}\left[1-F(-M_{player}-100i;100,1)\right]$$ where $F$ is the CDF of the Cauchy distribution. If $M_{player} = 100000$ and $M_{casino} = 1000000$ then this probability is 0.76%. A simulation of 200 paths could look like: [](https://i.stack.imgur.com/m6pGF.png) Here 1 out of 200 players went bankrupt. Do you want to risk that?
```
sample = function(gambler = 100000, casino = 1000000) {
  X = c(0)
  while((X[1] > -gambler) * (X[1] < casino)) {
    X = c(X[1] + rcauchy(1, 100, 1), X)
  }
  X
}

set.seed(1)
plot(-100, -100, type = "l", ylim = c(-10^5, 10^6),
     xlab = "number of gambles", ylab = "accumulated profit/loss",
     xlim = c(0, 10000 * 1.1))

pb = 0
for (i in 1:200) {
  Y = sample()
  lines(rev(Y), col = rgb(0, 0, 0, 0.1))
  if (Y[1] < 0) {pb = pb + 1}
}
pb

### estimate computation
x = seq(0, 1000000/100 - 1)
1 - prod(1 - pcauchy(-100000 - x*100, 100, 1))
```
null
CC BY-SA 4.0
null
2023-04-22T10:24:31.800
2023-04-23T10:00:22.823
2023-04-23T10:00:22.823
53690
164061
null
613771
1
null
null
0
72
Imagine you are solving difficult Math problems and you expect to solve one every 1/2 hour. Compute the probability that you will have to wait between 2 to 4 hours before you solve four of them. I came across two solutions for this. 1st Solution: Let X be the number of 1/2-hour intervals we have to wait before we solve one theorem. Then X follows a Poisson distribution with parameter λ = 1/2, because we expect to solve one theorem every 1/2 hour. We want to find the probability that we will have to wait between 2 to 4 hours before we solve four theorems. Let Y be the total number of 1/2-hour intervals we have to wait before we solve four theorems. Then Y follows a Poisson distribution with parameter μ = 4/λ = 8. We want to find P(8 ≤ Y ≤ 16), which is the probability that we will have to wait between 2 to 4 hours (i.e., between 8 and 16 1/2-hour intervals) before we solve four theorems. We can calculate this probability using the cumulative distribution function (CDF) of the Poisson distribution: P(8 ≤ Y ≤ 16) = F(16; 8) - F(8; 8) = ~9% 2nd Solution : Gamma distribution arises naturally in which the waiting time between Poisson distributed events are relevant to each other. One theorem every 1/2 hour means we would suppose to get θ = 1 / 0.5 = 2 theorem every hour on average. Using θ = 2 and k = 4, Now we can calculate it as follows: f(x) = x^k e^(-x)/ G(k) where G is the Gamma function. Plugging in the values we get ~ 12% I think the 1st solution is correct and not the second one. What are your thoughts?
Statistics Interview Question
CC BY-SA 4.0
null
2023-04-22T10:44:41.397
2023-04-22T10:44:41.397
null
null
263033
[ "probability", "distributions", "poisson-distribution", "gamma-distribution" ]
613773
2
null
581501
1
null
The tensorflow.keras.layers.GRU uses the following formula to calculate the new state `h = z * h_old + (1 - z) * hnew` ,which is based on "[Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation](https://arxiv.org/pdf/1406.1078v3.pdf)" by Kyunghyun Cho et al. On the other hand the formula in wiki but also on "[Neural machine translation by jointly learning to align and translate](https://arxiv.org/pdf/1409.0473.pdf)" by Dzmitry Bahdanau et al. is as follows: `h = (1 - z) * h_old + z * hnew` The difference in the order of the terms does not affect the performance of the GRU cell, since the sigmoid activation function used to compute the update gate, z, produces a value between 0 and 1, making both formulas symmetric.
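One informal way to see the claimed symmetry numerically (plain arithmetic in R, not TensorFlow code; the numbers are arbitrary): relabelling the gate value z as 1 - z maps one update rule onto the other.

```
z <- 0.3; h_old <- 1.2; h_new <- -0.4        # arbitrary illustrative values
keras_form <- z * h_old + (1 - z) * h_new    # h = z*h_old + (1-z)*h_new
zp <- 1 - z                                  # the relabelled gate value
paper_form <- (1 - zp) * h_old + zp * h_new  # h = (1-z)*h_old + z*h_new
c(keras_form, paper_form)                    # identical results
```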
null
CC BY-SA 4.0
null
2023-04-22T11:51:27.350
2023-04-22T18:40:08.910
2023-04-22T18:40:08.910
22311
386240
null
613774
1
null
null
0
20
Consider the following model \begin{align} y_{i t}& =\beta x_{i t}+\varepsilon_{i t} \\\\ \Delta y_{i t}&=\beta \Delta x_{i t}+\Delta \varepsilon_{i t} \\\\ \tilde{x}_{it} &= x _{i t} + u _{i t} \\\\ \text{where} \\\\ x _{i t}&=\rho x _{i t-1}+v _{i t} \\\\ u _{i t}&=r u _{i t-1}+e _{i t} \end{align} According to the notes I'm following, we can calculate \begin{equation} \sigma_{\Delta x}^2=\operatorname{var}\left(x_t\right)-2 \operatorname{cov}\left(x_{i t}, x_{i t-1}\right)+\operatorname{var}\left(x_{i t-1}\right)=2 \sigma_x^2-2 \operatorname{cov}\left(x_{i t}, x_{i t-1}\right) = 2 \sigma_x^2(1-\rho) \end{equation} and \begin{equation} \sigma_{\Delta u}^2=\operatorname{var}\left(u_t\right)-2 \operatorname{cov}\left(u_{i t}, u_{i t-1}\right)+\operatorname{var}\left(u_{i t-1}\right)=2 \sigma_u^2-2 \operatorname{cov}\left(u_{i t}, u_{i t-1}\right) = 2 \sigma_u^2(1-r) \end{equation} Then the attenuation bias can be written \begin{equation} \operatorname{plim} \widehat{\beta}=\beta\left(\frac{\sigma_x^2(1-\rho)}{\sigma_x^2(1-\rho)+\sigma_u^2(1-r)}\right)=\beta\left(\frac{1}{1+\sigma_u^2(1-r) / \sigma_\chi^2(1-\rho)}\right) \end{equation} I'm a bit lost on how to calculate the two variances $\sigma_{\Delta x}^2$ and $\sigma_{\Delta u}^2$. I think it should be fairly straightforward, but I'm not great at calculating variances. I'll try to specify the parts that I find confusing: - Where do I start exactly? I guess I need to find the expression of $\Delta x$? - Maybe this will become clear after I solve the first steps, but I don't understand the simplification in the second and third equality in $\sigma_{\Delta x}^2$ - Is $\tilde{x}_{it}$ is used in the derivation? Is this just $x _{it}$ with the measurement error added? The derivation of $\sigma_{\Delta u}^2$ is identical to $\sigma_{\Delta x}^2$ obviously. Any help is greatly appreciated, Let me know if you need additional information!
How to calculate the variance of fixed effects Panel data model with measurement error ( attenuation bias)
CC BY-SA 4.0
null
2023-04-22T12:07:11.103
2023-04-22T12:51:42.440
2023-04-22T12:51:42.440
334202
334202
[ "time-series", "variance", "econometrics", "panel-data", "measurement-error" ]
613775
1
null
null
1
28
I have a question regarding Bayesian model averaging (BMA). Let M be a set of models $1, \dots, K$ and let D be the data. Then the posterior probability of the $K$-th model has the following form: $$ P(M_{K}|D) = \frac{P(D|M_{K}) \cdot P(M_{K})}{P(D)} $$ I don't understand how to picture the likelihood $P(D|M_{K})$. I know that the likelihood can be expressed as $$ P(D|M_{K}) = \int P(D| \theta_{K} ; M_{K}) \cdot P( \theta_{K} | M_{K}) d \theta_{K} $$ where $\theta_K$ is the set of the model's parameters. I understand this as an intersection of the likelihood of the data and the $K$-th model. Like this: [](https://i.stack.imgur.com/yLEBv.jpg) where the X-axis stands for the data points from D and the Y-axis is the likelihood. Am I correct? Thanks a lot for explaining.
How can I visualize the likelihood in Bayesian model averaging?
CC BY-SA 4.0
null
2023-04-22T12:10:18.550
2023-04-23T05:53:11.437
2023-04-23T05:53:11.437
363150
363150
[ "bayesian" ]
613776
1
null
null
0
22
I am trying to write up a research method which includes a difference-in-differences (DiD) study. I survey people's exam results in England and Scotland at t=0 and t=1. The mean values of the results are:

- t=0, England: A
- t=0, Scotland: B
- t=1, England: C
- t=1, Scotland: D

This gives a causal effect estimate of (A-C)-(B-D) for the DiD study; however, I want to confirm whether this result is statistically significant at the 5% level. I'm looking at forming a hypothesis test where:

H0: (A-C)-(B-D) = 0

H1: (A-C)-(B-D) ≠ 0

The number of observations in each category is n=100 (400 total observations). Does anyone know an appropriate way to test the significance of this estimate without just using dummy variables for time and area in a linear regression? I'm hoping to use Stata, but if anyone knows R better then that would be appreciated too.
Performing a hypothesis test with a difference-in-differences study in STATA
CC BY-SA 4.0
null
2023-04-22T12:13:07.057
2023-04-22T12:14:53.903
2023-04-22T12:14:53.903
386290
386290
[ "hypothesis-testing", "statistical-significance", "stata", "difference-in-difference" ]
613778
1
null
null
1
25
I am trying to see if there's a relationship between academic freedom (independent variable) and university rankings (dependent variable) using a fixed effects model. I managed to get some significant results, but the problem is that since academic freedom ranges from 0 to 1, the coefficients are really extreme. For example, I get a coefficient of -400, meaning that if academic freedom went from 0 (its lowest possible value) to 1 (its highest possible value), a university would gain 400 places in the ranking. Such interpretations are nonsensical, since no country ever goes from the lowest academic freedom to the highest. For most countries, academic freedom always varies around a certain mean value (for all years, France's academic freedom will always be around 0.9; sometimes higher, sometimes lower, but it will never dramatically decrease. The same for China, for example: it might increase or decrease, but it stays around 0.2.) With this in mind, can I just multiply academic freedom by 100 in order to have coefficients that are easier to interpret? Or will this transformation mess up my model and the relationship between my variables? From what I saw, such a transformation could work; the only thing I have to take into account is that the coefficient will now be interpreted as the change in ranking when academic freedom goes up by 1 percentage point (0.01 on the original scale). Is that true? Or are there some other things that I haven't taken into account that forbid me from transforming my independent variable?
Can I just multiply my independent variable by 100 to make interpretation clearer (it is an index that takes values between 0 and 1)?
CC BY-SA 4.0
null
2023-04-22T12:21:35.133
2023-04-23T20:08:53.570
null
null
382870
[ "econometrics" ]
613779
1
null
null
0
24
## References [https://otexts.com/fpp3/non-seasonal-arima.html](https://otexts.com/fpp3/non-seasonal-arima.html) [https://otexts.com/fpp3/seasonal-arima.html](https://otexts.com/fpp3/seasonal-arima.html) ## Question According to the webpages, the formula of `ARIMA(p,d,q)(P,D,Q)[m] with intercept and drift` should be like below. However, the result of `auto_arima` function and the result I calculated using the formula below didn't match. Can anyone teach me the correct formula? ### Formula $$ (1-\Sigma^p_{i=1}\phi_iB^i)(1-\Sigma^P_{i=1}\Phi_iB^{im})(1-B)^d(1-B^m)^Dy_t=intercept+drift*t+(1+\Sigma^q_{i=1}\theta_iB^i)(1+\Sigma^Q_{i=1}\Theta_iB^{im})\epsilon_t $$ ### How to calculate - The model generated by auto_arima was ARIMA(2,0,0)(1,0,2)[12]. - The formula I calculated is below. $$ (1-\phi_1B^1-\phi_2B^2)(1-\Phi_1B^{12})y_t=intercept+drift*t+(1+\Theta_1B^{12}+\Theta_2B^{24})\epsilon_t $$ $$ y_t=\phi_1y_{t-1}+\phi_2y_{t-2}+\Phi_1y_{t-12}-\phi_1\Phi_1y_{t-13}-\phi_2\Phi_1y_{t-14}+intercept+drift*t+\Theta_1\epsilon_{t-12}+\Theta_2\epsilon_{t-24}+\epsilon_t $$ - The code is like below. ``` import pmdarima as pm model = pm.auto_arima(data, trend='ct') # `data` contains actual values, and is prepared in advance. print(model.resid()) # this is used to get \epsilon values. ``` - Simply remove $\epsilon_t$ and substitute all $\phi, \Phi, \Theta, \epsilon, y, intercept, drift$. However the result didn't match. I'm suspecting the formula I calculated was wrong.
What is the formula AUTO ARIMA (Python's pmdarima) uses?
CC BY-SA 4.0
null
2023-04-22T12:25:13.123
2023-04-22T12:25:13.123
null
null
365006
[ "python", "arima", "seasonality", "intercept" ]
613780
2
null
459279
0
null
The FDA guideline has changed, but the overall idea remains: [https://www.fda.gov/media/148910/download](https://www.fda.gov/media/148910/download) > Covariate adjustment is acceptable even if baseline covariates are strongly associated with each other (e.g., body weight and body mass index). However, adjusting for less redundant variables generally provides greater efficiency gains. and > When using this approach, adjusting for the baseline value rather than (or in addition to) defining the primary endpoint as a change from baseline is generally acceptable
null
CC BY-SA 4.0
null
2023-04-22T12:41:39.407
2023-04-22T12:41:39.407
null
null
383859
null
613781
1
null
null
1
38
I've performed an ordinary least squares fit on a data set with one variable. For simplicity, let's say I've fitted a polynomial function $$f(x)=a+bx+cx^2+dx^3.$$ I obtain the best-fit values and the standard error (via the covariance matrix) for each of the fit parameters. Now, I want to calculate the error in $f'(x)=\frac{df(x)}{dx}$ at a particular value of $x$. How should I go about doing this? I've taken a look at [this](https://physics.stackexchange.com/questions/86366/propagation-of-uncertainty-when-integrating-or-differentiating) post, but I don't quite understand how to apply it to $f'(x)$.
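One way to make this concrete is linear error propagation (the delta method), which is exact here because $f'(x_0)$ is a linear combination of the coefficients: $\operatorname{Var}[f'(x_0)] = g^\top C\, g$ with $g = \partial f'(x_0)/\partial(a,b,c,d) = (0,\, 1,\, 2x_0,\, 3x_0^2)$ and $C$ the coefficient covariance matrix. A sketch on toy data, assuming the covariance matrix comes from `numpy.polyfit(..., cov=True)`:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 50)                                   # toy data; replace with your own
y = 1 + 2*x - 0.5*x**2 + 0.1*x**3 + rng.normal(0, 0.05, x.size)

coef, cov = np.polyfit(x, y, deg=3, cov=True)               # coef = [d, c, b, a] (decreasing powers)
d, c, b, a = coef

x0 = 1.0
fprime = b + 2*c*x0 + 3*d*x0**2                             # estimate of f'(x0)

g = np.array([3*x0**2, 2*x0, 1.0, 0.0])                     # gradient in the same [d, c, b, a] order
se_fprime = np.sqrt(g @ cov @ g)                            # standard error of f'(x0)
print(fprime, se_fprime)
```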
Propagation of uncertainty in a derivative of a function
CC BY-SA 4.0
null
2023-04-22T12:52:06.903
2023-04-22T15:04:29.563
2023-04-22T13:52:05.650
247274
386289
[ "regression", "least-squares", "linear-model", "error-propagation", "derivative" ]
613782
1
null
null
0
29
Assume the log returns of an asset follow a normal distribution with $\mu = 0.05/253$ and $\sigma = 0.23/\sqrt{253}$. I would like to know the probability that the asset's returns drop below $\ln 0.95$ at any time within the next three days. Also, I'm assuming the log-returns follow a random walk model. I first wanted to answer this question using simulation. The following R code simulates multiple trials: ``` set.seed(4) sim.results <- numeric(1000) for(j in 1:1000){ days <- 3 # Simulation trials <- 5000 results <- numeric(trials) for(i in 1:trials){ returns.daily <- exp(rnorm(n = days, mean = 0.05/253, sd = 0.23/sqrt(253))) returns.cumulative <- cumprod(returns.daily) results[i] <- min(returns.cumulative) < 950000/1000000 } sim.results[j] <- mean(results) } hist(sim.results) ``` [](https://i.stack.imgur.com/SqLdr.png) ``` mean(sim.results) [1] 0.0211878 ``` From what I can tell, the probability is somewhere close to 0.021. Now, I wanted to solve this problem using theory. I did so by concluding that the cumulative log-return at day $t$ has $\mu = 0.05/253 \times t$ and $\sigma = 0.23/\sqrt{253} \times \sqrt{t}$. I also know $P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)$, and since each of these events is assumed to be independent we have $P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A)P(B) - P(A)P(C) - P(B)P(C) + P(A)P(B)P(C)$, where each event is a day on which the cumulative log-return drops below $\ln 0.95$. Below is the code I used to answer this. ``` loss <- log(950000/1000000) probs <- pnorm(q = loss, mean = 0.05/253*(1:days), sd = 0.23/sqrt(253) * sqrt(1:days)) sum(probs) - prod(probs[1:2]) - prod(probs[2:3]) - prod(probs[c(1,3)]) + prod(probs) [1] 0.02495964 ``` which is a pretty different result. I understand the difference might not seem that large, but as I increase the simulation size these two numbers should become more similar, and they do not. Also, if I use similar logic to calculate the probability over more than 3 days, altering the probability union formula accordingly, the difference grows even more. Is someone able to tell me what I am doing wrong?
I'm trying to solve for the probability of log-returns assuming a random walk model, but I'm getting different answers using simulation in R vs theory
CC BY-SA 4.0
null
2023-04-22T13:42:42.897
2023-04-22T13:42:42.897
null
null
386291
[ "r", "probability", "simulation", "finance", "probabilistic-programming" ]
613783
1
null
null
0
33
I want to compare the performance of a machine learning method and an autoregressive moving average ARMA(p,q) model for time series data. I use the following configuration. First I divide the data into three parts: - Training data (first 60% of the data) - Validation data (next 20% of the data) - Test data (final 20% of the data) Then, for the machine learning procedure: - Evaluate models for different combinations of hyperparameters using the training data. - Choose the best model, according to a criterion, among all models from the first stage using the unseen validation data. - Get the predicted results from the model selected in the second stage on new, unseen test data. For the ARMA method: - Estimate all candidate models ARMA(1,0), ARMA(2,0), ..., ARMA(p,0), ..., ARMA(p,q) using the training data. - Choose the best model, according to a criterion, among all models from the fourth stage using the unseen validation data. - Get the predicted results from the model selected in the fifth stage on new, unseen test data. Finally, I compare the predicted values obtained at the 3rd and 6th stages using a criterion and choose which method is best. I want to know whether such a configuration is correct, or whether I should change it. Thanks a lot for any help and for taking your time.
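A minimal sketch of the chronological 60/20/20 split described above (a placeholder series; it assumes the observations are already ordered in time, which is what a time-series split requires):

```python
import numpy as np

y = np.sin(np.arange(300) / 10.0)      # placeholder time series in time order

n = len(y)
n_train = int(0.6 * n)
n_val = int(0.2 * n)

train = y[:n_train]                    # fit every ML configuration and every ARMA(p,q) here
val = y[n_train:n_train + n_val]       # pick the best ML model and the best ARMA order here
test = y[n_train + n_val:]             # compare the two chosen models once, here only

print(len(train), len(val), len(test))
```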
train, test and validation data set configuration to compare a machine learning method and an ARMA method
CC BY-SA 4.0
null
2023-04-22T13:53:08.700
2023-04-23T10:01:14.703
2023-04-23T10:01:14.703
53690
73724
[ "machine-learning", "cross-validation", "arima", "model-selection", "train-test-split" ]
613784
1
null
null
0
14
I encountered a problem where my PACF plot shows that the partial autocorrelation is not significant at lag 1, but is significant at lags 2 and 3. Does this mean that I need to build an AR model that includes lags 2 and 3 but not lag 1?
Lag selection of AR model
CC BY-SA 4.0
null
2023-04-22T14:20:37.070
2023-04-22T14:20:37.070
null
null
386296
[ "time-series" ]
613787
1
null
null
0
18
I collected data on attitudes toward three languages at two points in time, so the respondents were the same. For two of the languages, the Bartlett test did not reject homogeneity of variances, so I then conducted paired t-tests. For the third language, the Bartlett test rejects the null hypothesis of equal variances. Can I still conduct a paired t-test, or could you recommend another way to assess the significance of the change in attitude?
Paired Data collected; one condition is non-homogenous; Paired t-test possible?
CC BY-SA 4.0
null
2023-04-22T15:14:08.473
2023-04-22T15:16:06.537
2023-04-22T15:16:06.537
362671
386300
[ "t-test", "heteroscedasticity", "paired-data" ]
613789
1
null
null
0
19
Suppose that we have a dataset $D$ that we split into train and test sets, denoted $D_{train}$ and $D_{test}$, respectively. We want to construct a predictive model and also estimate its performance. Furthermore, assume that we can choose among many configurations $C=\{c_k\}_{k=1}^K$. A configuration is just a (possibly meta) function that, given data as input, produces a model as output. That is, it is not just the classifier or the regressor but all the steps such as feature selection, imputation, etc. Of course a classifier or a feature selector also has hyperparameters. As such, each configuration is a unique instantiation of all these imputers, selectors, classifiers, etc. For example, a configuration might look like this: $$ imputer (\text{mode = mean}) \longrightarrow selector (\text{Lasso}, \lambda = 10) \longrightarrow classifier (\text{Random Forest}, n_{trees}=100)$$ Let's say that we train each of these configurations on the train set and we find the configuration with the greatest performance $c^*$, as measured by the performance metric $P$ on the test set. Where is the bias introduced? Note that each of these configurations has a "true" performance $P_{c_k}$ when evaluated on the whole population. In our setting we are just calculating the sample performance $\hat{P}_{c_k}$ for each configuration. Of course this estimate is unbiased and we can report it. Note that the performance $P_{c_k}$ refers to the performance of the model produced by the configuration $c_k$. What is biased (upward), and what we can't report, is the performance of the meta-algorithm that selects the best among many configurations. We can prove this using Jensen's inequality: $$ E[\max(\hat{P}_{c_1}, \ldots, \hat{P}_{c_K})] \geq \max (E[\hat{P}_{c_1}], \ldots, E[\hat{P}_{c_K}]) = \max(P_{c_1}, \ldots, P_{c_K}) $$ Comparing algorithms on different datasets: I was reading this [paper](https://www.worldscientific.com/doi/epdf/10.1142/S0218213015400230) and the authors state the following (1st paragraph on the 4th page): > More specifically, consider a typical publication where a new algorithm is introduced and its performance (after tuning the hyperparameters) is compared against numerous other alternatives from the literature (again, after tuning their hyperparameters), on several datasets. The comparison aims to comparatively evaluate the methods. However, the reported performances of the best method on each dataset suffer from the same problem of multiple inductions and are on average optimistically estimated. Can someone explain where the bias is introduced in this comparison?
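A small simulation can make the Jensen's-inequality point above concrete: even when each configuration's test-set estimate is unbiased for that configuration, the reported performance of the selected winner is optimistic on average. A sketch with invented numbers, not tied to the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 10
true_perf = rng.uniform(0.70, 0.80, size=K)   # "true" population performance of each configuration
noise_sd = 0.03                               # sampling noise of the test-set estimate
n_rep = 10_000

reported = np.empty(n_rep)
achieved = np.empty(n_rep)
for r in range(n_rep):
    est = true_perf + rng.normal(0.0, noise_sd, size=K)  # unbiased estimate per configuration
    best = np.argmax(est)                                # the meta-algorithm picks the winner
    reported[r] = est[best]                              # what gets reported
    achieved[r] = true_perf[best]                        # what the selected model actually delivers

print("max of true performances:", true_perf.max())
print("mean reported performance of the winner:", reported.mean())   # optimistically biased upward
print("mean true performance of the winner:", achieved.mean())
```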
Performance estimation bias in machine learning
CC BY-SA 4.0
null
2023-04-22T15:38:04.130
2023-04-22T15:38:04.130
null
null
271176
[ "machine-learning", "model-evaluation", "unbiased-estimator" ]
613790
1
613812
null
1
69
I want to compare a percentile metric for control vs treatment. Using bootstrap, I see two ways of doing so and wonder which one makes more sense. Approach 1: Every time: - Bootstrap the control group and get the percentile, - Bootstrap the treatment group and get the percentile - Compute the difference. Repeat B times to get the differences and get the CI. Approach 2: - Bootstrap the control B times, get the average and variance of the metric - Bootstrap the treatment B times, get the average and variance of the metric - Compute the difference and its variance, assuming they are independent. Below is a simulation code in R. The 2nd approach has narrower CI, but requires normal assumption. ``` A <- rnorm(1000, 0.1, 1) B <- rnorm(1000, 0.15, 1) pctA = quantile(A, 0.99) pctB = quantile(B, 0.99) # Approach 1: # 1. Bootstrap each set # 2. Compute the differences for each bootstrap sample # 3. Get confidence intervals N = 500 diffBoot = rep(NA, N) for (i in 1:N) { tempA = sample(A, size=1000, replace=TRUE) tempB = sample(B, size=1000, replace=TRUE) diff = quantile(tempA, 0.99) - quantile(tempB, 0.99) diffBoot[i] = diff } mean(diffBoot) quantile(diffBoot, c(0.025, 0.9725)) # Approach 2: # 1. Bootstrap each set # 2. Compute metrics and variances # 3. Get differences and confidence intervals N = 500 pctAboot = rep(NA, N) pctBboot = rep(NA, N) for (i in 1:N) { tempA = sample(A, size=1000, replace=TRUE) tempB = sample(B, size=1000, replace=TRUE) pctAtemp = quantile(tempA, 0.99) pctBtemp = quantile(tempB, 0.99) pctAboot[i] = pctAtemp pctBboot[i] = pctBtemp } meanboot <- mean(pctAboot - pctBboot) sdboot <- sqrt(var(pctAboot) + var(pctBboot)) meanboot / (1.96 * sdboot) ``` ```
bootstrap best practice in AB testing
CC BY-SA 4.0
null
2023-04-22T17:22:04.203
2023-05-01T19:17:06.977
2023-05-01T19:17:06.977
204397
28363
[ "r", "hypothesis-testing", "bootstrap", "ab-test" ]
613794
1
null
null
0
14
I was trying to evaluate different classification models on the MNIST dataset. There are two datasets provided: `train` - 42000 images, and `test` - 28000 images. I first divided the original training dataset (42000 images) into an 80:20 split of `train_set` (33600) and `test_set` (8400). I trained several models on the `train_set`, cross-validated them on the `train_set` only, and lastly evaluated the final model on the `test_set` for generalization error. Now that my final model is ready to generate the submission file using the Kaggle-provided `test` set, should I train my model on the whole Kaggle-provided `training` set, i.e. `train_set + test_set` (the full 42000 images provided, instead of just the 33600 images from my split), since Kaggle is going to evaluate my model on its own provided `test` set?
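A sketch of the split-and-select workflow described above, using scikit-learn with `load_digits` as a small stand-in for the Kaggle MNIST CSV and a random forest as a stand-in for the candidate models (these choices are illustrative assumptions, not part of the original setup):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_digits(return_X_y=True)          # stand-in for the 42,000 labelled Kaggle images

# 80:20 split of the labelled data, mirroring train_set / test_set in the question
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv_scores = cross_val_score(clf, X_tr, y_tr, cv=5)   # model selection uses the 80% part only
clf.fit(X_tr, y_tr)
print(cv_scores.mean(), clf.score(X_te, y_te))       # CV estimate vs. one-shot hold-out estimate
```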
Should I train my final model on the (train+validation) set before final submission?
CC BY-SA 4.0
null
2023-04-22T17:55:07.360
2023-04-22T17:55:07.360
null
null
382937
[ "classification", "cross-validation", "kaggle" ]
613795
1
null
null
0
13
The BIC criterion is defined as $\mathrm{BIC}=k \ln(n)-2 \ln(L)$, where $k$ is the number of parameters, $n$ is the number of observations and $L$ is the maximized value of the likelihood function of the model. Can I use the BIC criterion if I instead have the minimized value of the likelihood function? What about a negative minimized value of the likelihood function?
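Purely as a rewriting of the definition above (and assuming that what the optimizer reports is the minimized negative log-likelihood, which is a guess about the setup rather than something stated in the question): if $\ell^{*}=\min_\theta\,[-\ln L(\theta)] = -\ln L(\hat\theta)$, then $$\mathrm{BIC}=k\ln(n)-2\ln L(\hat\theta)=k\ln(n)+2\,\ell^{*}.$$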
BIC criterion with negative minimized value of the likelihood function
CC BY-SA 4.0
null
2023-04-22T19:14:11.740
2023-04-22T19:14:11.740
null
null
384330
[ "mathematical-statistics", "modeling", "likelihood", "bic" ]
613797
1
null
null
0
35
I use the Kalman filter algorithm, where I minimize the value of the likelihood function. But after some iterations I got a negative value of the likelihood function. Is that a problem?
Negative value of likelihood function in Kalman filter
CC BY-SA 4.0
null
2023-04-22T20:56:06.370
2023-04-22T21:16:50.743
2023-04-22T21:16:50.743
384330
384330
[ "optimization", "likelihood", "kalman-filter" ]
613798
1
null
null
2
19
I feel like they are not synonymous, but I cannot intuitively explain the difference between "unbiased" and "exact." In other words, I am asking about the difference between the terms "unbiased" and "exact."
Kriging: when it is said kriging is an unbiased estimator, is that synonymous with saying it is an exact interpolator?
CC BY-SA 4.0
null
2023-04-22T21:02:34.530
2023-04-22T21:02:34.530
null
null
386309
[ "unbiased-estimator", "interpolation", "kriging" ]
613799
1
613950
null
3
73
Suppose a random variable $X$ and a [strictly monotone](https://en.wikipedia.org/wiki/Monotonic_function) (and [measurable](https://en.wikipedia.org/wiki/Measurable_function)) [function](https://en.wikipedia.org/wiki/Function_(mathematics)) $f$ in which $f(X)$ is always defined. Does it necessarily hold that $f(X)$ is [statistically dependent](https://en.wikipedia.org/wiki/Independence_(probability_theory)#Independent_%CF%83-algebras) with X? --- I should emphasize that the random variable is not necessarily equipped with addition, or multiplication, but can be considered an [$(E, \mathcal{E})$-valued random variable](https://en.wikipedia.org/wiki/Random_variable#Measure-theoretic_definition). The strict monotonicity $u < v \implies f(u) < f(v)$ is from a perspective of [order theory](https://en.wikipedia.org/wiki/Monotonic_function#In_order_theory) where an arbitrary partial order $\leq$ is assumed over the domain/range of $f$. Assume the domain and range of $f$ are the same set.
Does strict monotonicity imply image variable is dependent with domain variable?
CC BY-SA 4.0
null
2023-04-22T21:20:16.360
2023-04-24T13:06:20.913
2023-04-23T15:37:58.733
69508
69508
[ "mathematical-statistics", "independence", "non-independent", "measure-theory" ]
613800
2
null
613799
2
null
Your idea of using proof by contrapositive is correct (I assume $X$ is non-degenerate, otherwise for any $f$, $X$ and $f(X)$ are trivially independent). The key argument appears in proving Kolmogorov's $0$-$1$ law as well. Suppose $X$ and $f(X)$ are independent; then, as a measurable function of $X$, $f(X)$ would be independent of itself, which implies that \begin{align} E[f(X)^2] = E[f(X) \cdot f(X)] = E[f(X)]E[f(X)]. \end{align} But this is equivalent to saying $\operatorname{Var}(f(X)) = 0$, hence $Y := f(X)$ is degenerate. Because $f$ is strictly monotone, $X = f^{-1}(Y)$ is well defined and must be degenerate as well, which contradicts the assumption that $X$ is non-degenerate. This proves your conjecture.
null
CC BY-SA 4.0
null
2023-04-22T22:38:32.060
2023-04-22T22:38:32.060
null
null
20519
null
613801
1
null
null
0
15
I have the following data: [](https://i.stack.imgur.com/2YmbA.png) The response variable Y and the independent variable X2 were collected at the state level (the lowest level), while X1 is only available at the country level. The general form of my mixed effects model looks like: Y = constant + x1 + x2 + random_state My challenge is that X1 is only available at the country level. How can I correctly account for this? Should I include X1 as a random slope? I am not sure how to handle this. Any suggestion will be very useful.
Bayesian modeling of data at different cluster level
CC BY-SA 4.0
null
2023-04-22T22:41:37.633
2023-04-22T22:41:37.633
null
null
287235
[ "bayesian", "mixed-model", "generalized-linear-model", "jags" ]
613802
1
null
null
1
15
Let $f$ be a real valued function defined on a compact set $\mathcal X\subseteq \mathbb R^d $ (and for simplicity let's say $d=1$ for now). We don't know $f$, but can observe it through the (possibly noisy) values $y_1,\ldots,y_n\in\mathbb R $ it takes at some uniformly sampled $x_1,\ldots,x_n \in \mathcal X$. A standard way to estimate $f$ is to let $$\hat f := \arg\min_{f\in\mathcal F} \frac 1 n \sum_{i=1}^n (f(x_i)-y_i)^2 + \frac\lambda 2 \mathcal P(f) \tag1$$ Where $\mathcal F $ is a family of candidate functions and $\mathcal P$ some penalty. Now, assume that $f$ is smooth (i.e. differentiable many times) and that the functions in $\mathcal F $ are differentiable. If we are interested in computing $f'$,the pointwise derivative of $f$, a naive way to go about it would be to simply set $$\widehat{f'} := \hat f' \tag2 $$ i.e. simply set our estimate of $f'$ as the pointwise derivative of $\hat f'$ (I call it the naive derivative estimator). Now, it is pretty clear that $(2)$ is (in theory at least) a bad idea : depending on the noise, the sample size, and our choice of $\mathcal F$, $\hat f$ might be a poor estimate of $f$ already, and even if it weren't, controlling $\|\hat f - f\|_{L^2} $ a priori doesn't tell us anything about $\|\hat f' - f'\|_{L^2} $ (consider a function like $x\mapsto \varepsilon \sin(Mx)$ for small $\varepsilon$ and large $M$). However, in the literature, there are cases where the naive derivative estimator is provably a "good" estimator : - If $\hat f$ is a local polynomial estimator, then $\hat f'$ is provably consistent. See here, here, or here. - If $\hat f$ is a smoothing spline, then again $\hat f'$ is provably consistent. See here or here (and references therein) - Lastly, if $\hat f$ is a kernel ridge regression estimator, it can also be proved that $\hat f'$ is consistent, as shown here and here. Judging from this, it would seem that if the hypothesis space $\mathcal F$ has been reasonably chosen, $\hat f'$ would always be consistent. My question is thus : are there examples (in the literature) where the least-square estimator (1) is consistent while the naive derivative estimator (2) fails to be consistent ? In all the literature I've seen so far, $\hat f'$ is always said to have high variance, but nothing about it ever being inconsistent. Note that I purposefully didn't define "consistent" here (maybe could have used "convergent" instead ?), but think of it as convergence of $\hat f$ to $f$ in some appropriate sense (in probability, in $L^2$, uniformly...) as $n\to\infty$. --- For context, this question is motivated by [this older one](https://stats.stackexchange.com/questions/158348/can-a-neural-network-learn-a-functional-and-its-functional-derivative) about the ability of Neural Networks to simultaneously learn a function and its derivative. My naive intuition is that if the hypothesis space $\mathcal F$ is "good enough" (in the sense that functions in $\mathcal F$ are differentiable and can approximate $f$ well), then the derivative estimator should also be "not too bad". The goal of this question is to check whether that intuition is true or not. (I also have [an open question on this topic](https://math.stackexchange.com/questions/4593432/when-can-a-neural-network-simultaneously-approximate-a-function-and-its-derivati) over at MSE.)
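To make the $x\mapsto \varepsilon \sin(Mx)$ remark above concrete, here is a short worked computation on $[0,2\pi]$ for a positive integer $M$ (the interval is chosen only to keep the integrals clean): $$\|\varepsilon\sin(M\,\cdot)\|_{L^2(0,2\pi)}=\varepsilon\sqrt{\pi},\qquad \Bigl\|\tfrac{d}{dx}\,\varepsilon\sin(Mx)\Bigr\|_{L^2(0,2\pi)}=\varepsilon M\sqrt{\pi},$$ so the perturbation can be made arbitrarily small in $L^2$ while its derivative is made arbitrarily large, e.g. by taking $\varepsilon\to 0$ and $M\to\infty$ with $\varepsilon M\to\infty$.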
How bad can the naive derivative estimator be?
CC BY-SA 4.0
null
2023-04-23T00:02:06.160
2023-04-23T00:02:06.160
null
null
305654
[ "regression", "neural-networks", "mathematical-statistics", "references", "nonlinear-regression" ]
613803
1
null
null
0
42
Suppose we have real-valued random variables $X$, $Y$, with noise $\epsilon$ that is independent of $X$ and $Y$ and $\mathbb{E}[\epsilon] = 0$, and measurable function $f$. I am thinking about comparing estimates of $\beta$ in $$Y = f(X;\beta) + \epsilon$$ with estimates with $\beta$ in $$\operatorname{rank}(Y) = f(\operatorname{rank}(X); \beta) + \epsilon$$ where $\operatorname{rank}$ is the [dense ranking](https://en.wikipedia.org/wiki/Ranking#Dense_ranking_(%221223%22_ranking)) function applied to a given sample. Assume that the same estimator of $\beta$ is used for both regression equations. I'm thinking about the inequality $$\rho \left[\hat \beta_{\operatorname{original}}, \hat \beta_{\operatorname{ranked}} \right] > 0$$ where $\rho [\cdot, \cdot ]$ is the Spearman rank correlation and the estimates $\beta_{\operatorname{original}}, \hat \beta_{\operatorname{ranked}}$ are paired by sample. Does the inequality hold over finite samples? --- Here is a toy example using simple linear regression. Suppose $X \sim \mathcal{N}(0,1)$ and $Y = 3 X + 10 + \epsilon$ where $\epsilon \sim \mathcal{N}(0,1)$. Below I used sample sizes of 10, and repeated for $10^4$ samples. ``` import matplotlib.pyplot as plt import numpy as np from scipy.stats import rankdata from scipy.stats import spearmanr def rank(X): return rankdata(X, method='dense') fits = [] rfits = [] m = 10 for k in range(10000): x = np.random.normal(size=m) y = 3 * x + 10 + np.random.normal(size=m) fits.append(np.polyfit(x, y, deg=1)) rx = rank(x) ry = rank(y) rfits.append(np.polyfit(rx, ry, deg=1)) fits = np.array(fits) rfits = np.array(rfits) plt.scatter(fits[:,0], rfits[:,0], alpha=0.5) plt.xlabel('Slope Estimate') plt.ylabel('Slope on Ranks Estimate') plt.title(f'Slope\n{spearmanr(fits[:,0], rfits[:,0]).correlation}') plt.show() plt.scatter(fits[:,1], rfits[:,1], alpha=0.5) plt.xlabel('Intercept Estimate') plt.ylabel('Intercept on Ranks Estimate') plt.title(f'Intercept\n{spearmanr(fits[:,1], rfits[:,1]).correlation}') plt.show() ``` [](https://i.stack.imgur.com/GIMVf.png) [](https://i.stack.imgur.com/1sJHm.png) These plots give me the impression that there is Spearman correlation between the slope parameters but not the intercept parameters. This would seem to falsify my conjecture. It makes me wonder what are sufficient conditions for a parameter to satisfy the conjecture. One guess is that such parameters can be identified from the regression model's computation graph. ``` from graphviz import Digraph d = Digraph() d.edge('beta_1', 'beta_1 * X') d.edge('X', 'beta_1 * X') d.edge('beta_0', 'beta_1 * X + beta_0') d.edge('beta_1 * X', 'beta_1 * X + beta_0') d.view() ``` [](https://i.stack.imgur.com/5q1BM.png) Or maybe just intercepts don't work? I'm not sure. --- Let us suppose that our estimator is least squares via (vanilla) gradient descent, thus taking on the assumption that $f$ is smooth in its parameters.
Does the rank transform preserve signum of Spearman correlation between parameter estimates across samples?
CC BY-SA 4.0
null
2023-04-23T00:58:34.943
2023-04-23T17:38:42.857
2023-04-23T17:38:42.857
69508
69508
[ "regression", "estimators", "ranking", "spearman-rho", "ranks" ]
613804
1
null
null
2
30
It is an easy computation to show that given some data sampled from the binomial, gaussian, Poisson, or exponential distribution, the MLE of the mean of the distribution is the sample mean. However, for the uniform distribution, this is not true (the MLE is the average of the maximum and the minimum of the data). So, the question is: is there any sort of characterization of distributions for which the sample mean is the MLE of the mean?
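For concreteness, the "easy computation" in, say, the Poisson case is just the score equation (this only illustrates the claim for one of the listed families; it does not address the characterization itself): $$\frac{\partial}{\partial\lambda}\sum_{i=1}^n\bigl(x_i\ln\lambda-\lambda-\ln x_i!\bigr)=\frac{\sum_{i=1}^n x_i}{\lambda}-n=0\;\Longrightarrow\;\hat\lambda_{\mathrm{MLE}}=\bar x.$$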
Maximum likelihood of sample mean
CC BY-SA 4.0
null
2023-04-23T01:54:13.303
2023-04-23T01:54:13.303
null
null
55548
[ "maximum-likelihood", "sampling", "mean" ]
613805
1
null
null
0
10
We crossed fruit flies in genetics class and I ended up with results very different from the expected values. For some categories, I observed 0 when the expected count was more than 0. Would I go through with the statistical analysis and just compute (0-E)^2/E? I also got an unexpected phenotype that was not anticipated in the Punnett square. Do I incorporate it into the chi-square analysis, and if so, how, since the Punnett square does not provide an expected ratio for it?
Chi-square in genetics with fruit flies. Getting 0 observed when the expected was more than 0, and got an unexpected phenotype. How to handle?
CC BY-SA 4.0
null
2023-04-23T02:11:43.267
2023-04-23T02:11:43.267
null
null
386314
[ "chi-squared-test", "genetics" ]
613806
1
null
null
1
16
Can anyone provide me with a link, a book, or the steps to follow to learn how to prove upper and lower bounds for a problem, or how to come up with such a formulation and prove it? I think my mathematical foundation is not good, but I don't know where to begin. The only proof technique I had to learn in secondary school was induction: something like showing an equation holds for k, then proving it holds for k+1. At university it was NP-hardness. Which book can I begin with to learn these kinds of proofs? Thank you.
Book for minimax bound proof?
CC BY-SA 4.0
null
2023-04-23T03:00:49.743
2023-04-23T03:00:49.743
null
null
386315
[ "machine-learning", "mathematical-statistics" ]
613810
2
null
613756
2
null
I think merging cells is a fairly common approach -- you need to merge cells that are similar a priori or based on the population information, not similar based on the sample. [McConville and Toth](https://arxiv.org/abs/1712.05708) have a recursive partitioning approach that goes in the other direction: start with a single group and recursively split into smaller and smaller post-strata.
null
CC BY-SA 4.0
null
2023-04-23T06:20:37.687
2023-04-23T06:20:37.687
null
null
249135
null
613811
1
null
null
0
41
I have to do distribution fitting on 120 data subsets. They take the form of financial transaction amounts and timestamps: Timestamp, BTC, EUR, USD. I know from some other analysis that each subset will have a different distribution, but the format of the data is the same for all subsets. My goal is to get a best fit and the fitted parameters for each subset from this exercise, because I want to build a mixture model from these distributions later; ideally this gives me a 'bank' of distributions and parameter ranges to draw from when producing a mixture model in the future. I'm happy to check results by hand, but I'm hoping there are some automation tricks I can use to fit distributions to all of the subsets at once, as well as some collective wisdom about how to do distribution fitting on many sample sets simultaneously. Which Python libraries should I consider for the distribution fitting, and which should I avoid? When comparing a group of samples that share a fitted distribution family, how should I think of the range of fitted parameters as part of a 'generative function' for producing sub-populations? For example, say I get exponential fits with four different parameterisations. Do I take the min and max values as boundaries, or could I turn the four of them into some sort of probability distribution for the subpopulation-generating functions?
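A minimal sketch of the kind of automated loop described above, using `scipy.stats` maximum-likelihood fits and an AIC-style comparison; the candidate families, the synthetic subsets and the comparison criterion are all illustrative assumptions, not recommendations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for the 120 subsets: three synthetic "amount" columns
subsets = {f"subset_{i}": rng.exponential(scale=s, size=1000)
           for i, s in enumerate([1.0, 2.5, 0.4])}

candidates = {"expon": stats.expon, "lognorm": stats.lognorm, "gamma": stats.gamma}

bank = {}
for name, data in subsets.items():
    fits = {}
    for dist_name, dist in candidates.items():
        params = dist.fit(data)                        # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        fits[dist_name] = (2 * len(params) - 2 * loglik, params)   # (AIC, parameters)
    best = min(fits, key=lambda k: fits[k][0])
    bank[name] = (best, fits[best][1])                 # best family and its parameters

for name, (family, params) in bank.items():
    print(name, family, np.round(params, 3))
```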
What is a good way to automate distribution fitting in python?
CC BY-SA 4.0
null
2023-04-23T07:03:12.033
2023-04-24T12:25:49.227
2023-04-24T07:53:41.137
386320
386320
[ "python", "computational-statistics", "parameterization", "finite-mixture-model" ]
613812
2
null
613790
0
null
If we calculate the statistics of approach 1 and approach 2 condition on the same sample (this can reduce the fluctuation of the simulation itself), the two approaches will give the same diff estimator and very close variance estimation, while the difference in variance is due to the random correlation between pctAtemp and pctBtemp(pctAtemp and pctBtemp is independent). ``` set.seed(1) sample_cnt = 1000 tempA = rnorm(sample_cnt, 0.10, 1) tempB = rnorm(sample_cnt, 0.15, 1) N <- 500 ## this give p <- 0.99 ## test percentile diffBoot <- rep(NA, N) pctAboot <- rep(NA, N) pctBboot <- rep(NA, N) for (b in 1:N) { tempA = sample(A, size = sample_cnt, replace = TRUE) tempB = sample(B, size = sample_cnt, replace = TRUE) # tempA = rnorm(sample_cnt, 0.10, 1) # tempB = rnorm(sample_cnt, 0.15, 1) pctAtemp = quantile(tempA, p) pctBtemp = quantile(tempB, p) ## For each sample, two approach statistics are calculated simultaneously, which can reduce the variance of the simulation diffBoot[b] = pctAtemp - pctBtemp pctAboot[b] = pctAtemp pctBboot[b] = pctBtemp } est1 <- mean(diffBoot) var1 <- var(diffBoot) sprintf("approach 1 est: %.6f, var: %.6f", est1, var1) est2 <- mean(pctAboot - pctBboot) var2 <- var(pctAboot) + var(pctBboot) sprintf("approach 2 est: %.6f, var: %.6f", est2, var2) sprintf("var diff(%.6f) is var(pctAtemp, pctBtemp) * 2 = %.6f", var2 - var1, var(pctAboot, pctBboot) * 2) [1] "approach 1 est: -0.009805, var: 0.025579" [1] "approach 2 est: -0.009805, var: 0.024857" [1] "var diff(-0.000722) is var(pctAtemp, pctBtemp) * 2 = -0.000722" ``` Another difference is: approach 1 is close to percentile bootstrap CI, approach 2 is closer to Standard interval bootstrap.[Is it true that the percentile bootstrap should never be used?](https://stats.stackexchange.com/questions/355781/is-it-true-that-the-percentile-bootstrap-should-never-be-used?noredirect=1&lq=1)
null
CC BY-SA 4.0
null
2023-04-23T07:09:56.080
2023-04-23T07:09:56.080
null
null
347393
null
613813
2
null
613764
1
null
It is quite clear from the plot that the data consist of integer values between 0 and 11 and the most frequent values are 0 and 10. Intermediate values are quite infrequent. Knowing how the data was generated would probably explain why the data is almost dichotomous. The most informative initial summary of the data would simply be a table of how often each value appears. In R, the table would be created by ``` table(x) ``` Computing the mean and standard deviation for such data is of little interpretative value.
null
CC BY-SA 4.0
null
2023-04-23T08:12:01.447
2023-04-23T09:53:55.063
2023-04-23T09:53:55.063
129321
129321
null
613814
1
null
null
1
25
Reading Helsel, *Statistics for Censored Environmental Data Using Minitab and R*, on page xviii, the author writes: > The Figure i4 shows concentration (y) levels versus distance (x) downstream. What happens when the data are reported using two detection limits of 1 and 3, and one-half the limit is substituted for the censored observations? The result (Figure i5) includes horizontal lines of substituted values, changing the slope and dramatically decreasing the correlation coefficient between the variables. Looking only at these numbers, the data analyst obtains the (wrong) impression that there is no correlation, no increase in concentration. I don't understand how the censored observations are obtained. In particular, in Figure i5, the circled observations (the circle is mine) are below detection limit (DL) 1. So, if my understanding is correct, they should get $(1/2)\times 1 = 0.5$ and not $1.5$. What am I missing here? Even more confusingly, there are observations below 3 that get mapped to (1/2)DL and others that do not. I am really confused and hope somebody can help me understand what's going on here. [](https://i.stack.imgur.com/6ALV5.png) [](https://i.stack.imgur.com/au1wi.png)
Confused about the censoring procedure in Helsel (2012) book
CC BY-SA 4.0
null
2023-04-23T08:12:45.680
2023-04-23T08:12:45.680
null
null
371555
[ "censoring", "interval-censoring", "environmental-data" ]