Dataset schema (one record per post, fields in this order): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags.
614686
1
null
null
1
10
I have the following data: For a group of patients, the number of hospital visits in a 6 month period both before and after receiving a treatment has been measured. I wish to demonstrate that the treatment is effective by showing that the number of hospital visits after receiving the treatment is significantly lower than the number of visits before receiving it by using a hypothesis test. I don't believe I can use a paired t-test as the data is not normally distributed. Is a paired Wilcoxon test the best I can do here? I have read about the possibility of using a Poisson or Negative Binomial GLM - I have briefly used GLMs before but I am unsure about how to apply them to perform a hypothesis test of this nature. As a further question - my data is quite zero-inflated. Can I do better by applying an appropriate zero-inflated model such as a zero-inflated Poisson regression? Thanks for the help!
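Below is a minimal R sketch (my addition, not part of the original question) of the two approaches being asked about; the data frame `visits` and its columns `patient`, `before`, `after` are hypothetical placeholders for the actual data.

```r
library(lme4)

# 1) Paired Wilcoxon signed-rank test on the within-patient differences
wilcox.test(visits$after, visits$before, paired = TRUE, alternative = "less")

# 2) Poisson mixed model on long-format data: a negative "periodafter" coefficient
#    means fewer visits after treatment; exp(coef) is the rate ratio.
long <- data.frame(
  patient = rep(visits$patient, 2),
  period  = factor(rep(c("before", "after"), each = nrow(visits)),
                   levels = c("before", "after")),
  count   = c(visits$before, visits$after)
)
m <- glmer(count ~ period + (1 | patient), family = poisson, data = long)
summary(m)
# For zero inflation or overdispersion, glmmTMB (with ziformula = ~1 or a
# negative binomial family) is a commonly used extension of this model.
```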
Hypothesis Testing for Paired Count Data
CC BY-SA 4.0
null
2023-05-02T17:22:30.960
2023-05-02T17:22:30.960
null
null
302968
[ "hypothesis-testing", "count-data" ]
614687
2
null
614665
1
null
Here is the information for a null hypothesis statistical test (NHST): a single-sample test comparing two correlations (this comes from Kleinbaum, Kupper, Nizam & Rosenberg, 5th ed.). If you wish to test the claim that two single-sample correlations (the same variable correlated with 2 other variables) are different from each other, $$H_o : \rho_{y,x_1} = \rho_{y,x_2}$$ you can use the following test statistic (for sufficiently large $n$): $$Z = \frac{(r_1-r_2)\times\sqrt{n}}{\sqrt{(1-r_1^2)^2+(1-r_2^2)^2-2r_{12}^3-(2r_{12}-r_1r_2)(1-r_1^2-r_2^2-r_{12}^2)}}$$ where $r_i$ is the correlation of $y$ and $x_i$ and $r_{12}$ is the correlation of $x_1$ and $x_2$.
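A small R helper (my addition, not part of the original answer) that implements the statistic exactly as printed above; the inputs `r1`, `r2`, `r12`, and `n` are whatever sample correlations and sample size you have, and the example values are made up.

```r
# Z statistic from the formula above for comparing two dependent correlations
z_two_dep_cor <- function(r1, r2, r12, n) {
  num <- (r1 - r2) * sqrt(n)
  den <- sqrt((1 - r1^2)^2 + (1 - r2^2)^2 - 2 * r12^3 -
              (2 * r12 - r1 * r2) * (1 - r1^2 - r2^2 - r12^2))
  z <- num / den
  c(Z = z, p.value = 2 * pnorm(-abs(z)))   # two-sided p-value vs standard normal
}

# Example with made-up numbers: cor(y, x1) = 0.50, cor(y, x2) = 0.30, cor(x1, x2) = 0.40, n = 100
z_two_dep_cor(r1 = 0.50, r2 = 0.30, r12 = 0.40, n = 100)
```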
null
CC BY-SA 4.0
null
2023-05-02T17:22:33.230
2023-05-02T17:25:39.760
2023-05-02T17:25:39.760
44269
199063
null
614689
2
null
614680
2
null
You can assemble two useful pieces of information to arrive at the interpretation of the linear and quadratic terms in a Poisson GLM: - As you already stated, $\exp(\beta)$ for any first-order coefficient $\beta$ in a Poisson model can be interpreted as a relative rate of the Poisson process comparing groups differing by 1 in the corresponding variable being modeled. - In a linear model with identity link, like OLS, the interpretation of the quadratic term $\beta_2$ is readily seen through the rate of change of the effect: since the mean is $\beta_0 + \beta_1 X + \beta_2 X^2$, the slope relating $X$ and $Y$ is $\beta_1 + 2\beta_2 X$, so it changes by $2\beta_2$ for every unit difference in $X$. Similarly, the linear term $\beta_1$ can be seen as the instantaneous rate of change relating $X$ and $Y$ when $X=0$. That is to say, since the shape of the predicted trend between $X$ and $Y$ is quadratic, $\beta_1$ is the slope of the tangent curve at $X=0$. So, in your model with $\beta_1 = 0.8$ and $\beta_2 = -0.1$, comparing groups differing by 1 unit "near 0" (ignoring the issue of prediction at the means) gives a relative rate ratio of $\exp(0.8) \approx 2.22$, and each additional unit increase in $X$ multiplies that rate ratio by $\exp(2\beta_2) = \exp(-0.2) \approx 0.82$.
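A quick numerical illustration of these rate-ratio patterns (my addition, using simulated data with the coefficients from the question):

```r
# Simulate from log(rate) = 0.8*x - 0.1*x^2, refit, and compare fitted rates
# at adjacent values of x.
set.seed(1)
x <- runif(5000, 0, 6)
y <- rpois(5000, exp(0.8 * x - 0.1 * x^2))
fit <- glm(y ~ x + I(x^2), family = poisson)

newx  <- 0:4
rates <- predict(fit, newdata = data.frame(x = newx), type = "response")
rr <- rates[-1] / rates[-length(rates)]   # rate ratios for x -> x + 1
rr                                        # roughly exp(b1 + b2 * (2x + 1))
rr[-1] / rr[-length(rr)]                  # ratios of adjacent rate ratios, roughly exp(2*b2) = 0.82
```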
null
CC BY-SA 4.0
null
2023-05-02T17:44:38.813
2023-05-02T18:05:27.247
2023-05-02T18:05:27.247
8013
8013
null
614691
1
null
null
0
11
I have a fuzzy cognitive model of inter-organizational collaboration that is represented by a 27x27 matrix. I want to analyze the effects of individual variables. I've read I can do this best with a SOBOL sensitivity analysis. However, I am open to any and all other ways of accomplishing this task. I am a total newb here, do not assume I have prior knowledge. I need simple answers with lots of pictures :) Cheers,
Sensitivity analysis of a fuzzy cognitive model
CC BY-SA 4.0
null
2023-05-02T18:02:08.360
2023-05-02T18:02:08.360
null
null
387075
[ "simulation", "covariance-matrix", "matrix", "matrix-decomposition", "sensitivity-analysis" ]
614692
2
null
613194
0
null
Finally, I realized how to correctly answer this simple question. First, a cumulative distribution function gives the probability that a variable takes values less than or equal to some specific value. Therefore, it is correct to use this function. Second, as Joe stated in his answer, it is indeed better to use a binomial distribution instead of the Poisson distribution. `binom.cdf(X, N, P)` will return the probability that X or fewer machines will be broken. In order to get the probability that more than X will be broken, I need to subtract it from 1: `1-binom.cdf(X, N, P)`. In other words, I was right in choosing a `cdf` function, but I did not think about the binomial distribution.
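For reference, the same complement-of-the-CDF calculation in R (my addition; `X`, `N`, and `P` below are placeholder values standing in for the actual count, number of machines, and failure probability):

```r
# P(more than X of N machines break), each independently with probability P
X <- 2; N <- 20; P <- 0.05
1 - pbinom(X, size = N, prob = P)                 # analogue of 1 - binom.cdf(X, N, P)
pbinom(X, size = N, prob = P, lower.tail = FALSE) # equivalent upper-tail form
```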
null
CC BY-SA 4.0
null
2023-05-02T18:12:37.463
2023-05-02T18:12:37.463
null
null
310856
null
614693
1
null
null
0
22
In an exam question I was asked to consider the following data, related to whether an education loan was taken for pursuing higher education, and to formulate a suitable hypothesis and validate it using an appropriate statistical procedure. |Education Loan |Boys |Girls | |--------------|----|-----| |Loan Taken |300 |100 | |Not Taken |150 |250 | The most obvious choice was the Chi-Square Test of Independence, which determines whether there is an association between the categorical variables (gender and whether an education loan was taken), but I used the two-sample z-test with the alternative hypothesis (H1) that the proportion of boys taking an education loan is higher than the proportion of girls. With the given data in the table, I decided that my hypothesis was more suitable. Please tell me: is there a standard way to determine which hypothesis (and test) is more suitable?
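For what it's worth, both tests can be run directly on this table in R; a short sketch (my addition):

```r
# 2x2 table: rows = loan taken / not taken, columns = boys / girls
tab <- matrix(c(300, 150, 100, 250), nrow = 2,
              dimnames = list(Loan = c("Taken", "NotTaken"),
                              Gender = c("Boys", "Girls")))

chisq.test(tab)                                          # two-sided test of independence

# One-sided comparison of proportions (boys vs girls taking a loan)
prop.test(x = c(300, 100), n = c(450, 350), alternative = "greater")

# Note: for the two-sided versions, the squared z statistic equals the
# chi-squared statistic, so the two procedures agree; the z test just
# lets you specify a one-sided alternative.
```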
Which statistical test is more suitable?
CC BY-SA 4.0
null
2023-05-02T18:14:31.680
2023-05-02T18:38:43.857
2023-05-02T18:38:43.857
164936
227849
[ "self-study", "chi-squared-test", "z-test", "two-sample" ]
614695
1
null
null
1
25
I am performing regression analysis based on a full sample and two sub samples from the full sample. Is it always the case that the coefficient of the full sample lies between the coefficients of the two sub samples?
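A small simulated counterexample (my addition) shows the answer is no: both subsample slopes can be positive while the pooled slope is negative.

```r
# Two subsamples with the same positive slope but very different intercepts
set.seed(42)
x1 <- runif(50, 0, 10);  y1 <-  30 + 1 * x1 + rnorm(50)
x2 <- runif(50, 20, 30); y2 <- -30 + 1 * x2 + rnorm(50)

coef(lm(y1 ~ x1))["x1"]               # about +1
coef(lm(y2 ~ x2))["x2"]               # about +1
coef(lm(c(y1, y2) ~ c(x1, x2)))[2]    # clearly negative: the pooled slope lies
                                      # outside the range of the subsample slopes
```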
Regression Coefficients
CC BY-SA 4.0
null
2023-05-02T18:27:11.540
2023-05-02T18:27:11.540
null
null
387078
[ "regression", "least-squares", "regression-coefficients", "sample" ]
614696
1
614738
null
0
29
I have a time series of realized sales prices on a monthly basis in a large European city, which comes as an index, and I would like to do 1-period-ahead forecasting. I have run ADF and KPSS for unit root / stationarity as well as Ljung-Box for white noise, and I get the following results: Detrended series: - ADF p-value: 0.826796 - KPSS p-value: < 0.01 - Ljung-Box p-value: all 12 lags have p-values of basically 0 1st difference series: - ADF p-value: 0.001352 - KPSS p-value: > 0.1 - Ljung-Box p-value: all 12 lags have p-values of basically 0 - ACF shows lags 1,2,3,4 being significant - PACF shows lags 1,2,3,4 being significant 2nd difference series: - ADF p-value: 0.003257 - KPSS p-value: > 0.1 - Ljung-Box p-value: all 12 lags have p-values of basically 0 - ACF shows lags 1,2,4,5,6 being significant. - PACF shows lags 1,2,3,6 being significant From what I can tell, this is just a plain non-stationary process, since the Ljung-Box test rejects at all 12 lags. Am I doing something wrong? As I understand it, it should be possible to forecast a housing market index to some extent; I therefore thought that at least the housing index would be difference stationary. I might be doing something wrong; can anyone help me understand?
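A sketch (my addition, assuming the index is stored in a ts object I'll call `price_index`) of how these results are usually read in code: a rejected Ljung-Box test on the differenced series means it is autocorrelated (i.e., has forecastable structure), not that it is non-stationary; the white-noise check belongs on the residuals of a fitted model.

```r
library(tseries)    # adf.test, kpss.test
library(forecast)   # auto.arima, forecast

d1 <- diff(price_index)                      # first difference looks stationary per ADF/KPSS
adf.test(d1)
kpss.test(d1)
Box.test(d1, lag = 12, type = "Ljung-Box")   # rejects: d1 is autocorrelated, hence modelable

fit <- auto.arima(price_index)                           # let d and the ARMA orders be chosen
Box.test(residuals(fit), lag = 12, type = "Ljung-Box")   # should NOT reject if the model is adequate
forecast(fit, h = 1)                                     # one-step-ahead forecast
```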
ACF and PACF vs Ljung Box test
CC BY-SA 4.0
null
2023-05-02T18:28:47.553
2023-05-03T06:46:30.773
2023-05-03T06:46:30.773
53690
283049
[ "time-series", "autocorrelation", "stationarity", "acf-pacf", "white-noise" ]
614699
1
null
null
0
16
Could someone explain to me in simpler terms what cross-sectional dependence and panel unit root tests do in practice? How are panel unit root tests different from any other unit root test, such as the ADF, for instance? What if I get a result that indicates cross-sectional dependence (Pesaran CD) but the series is stationary according to the Breitung variance ratio test? To provide some context: I am running a quantile regression with the day-ahead price as the dependent variable and different explanatory variables. Because all 24 prices are set the day before, the data should be treated as panel data rather than a time series (subsetting for each hour). Therefore, I want to explore the characteristics of these panels. I am using the packages `plm` and `egcm` in R.
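If it helps, a rough sketch of how these checks are typically run with `plm` (my addition; I am going from memory of the interface, so double-check the function and argument names against the package documentation, and `pdat` is a hypothetical pdata.frame indexed by hour and day):

```r
library(plm)

# Pesaran CD test for cross-sectional dependence across the 24 hourly series
pcdtest(price ~ 1, data = pdat, test = "cd")

# First-generation panel unit root test (assumes cross-sectional independence)
purtest(price ~ 1, data = pdat, index = c("hour", "day"),
        test = "ips", exo = "intercept", lags = "AIC")

# If the CD test rejects, a second-generation test such as Pesaran's CIPS
# (plm::cipstest) is usually preferred, because first-generation tests like
# IPS or Breitung assume no cross-sectional dependence.
```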
Cross-sectional dependence and second generation unit root tests
CC BY-SA 4.0
null
2023-05-02T18:53:13.037
2023-05-03T07:42:48.463
2023-05-03T07:42:48.463
53690
383228
[ "panel-data", "stationarity", "cross-correlation", "unit-root" ]
614700
1
null
null
1
6
To what extent can I apply lessons learnt about the model performance on a simple "cheap" model (e.g. Naive Bayes, LDA) to a more flexible model (i.e. ensemble of trees, neural networks)? For example: > How irresponsible is it to evaluate preprocessing strategies (e.g. should I use Fourier transform? PCA? Which signal segment length? Which robust standardization method? Which hyper-parameters etc.) using a simple, cheap model and then assume the same principles will also hold for flexible models? - On the one hand this seems like a practical solution where you could rapidly prototype in a computationally cheap way. The "information" the data contains after preprocessing would, after all, be the same for a simple and a more sophisticated model? - On the other hand, I can recognize that a more flexible/expensive model might be able to transform the input data in a more intricate way. As a result, you might make a mistake in assuming that the optimal pre-processing procedure for the simple model is also near-optimal for the expensive model.
Training computationally cheap models for feature selection and then extending to more flexible models
CC BY-SA 4.0
null
2023-05-02T18:56:14.870
2023-05-02T18:56:14.870
null
null
327553
[ "feature-selection", "data-preprocessing" ]
614701
1
null
null
1
30
I can't understand some steps in this example; my question is about the steps encircled in red. How do I find these values ($x_1, x_2, x_{11}, x_{21}, x_{22}$), and what is their relation to $Y_1$ and $Y_2$? Please illustrate the steps for finding these values. [](https://i.stack.imgur.com/8oYDl.png)
Joint PDF, Independence of Random variables
CC BY-SA 4.0
null
2023-05-02T19:19:17.213
2023-05-02T19:19:17.213
null
null
384723
[ "probability", "random-variable", "independence", "variable" ]
614702
1
614704
null
0
22
I have 4 categories for cytogenetic risk to find the survival I ran this ``` TMB_NONSYNONYMOUS = survfit(Surv(OS_MONTHS, Status)~ RISK_CYTO, data = Final_cox_146) ``` Where RISK_CYTO is Good, Intermediate, Poor and N.D The output figure is this [](https://i.stack.imgur.com/NHHWs.png) When I check the survfit object I see this ``` TMB_NONSYNONYMOUS Call: survfit(formula = Surv(OS_MONTHS, Status) ~ RISK_CYTO, data = Final_cox_146) n events median 0.95LCL 0.95UCL RISK_CYTO=Good 31 10 NA 46.5 NA RISK_CYTO=Intermediate 79 56 18.5 11.3 28.4 RISK_CYTO=N.D. 3 2 5.7 1.3 NA RISK_CYTO=Poor 33 24 7.7 4.2 52.7 ``` and when I check the summary of the object i see this output ``` summary(TMB_NONSYNONYMOUS) Call: survfit(formula = Surv(OS_MONTHS, Status) ~ RISK_CYTO, data = Final_cox_146) RISK_CYTO=Good time n.risk n.event survival std.err lower 95% CI upper 95% CI 0.2 31 1 0.968 0.0317 0.908 1.000 0.7 30 1 0.935 0.0441 0.853 1.000 1.0 29 1 0.903 0.0531 0.805 1.000 4.5 28 1 0.871 0.0602 0.761 0.997 8.0 27 1 0.839 0.0661 0.719 0.979 19.2 26 1 0.806 0.0710 0.679 0.958 26.3 25 1 0.774 0.0751 0.640 0.936 30.6 20 1 0.735 0.0807 0.593 0.912 45.8 13 1 0.679 0.0922 0.520 0.886 46.5 12 1 0.622 0.1004 0.454 0.854 RISK_CYTO=Intermediate time n.risk n.event survival std.err lower 95% CI upper 95% CI 0.1 79 1 0.987 0.0126 0.963 1.000 0.3 78 1 0.975 0.0177 0.941 1.000 0.5 77 1 0.962 0.0215 0.921 1.000 0.8 76 2 0.937 0.0274 0.885 0.992 2.3 74 1 0.924 0.0298 0.867 0.984 2.4 73 1 0.911 0.0320 0.851 0.976 3.1 72 1 0.899 0.0339 0.835 0.968 4.6 71 1 0.886 0.0357 0.819 0.959 5.2 70 1 0.873 0.0374 0.803 0.950 5.3 69 1 0.861 0.0390 0.788 0.941 5.5 68 1 0.848 0.0404 0.773 0.931 5.7 67 1 0.835 0.0417 0.758 0.921 6.3 66 2 0.810 0.0441 0.728 0.901 6.6 64 1 0.797 0.0452 0.714 0.891 7.1 62 1 0.785 0.0463 0.699 0.881 7.4 61 1 0.772 0.0473 0.684 0.870 7.5 60 2 0.746 0.0491 0.656 0.849 7.7 58 1 0.733 0.0499 0.642 0.838 7.9 57 1 0.720 0.0506 0.628 0.827 8.1 56 2 0.695 0.0520 0.600 0.804 8.2 54 1 0.682 0.0526 0.586 0.793 8.4 53 1 0.669 0.0532 0.572 0.782 9.3 52 1 0.656 0.0537 0.559 0.770 10.2 51 2 0.630 0.0546 0.532 0.747 10.7 49 1 0.617 0.0549 0.519 0.735 11.2 48 1 0.605 0.0553 0.505 0.723 11.3 47 1 0.592 0.0556 0.492 0.711 11.5 46 1 0.579 0.0558 0.479 0.699 15.4 44 1 0.566 0.0561 0.466 0.687 16.3 43 1 0.552 0.0563 0.452 0.675 16.4 42 1 0.539 0.0565 0.439 0.662 17.0 41 1 0.526 0.0566 0.426 0.650 18.1 40 1 0.513 0.0567 0.413 0.637 18.5 39 1 0.500 0.0568 0.400 0.624 19.0 38 1 0.487 0.0568 0.387 0.612 20.5 37 2 0.460 0.0567 0.362 0.586 22.3 35 1 0.447 0.0566 0.349 0.573 24.1 33 1 0.434 0.0564 0.336 0.560 24.8 32 1 0.420 0.0563 0.323 0.546 25.8 31 1 0.407 0.0561 0.310 0.533 27.0 28 1 0.392 0.0559 0.296 0.519 27.4 26 1 0.377 0.0558 0.282 0.504 28.4 25 1 0.362 0.0555 0.268 0.489 30.0 24 1 0.347 0.0552 0.254 0.474 32.3 23 1 0.332 0.0549 0.240 0.459 34.0 21 1 0.316 0.0545 0.225 0.443 46.8 14 1 0.293 0.0551 0.203 0.424 53.9 11 1 0.267 0.0561 0.177 0.403 55.4 10 1 0.240 0.0565 0.151 0.381 56.3 9 1 0.213 0.0562 0.127 0.357 RISK_CYTO=N.D. 
time n.risk n.event survival std.err lower 95% CI upper 95% CI 1.3 3 1 0.667 0.272 0.2995 1 5.7 2 1 0.333 0.272 0.0673 1 RISK_CYTO=Poor time n.risk n.event survival std.err lower 95% CI upper 95% CI 0.3 33 1 0.970 0.0298 0.913 1.000 0.5 32 1 0.939 0.0415 0.861 1.000 0.6 31 1 0.909 0.0500 0.816 1.000 1.2 30 1 0.879 0.0568 0.774 0.998 1.3 29 1 0.848 0.0624 0.735 0.980 1.4 28 1 0.818 0.0671 0.697 0.961 1.6 27 1 0.788 0.0712 0.660 0.940 1.9 26 1 0.758 0.0746 0.625 0.919 2.4 25 1 0.727 0.0775 0.590 0.896 3.9 24 1 0.697 0.0800 0.557 0.873 4.0 23 1 0.667 0.0821 0.524 0.849 4.2 22 1 0.636 0.0837 0.492 0.824 4.6 21 1 0.606 0.0851 0.460 0.798 5.6 20 1 0.576 0.0860 0.430 0.772 6.6 19 1 0.545 0.0867 0.399 0.745 7.0 18 1 0.515 0.0870 0.370 0.717 7.7 17 1 0.485 0.0870 0.341 0.689 9.3 16 1 0.455 0.0867 0.313 0.661 11.0 14 1 0.422 0.0864 0.283 0.630 11.8 13 1 0.390 0.0856 0.253 0.599 12.2 12 1 0.357 0.0844 0.225 0.568 21.5 11 1 0.325 0.0827 0.197 0.535 26.3 10 1 0.292 0.0806 0.170 0.502 52.7 5 1 0.234 0.0830 0.117 0.469 ``` QUESTIONS - From the graph which it seems to me the intermediate patients have longer survival as compared to Good and Poor, is that correct interpretation? - Why there is NA in the median survival for Good Cytogenetic case as seen in the first data frame?
Interpretation of survplot output summary and the figure
CC BY-SA 4.0
null
2023-05-02T19:59:29.913
2023-05-02T20:28:03.683
null
null
334559
[ "survival" ]
614703
1
615240
null
1
40
I'm currently working on a fraud detection problem with a dataset of 300,000 rows and 500 columns, 70 of which are categorical with over 10 categories each. I'm facing memory constraints and exploring target encoding as a solution to deal with the categorical columns. I've recently come across the [CatBoost Encoder](https://arxiv.org/abs/1706.09516), which is often praised for its ability to prevent target leakage. However, I'm struggling to understand why this method prevents target leakage. What's the intuitive explanation of this method? Edit: My question is similar to this one: [How do Ordered Target Statistics work for CatBoost?](https://stats.stackexchange.com/questions/559051/how-do-ordered-target-statistics-work-for-catboost). It's different because the author in that question is asking about the meaning of the history. I understand that, but I don't understand why the method stops target leakage.
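Here is a small base-R sketch (my addition) of the ordered target statistic idea: each row's encoding is computed only from the target values of rows that come *before* it in a random permutation, so a row's own label never enters its own feature, which is why the leakage is avoided.

```r
set.seed(1)
n    <- 12
cat_ <- sample(c("A", "B"), n, replace = TRUE)            # a categorical feature
y    <- rbinom(n, 1, ifelse(cat_ == "A", 0.7, 0.2))       # binary target

prior <- 0.5; a <- 1                  # smoothing toward a prior
perm  <- sample(n)                    # the "artificial time" ordering

enc <- numeric(n)
seen_sum <- c(A = 0, B = 0); seen_n <- c(A = 0, B = 0)
for (i in perm) {                     # walk the permutation; use only the "history"
  k      <- cat_[i]
  enc[i] <- (seen_sum[k] + a * prior) / (seen_n[k] + a)
  seen_sum[k] <- seen_sum[k] + y[i]   # only now does row i's own target enter the history
  seen_n[k]   <- seen_n[k] + 1
}
cbind(cat_, y, round(enc, 2))
```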
Why doesn't CatBoost Encoding cause target leakage?
CC BY-SA 4.0
null
2023-05-02T20:10:59.250
2023-05-08T16:04:25.157
2023-05-02T20:25:42.577
363857
363857
[ "categorical-encoding", "data-leakage", "catboost", "target-encoders" ]
614704
2
null
614702
2
null
The lines on a KM plot show what percentage of each group of patients survived to each time point. Lines near the top show "better" survival, with a higher proportion of patients surviving for longer, while lines that drop steeply to the bottom show worse survival, with only a small percentage of the group surviving even a short while. In this plot, we see the red "Good" line has the best survival, with the highest proportion of patients surviving the longest. The blue "Intermediate" line is the next best, followed by the (fairly similar) green "ND" and grey "Poor" groups. The left-right length of the lines just shows the follow-up time for different groups, but that doesn't indicate better survival. The blue line happens to be the longest meaning there was at least one person who was followed-up and survived longer than anyone else we have data for, but that doesn't mean the blue group is better overall than others. How far the line stretches to the right depends only on the survival time of the last person left in that group at the end, and doesn't really represent aggregate group behavior Many people in the red line simply got censored, meaning they were still alive when the trial ended - they may well have survived as long or even longer than that person in the blue line had the trial continued. As for median survival, people often misinterpret this as the "median of survival times", which is incorrect. The "median of survival times" always exists, but is actually a fairly useless number, as it does not respect if patients got censored or had an event at their follow-up time. The median survival is the time at which 50% of the population survives. Here, the "Good" group never has its survival rate drop to 50%, so the median survival time is NA.
null
CC BY-SA 4.0
null
2023-05-02T20:15:46.850
2023-05-02T20:28:03.683
2023-05-02T20:28:03.683
76825
76825
null
614705
2
null
435215
0
null
To find the joint PDF of two random variables U and V that are functions of two other random variables X and Y, we can use the change of variables technique. In this example, we have $U = X^2 - Y^2$ and $V = XY$. The first step is to find the inverse functions, i.e. to express $X$ and $Y$ in terms of $U$ and $V$. Writing $a = X^2$ and $b = Y^2$, we have $a - b = U$ and $ab = V^2$, which gives (restricting attention to $X, Y > 0$; otherwise you must sum over all preimages of $(u,v)$): $X = \sqrt{\frac{U + \sqrt{U^2 + 4V^2}}{2}}$, $Y = \sqrt{\frac{\sqrt{U^2 + 4V^2} - U}{2}}$ The next step is to compute the Jacobian determinant of the transformation. Since $\left|\frac{\partial(U,V)}{\partial(X,Y)}\right| = \det\begin{pmatrix} 2X & -2Y \\ Y & X \end{pmatrix} = 2(X^2 + Y^2)$, the Jacobian of the inverse transformation is $J = \left|\frac{\partial(X,Y)}{\partial(U,V)}\right| = \frac{1}{2(X^2+Y^2)}$ Using the inverse functions and the Jacobian determinant, we can write the joint PDF of $U$ and $V$ as: $f_{U,V}(u,v) = f_{X,Y}(x(u,v), y(u,v))\times|J|$ where $x(u,v)$ and $y(u,v)$ are the inverse functions above and $f_{X,Y}(x,y)$ is the joint PDF of $X$ and $Y$. Substituting, and noting that $x(u,v)^2 + y(u,v)^2 = \sqrt{u^2+4v^2}$, we get: $f_{U,V}(u,v) = f_{X,Y}\left(\sqrt{\frac{u+\sqrt{u^2+4v^2}}{2}}, \sqrt{\frac{\sqrt{u^2+4v^2}-u}{2}}\right)\times \frac{1}{2\sqrt{u^2+4v^2}}$ This is the joint PDF of U and V in terms of u and v.
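A quick numeric sanity check (my addition) of the inverse functions and the Jacobian identity used above:

```r
# Pick a point with x, y > 0, map it forward, then recover it and check the Jacobian
x <- 1.3; y <- 0.7
u <- x^2 - y^2; v <- x * y

s <- sqrt(u^2 + 4 * v^2)
c(x_rec = sqrt((u + s) / 2), y_rec = sqrt((s - u) / 2))   # recovers 1.3 and 0.7

# |d(U,V)/d(X,Y)| = 2*(x^2 + y^2), and x^2 + y^2 equals sqrt(u^2 + 4*v^2)
c(2 * (x^2 + y^2), 2 * s)                                 # the two values agree
```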
null
CC BY-SA 4.0
null
2023-05-02T20:23:27.957
2023-05-02T20:23:27.957
null
null
384723
null
614706
2
null
530077
1
null
> Random forest is better for problems where random forest does better. Multinomial logistic regression is better for problems where multinomial logisitc regression is better. -Sycorax (in the comments) This is really all there is to it, but it might help to unpack this comment. If you have a relationship that (at least approximately) follows the multinomial logistic model, then that is the (approximately) correct model. It might be that a random forest can achieve high in-sample performance, but when it comes to making out-of-sample predictions, it will have achieved that awesome performance by fitting to coincidences in the training data rather than the true relationship between the features and the outcome. In such a scenario, the logistic regression outperforms the random forest, even in terms of pure predictive ability, contradicting your source. To be fair to your source, however, relationships in real life tend to be complicated. Simple models like multinomial logistic regressions are likely to miss important relationships that flexible models like random forests are able to detect. If you control for overfitting concerns, then letting a random forest "go do its thing" might make for a better out-of-sample predictor. Yes, there are theorems like Stone-Weierstrass saying that (generalized) linear models with polynomial features can be as good at approximating nonlinear relationships as is demanded (under decent conditions), but you have to engineer those polynomial features and know which ones to engineer. For a random forest, all you do is program considerable flexibility and let it take care of the rest (for better or for worse).
null
CC BY-SA 4.0
null
2023-05-02T20:27:50.987
2023-05-02T20:27:50.987
null
null
247274
null
614707
1
null
null
0
36
We know that in econometrics it is common to work with population models and relationships. Thus, when we are faced with the data, we appeal to the analogy technique to emulate the population condition. This is the principle of analogy (see, for example [this](https://stats.stackexchange.com/questions/272803/principle-of-analogy-and-method-of-moments)). Well, I'm reading a little about Principal Component Analysis (PCA) and one motivation is when we have more regressors ($n$) than the sample size ($T$). But first, the population model is: $$y= x'\beta + u_t, \quad x= (x_1,..., x_n)$$ Denote the covariance matrix of $x$ as $\Sigma= [cov(x_i,x_j)]_{n\times n}$. If we have a sample $(x_{t}= (x_{1t},.., x_{nt}))_{t = 1}^T$ with $n>>T$, typically the PCA problem begins by finding $\gamma$ such that \begin{equation} \max_{\gamma \in \mathbb R^n} \gamma' \hat{\Sigma} \gamma, \quad \hbox{ s.t. } |\gamma|^2 = \gamma' \gamma = 1 \end{equation} where $\gamma' \hat{\Sigma} \gamma$ is nothing more than the sample variance of $X\gamma$, with $X$ being the $T \times n$ sample matrix of the $x_t$. Moreover, $\hat \Sigma = \frac{1}{T} X'X$. As you can see, this is a problem that uses the available sample given by $X$. But I would like to know whether there is, behind this, a maximization problem stated in terms of population objects only.
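For reference (my addition), the population counterpart the question is asking about is usually written by replacing $\hat\Sigma$ with $\Sigma$: $$\max_{\gamma \in \mathbb{R}^n} \ \gamma' \Sigma \gamma \quad \text{s.t.} \quad \gamma'\gamma = 1, \qquad \text{where} \quad \gamma' \Sigma \gamma = \operatorname{Var}(x'\gamma),$$ and the maximizer is the eigenvector of $\Sigma$ associated with its largest eigenvalue. The sample problem in the question is then its analogy-principle estimator, with $\hat\Sigma$ in place of $\Sigma$.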
A population model for PCA
CC BY-SA 4.0
null
2023-05-02T20:29:48.057
2023-05-02T22:22:27.843
2023-05-02T22:22:27.843
373088
373088
[ "self-study", "pca", "econometrics", "dimensionality-reduction" ]
614708
1
null
null
1
14
Say that we have a cross-sectional dataset with two variables, A and B. Also suppose that A and B are related to each other in some way. Now, there are some rows for which only A is missing, and some for which only B is missing. There are no rows for which both A and B are missing. Now say that I want to impute these missing values by just taking the average of A and the average of B. Would this be problematic, given that A and B are related to each other?
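A tiny simulation (my addition) of why this can be problematic: unconditional mean imputation ignores the A-B relationship and attenuates it.

```r
set.seed(1)
n <- 1000
A <- rnorm(n)
B <- 0.8 * A + rnorm(n, sd = 0.6)          # A and B are related
cor(A, B)                                  # roughly 0.8

# Make some values missing (only one of A, B per row), then mean-impute
A_m <- A; B_m <- B
A_m[1:200]   <- NA
B_m[201:400] <- NA
A_i <- ifelse(is.na(A_m), mean(A_m, na.rm = TRUE), A_m)
B_i <- ifelse(is.na(B_m), mean(B_m, na.rm = TRUE), B_m)

cor(A_i, B_i)  # noticeably smaller: the imputed rows carry no information about the A-B link
```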
Is there any issue in imputing missing observations when the missing observations are related to each other?
CC BY-SA 4.0
null
2023-05-02T20:32:33.443
2023-05-02T20:38:32.293
2023-05-02T20:38:32.293
324902
324902
[ "data-imputation" ]
614709
1
null
null
0
23
Many questions seem to ask something similar, but I could find none that seemed to really match. Suppose there are two potentially distinct archaeological cultures "East" and "West". Both East and West produce statues that are either red or blue. Five eastern sites are known, and six western sites are known. The percentage of statues that are blue at each site are as follows: Eastern site 1: 91.7% (n=24) Eastern site 2: 75.0% (n = 24) Eastern site 3: 88.5% (n=61) Eastern site 4: 81.3% (n = 16) Eastern site 5: 100% (n = 34) Mean frequency of blue statues at Eastern sites (equal weighting): 87.3% Western site 1: 47.8% (n=23) Western site 2: 66.7% (n=15) Western site 3: 73.3% (n=75) Western site 4: 73.7% (n=19) Western site 5: 92.3% (n=26) Western site 6: 77.2% (n=22) Mean frequency of blue statues at Western sites (equal weighting): 71.8% Suppose an archaeologist believes there is a difference in the mean blue statue percentage between eastern and western sites. How could I investigate this belief statistically? What statistical test could be used to test for a significant difference in the mean site-wide percentages in eastern and western sites? Because these are percentages, values are bounded between 0 and 100% so the data are not normal. And because there are not many sites, the distributions cannot really be observed. Is there a test I could use that would not require the normality assumption? Or would it still be fine to use a t-test anyway? I read that when the normality assumption is violated, the t-test can be too conservative. I don't want a test that is too conservative because then I would be unfairly concluding that the archaeologist's belief is ill-founded. A Welch's t-test comes out at p = 0.065. I would like to weigh sites equally, rather than give greater weight to sites with a greater sample size, because the choice of colour of statues at one site are perhaps linked. In the extreme, for example, one artisan could choose to produce 50 blue statues at one site over a week. These 50 blue statues would ultimately come down to a single choice made by one artisan. So 50 blue statues at one site cannot really be assumed to be 50 separate datapoints. What statistical test could be used for this?
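To make the options concrete, a short R sketch (my addition) with the site-level percentages treated as the observations, as you propose:

```r
east <- c(91.7, 75.0, 88.5, 81.3, 100.0)
west <- c(47.8, 66.7, 73.3, 73.7, 92.3, 77.2)

t.test(east, west)       # Welch t-test; should reproduce roughly the quoted p = 0.065
wilcox.test(east, west)  # rank-based alternative that drops the normality assumption
# An exact permutation test on the site means is another distribution-free option
# that keeps each site weighted equally.
```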
How to test for a difference in the mean percentages of two populations
CC BY-SA 4.0
null
2023-05-02T20:38:55.547
2023-05-02T21:12:50.387
2023-05-02T21:12:50.387
46128
46128
[ "t-test", "percentage", "archaeology" ]
614710
1
null
null
0
19
I need some help with a statistical analysis. I have the following contingency table: [](https://i.stack.imgur.com/4BBET.png) I would like to compare p3 and p2, that is, to compare the proportion of events in G2 and the proportion of non-events in G1. How is this possible? I have a small sample size: p1 = 3, p2 = 4, p3 = 8 and p4 = 2. Thanks for your help
Comparing two proportions within contingency table for small sample size
CC BY-SA 4.0
0
2023-05-02T20:42:35.310
2023-05-02T20:42:35.310
null
null
381165
[ "r", "p-value", "proportion", "fishers-exact-test", "z-test" ]
614712
2
null
614600
1
null
tl;dr you should probably just compute speed and use it as your response variable; you can do this on the fly in the formula, i.e. `Flighttime/Distance ~ Temperature + ...` Following up on @AdamO's comment,this sounds like it might be a miscommunication between you and your advisor; you probably want to use a multiplicative offset, or (more easily) just use a ratio of (flight time/distance) as your response variable. I'll start by explaining what your current model does. I'm going to leave out the random effects, since they're not relevant to the particular question. I'm also going to include the offset term in the formula itself; this is equivalent but I find it slightly easier to understand. ``` Flighttime ~ Temperature + offset(Distance) ``` This fits the model $$ F = \beta_0 + \beta_T \textrm{Temperature} + \textrm{Distance} + \epsilon $$ (where $\epsilon$ is a residual Gaussian error term). The first clue that this doesn't make sense it that the units don't match. If flight time is measured in minutes and distance in meters, then we have different units on each side. (The units of $\beta_0$ would normally match the units of the response [time in this case], the units of $\beta_T$ would be (time/temperature) so that the units of $\beta_T \textrm{Temperature}$ would be [time], and would match the response. Forgetting about the (probably incorrect) offset term for the moment, the meaning of the parameters you're getting in the range from 300-600 is (assuming that these are the units for $\beta_T$) that a 1 °C change (assuming that you're measuring temperature in °C) would lead to an expected change in flight time of 300-600 minutes. Either your insects are very sensitive to temperature, or something is messed up (if you post some plots of your data or at least the results of `summary(fitted_model)` we might be able to say more ...). You could fit a multiplicative offset by using a logarithmic link model: ``` glmer(Flighttime ~ Temperature + (1|IndividualID) + offset(log(Distance)), data=KMIdata, family = gaussian(link = "log")) ``` This is fitting the model $F = \exp(\beta_0 + \beta_T T + \log D + \epsilon)$, which if you do the algebra (separate out $\exp(\log D) = D$ from the $\exp()$ term and divide both sides by $D$) is equivalent to $F/D = \exp(\beta_0 + \beta_T T + \epsilon)$. This assumes an exponential relationship between flight time (scaled by distance) and temperature, and a log-Normal rather than a Normal response distribution, but this could very well be reasonable for your situation. People don't usually use offsets for models like this (i.e., continuous responses/linear models) because there's nothing stop you from fitting $F/D = \beta_0 + \beta_T$ directly. This doesn't work right for Poisson models used for count data (you can only fit a Poisson model to discrete counts, not to ratios), which is why people use offsets in this case.
null
CC BY-SA 4.0
null
2023-05-02T21:55:03.933
2023-05-02T21:55:03.933
null
null
2126
null
614713
1
null
null
0
18
I have several questions regarding equation (2) from the paper: [Calibrated Structured Prediction (Kuleshov, 2015)](https://papers.nips.cc/paper_files/paper/2015/file/52d2752b150f9c35ccb6869cbf074e48-Paper.pdf). I believe most of my questions stem from not understanding the quantity: $T(x)=\mathbb{E}[y|F(x)]$ defined as "the true probability $y=1$ given that $x$ received a forecast $F(x)$. Q1: How would you write "$y-T(x)$ has expectation 0 conditioned on $F(x)$"? $$ \mathbb{E}[y-T(x)|F(x)] =0$$ or $$\mathbb{E}[(y-T(x))|F(x)]=0$$ Q2: Why is this statement true: "$y-T(x)$ has expectation 0 conditioned on $F(x)$"? And does the validity of this statement depend on having a "good" forecaster, $F(x)$? Q3: How is $T$ used to decompose the $l_2$ prediction loss? The authors decompose it into... $$\mathbb{E}[(y-F(x))^2] = \mathbb{E}[(y-T(x))^2]+\mathbb{E}[(T(x)-F(x))^2]$$ But starting from the equation below, I'm not sure where to go... $$\mathbb{E}[(y-F(x))^2] = \mathbb{E}[y^2-2\cdot y \cdot F(x) + F(x)^2]$$ I have tried to work backwards by also expanding the authors' original equation to ... $$\mathbb{E}[y^2-2 \cdot y \cdot T(x) + T(x)^2 + T(x)^2 -2 \cdot F(x) \cdot T(x) + F(x)^2]$$ then setting that equation to the expanded form of $\mathbb{E}[(y-F(x))^2]$: $$\mathbb{E}[y^2-2 \cdot y \cdot T(x) + 2 \cdot T(x)^2 -2 \cdot F(x) \cdot T(x) + F(x)^2] = \mathbb{E}[y^2-2\cdot y \cdot F(x) + F(x)^2]$$ $$\mathbb{E}[-2 \cdot y \cdot T(x) + 2 \cdot T(x)^2 -2 \cdot F(x) \cdot T(x)] = \mathbb{E}[-2\cdot y \cdot F(x)]$$ $$\mathbb{E}[- y \cdot T(x) + T(x)^2 - F(x) \cdot T(x)] = \mathbb{E}[- y \cdot F(x)]$$ But at this point, I am stuck. I do not see how to leverage "$y−T(x)$ has expectation 0 conditioned on $F(x)$" to understand or simplify the equation further. Thank you for reading! Any and all guidance is greatly appreciated.
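If it helps, here is a sketch (my addition) of the step that seems to be missing: expand around $T(x)$ and use the tower property, noting that both $T(x)$ and $F(x)$ are functions of $F(x)$ alone. $$\mathbb{E}[(y-F(x))^2] = \mathbb{E}\big[\big((y-T(x)) + (T(x)-F(x))\big)^2\big] = \mathbb{E}[(y-T(x))^2] + \mathbb{E}[(T(x)-F(x))^2] + 2\,\mathbb{E}[(y-T(x))(T(x)-F(x))],$$ and the cross term vanishes because, conditioning on $F(x)$, $$\mathbb{E}\big[(y-T(x))(T(x)-F(x))\big] = \mathbb{E}\Big[(T(x)-F(x))\,\mathbb{E}\big[y-T(x)\mid F(x)\big]\Big] = 0,$$ since $T(x)=\mathbb{E}[y\mid F(x)]$ implies $\mathbb{E}[y-T(x)\mid F(x)]=0$ by definition (which also addresses Q2: the statement holds for any forecaster $F$, good or bad).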
Question on Calibrated Structured Prediction (Kuleshov, 2015)
CC BY-SA 4.0
null
2023-05-02T22:27:58.273
2023-05-02T22:27:58.273
null
null
380242
[ "machine-learning", "mathematical-statistics", "calibration" ]
614714
1
null
null
0
16
Let us assume we have the data modeled $D|\pi\sim Binomial (N,\pi)$, where we assume $N$ is given throughout. We also have that $\pi|\theta \sim V( \theta )$ where $V$ is a known distribution and $\theta$ is a vector of parameters. Let us assume we know $f(\theta)$ and it is easy to compute. I tried applying this model through the following process in Stan: - Sample $\theta$ from $f(\theta)$ - Sample $\pi$ from $V(\theta)$ given the above value of $\theta$. - Sample $D$ from $Binomial(N,\pi)$. The following is the code I used in the model part of Stan: ``` theta ~ prior(prior_params); pi ~ V(theta); defaults ~ binomial(credits,pi); ``` However, when I run the chains, the parameters $\theta$ don't seem to respond to the data. In fact, $f(\theta|D) \approx f(\theta)$. I don't know why this happens, probably due to a misspecification of the model. I think it might have to do with the fact that the likelihood is responding only to $\pi$, this is, $f(D|\pi,\theta) = f(D|\pi)$, but I thought that through the sampling of $\pi|\theta$ the model would "take into account" $\theta$.
Intermediate variables in Bayesian model for binomial data
CC BY-SA 4.0
null
2023-05-02T22:43:01.763
2023-05-02T22:43:01.763
null
null
221201
[ "bayesian", "binomial-distribution", "likelihood", "stan" ]
614716
1
null
null
1
9
I am trying to determine which group (A or B) would be better suited for my final analysis. Group a was derived from certain occupations in a specific industry while group B was derived from different types of industries (including the specific industry in group A). I am just trying to figure out how big of a difference this will make on my outcome if I used 1 group versus the other and therefore need to test this. Since observations are not independent and not normal, I would assume that the mann whitney u and t test are not plausible. Suggestions? Thanks in advance. Edit: outcome is numeric and continuous
trying to determine the difference between two groups, what statistical test should I use? They are not independent from each other
CC BY-SA 4.0
null
2023-05-02T23:08:17.057
2023-05-02T23:13:49.917
2023-05-02T23:13:49.917
387089
387089
[ "hypothesis-testing", "statistical-significance", "inference" ]
614717
2
null
307776
0
null
When you calculate in theory, you are acting as if you know the true parameter values. When you fit to data, the fitting is your way of guessing what those parameter values truly are. Much of statistical inference is about how to put yourself in a position to consistently make good guesses, but you still need to make a guess and estimate those parameters. The variance calculation with the $n-p-1$ denominator gives an unbiased estimate of the true error variance. This is a desirable property for our estimate (guess) to have. Conversely, a variance calculation with an $n$ or $n-1$ denominator makes for a biased estimator. While there can be reasons to like biased estimators over unbiased estimators, all else equal, we like unbiased estimators (but that "all else equal" matters).
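A quick simulation (my addition) illustrating the bias point for a regression with $p = 2$ predictors and true error variance equal to 1:

```r
set.seed(1)
n <- 30; p <- 2; sigma2 <- 1
est <- replicate(5000, {
  X <- matrix(rnorm(n * p), n, p)
  y <- drop(1 + X %*% c(0.5, -0.5)) + rnorm(n, sd = sqrt(sigma2))
  rss <- sum(resid(lm(y ~ X))^2)
  c(unbiased = rss / (n - p - 1), naive = rss / n)
})
rowMeans(est)  # the (n - p - 1) version averages near 1; dividing by n underestimates
```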
null
CC BY-SA 4.0
null
2023-05-02T23:38:47.813
2023-05-02T23:38:47.813
null
null
247274
null
614718
2
null
446549
0
null
These represent different estimation techniques, arguably different models. For `lm`, this is the classic OLS linear regression that minimizes the sum of squared residuals to estimate the regression parameters. $$ \hat y_i = \hat\beta_0 + \hat\beta_1x_{i1} + \dots + \hat\beta_px_{i,p} \\ \hat\beta = \underset{\hat\beta}{\arg\min}\left\{ \overset{N}{\underset{i=1}{\sum}}\left( y_i - \hat y_i \right)^2 \right\} $$ For `quantreg::rq`, this estimates all kinds of quantile models. Explicitly, quantile models estimate conditional quantiles instead of conditional means. They do this by calculating parameter estimates that minimize a different criterion than the sum of squared residuals. Define the following for an individual observation and its prediction. $$ l_{\tau}(y_i, \hat y_i) = \begin{cases} \tau\vert y_i - \hat y_i\vert, & y_i - \hat y_i \ge 0 \\ (1 - \tau)\vert y_i - \hat y_i\vert, & y_i - \hat y_i < 0 \end{cases} $$ Use this to define the optimization. $$ \hat y_i = \hat\beta_0 + \hat\beta_1x_{i1} + \dots + \hat\beta_px_{i,p} \\ \hat\beta = \underset{\hat\beta}{\arg\min}\left\{ \sum_{i=1}^Nl_{\tau}(y_i, \hat y_i) \right\} $$ Finally, for `MASS::lqs`, the various methods represent different ways of estimating the regression coefficients. The documentation gets into more detail and gives references for learning more about robust regression. Briefly, the estimation techniques in this function are supposed to fit the model to just the "good" points (in the words of the authors). > Fit a regression to the good points in the dataset, thereby achieving a regression estimator with a high breakdown point. The various methods that can be passed to the `method` argument represent different ways of determining the "good" points and how to do the estimation with them.
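A minimal side-by-side sketch (my addition) with simulated heavy-tailed data, just to show the three estimators in action:

```r
library(quantreg)  # rq
library(MASS)      # lqs

set.seed(1)
x <- rnorm(200)
y <- 1 + 2 * x + rt(200, df = 2)   # heavy-tailed errors, effectively a few outliers

coef(lm(y ~ x))                    # OLS: minimizes the sum of squared residuals
coef(rq(y ~ x, tau = 0.5))         # median (0.5-quantile) regression
coef(lqs(y ~ x, method = "lts"))   # least trimmed squares: fits the "good" points
```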
null
CC BY-SA 4.0
null
2023-05-02T23:56:21.167
2023-05-02T23:56:21.167
null
null
247274
null
614720
2
null
418800
0
null
The easiest way to do this is to change the classification threshold. As you develop your machine learning knowledge, you will see that you destroy much potentially useful information by having any threshold at all, but for now, just changing the threshold sounds like a reasonable first idea. When you run, for instance, a neural network, you get predictions on a continuum. The standard continuum in this situation is on the interval $[0,1]$. Many software functions turn these into hard classifications by setting a threshold of $0.5$: above the threshold gets classified as one category, and below the threshold gets classified as the other category. However, you do not have to use $0.5$ as the threshold. If you want more specificity, so harder to be classified as positive, you might want to up the threshold. Perhaps set the threshold to $0.6$ or $0.8$ to get the specificity you desire. This will improve your specificity at the expense of sensitivity. The tradeoff can be visualized in receiver-operator characteristic (ROC) curves, such as those implemented by `pROC::roc` in `R` software. This function even gives a printout of what sensitivity and specificity are achieved at each threshold. Below, I give a quick demonstration of this, and I discuss in more detail [here](https://stats.stackexchange.com/a/603278/247274). ``` library(pROC) N <- 25 p <- rbeta(N, 1, 1) y <- rbinom(N, 1, p) r <- pROC::roc(y, p) d <- data.frame( threshold = r$thresholds, sensitivity = r$sensitivities, specificity = r$specificities ) d ################################################################################ # # OUTPUT # ################################################################################ threshold sensitivity specificity 1 -Inf 1.00000000 0.00000000 2 0.03986668 1.00000000 0.07142857 3 0.05315755 1.00000000 0.14285714 4 0.07842079 1.00000000 0.21428571 5 0.12086679 1.00000000 0.28571429 6 0.14478513 1.00000000 0.35714286 7 0.16003195 1.00000000 0.42857143 8 0.21402721 1.00000000 0.50000000 9 0.26453714 1.00000000 0.57142857 10 0.31080317 1.00000000 0.64285714 11 0.35289509 0.90909091 0.64285714 12 0.37692100 0.90909091 0.71428571 13 0.43799047 0.81818182 0.71428571 14 0.49503947 0.81818182 0.78571429 15 0.54152179 0.81818182 0.85714286 16 0.58273907 0.81818182 0.92857143 17 0.60398583 0.72727273 0.92857143 18 0.63121729 0.63636364 0.92857143 19 0.66352988 0.63636364 1.00000000 20 0.73563750 0.54545455 1.00000000 21 0.84121309 0.45454545 1.00000000 22 0.89278788 0.36363636 1.00000000 23 0.92770504 0.27272727 1.00000000 24 0.96569430 0.18181818 1.00000000 25 0.98375395 0.09090909 1.00000000 26 Inf 0.00000000 1.00000000 ``` (There are critics of ROC curves and even sensitivity and specificity in general, among them being [Frank Harrell](https://stats.stackexchange.com/users/4253/frank-harrell), whose criticisms of these are worth reading.) Most "classifiers" actually make predictions on a continuum that are then binned according to some threshold to make categorical predictions. If `caret` does not allow you to access those continuous predictions, the package is less user-friendly than it first seems. (My guess is that you can get them, however.)
null
CC BY-SA 4.0
null
2023-05-03T00:21:24.807
2023-05-03T00:21:24.807
null
null
247274
null
614721
1
null
null
0
18
I'm testing a number of ideology scales as IVs and seeing how they predict a number of well-being scale DVs. It turns out that my ideology scales individually seem to predict well-being, but when taken together, they don't, as the lack-of-fit table shows a Pillai's trace for the combined effect on the DVs as non-significant (p = .544). I'm struggling to understand why this is, and what I should do about it. I've tested the assumptions and there doesn't seem to be any multicollinearity, and the sample size is fairly large (~400). One thing that comes to mind is that the ideologies basically measure separate, opposing things, and so one of the predictors has a positive association with well-being, and the other has a negative association. Is it possible that they're 'cancelling each other out' and this is the problem? Or is that nonsense? Are the individual predictors still relevant, or does the lack of model fit ruin everything? Any guidance would be much appreciated!
Multivariate multiple regression - individual predictors are significant, but overall model isn't?
CC BY-SA 4.0
null
2023-05-03T00:27:10.037
2023-05-03T00:34:08.920
2023-05-03T00:34:08.920
339056
339056
[ "regression", "multivariate-analysis", "model" ]
614722
1
614728
null
1
53
I know how to calculate $E[Y|X=x]$ from knowing how to calculate $P(Y|X=x)$. What I don't understand is the meaning of $P(Y=y|X)$. Wouldn't $P(X) = 1$ since it is taken over the entire random variable? Or is it constraining X to some implicit set like $x_1 < X < x_2$, so you can then apply Bayes' theorem? I have seen people suggest calculating $E[Y|X] = \int y p(y|X) dy$ which I don't understand. My understanding of calculating $\text{Var}(E[Y|X])$ as it stands is calculating $E[Y|X=x]$ and then marginalizing over X for the variance part, but I think I am missing something when it comes to $P(y|X)$ or going from $E[Y|X]$ to $\text{Var}(E[Y|X])$.
How does one calculate $\text{Var}(E[Y|X])$? and other related quantities
CC BY-SA 4.0
null
2023-05-03T00:49:05.397
2023-05-03T04:14:44.147
null
null
234463
[ "probability", "conditional-probability", "conditional-expectation" ]
614723
1
614840
null
5
147
We are usually told the following: - In the Frequentist Probability Approach, we are told that: the data is random but the parameters being estimated are fixed - In Bayesian Probability Approach, we are told that: the data is fixed but the parameters being estimated are random Conceptually, both approaches have their advantages: - Often in the real world, the data we collect can be thought of as random - for example, if we collect the same data on a different day, the data might not be exactly the same as the data from another day. Therefore, treating the data as random might have its benefits. - On the other hand, the parameters we are trying to estimate can also be thought of as random - many times the "true" values of these parameters are not directly observable and might inherently have some level of randomness encoded within themselves. Therefore, treating the parameters as random might also have its benefits. I was wondering about the following question: Is it possible to combine both of these approaches together such that both the data and the parameters are considered as random? This way we can get the best out of both worlds? Thanks!
Combining Bayesian and Frequentist Estimation into a Single Model?
CC BY-SA 4.0
null
2023-05-03T00:51:55.517
2023-05-04T05:46:14.197
2023-05-03T12:10:11.757
343075
77179
[ "bayesian", "frequentist" ]
614724
1
null
null
2
38
This question was flagged as offtopic, but I am sorry, this is NOT a question which "focuses on programming, debugging, or performing routine operations, or it asks about obtaining datasets". This is the question about the very essence of statistical quality control. Again: I got stuck on a simple application of ISO 2859 and Dodge-Romig and other sampling tables. Let's take a simple example of sampling according to ISO 2859 as featured on [www.sqconline.com](http://www.sqconline.com) - I am trying to use a calculator to determine the sampling plan and enter the batch size 3201 to 10000, AQL=10%, Inspection Level = II, Normal inspection. The calculator gives me the following values for Single Sampling Plan: The Single sampling procedure is: Sample 125* items. If the number of non-conforming items is 21 or less --> accept the lot. 22 or more --> reject the lot. NOW, I am sorry but 21 per 125 is 0.168, which is 16.8%. How is this possible that by making sure that there are no more than 21 defects on the sample of 125 items, which is making sure that the sample average error rate is not worse than 16.8%, I am ensuring that AQL does not exceed 10%? The simple logic says that if I want to get low AQL my sample should be BETTER than AQL on the population, not worse. I would be extremely grateful for any explanation of this "paradox".
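One way to see what the plan actually guarantees (my addition) is to compute its operating characteristic (OC) curve: as I understand ISO 2859-style AQL plans, the acceptance number c = 21 is chosen so that lots whose true quality is at the AQL (10% nonconforming) are accepted with high probability, protecting the producer from rejecting acceptable lots by sampling bad luck; it is not a promise that accepted lots average 10% or better.

```r
# OC curve for the single sampling plan n = 125, c = 21
p  <- c(0.05, 0.10, 0.15, 0.20, 0.25)
Pa <- pbinom(21, size = 125, prob = p)   # P(21 or fewer nonconforming in the sample)
round(data.frame(p, Pa), 3)

curve(pbinom(21, 125, x), from = 0, to = 0.35,
      xlab = "true fraction nonconforming", ylab = "P(accept lot)")
abline(v = 0.10, lty = 2)   # at the AQL the plan accepts with high probability;
                            # the acceptance number 21 > 12.5 is the buffer that keeps
                            # the producer's risk at the AQL small.
```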
Need help to interpret sampling numbers
CC BY-SA 4.0
null
2023-05-03T01:28:05.373
2023-05-04T14:20:33.440
2023-05-04T05:06:38.847
387095
387095
[ "sampling" ]
614725
1
614767
null
1
61
Using R language, I was mainly trying to understand if 0.25 quantile means `value < 25 percentage of the values` or `value <= 25 percentage of the values` And similarly for 0.75 quantile I tried the following code : ``` test <- c(1, 2, 3, 4, 5, 6, 7, 8) quantile(test) 0% 25% 50% 75% 100% 1.00 2.75 4.50 6.25 8.00 ``` I'm unable to explain why 25% is 2.75 and not 2 or 2.5 or 3 I checked the documentation [https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/quantile](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/quantile) and found there are multiple algorithms used. However I don't really understand type=7, in simple terms. Could someone please explain this ? I checked this question : [Understanding The Algorithms Behind Quantile() in R](https://stats.stackexchange.com/questions/189986/understanding-the-algorithms-behind-quantile-in-r) but example used there seems to be different (meaning it results in a value from the numbers provided). Also based on the answer to that question "and the proportion greater than or equal to qp is at most 1−p", 7 numbers are greater than or equal i.e. 7/9 = 0.778, 1-1/4 = 0.75. 0.778 is not atmost 0.75. So the definition in that answer is not really correct ?
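For what it's worth, the type-7 rule can be written out by hand (my addition); it interpolates between order statistics, which is exactly where the 2.75 comes from:

```r
test <- c(1, 2, 3, 4, 5, 6, 7, 8)
p <- 0.25
n <- length(test)

# Type 7: h = (n - 1) * p + 1, then interpolate between the floor(h)-th
# and (floor(h) + 1)-th order statistics
h  <- (n - 1) * p + 1                          # 7 * 0.25 + 1 = 2.75
x  <- sort(test)
lo <- floor(h)
q  <- x[lo] + (h - lo) * (x[lo + 1] - x[lo])   # 2 + 0.75 * (3 - 2) = 2.75
q
quantile(test, p, type = 7)                    # matches
```

So the 2.75 is not meant to be one of the data values: type 7 returns the value 25% of the way along the sorted data, counting positions from 1 to n and interpolating linearly between neighbouring order statistics.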
Understanding quantile function in R in simple terms
CC BY-SA 4.0
null
2023-05-03T02:18:57.863
2023-05-04T02:36:09.990
2023-05-03T02:40:58.373
74839
74839
[ "r", "quantiles" ]
614726
2
null
614566
0
null
There are six different regions (in the first quadrant of the $x$-$y$ plane) to consider depending on the values of $X, Y, C$. - If $Y > C > X$, then $Z=\max\left\{g(X,Y),g(X,C)\right\} = \max\left\{0, 0 \right\} = 0.$ - If $Y > X > C$, then $Z=\max\left\{g(X,Y),g(X,C)\right\} = \max\left\{0, \frac XC \right\} = \frac XC.$ - If $X > Y > C$, then $Z=\max\left\{g(X,Y),g(X,C)\right\} = \max\left\{\frac XY, \frac XC \right\} = \frac XC.$ - If $X > C > Y$, then $Z=\max\left\{g(X,Y),g(X,C)\right\} = \max\left\{\frac XY, \frac XC \right\} = \frac XY.$ - If $C > X > Y$, then $Z=\max\left\{g(X,Y),g(X,C)\right\} = \max\left\{\frac XY, 0 \right\} = \frac XY.$ - If $C > Y > X$, then $Z=\max\left\{g(X,Y),g(X,C)\right\} = \max\left\{0,0 \right\} = 0.$ Can you take it from here?
null
CC BY-SA 4.0
null
2023-05-03T03:45:06.543
2023-05-03T03:45:06.543
null
null
6633
null
614727
1
null
null
2
28
I have a dataset of blood pressure recordings measured over 36 hours, which I plotted with BP (aka MAP) on the Y axis, and minutes on the X axis. I then plotted the blood pressure percentiles for age (horizontal lines - 1st%ile, through 99th%ile) running across the graph. I filled in the entire area under the blood pressure tracing using geom_area, and then shaded in the area under the 10th% and 50th%iles. I would like to calculate how much time the person spent with a blood pressure in each "zone". Ie. how many minutes did this patient have a blood pressure less than the 50th%ile? less than the 10th? Is this possible with this stacked area graph? [](https://i.stack.imgur.com/AQZWP.png) I am a beginner beginner. Any help would be appreciated!
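A possible starting point (my addition): if the readings are (roughly) evenly spaced, time in a zone is just the number of readings in that zone times the sampling interval, so you do not need the plotted areas at all. The data frame and column names below (`bp`, `minutes`, `MAP`, and the percentile cut-offs `p10`, `p50`) are placeholders for whatever yours are called.

```r
# Minutes spent below each percentile cut-off, assuming one reading per minute
sum(bp$MAP < bp$p10)                        # minutes below the 10th percentile
sum(bp$MAP >= bp$p10 & bp$MAP < bp$p50)     # minutes between the 10th and 50th

# If readings are unevenly spaced, weight each reading by the time until the next one
dt <- c(diff(bp$minutes), 0)                # width of each interval, in minutes
sum(dt[bp$MAP < bp$p50])                    # time below the 50th percentile
```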
How to calculate the area (or Time) spent in each "zone" on a stacked area graph?
CC BY-SA 4.0
null
2023-05-03T03:46:49.787
2023-05-03T03:46:49.787
null
null
387100
[ "r", "data-visualization", "dataset" ]
614728
2
null
614722
4
null
When you specify the conditional expectation (or other moment) of a random variable $Y$ given a specific value of the conditioning variable $X=x$, the result is a function of the argument $x$. In generic function notation, we have: $$\mathbb{E}(Y|X=x) = f(x).$$ When you replace the argument value $x$ with the actual random variable $X$ (by declining to specify a value for the conditioning variable) the result is a function of the random variable $X$. In generic function notation, we have: $$\mathbb{E}(Y|X) = f(X).$$ Because this latter form is a function of a random variable, it is itself a random variable. That is, the conditional expectation of $Y$ conditional on $X$ is a random variable. The distribution of this random variable is obtained through the appropriate transformation rules using the distribution of $X$. --- An example: Suppose we have $Y|X=x \sim \text{N}(2x, 1)$ and $X \sim \text{N}(\mu, 1)$. Then we have the conditional expectation function: $$f(x) \equiv \mathbb{E}(Y|X=x) = 2x.$$ The random variable version of this is: $$f(X) = \mathbb{E}(Y|X) = 2X \sim \text{N}(2 \mu, 4).$$ So in this case you would have: $$\mathbb{V}(\mathbb{E}(Y|X)) = \mathbb{V}(2X) = 4.$$
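A quick Monte Carlo check of this example (my addition), taking $\mu = 1$:

```r
set.seed(1)
mu <- 1
X <- rnorm(1e6, mean = mu, sd = 1)
EY_given_X <- 2 * X           # the random variable E[Y | X] = 2X
var(EY_given_X)               # close to 4

# Law of total variance: Var(Y) = E[Var(Y|X)] + Var(E[Y|X]) = 1 + 4
Y <- rnorm(1e6, mean = 2 * X, sd = 1)
var(Y)                        # close to 5
```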
null
CC BY-SA 4.0
null
2023-05-03T04:08:20.913
2023-05-03T04:14:44.147
2023-05-03T04:14:44.147
173082
173082
null
614730
1
null
null
1
39
Suppose I have a feed forward neural network which approximates a value, say $Y_0$. The analytical value of $Y_0$ is given. The plot of the network approximation of $Y_0$ at each step is given as follows. [](https://i.stack.imgur.com/wA3fW.png) We can see (visually) that the approximation of $Y_0$ converges to its analytical value. But how can we mathematically say that the approximation converges? Note: There is a definition of the convergence of a sequence. > A sequence $\lbrace x_n \rbrace_{n=1}^{\infty}$ in $\mathbb{R}$ is said to converge to $x \in \mathbb{R}$ if for every $\varepsilon > 0$ there exists a natural number $K(\varepsilon)$ such that for all $n \geq K(\varepsilon)$, the terms $x_n$ satisfy $\lvert x_n - x \rvert < \varepsilon$. In this definition, the sequence is infinite and usually follows a known rule, for example $\lbrace x_n \rbrace_{n=1}^\infty = \lbrace \frac{1}{n} \rbrace_{n=1}^\infty$. In contrast, the approximation produced by a feed forward neural network (or a deep neural network in general) is usually a finite sequence whose length equals the number of epochs/steps. Is there a definition of convergence in the context of deep neural networks? Thank you.
What is the definition of convergence in the context of deep neural networks?
CC BY-SA 4.0
null
2023-05-03T04:29:15.613
2023-05-03T05:29:41.053
null
null
387019
[ "neural-networks", "convergence" ]
614731
2
null
614730
1
null
The term is often used loosely by the machine learning community. When people say that a “neural network converged”, they mean that the training stopped improving. This could mean that they picked some arbitrary $\epsilon$ threshold and some metric that they use for tracking the progress of the training changes from epoch to epoch by less than $\epsilon$. In other cases, it could mean that someone eyeballed the plot showing the training progress and decided that it looks “flat enough”. Finally, unlike in mathematics, there are often no convergence guarantees. The training can reach a minimum as measured with validation loss and then the loss can start getting worse as the model starts overfitting. Also, reaching some point where it does not improve much does not mean that if you trained longer you would not get a better performing model, or that another model would not be better.
null
CC BY-SA 4.0
null
2023-05-03T05:29:41.053
2023-05-03T05:29:41.053
null
null
35989
null
614733
1
null
null
1
47
I am a beginner in ML, and in my company I have been asked to come up with models that can check whether there are data quality issues in any given table. It will be an unsupervised learning task and I only need to do univariate analysis, i.e., at a time I will look for data quality issues in a single column of a given table. Now here are my two questions: - Is this problem the same as anomaly detection, so can I apply those models for data quality checks? - What is the difference between data quality and anomaly detection, if there is any?
Difference between anomaly detection and data quality check
CC BY-SA 4.0
null
2023-05-03T05:44:53.437
2023-05-07T11:11:45.240
null
null
382761
[ "anomaly-detection", "quality-control" ]
614735
1
615300
null
2
116
I am trying to understand the interpretation of an "Odds Ratio calculated the Simple Way" vs the interpretation of an "Odds Ratio calculated from a Logistic Regression". Suppose I have a (sample) dataset that contains information on medical information on some patients and if they have a certain disease or not: ``` gender age weight disease patient_id m 42 73.89227 y 1 m 39 78.01266 n 2 m 42 84.91308 y 3 f 49 95.78418 n 4 m 63 80.91756 n 5 f 42 71.42108 n 6 ``` (Simple Way) Approach 1: Based on the above dataset, suppose I were to summarize the number of male patients and female patients who have the disease and don't have the disease in a contingency table: $$ \begin{array}{c|c|c} & \text{Disease} & \text{No Disease} \\\hline \text{Male} & a & b \\\hline \text{Female} & c & d \end{array} $$ Based on this information, I can calculate the Odds Ratio for Gender as: $$ \text{Odds Ratio}_{\text{gender}} = \frac{a \times d}{b \times c} = x $$ To me, this means that if all other variables (e.g. age, weight) are not taken into consideration and the dataset we have is well representative of the population - if we were to pick a random man from the population and a random woman from the population, the odds of the man having this disease vs the odds of the female having this disease is $x:1$ . In other words, the odds of having the disease increases by a factor of $x$ if someone is a male compared to a female. (note: if you were to randomly pick a person from the population with this disease - the ratio of this person being a male vs being a female is the Relative Risk) (Logistic Regression) Approach 2: Based on the above dataset, I could also fit a Logistic Regression Model to this data and calculate the Odds Ratio for Gender as: $$ \text{log}\left(\frac{\text{P(disease = yes)}}{1 - \text{P(disease = yes)}}\right) = \text{logit}(\text{P(disease = yes)}) = \beta_0 + \beta_1 \times \text{age} + \beta_2 \times \text{gender} + \beta_3 \times \text{weight} $$ $$ \text{Odds Ratio}_{\text{gender}} = \frac{\exp(\beta_0 + \beta_1 \times \text{age} + \beta_2 \times (\text{gender} = \text{Male} = 1) + \beta_3 \times \text{weight})}{\exp(\beta_0 + \beta_1 \times \text{age} + \beta_2 \times (\text{gender} = \text{Female} = 0) + \beta_3 \times \text{weight})} = e^{\beta_2} = z $$ To me, this means that if all other variables are held as fixed (i.e. also not taken into consideration) and the sample is considered representative of the population - the odds of having the disease increases by a factor of $z$ if someone is a male compared to if someone is a female. My Question: For the same data, I can clearly calculate Odds Ratio using a "simple approach" and using a more complicated approach using "Logistic Regression" - yet in both approaches, I am still estimating the same quantity (i.e. increase odds relative to some change in variable when all other variables are equal). What are the advantages of using either approach? In my opinion, the advantage is that the Odds Ratio calculated using Logistic Regression is "adjusted" to take into account the influence of other variables - whereas the Odds Ratio calculated using the simple way does not take into account the influence of other variables. For example, suppose when I fit a Logistic Regression model to this data, I find that the "age" variable contributes a lot more (e.g. size of "age" regression coefficient, p-value) to the probability of having the disease compared to the "gender" variable. 
As a result, the size of the $\beta_1$ coefficient is likely to be greater than $\beta_2$. Thus, if I do decide to estimate the increase in odds of having the disease for a man vs a woman, the value of this Odds Ratio will be toned down, seeing as the influence of gender is not that important. On the other hand, when I interpret the Odds Ratio for Gender using the "simple approach" - since I will not be taking into consideration the contributions of other variables, I might end up overestimating or underestimating the effect of Gender on the increase in odds of developing the disease. This is akin to Omitted Variable Bias or Confounding. To me, this sounds like the following example: Suppose I frequently attended basketball games for the Chicago Bulls - someone could count the number of times that the Chicago Bulls win vs lose when I am at the game vs not at the game. If the Chicago Bulls win a lot of games in general, someone could falsely conclude that my presence increases the odds of them winning! However, someone could add another variable to the analysis, such as "if Michael Jordan was playing" - now they would see that, when the contributions of both of us are jointly taken into consideration, the odds of the Chicago Bulls winning increase very little based on my presence alone! Can someone please tell me if my understanding of the above concepts is correct? Thanks!
Interpretation of Odds Ratio: Logistic Regression vs Simple Way
CC BY-SA 4.0
null
2023-05-03T06:07:24.727
2023-05-31T18:08:23.517
2023-05-31T18:08:23.517
77179
77179
[ "regression", "logistic", "odds-ratio" ]
614736
2
null
614683
1
null
> can we calculate the covariance matrix based on these models? No, we cannot, because these models tell us nothing about the dependence between the series, only about the time dependence within each of the series. There are multivariate conditional mean models such as VAR that deal with cross-correlations (and multivariate GARCH models such as BEKK or DCC that deal with the corresponding relationships for the second moments).
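As a rough Python sketch of the VAR idea, one could fit a small VAR with statsmodels and read off the estimated innovation covariance; the toy data, column names and lag order below are just assumptions for illustration.

```python
# Hedged sketch: fit a VAR to two series and inspect the residual (innovation)
# covariance matrix, which captures the contemporaneous cross-dependence.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# toy data: two cross-correlated AR(1)-like series
e = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.3], [0.0, 1.0]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + e[t]
df = pd.DataFrame(y, columns=["y1", "y2"])

res = VAR(df).fit(maxlags=2, ic="aic")   # lag order chosen by AIC (an assumption)
print(res.sigma_u)                        # estimated covariance of the innovations
```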
null
CC BY-SA 4.0
null
2023-05-03T06:39:54.880
2023-05-03T06:39:54.880
null
null
53690
null
614737
1
null
null
0
45
This question is more generally related to block averaging, but to set the scene: I am doing molecular dynamics simulations (e.g. computer simulations of atoms) wherein I have 3 repeats of a system. The repeats have identical starting structures, but their starting velocities were generated independently. From these simulations I do various analyses for each "frame" of the simulation, leading to a situation where the data for each frame depend on the previous frame. To account for this I decided to use block averaging. I am using a Python package called "pyblock" to obtain the optimal number of blocks per repeat. The optimal number of blocks is, however, not the same for each repeat. Note that I am not highly skilled in statistics, so some of my questions may seem trivial to you, or be a bit unclear. My questions: - Is there any statistical problem related to comparing multiple sets of block-averaged data that contain different numbers of blocks (but the same number of underlying data points)? - What would be the "correct" way to go about doing statistics across all data sets (e.g. mean, standard deviation, variance, creating a box plot, etc.)? - Should one simply use the same number of blocks per dataset instead, and if so, how would one choose which number of blocks to use when the optimal number of blocks varies? Edit made on request for clarification on block averaging: Block averaging is the process of dividing a dataset into a number of smaller datasets, such that (in my case) the time dependence of individual data points is eliminated. I took the following image from a quick Google search; it clearly shows the original data (blue), blocks (dashed vertical lines) and the block averages (yellow). The reason why I am using block averaging is that I need to go from multiple sets of time-dependent data to a single time-independent statistical representation, such as a box plot. A more thorough explanation can be found in the documentation for [pyblock](https://pyblock.readthedocs.io/en/latest/tutorial.html): [](https://i.stack.imgur.com/2LYm2.png)
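To make the procedure concrete, a minimal sketch of the block-averaging step for one repeat might look like this (the block count here is an arbitrary assumption, not pyblock's choice):

```python
# Minimal block-averaging sketch: split a time-correlated series into
# contiguous blocks, average within each block, and treat the block means
# as approximately independent observations.
import numpy as np

def block_average(x, n_blocks):
    x = np.asarray(x)
    usable = (len(x) // n_blocks) * n_blocks           # drop the remainder
    block_means = x[:usable].reshape(n_blocks, -1).mean(axis=1)
    mean = block_means.mean()
    sem = block_means.std(ddof=1) / np.sqrt(n_blocks)  # standard error of the mean
    return block_means, mean, sem

rng = np.random.default_rng(1)
series = 0.01 * np.cumsum(rng.normal(size=10_000)) + rng.normal(size=10_000)
print(block_average(series, n_blocks=20)[1:])
```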
Compare blocks of different sizes from block averaging
CC BY-SA 4.0
null
2023-05-03T06:40:51.777
2023-05-03T06:54:33.190
2023-05-03T06:54:33.190
387105
387105
[ "time-series", "blocking" ]
614738
2
null
614696
1
null
Based on the ADF and KPSS tests, the first-differenced series is already stationary. (Note that presence of autocorrelation does not imply nonstationarity.) Thus you have no reason to take second differences. Differencing is used for accounting for unit roots, not just for any old autocorrelation. The second-differenced series will suffer from overdifferencing and will have a unit-root moving average (MA) pattern. Meanwhile, to deal with autocorrelation that is not due to a unit root, you can introduce autoregressive (AR) and MA terms or seasonal terms such as Fourier terms or seasonal dummies. Thus you get an ARIMA model (possibly with external regressors) or a regression with ARIMA errors. For the model to be statistically adequate, the residuals should be close to white noise.
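A rough Python sketch of this workflow, under toy assumptions (the simulated series and the (1,1,1) order are placeholders, not recommendations): test the differenced series, then keep d = 1 and let ARMA terms handle the remaining autocorrelation.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# toy series with a unit root plus some short-run autocorrelation
eps = rng.normal(size=300)
y = pd.Series(np.cumsum(eps + 0.5 * np.roll(eps, 1)))

dy = y.diff().dropna()
print("ADF p-value: ", adfuller(dy)[1])              # H0: unit root
print("KPSS p-value:", kpss(dy, regression="c")[1])  # H0: (level) stationarity

# keep d = 1 for the unit root; AR/MA terms absorb the remaining autocorrelation
fit = ARIMA(y, order=(1, 1, 1)).fit()
print(fit.summary())
```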
null
CC BY-SA 4.0
null
2023-05-03T06:46:15.540
2023-05-03T06:46:15.540
null
null
53690
null
614739
1
null
null
0
33
I'm using the sklearn.feature_selection.f_classif(X, y) function to compute the ANOVA F-value for a sample. I noticed that the returned F-statistic is not confined to a specific range; that is, it can take on very large values. This makes it difficult to evaluate how relevant a feature is to the target variable. For example, the Pearson correlation returns a value within the range -1 to 1, so that if I got a value like 0.99 I can conclude that there is high correlation (of course, given that the p-value is < 0.01). I know that the Pearson correlation is the scaled version of covariance: it is found by dividing the covariance by the product of the standard deviations of both variables. In a similar vein, is it possible to scale the ANOVA F-value to a specific range? You might say that for feature selection ranking the variables is enough, and that is true, but my aim is not just to do feature selection, but to understand the magnitude of the relationship between the input features and the target variable. An unbounded F-value does not help in this regard.
Can Anova F-Value be scaled to a specific range?
CC BY-SA 4.0
null
2023-05-03T06:53:09.943
2023-05-03T06:53:09.943
null
null
29475
[ "anova" ]
614740
1
null
null
0
7
Is it valid to test the following hypothesis with a binomial test? Or should a chi2-test be applied in this case? A factory produces different products (A-G) with different rates of production errors in % (i.e., a product can be defective or okay). The question of interest is whether a newly introduced product is more prone to production errors than the average of the old products: ``` Product Total % error A 28 14 B 49 10 C 33 6 D 44 9 E 68 10 F 28 10 G 55 14 New 60 12 ``` Thus my hypothesis would be: ``` H0: There is no difference in production errors (avg(A-G) ~= New) H1: New product is more error prone (New > avg(A-G)) ``` Then my intuition would be to apply a simple binomial test ``` Avg error % = ((0.14 * 28) + (0.10 * 49) + (0.06*33) + (0.09*44) + (0.1*68) + (0.1*28) + (0.14*55)) / 305 ==> 0.105 from scipy.stats import binom_test binom_test(x=12, n=60, p=0.105, alternative="greater") ==> 0.02 ``` This would tell me that there is a statistically significant difference, thus I could reject H0. Is it valid to test in this way, or would a chi2-test be more suited? Or are there even other possibilities to approach this?
Hypothesis testing with binomial test vs Chi2
CC BY-SA 4.0
null
2023-05-03T07:43:06.893
2023-05-03T07:53:41.003
2023-05-03T07:53:41.003
82610
82610
[ "hypothesis-testing", "statistical-significance", "binomial-distribution" ]
614741
1
null
null
1
13
[](https://i.stack.imgur.com/7Bt4K.png) I am applying k-means and DBSCAN to the same data set, which is in CSV format, but I'm unable to get good results. I have attached the DBSCAN results. I'm getting overlapping clusters and can't figure out why that is. The dimensionality of my data set is 10+ and I have plotted 2D graphs. When I did pre-processing I plotted some correlation plots on my raw data and found that many of the attributes have rectangular-shaped plots against one another, so I switched to DBSCAN, as I read somewhere that k-means is good for spherically shaped clusters. I'm new to this field of analytics, so I'm unable to figure out why my clustering results are not good. The following graph is plotted without noise points. Thanks [](https://i.stack.imgur.com/7Bt4K.png)
unable to figure out why overlapping clustering is occurring
CC BY-SA 4.0
null
2023-05-03T07:59:51.390
2023-05-11T10:18:14.040
2023-05-04T04:47:22.893
387109
387109
[ "machine-learning", "clustering", "unsupervised-learning", "dbscan" ]
614742
1
null
null
0
10
Let $f_n(x)$ be the probability density function of a continuous r.v. $X_n$. $X_n$ converges in distribution to $X$, i.e. $|P(X_n < x) - P(X < x)| \rightarrow 0$. On top of that, $E[|X_n|^p]$ converges to the $p$th absolute moment of $X$, i.e. $\int |x|^p (f_n(x) - f(x))dx \rightarrow 0$. With these ingredients, I would like to prove a non-uniform bound of the type: $$(1 + |x|^p)|f_n(x) - f(x)| = o(1)$$
General Non-Uniform Berry-Esseen
CC BY-SA 4.0
null
2023-05-03T08:04:45.683
2023-05-03T08:04:45.683
null
null
365245
[ "convergence", "polynomial", "bounds" ]
614743
1
null
null
0
9
Suppose that we want to test a restriction $\alpha=0$. Unfortunately, we cannot directly estimate the parameter of interest $\alpha$. Instead, we can calculate the log-likelihood functions with and without the restriction. That is, we have $\mathcal{L}_r(\beta,\gamma)$ and $\mathcal{L}_u(\beta)$ (the restricted and unrestricted log-likelihood functions). As the notation implies, we are facing a very special case: $\mathcal{L}_r$ contains one more parameter, $\gamma$. In this setup, if we implement the score (or Lagrange multiplier) test, we can use only the common parameter $\beta$. That is, we have to derive the score function from $\mathcal{L}_u$, and then plug the estimate $\hat{\beta}$ from $\mathcal{L}_{r}$ into the score. Here, my question is: does the score test statistic follow a chi-square distribution even in this special case?
The distribution of a score test statistic in a special case
CC BY-SA 4.0
null
2023-05-03T08:30:38.063
2023-05-03T08:30:38.063
null
null
375224
[ "maximum-likelihood", "chi-squared-test", "lagrange-multipliers" ]
614745
2
null
614673
1
null
Your Level-2 $N=30$ is too small to reliably estimate a covariance matrix among 18 items (i.e., $18*19/2 = 171$ unique (co)variances among country-level intercepts). If your items are designed to measure constructs that are characteristics of the Level-1 units, you can still expect the factor scores to have random intercepts across clusters (and have ICCs), just like the items themselves do. Fitting an overparameterized between-level model can frequently lead to convergence problems. [https://doi.org/10.1080/10705511.2018.1534205](https://doi.org/10.1080/10705511.2018.1534205) [https://doi.org/10.3390/psych3020012](https://doi.org/10.3390/psych3020012) Even fitting the same factor structure at Level 2 will be overparameterized for such a small sample of countries, unless you specify equality constraints that are consistent with your hypothesized factor structure and the clustered structure of the data. Specifically, metric/"weak" invariance across clusters implies that the factor loadings are equal across levels, and scalar/"strong" invariance across clusters implies that the Level-2 residual variances are zero. [https://doi.org/10.3389/fpsyg.2017.01640](https://doi.org/10.3389/fpsyg.2017.01640) Fitting a model with these constraints will reduce the overparameterization at Level 2 because the loadings are also informed by your huge Level-1 N = 18K. Thus, the only parameters to estimate at Level 2 will be the $6*7/2=21$ factor (co)variances. However, 21 is still a lot of parameters relative to 30 countries, so I imagine the SEs will be quite large if it does converge. Modification indices can be used to test the cross-level equality constraints on loadings, or to test the $H_0$ that Level-2 residual variances = 0. As the open-access publications linked above explain, freeing a residual variance implies partial scalar invariance across clusters, but unequal loadings across levels is difficult to interpret.
null
CC BY-SA 4.0
null
2023-05-03T08:49:19.183
2023-05-03T08:49:19.183
null
null
335062
null
614747
1
null
null
0
7
I have conducted research into whether problem-based learning is effective in language acquisition. I have a set of 6 questions used in a pre- and post-test. There was a control group and a test group. So for each question I have 4 sets of data taken from 2 groups of students, before and after the intervention. What test is appropriate for checking whether the learning in the test group was greater than the learning in the control group? Must I run a separate test for each question? How would I aggregate the data if there should be separate tests for each question when assessing the significance of learning?
Appropriate test for checking significance in learning
CC BY-SA 4.0
null
2023-05-03T09:05:02.957
2023-05-03T09:05:02.957
null
null
387111
[ "bayesian", "inference", "p-value" ]
614748
2
null
614113
1
null
That code returns model performance for a cv.glmnet fit, but only if a model with the desired number of variables exists. Such a model may not exist with ridge, so this only works with elastic net or lasso. ``` # Calculation of model performance for a model with a given number of parameters alf <- YOUR_ALPHA X <- YOUR_EXPLANATORY_VARIABLES Y <- YOUR_RESPONSE_VARIABLE nbVarFixe <- YOUR_FIXED_NUMBER_OF_DESIRED_VARIABLES_IN_FINAL_MODEL lasso.cv <- cv.glmnet(X,Y,alpha=alf,type.measure = "mse",nfolds=10) if (length(which(lasso.cv$nzero == nbVarFixe)) == 0) { # If no model meets the requirements, nothing to return R2_lasso_nbVarFixe <- NA n_var_lasso_nbVarFixe <- NA liste_variable_lasso_nbVarFixe <- NA }else{ # If a model with the desired number of variables exists lambda.select <- min(lasso.cv$lambda[which(lasso.cv$nzero == nbVarFixe)]) # choice of the model with the desired number of variables liste_coef_lasso <- coef(lasso.cv, s = lambda.select) # list of coefficients liste_variable_lasso_nbVarFixe <- liste_coef_lasso@Dimnames[[1]][which(liste_coef_lasso != 0 ) ] # list of selected variables n_var_lasso_nbVarFixe <- length(liste_variable_lasso_nbVarFixe) # number of selected variables in the model (just in case) liste_variable_lasso_nbVarFixe <- if (n_var_lasso_nbVarFixe>5) {" > 5 "} else {liste_variable_lasso_nbVarFixe} # irrelevant here, but it came from my script where it is useful liste_variable_lasso_nbVarFixe <- paste( liste_variable_lasso_nbVarFixe , collapse = " ") # printout of the selected variables MSE_lasso <- lasso.cv$cvm[which(lasso.cv$lambda==lambda.select)] # cross-validated MSE EQM_lasso <- sqrt(MSE_lasso) # error on the original scale (RMSE) R2_lasso_nbVarFixe <- 1-(MSE_lasso/var(Y)) # explained variance } ``` The key here is that "lasso.cv$nzero" contains the number of non-zero regression coefficients, i.e., for each fitted model it gives the number of variables selected by EN or LASSO. This is code I copy-pasted from one of my projects, and it seems to me there is some confusion in terminology across packages when it comes to the parameters of EN or LASSO. Here, "alpha" is also referred to as "gamma" in other packages.
null
CC BY-SA 4.0
null
2023-05-03T09:21:49.833
2023-05-10T09:45:45.940
2023-05-10T09:45:45.940
386070
386070
null
614751
1
null
null
2
31
I want to find a method to compare two dendrograms. I have documents that were read by some people, and they made a dendrogram indicating which documents they think are more similar. Now I want to make dendrograms based on those documents using a few different document similarity scores, starting for example from tf–idf. I want to know which similarity score gives a dendrogram most similar to what those people think. How should I compare those dendrograms? I would prefer Python libraries; the only thing I found is this tree edit distance [https://github.com/timtadh/zhang-shasha](https://github.com/timtadh/zhang-shasha)
How to compare two dendrograms
CC BY-SA 4.0
null
2023-05-03T10:12:11.553
2023-05-03T20:25:15.737
null
null
387115
[ "clustering", "similarities", "dendrogram" ]
614754
1
null
null
6
153
Suppose we draw $k$ IID samples from a multinomial distribution $$\mathbf{p}=p_1,p_2,\ldots,p_d$$ What is the smallest $k$ such that the probability of drawing the same class twice is at least 50%? In numerical simulations, the following appears to be a good fit $$k\approx \sqrt{\frac{\pi}{2\|\mathbf{p}\|^2}}$$ Why? --- Here's the [simulation](https://www.wolframcloud.com/obj/yaroslavvb/nn-linear/forum-birthday-problem.nb), where I take various distributions and plot the smallest group size at which half the trials get a collision against the value of $R=\frac{1}{\|\mathbf{p}\|^2}$. [](https://i.stack.imgur.com/67Be0.png)
Birthday paradox for non-uniform probabilities
CC BY-SA 4.0
null
2023-05-03T10:43:22.800
2023-05-04T19:05:49.570
2023-05-03T14:24:30.587
511
511
[ "probability", "mathematical-statistics", "birthday-paradox" ]
614756
1
null
null
0
31
For a GLM, we have the linearity assumption that the transformed response variable (through the link function) is linearly dependent on the variables. I'm checking this assumption using the Residuals vs Fitted plot (plotting the deviance residuals against the transformed fitted values eta_i) and have the plot shown below. I wonder if the plot below means that this linearity assumption is not valid? [](https://i.stack.imgur.com/Twdfz.jpg)
Generalised Linear Model (GLM) linearity assumption checking
CC BY-SA 4.0
null
2023-05-03T11:14:52.473
2023-05-04T18:47:34.103
2023-05-04T18:42:08.517
7290
336490
[ "generalized-linear-model", "data-visualization", "inference", "residuals", "assumptions" ]
614757
1
null
null
0
22
I am struggling to understand a proof in a paper. (Nádas, Arthur. "The distribution of the identified minimum of a normal pair determines the distribution of the pair." Technometrics 13.1 (1971): 201-202.) Consider a normally distributed random vector $[X_0,X_1]$ (means: $\mu_0,\mu_1$, variances: $\sigma^2_0,\sigma^2_1$, correlation: $\rho$). We cannot observe the random vector; we can only observe $Z=\min\{X_0,X_1\}$ and $I=\mathbf{1}[X_0>X_1]$. That is, we know the value of the minimum, $Z$, and which component was the minimum, $I$. --- The purpose of the paper is to show that using the information in $[Z,I]$, we can recover the joint distribution of $[X_0,X_1]$. The main proposition states "The distribution $H$ of the observable pair $[Z,I]$ uniquely determines the distribution of the unobservable pair $[X_0,X_1]$." I think this proposition says that we can identify the distribution (the normal distribution parameters $[\mu_0,\mu_1,\sigma_0,\sigma_1,\rho]$) by examining the observable data $[Z,I]$. --- The proof is as follows: We can show that the conditional density of $Z$ given $I=i$, for all $z$ and $i=0,1$, is given by \begin{align*}\tag{1} f_i(z)=\begin{cases} p_i^{-1}n(z|\mu_i,\sigma^2_i)[1-N((z-\mu^*_i)/\sigma^*_i|0,1)] \quad if \quad \rho\sigma_{1-i}\neq \sigma_i \\[2pt] n(z|\mu_i,\sigma^2_i)\quad otherwise \end{cases} \end{align*} where $f_i(z)=\Pr[Z=z|I=i]$, $p_i=\Pr[I=i]$, $n(\cdot|a,b^2)$ is the normal density with mean $a$ and variance $b^2$, and $N(\cdot|a,b^2)$ is the CDF. ($\mu^*_i$ and $\sigma^*_i$ are appropriate parameters.) With the given condition $(1)$, the main part of the proof says: Let $[\mu'_0,\mu'_1]$ and $[\sigma'_0, \sigma'_1, \sigma'_{01}]$ be parameters defining any bivariate normal density for which $[Z,I]$ has the distribution $H$. From $(1)$, we see that \begin{align*} \lim_{z\rightarrow -\infty}p_if_i(z)/n(z|\mu_i,\sigma^2_i)=\lim_{z\rightarrow -\infty}p_if_i(z)/n(z|\mu'_i,\sigma'^2_i)=1 \end{align*} so that $n(z|\mu_i,\sigma^2_i)/n(z|\mu'_i,\sigma'^2_i)$ tends to unity. It follows that $\mu_i=\mu'_i$ and $\sigma_i=\sigma'_i$, and then $\rho=\rho'$ based on the fact that \begin{align*} p_0=\Pr(X_0-X_1<0)=\int^A_{-\infty}n(x|0,1)dx \end{align*} where $A=(\mu_0-\mu_1)/(\sigma^2_0+\sigma^2_1-2\rho \sigma_0\sigma_1)^{1/2}$. --- Here, I am confused, especially by the "limit is one" statement. The purpose of the proof should be to show the 1:1 relationship between the distribution of $[Z,I]$ and that of $[X_0,X_1]$, but I cannot follow the logic of this proof.
Identifiability of a bivariate normal distribution with identified minimum
CC BY-SA 4.0
null
2023-05-03T11:15:22.393
2023-05-03T11:15:22.393
null
null
375224
[ "mathematical-statistics", "normal-distribution", "extreme-value", "bivariate" ]
614758
1
null
null
0
9
I had a minor course in Statistical Quality Control (SQC) for manufacturing industries. There I learnt about exponentially weighted moving average (EWMA) charts, X-bar R charts and CUSUM charts, to name a few. All of these methods are generally used for manufacturing process control. I want to know if such SQC methods exist for data quality control. Specifically, what I want to know is: given univariate time series data for 2 years, can I develop an SQC model which flags any data point that is an anomaly? Off the top of my head I can think of a time series ARIMA setup which will give me prediction intervals that can be used to flag any further data which lies outside them. I just want to know whether some research has been done in SQC regarding data quality control. Sorry if my question is a bit vague. Any links to research papers or blogs would be helpful.
Statistical Quality Control methods for data quality
CC BY-SA 4.0
null
2023-05-03T11:23:43.310
2023-05-03T11:23:43.310
null
null
382761
[ "time-series", "quality-control" ]
614760
1
614765
null
1
45
I have tried to follow the steps indicated on this page and it doesn't work, as it doesn't identify the function "recipe" (which I fail to understand anyway...). I am trying to find the best Box-Cox transformation to make my time series stationary in variance. [https://recipes.tidymodels.org/reference/step_BoxCox.html](https://recipes.tidymodels.org/reference/step_BoxCox.html) Here is my code : ``` rec <- recipe(~., data = as.data.frame(prod)) bc_trans <- step_BoxCox(rec, all_numeric()) bc_estimates <- prep(bc_trans, training = as.data.frame(prod)) bc_data <- bake(bc_estimates, as.data.frame(prod)) plot(density(prod[, "Production"]), main = "before") ``` And here is the structure of my dataset for reproducibility : ``` structure(list(date = c("2022-11", "2022-10", "2022-09", "2022-08", "2022-07", "2022-06"), production_brute_nucleaire = c(22951.429, 21465.026, 19334.531, 19319.365, 19923.664, 21275.248)), row.names = c(NA, 6L), class = "data.frame") ```
How to apply the Box-Cox transformation to a univariate time series in R?
CC BY-SA 4.0
null
2023-05-03T11:46:21.870
2023-05-03T12:41:01.180
2023-05-03T12:41:01.180
56940
364061
[ "r", "time-series", "data-transformation", "univariate", "boxcox-transformation" ]
614761
1
null
null
1
9
[](https://i.stack.imgur.com/EGkyN.png) The image attached shows the number of accidents per speed limit by year for 1 region. I have two other tables showing similar data. I want to do a three-way ANOVA on these sets of data to see if there is any difference in the mean number of accidents under each speed limit across the regions I am looking at. My issue is: are we able to assume independence between the number of accidents that occur in one year and the number in another? I believe this is something called a "time series", but I have no knowledge of that yet. However, my professor did say a 3-way ANOVA is possible, but it just doesn't seem right to me because I feel like the independence assumption might be violated.
I want to do a three-way ANOVA of the mean amount of accidents under specific speed limits by region but I don't know if it is possible
CC BY-SA 4.0
null
2023-05-03T11:59:35.503
2023-05-03T12:00:26.670
2023-05-03T12:00:26.670
387125
387125
[ "hypothesis-testing", "anova", "mean" ]
614762
2
null
257121
0
null
I just wrote a blog article on this topic, tried to search for it, and found this question! (Yeah, should have done that first.) @JeremyMiles is correct, but for anyone who wants to see exactly what happens, it's a pretty short article: [What happens when you rotate confirmatory factor analysis loadings?](https://medium.com/@baogorek/what-happens-when-you-rotate-confirmatory-factor-analysis-loadings-d597811a6870). To summarize, if your CFA model is identifiable (no cross-loadings, etc.), then the rotation will just give you back your loadings matrix unchanged. Even one cross-loading though changes that; a rotation will change the loadings matrix, albeit slightly if the CFA model is parsimonious.
null
CC BY-SA 4.0
null
2023-05-03T12:07:58.897
2023-05-03T12:07:58.897
null
null
35131
null
614763
1
null
null
0
49
I've segmented my data using UMAP (dimensionality reduction, with a fixed random seed) and subsequently HDBSCAN to generate a number of clusters. I've also looped through different values to fine-tune both UMAP and HDBSCAN to generate the optimal solution based on the relative_validity score embedded within HDBSCAN. I also split the data into train and test to ensure I wasn't overfitting. My problem is that the solution seems to be a bit unstable - when I feed in completely new data, UMAP and HDBSCAN produce slightly different solutions. Whilst I understand there is a stochastic element to both algorithms, intuitively I would expect to always get roughly the same clustering structure. Is this expected? Does anyone know how to solve this? Is there any step I'm missing? Thanks in advance! ``` # UMAP projection = umap.UMAP(n_neighbors=, n_components=, min_dist=, metric=, random_state=42) train_projection = projection.fit_transform(train_data) # Fit HDBSCAN cluster = hdbscan.HDBSCAN(min_cluster_size=, min_samples=, gen_min_span_tree=True, prediction_data=True) clusterer = cluster.fit(train_projection) ```
Unstable HDBSCAN & UMAP clustering results
CC BY-SA 4.0
null
2023-05-03T12:12:34.900
2023-05-03T12:13:04.520
2023-05-03T12:13:04.520
288680
288680
[ "clustering", "unsupervised-learning", "dbscan" ]
614764
1
null
null
0
26
I am trying to understand why the `impliedConditionalIndependencies` function of the `rethinking` package returns the same value for a mediator and a confounder. They both return that D is independent of M, given A. Here is some reproducible code to show the output: ``` library(rethinking) library(dagitty) mediator <- dagitty('dag{M -> A -> D}') drawdag(mediator) ``` ![](https://i.imgur.com/kmLHSaO.png) ``` impliedConditionalIndependencies(mediator) #> D _||_ M | A confounder <- dagitty('dag{M <- A -> D}') drawdag(confounder) ``` ![](https://i.imgur.com/sJcVdHf.png) ``` impliedConditionalIndependencies(confounder) #> D _||_ M | A ``` Created on 2023-05-03 with [reprex v2.0.2](https://reprex.tidyverse.org) As we can see, both return the same output. I can understand the output for the confounder, because when M changes it doesn't affect the output of D, so D is independent of M. But in the mediator, when M changes it affects A and A affects D, which means that D may be dependent on M, right? So I was wondering if anyone could explain why the output for the mediator and the confounder is the same here?
Why implied Conditional Independencies of mediator and confounder are the same?
CC BY-SA 4.0
null
2023-05-03T12:19:26.390
2023-05-03T12:55:22.227
2023-05-03T12:55:22.227
56940
323003
[ "r", "self-study", "bayesian-network", "dag", "conditional-independence" ]
614765
2
null
614760
2
null
Running your code with minor fixes ``` product <- structure(list(date = c("2022-11", "2022-10", "2022-09", "2022-08", "2022-07", "2022-06"), production_brute_nucleaire = c(22951.429, 21465.026, 19334.531, 19319.365, 19923.664, 21275.248)), row.names = c(NA, 6L), class = "data.frame") > str(product) 'data.frame': 6 obs. of 2 variables: $ date : chr "2022-11" "2022-10" "2022-09" "2022-08" ... $ production_brute_nucleaire: num 22951 21465 19335 19319 19924 ... library(recipes) rec <- recipe(~., data = product) bc_trans <- step_BoxCox(rec, all_numeric()) > bc_estimates <- prep(bc_trans, training = product) Warning message: In optimize(bc_obj, interval = limits, maximum = TRUE, dat = dat, : NA/Inf replaced by maximum positive value bc_data <- bake(bc_estimates, product) plot(density(product$production_brute_nucleaire), main = "before") ``` gives this [](https://i.stack.imgur.com/DkqBU.png) Remark 1 You have to be careful when applying the Box-Cox transformation to a time-series variable though. The derivation of the likelihood function in the Box-Cox transformation (G.E.P. Box and D.R. Cox, An Analysis of Transformations, Journal of the Royal Statistical Society. Series B (Methodological), Vol. 26, No. 2 (1964), pp. 211-252) presumes independent observations, an assumption which may not be met in a time-series context.
null
CC BY-SA 4.0
null
2023-05-03T12:22:51.770
2023-05-03T12:35:21.677
2023-05-03T12:35:21.677
56940
56940
null
614767
2
null
614725
2
null
The documentation you linked is almost self-explanatory: what you need is just substituting numeric values obtained from your data into the parameters as documented. In your case, for Type 7 linear interpolation: \begin{align} & n = 8, p = 0.25, m = 1 - p = 0.75, \\ & j = \lfloor np + m \rfloor = \lfloor 2 + 0.75\rfloor = 2, \\ & g = np + m - j = 2.75 - 2 = 0.75 = \gamma. \\ \end{align} which gives: \begin{align} Q_7(0.25) = (1 - \gamma)x_2 + \gamma x_3 = 0.25 \times 2 + 0.75 \times 3 = 2.75, \end{align} matching the R output. While the above breakdown explains your confusion, it is more important to understand why R sets up so many different "types" for a sample quantile with the same $p$. This is because "value" that satisfies "value < 25 percentage of the values or value <= 25 percentage of the values" in your proposed statement is not unique due to the discreteness of data.
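For a cross-check outside R: NumPy's default "linear" interpolation corresponds to R's type 7, so the same number can be reproduced in Python, assuming the data are simply 1 through 8 as in the worked numbers above.

```python
import numpy as np

x = np.arange(1, 9)          # assumed data: 1, 2, ..., 8
print(np.quantile(x, 0.25))  # 2.75, matching R's type = 7 result
```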
null
CC BY-SA 4.0
null
2023-05-03T12:40:13.623
2023-05-04T02:36:09.990
2023-05-04T02:36:09.990
20519
20519
null
614768
1
null
null
0
11
I am struggling to prove that the quantile function of the Irwin-Hall distribution is subadditive over n (the number of uniformly distributed random variables that the Irwin-Hall distribution is built from). $$ Q_{n+m}(p)\leq Q_{n}(p)+Q_{m}(p) $$ in which $Q_n(p)=F_n^{-1}(p)$ and $F_n(z)$ is the CDF of the random variable $\textbf{Z}=\sum_{i=1}^n\textbf{X}_i$, where the $\textbf{X}_i$s are all iid uniform random variables on $[0,1]$. It could easily be checked through simulation, but I'm searching for an analytical proof. I would really appreciate your suggestions.
Subadditivity of quantile function of Irwin-Hall distribution over n
CC BY-SA 4.0
null
2023-05-03T12:43:09.187
2023-05-03T12:44:07.863
2023-05-03T12:44:07.863
387130
387130
[ "probability", "quantiles" ]
614769
2
null
614210
0
null
If I understood correctly, you are looking for a non-parametric alternative to multivariate ANOVA. Although I have never used this, this R package might do exactly what you are looking for [npmv](https://www.jstatsoft.org/article/view/v076i04).
null
CC BY-SA 4.0
null
2023-05-03T12:59:40.017
2023-05-03T12:59:40.017
null
null
239947
null
614771
1
null
null
1
17
So [gglasso](https://cran.r-project.org/web/packages/gglasso/vignettes/Introduction_to_gglasso_package.html) in R appears to allow the user to implement the adaptive group lasso by specifying group-wise penalty factors, as you would with a typical adaptive lasso. Typically, when I want to train an adaptive lasso, I train a ridge regression and use the ridge coefficients more or less like [this](https://rpubs.com/kaz_yos/alasso), but in this situation I'm a little unsure how to get my penalty factors since I'm using group-wise penalties. Can anyone enlighten me? I believe it's possible because I've seen several papers on the topic but haven't been able to access any of them. Thanks!
Penalty Factor for Adaptive Group LASSO
CC BY-SA 4.0
null
2023-05-03T13:29:49.500
2023-05-03T13:29:49.500
null
null
383976
[ "lasso" ]
614772
1
null
null
0
10
Having looked at the discussions for similar questions I find none quite equate with this apparently simple case I present here. It has been calculated that the average age between female generations in the past is 23.2 years +/-3 (1 sigma). So if I go back two generations this will be on average 46.4 years, but what is the standard deviation now? Is it just the sum of the variance (ie the variance of 9 doubled) that results in an SD of 4.2? If I go back three generations the time difference will be 69.6 years +/-? Should be very straightforward? Thanks in advance.
Summing two equal means with equal standard deviations and calculating the sum's resultant standard deviation
CC BY-SA 4.0
null
2023-05-03T13:31:15.523
2023-05-03T13:59:29.447
2023-05-03T13:59:29.447
387134
387134
[ "standard-deviation" ]
614773
1
null
null
0
6
I want to investigate the statistical properties of the signal from a radar sensor, both simple quantities (e.g. mean, std dev, PSD) as well as distributions of different measured quantities. My question mostly relates to sampling-frequency-dependent metrics, like power spectral density. If the sensor returns measurement packets at 10 Hz, where each packet can contain an unknown number (e.g. 10-200) of individual measurements of a quantity, e.g. Doppler speed, is it then meaningful to try and calculate the PSD with a sampling frequency of 10 Hz?
Statistics of signal with regularly spaced bursts of measurements
CC BY-SA 4.0
null
2023-05-03T13:37:12.633
2023-05-03T13:37:12.633
null
null
269972
[ "signal-processing", "sensor-data" ]
614774
1
null
null
0
42
I have a set of two-dimensional shapes, each represented by one or more closed paths, which enclose one continuous area. My objective is to establish a measure of similarity between these shapes. I do not have a rigid definition of similarity; it might encompass characteristics such as slightly jagged edges or a small hole in one of the shapes. The quality of the comparison will be tested empirically. I am seeking a method for shape comparison that is invariant under rotation, translation and mirroring, but not scaling. My current idea is to use the Radon transformation (creating a sinogram) as a descriptor for each shape, based on the area enclosed by the paths. I can ensure rotation invariance by employing circular cross-correlation to align the sinograms along the angle axis. I plan to calculate the distance to a mirrored version of the sinogram, which would make my calculation mirror-invariant. While I am not time-constrained when initially calculating the sinograms, I need to be able to quickly find potential matches when given a new shape. This is why I want to use a metric. My goal is to use this to sort the shapes by their descriptors (potentially relative to an all-zero histogram?) and find potential matches more quickly. My database could contain up to tens of thousands of shapes. At the moment, I am employing Earth Mover's Distance to compare the sinograms. However, I am curious if there are any alternative metrics that I could use?
What metrics could I use to compare multiple 2-dimensional histograms/sinograms?
CC BY-SA 4.0
null
2023-05-03T13:40:59.910
2023-05-10T14:27:10.667
2023-05-10T09:25:42.173
382779
382779
[ "histogram", "metric", "wasserstein" ]
614775
1
null
null
0
19
Currently, I am conducting a regression study of household expenditure (target variable) from a set of determinants (income, household size, ...) in Malaysia using OLS and Random Forest. It is a long question... In my study, the results show that the OLS regression (0.853) performs slightly better than the Random Forest regression (0.850) in terms of r2 score. However, Random Forest is another good choice, as some of my determinants have non-linear relationships (such as age) with the target variable and are classified as insignificant variables (using a t-test) in OLS. I found a study here: [https://www.sciencedirect.com/science/article/pii/S0957417421000312?via%3Dihub](https://www.sciencedirect.com/science/article/pii/S0957417421000312?via%3Dihub) The authors performed a comparison study between Random Forest and hedonic regression. Here is the conclusion written in the paper (RF is slightly worse in terms of accuracy for my study): In terms of accuracy, the ML methods are superior to all of the regression methods, although their capacity to explain a socioeconomic variable can be found to be spurious when the variables are highly correlated. In terms of the effects on quantification, the regression models are superior as they are capable of precisely identifying each characteristic’s particular effect and defining the range of effects. I am looking for a way to utilize the strengths of both of my models. Here are my assumptions: ## 1) Develop a hybrid model To combine the strengths of my OLS (accuracy) and Random Forest model (able to capture the non-linear relationships), I tried stacking regression. The result shows that the r2 score improved a little (0.855) using Random Forest as the base and OLS as the final estimator. I am wondering whether the stacking regression actually integrates both of the strengths that I mentioned? Following is the Python code used to develop the stacking regression: ``` level0 = list() level0.append(('rf', RandomForestRegressor(n_estimators = 1000, min_samples_split= 30, min_samples_leaf= 4, max_features= 'auto', max_depth= 10, bootstrap= True))) level1 = LinearRegression() # define the stacking ensemble stack_model = StackingRegressor(estimators=level0, final_estimator=level1, cv=5, passthrough=True) # fit the model on all available data stack_model.fit(x_train, y_train) ``` The `x_train` consists of the variables that have the non-linear relationships. I fed `x_train` directly to the stacking regression; I am not sure whether this is a right or wrong practice. Following is the SHAP summary plot for Random Forest: [](https://i.stack.imgur.com/8A5Uk.png) And following is the statistics summary for OLS: [](https://i.stack.imgur.com/o8WKL.png) The variable x4 is 'Sex' (insignificant) and x5 is 'Age' (less significant compared to the others). ## 2) Using RF for factor analysis and OLS as the regression model? Since 'Sex' and 'Age' have non-linear relationships, my second assumption is to use the RF model for factor analysis and as a tool to detect variables having a non-linear relationship with household expenditure. Then, use a quantile regression model (as the hedonic model study did) to describe the relationships in quantile form. ## Conclusion I am looking for a way to combine both the Random Forest and OLS regressions, either by developing a hybrid model or by any other method, rather than a comparison study. I appreciate any guidance. Thanks
Building a hybrid model? From a Random Forest and a OLS linear regressions
CC BY-SA 4.0
null
2023-05-03T14:10:41.430
2023-05-03T14:10:41.430
null
null
380642
[ "linear-model", "random-forest", "nonlinear-regression", "stacking" ]
614776
1
null
null
2
19
I'm developing a mathematical (SIR-style) model for cholera transmission in the Bengal Delta which is fit to data using MCMC. The purpose of the model is explanatory rather than predictive, and I would like to find out which parameters are most influential on the output (and especially how particular combinations of parameters may be important). To do this I think I need to do a global sensitivity analysis and get some second-order effects (I am using the R package 'sensobol' for this). My question is: should I allow the parameters to vary according to the prior distribution or the posterior distribution? In the examples from the sensobol vignettes, they appear to be using the prior, but would it not make more sense to use the posterior? A particular problem I have when using the prior for this analysis is that, for most sampled parameter values, the output is just a time series of NaN - what should I do with that? Ignore them, and just take the non-NaN values?
How to do a sensitivity analysis for a Bayesian Model? Prior or Posterior?
CC BY-SA 4.0
null
2023-05-03T14:20:44.613
2023-05-03T14:20:44.613
null
null
367831
[ "bayesian", "markov-chain-montecarlo", "sensitivity-specificity" ]
614777
1
null
null
2
77
My question is about applying undersampling methods to an imbalanced and highly dimensional dataset with mixed data. Let's say, as an example, I have 150 features and a highly imbalanced binary target variable. I want to apply a sophisticated undersampling method, not random undersampling - say something like OSS, Tomek links, or CNN from imblearn. I have 2 questions about applying undersampling to this dataset: 1) Do I need to run PCA to reduce the 150 features before undersampling? 2) Do I need preprocessing, like scaling for numerical features and one-hot encoding for categorical ones, etc.? Thanks
Do we need preprocessing before applying a sophisticated undersampling method?
CC BY-SA 4.0
null
2023-05-03T14:26:00.327
2023-05-03T15:50:38.403
null
null
248382
[ "machine-learning", "unbalanced-classes", "data-preprocessing", "under-sampling" ]
614778
1
null
null
0
18
How do we calculate sample sizes (and power) for an outcome that is discrete and is only measurable between two individuals of a group? Suppose we have $n_A$ individuals in group A, and $n_B$ individuals in group B. The outcome we measure is the number of metabolites that are common between any two individuals within group A, or between any two individuals within group B. We end up with nice boxplots showing that the number of metabolites common to two individuals within group A is indeed higher than the number shared between any two individuals within group B. So then the question is: what's the power for our study if we have 20 individuals in group A and 20 individuals in group B, and similarly, how many individuals do we need in groups A and B if we want 80% power? Is there a formula or calculator of sorts for this?
Sample size calculations for a pair-dependent discrete outcome
CC BY-SA 4.0
null
2023-05-03T14:37:01.297
2023-05-05T12:41:07.880
null
null
288378
[ "sample-size", "statistical-power", "discrete-data", "clinical-trials" ]
614779
2
null
517372
1
null
Citing Gelman et al. [http://www.stat.columbia.edu/~gelman/book/](http://www.stat.columbia.edu/%7Egelman/book/) (page 20), where the second formula is always used: "Some authors use different notations for distributions on parameters and observables—for example, $\pi(\theta), f(y|\theta)$—but this obscures the fact that all probability distributions have the same logical status in Bayesian inference. We must always be careful, though, to indicate appropriate conditioning; for example, $p(y|\theta)$ is different from $p(y)\,.$"
null
CC BY-SA 4.0
null
2023-05-03T14:39:38.303
2023-05-03T14:39:38.303
null
null
374292
null
614780
1
614810
null
0
28
I am using `proc autoreg` in SAS to conduct an ITS analysis and I have a question about stationarity. `Proc autoreg` is able to perform the augmented Dickey-Fuller (ADF), the Phillips-Perron (PP), and the KPSS tests for stationarity. I believe I have a good understanding of the differences in interpretation between them. My question is whether I should test for stationarity across the interruption. For example, if I had a 36-month study period with an interruption at month 25, would I look at stationarity in separate parts (months 1-24 and 25-36) or as a whole (1-36)? My thinking is that the interruption would likely cause the series not to be stationary, and thus the tests should be run separately. The code I am using is the following: ``` proc autoreg data=one; model outcome = /stationarity=(kpss=(kernel=qs auto))stationarity=(phillips); where 1<= month <= 24; run; ```
Stationarity in an interrupted time series
CC BY-SA 4.0
null
2023-05-03T14:45:18.227
2023-05-03T19:11:25.440
2023-05-03T19:11:25.440
53690
365631
[ "time-series", "stationarity", "augmented-dickey-fuller", "intervention-analysis", "kpss-test" ]
614782
1
null
null
1
11
New to meta-analysis. I am conducting a systematic review and meta-analysis which investigates a single health score across both interventional studies and observational studies in different patient populations. I am unsure how to conduct the meta-analysis, as the reviewed studies do not possess a control group where the figure is different, and there is no pre-post effect size. I have the mean and sample size, and most standard deviations, from around 50 studies. I've read other posts but don't think it's right to follow advice for prevalence meta-analysis, as the figure can be measured for each patient and the included papers report means or medians of this figure for each population. Furthermore, the figure is not on a simple scale [e.g. weight (kg)]; it is logarithmic (dB). Am I overthinking this? Any helpful directions or tips? Keen to use R if suitable packages are available.
Conducting a meta-analysis of a single figure without a control - but it's not a proportion
CC BY-SA 4.0
null
2023-05-03T15:09:07.397
2023-05-03T15:09:07.397
null
null
387140
[ "meta-analysis" ]
614783
2
null
614193
1
null
I find this question easier after replacing $A$ by $e^X$ and $\Delta A$ by $Y$: > If $X$ and $Y$ are independent normals and $Y$ has mean $0$, are the moments of $\ln(e^X+Y)$ bigger or smaller than the moments of $X$? This only makes sense when $Y$ is small by comparison with $e^X$, so that the cases where $e^X+Y$ is negative are negligible -- which is the same as requiring that the errors are small by comparison with the measurements, so that the geometric mean mentioned in the question is meaningful. With that mild assumption, the answer is: - $E[\ln(e^X+Y)]<E[X]$ - $Var[\ln(e^X+Y)]>Var[X]$ The claim about means follows from the concavity of $\log$, or in more detail: \begin{align} (e^x+y)(e^x-y)&<(e^x)^2\\ \frac12[\ln(e^x+y)+\ln(e^x-y)]&<x\\ E[\ln(e^x+Y)]&<x\\ E[\ln(e^X+Y)]&<E[X] \end{align} The claim about variances follows, at least approximately, from $$Var[\ln(e^X+Y)]-Var[X]\simeq Var[e^{-X}Y]$$ which is accurate to second order in $Y$.
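A quick Monte Carlo check consistent with these two claims (the parameter values are arbitrary, chosen so that $Y$ stays small relative to $e^X$):

```python
# Monte Carlo check: E[ln(e^X + Y)] < E[X] and Var[ln(e^X + Y)] > Var[X]
# when Y is small compared with e^X.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(loc=2.0, scale=0.3, size=n)  # arbitrary, keeps e^X well above |Y|
Y = rng.normal(loc=0.0, scale=0.3, size=n)

Z = np.log(np.exp(X) + Y)
print(Z.mean() - X.mean())  # slightly negative
print(Z.var() - X.var())    # slightly positive
```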
null
CC BY-SA 4.0
null
2023-05-03T15:13:18.160
2023-05-03T15:48:10.840
2023-05-03T15:48:10.840
225256
225256
null
614785
1
null
null
0
15
I am super stuck on this question. I looked up how to find the mean on a box-and-whisker plot and never got a clear answer. [](https://i.stack.imgur.com/4auF9.jpg)
How to find all the data on a box-and-whisker plot? Most importantly the mean
CC BY-SA 4.0
null
2023-05-03T15:31:56.033
2023-05-03T15:33:24.083
2023-05-03T15:33:24.083
387144
387144
[ "algorithms", "boxplot" ]
614786
2
null
614754
2
null
### Approach 1 Let's assume that $d$ is large and the counts of birthdays on day $i$ can be modelled/approximated as independent Poisson variables. Each day has an expected count of $\lambda_i = n \cdot p_i$ birthdays, and the probability of no double birthday on day $i$ is $$P(X_i \leq 1) = e^{-\lambda_i}(1+ \lambda_i)$$ The probability of no double birthday on any day is $$\begin{array}{rcl}\prod_{i=1}^d e^{-\lambda_i}(1+ \lambda_i) &=& e^{-n} \prod_{i=1}^d (1+ p_i n) \\ & \approx & \left(1-n+0.5n^2\right) \left( 1 + n \sum_{i=1}^d p_i + n^2 \left[ \sum_{i<j} p_i p_j \right]\right) \\ & \approx & \left(1-n+0.5n^2\right) \left( 1 + n + 0.5 n^2 \left[ 1- \sum_{i} p_i^2 \right]\right) \\ & \approx & 1 - 0.5 n^2 \sum_{ i = 1}^d p_i^2 \\ \end{array}$$ and setting that equal to $0.5$ leads to $$n^2 \sum_{ i = 1}^d p_i^2 = 1$$ or $$n = \frac{1}{ \sqrt{\sum_{ i = 1}^d p_i^2 }}$$ ### Approach 2 We can convert this into a waiting-time problem and consider adding birthdays until there is a double one. Then the probability of having at least one double birthday among $n$ birthdays is 1 minus the probability that we have to wait more than $n$ birthdays until we get a double birthday. If the days have equal probabilities, then the probability of 'hitting a double birthday' increases linearly as more and more birthdays are added, $$P(\text{hit on draw $k+1$| no hit yet}) = k/d$$ and the probability of at least one hit among the first $k+1$ draws is $$P(\text{at least one hit by draw $k+1$}) = 1 - \prod_{i=1}^k (1 - i/d)$$ or $$\log\left(1-P(\text{at least one hit by draw $k+1$})\right) = \sum_{i=1}^k \log(1 - i/d) \approx \int_0^{k} \log(1-x/d) dx = (k - d)\log(1-k/d) - k \approx -\frac{k^2}{2d}$$ In your case $d=100$ you will get the $0.5$ probability for $k \approx 11.77$, which seems close to the asymptote in your graph. [](https://i.stack.imgur.com/dqhTv.png) In the case of equal probabilities of a birthday on each day, the integral $\int_0^{k} \log(1-x/d) dx$ approximates the log of the probability of no double birthday yet. With unequal probabilities, the path $ \log(1-x/d)$ will not be fixed and will be stochastic instead. That may cause discrepancies. Possibly this problem can be solved as a random walk with drift, where at each step there is a probability of ending the walk, based on the position of the walk.
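A small simulation sketch for checking the 50% group size against $1/\|\mathbf{p}\|$ (the Dirichlet-generated probability vector is an arbitrary choice):

```python
# Simulation: smallest k with >= 50% chance of a repeated class, compared
# with 1/||p|| from the Poisson approximation above.
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(100))  # an arbitrary non-uniform distribution

def collision_prob(k, p, trials=20_000):
    draws = rng.choice(len(p), size=(trials, k), p=p)
    return np.mean([len(np.unique(row)) < k for row in draws])

k = 2
while collision_prob(k, p) < 0.5:
    k += 1
print("simulated k:", k, "  1/||p||:", 1 / np.sqrt(np.sum(p ** 2)))
```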
null
CC BY-SA 4.0
null
2023-05-03T15:46:53.157
2023-05-04T19:05:49.570
2023-05-04T19:05:49.570
164061
164061
null
614787
2
null
614777
5
null
First, [you probably do not need to do anything about the class imbalance](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he). Much of the apparent issue with class imbalance comes from using improper scoring rules like accuracy that depend on a (typically arbitrary) threshold. If you are sick of your imbalanced problem returning that all or almost all of the predictions belong to the majority category, the first thought should be to change the threshold instead of the data. (Probably even better would be to consider one of the proper scoring rules discussed in the link, but if you must use a threshold, I suspect you can change it to suit your needs.) Further, downsampling to fix what is likely a non-problem is probably the worst idea of all. Aside from issues of experimental design, before you collect data, which is discussed in the excellent answer by Dikran Marsupial to the linked question, downsampling discards precious data to solve a non-problem. (Perhaps there could be computational reasons for downsampling to allow your dataset to fit into memory or to run in a reasonable amount of time, though the better plan might be to upgrade your hardware.) [However, even when an idea is a poor one, it can be helpful to know how it would fit into a reasonable workflow, even if that is just to show its drawbacks.](https://stats.stackexchange.com/a/578545/247274) I see arguments in favor of either order. - Fiddle with the data to create new data. Then you have your synthetic data that you are treating as if they were real, and you do the rest of the modeling as if the synthetic data were real. - Do your preprocessing on the real data. Then synthesize new data based on the preprocessed real data, since the undesirable features of the data will have been removed. The former makes the most sense to me. You are doing the rest of the modeling as if the synthetic data were real. Why not pretend the synthetic data were real when it comes to the preprocessing? This also has the advantage of assuring that your synthetic data adhere to the properties you want, whereas the latter idea allows for nastiness to creep in that you then are unable to preprocess away. However, I do believe you can treat this as a hyperparameter and go with whatever leads to the best performance. [Overall, though, the best move is probably not to undersample (or even oversample) at all, and to use proper statistical methods on data that represent the reality of your situation.](https://twitter.com/f2harrell/status/1062424969366462473?lang=en)
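To make the "change the threshold instead of the data" point concrete, a small sklearn sketch might look like the following; the 0.2 cutoff is an arbitrary illustration, not a recommendation.

```python
# Sketch: keep all the data, obtain predicted probabilities, and pick a
# decision threshold suited to your costs instead of resampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]   # probability estimates, not hard labels

for threshold in (0.5, 0.2):            # 0.2 is an arbitrary example
    pred = (proba >= threshold).astype(int)
    print(threshold, "-> flagged positive:", int(pred.sum()),
          "| actual positives:", int(y_te.sum()))
```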
null
CC BY-SA 4.0
null
2023-05-03T15:50:38.403
2023-05-03T15:50:38.403
null
null
247274
null
614788
2
null
614778
0
null
From what I understand, you are going to take all possible combinations of two people in group A and look at the distribution of the number of metabolites shared by both people in a random pair from A, compare that distribution to the one you would get from the same statistic in group B, and show that a random pair of individuals in group A usually shares more metabolites than a random pair of individuals in group B, regardless of how large the average difference is in either direction? You will want the Wilcoxon rank sum test. I caution you on the interpretation of this highly non-parametric test. Just because group A pairs have more metabolites the majority of the time, group B pairs can still have a higher average number of metabolites if the group B distribution is more skewed to the right. With that out of the way, I don't think you can specify the power without specifying the underlying distribution of metabolites in the population of people who could end up in sample A and the population of people who could end up in sample B. I could give you the power if you made some simple assumptions about those distributions, but it would require more than the mean and variance, because the nature of non-parametric estimation is that we start without assumptions about the distribution. The test statistic is $U=\underset{i=1}{\overset{n_A}{\sum}} \underset{j=1}{\overset{n_B}{\sum}} .5*(x_i \geq y_j)+.5*(x_i>y_j)-\frac{n_A*n_B}{2}$, where $x_i$ and $y_j$ are random pairs from groups A and B and $n_A$ and $n_B$ are the number of possible pairs from groups A and B, $\frac{20*19}{1*2}=190$. To get the p-value, take $z=\frac{U}{\sigma_U}$, with $\sigma_U=(\frac{n_A*n_B*(n_A+n_B+1)}{12})^{.5}$, because $z$ is approximately standard normal.
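A minimal illustration of running the rank-sum test on the pairwise counts, where `pairs_a` and `pairs_b` stand in for the 190 within-group pair counts of groups A and B:

```python
# Sketch: Wilcoxon rank-sum (Mann-Whitney U) test on the pairwise counts.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
pairs_a = rng.poisson(12, size=190)  # placeholder data for group A pairs
pairs_b = rng.poisson(9, size=190)   # placeholder data for group B pairs

stat, p = mannwhitneyu(pairs_a, pairs_b, alternative="two-sided")
print(stat, p)
```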
null
CC BY-SA 4.0
null
2023-05-03T15:58:48.080
2023-05-03T16:10:49.860
2023-05-03T16:10:49.860
387086
387086
null
614789
1
null
null
0
47
There are two random variables defined as follows: - $X \in \mathbb{R}^n$ ($n$-dimensional random vector) and $X \sim N(0, I)$ where $I$ is an identity matrix - $Y \in \mathbb{R}^m$ ($m$-dimensional random vector) and $Y|x \sim N(\mu+W x, \Psi)$ where $\Psi=\sigma I$ is a diagonal matrix - $n \ll m$ What are the ways to show that components of $X$ and $Y$ are uncorrelated?
Showing uncorrelatedness of related random variables
CC BY-SA 4.0
null
2023-05-03T16:14:05.843
2023-05-16T19:33:35.660
2023-05-16T19:33:35.660
387133
387133
[ "correlation", "conditional-probability", "factor-analysis" ]
614790
1
null
null
1
32
One of the hyperparameter tuning approaches that I came across recently is Sequential Model-Based Optimization (SMBO), which is a very smart approach that uses previous iterations in order to find the best values for the hyperparameters. I just want to make sure I understand it right. There are several steps and factors in this algorithm; I'll talk about them and at the end I'll add a scheme to summarize it. Please correct me if I'm wrong: - Build the domain that has the ranges of values that we pick for each hyperparameter. Nothing special here, except that we can refer to this domain as a probability distribution that will change with each iteration (when new data comes along). - Objective function: this is the function that we want to minimize or maximize (like a loss function, for example). But we don't use it that much because it's computationally costly and slow, so we use the surrogate. - The surrogate "mimics" the objective function, and it evaluates the current values of the hyperparameters. - Selection function: based on the surrogate function evaluations, the selection function will now pick new values of the hyperparameters to test. - Now, we test those new values with the real objective function and not with the surrogate function as in step 3. - We get the REAL score, and we add it to the "History". - We repeat. Now the surrogate and the selection functions are updated based on the first round (the History). Here is a scheme that I prepared to sum it up more or less: [](https://i.stack.imgur.com/ChInw.png) Did I describe the steps correctly? Is the scheme OK?
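For what it's worth, here is a compact sketch of the loop as I understand it, with a Gaussian-process surrogate and expected improvement as the selection function; the objective and all settings are toy assumptions, not any particular library's SMBO implementation.

```python
# Toy SMBO loop: surrogate (GP) + selection function (expected improvement),
# evaluating the real objective only on the selected candidates.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                     # stand-in for the expensive model score
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
domain = np.linspace(-3, 3, 400).reshape(-1, 1)   # step 1: search domain
X_hist = rng.uniform(-3, 3, size=(3, 1))          # a few initial evaluations
y_hist = objective(X_hist).ravel()                # the "History"

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_hist, y_hist)  # surrogate
    mu, sd = gp.predict(domain, return_std=True)
    best = y_hist.min()
    z = (best - mu) / np.clip(sd, 1e-9, None)     # expected improvement (minimization)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = domain[np.argmax(ei)]                # selection function picks a candidate
    y_next = objective(x_next)                    # evaluate the REAL objective
    X_hist = np.vstack([X_hist, x_next])          # update the History
    y_hist = np.append(y_hist, y_next)

print("best x:", X_hist[np.argmin(y_hist)], "best value:", y_hist.min())
```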
Understanding Sequential Model-Based Optimization for machine learning
CC BY-SA 4.0
null
2023-05-03T16:25:37.730
2023-05-05T20:25:42.613
null
null
362803
[ "machine-learning", "probability", "bayesian", "hyperparameter" ]
614791
1
null
null
1
13
I'm developing a mathematical (SIR-style) model for cholera transmission which I am fitting to data using MCMC. The model is reasonably complex (4 compartments, 6 parameters, simulation over 11200 days, fit to 270 data points) and is externally forced by time-varying temperature and precipitation time series. When I run the MCMC model (I'm using NIMBLE) using differential equations, the MCMC solver takes over a day to run 10,000 iterations. However, when I use difference equations (with a one-day time step) I can run 1,000,000 iterations in about half that time. I showed my model to a senior colleague who was very critical about the use of difference equations and was convinced that differential equations were more appropriate. I also note that the use of differential equations over difference equations is the norm in the literature. My question is, why? My assumption was that there must be a significant difference in the model output between the two approaches. But for my model they are almost identical (and well within the bounds of data uncertainty). In the image below, the black line is from the difference equation, and the red from the differential equation (using deSolve). [](https://i.stack.imgur.com/VvcGp.png) So, in my case, am I able to get away with difference equations? Or is there some other reason why the use of difference equations is inappropriate when doing disease modelling inference?
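As a self-contained toy illustration of the comparison (not the actual cholera model): a plain SIR system solved with a one-day forward-Euler difference equation versus an ODE solver, with arbitrary parameter values.

```python
# Toy comparison: daily-step difference equations vs. an ODE solver for SIR.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, N, days = 0.3, 0.1, 1_000.0, 160

def sir_ode(t, y):
    s, i, r = y
    return [-beta * s * i / N, beta * s * i / N - gamma * i, gamma * i]

# differential equations
sol = solve_ivp(sir_ode, (0, days), [N - 1, 1, 0], t_eval=np.arange(days + 1))

# difference equations with a one-day step (forward Euler)
s, i, r = N - 1, 1.0, 0.0
i_diff = [i]
for _ in range(days):
    new_inf, new_rec = beta * s * i / N, gamma * i
    s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    i_diff.append(i)

print("max |I_ode - I_diff|:", np.max(np.abs(sol.y[1] - np.array(i_diff))))
```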
What's wrong with difference equations (disease modelling)?
CC BY-SA 4.0
null
2023-05-03T16:29:37.437
2023-05-03T16:29:37.437
null
null
367831
[ "inference", "modeling", "differential-equations" ]
614792
1
null
null
0
8
[](https://i.stack.imgur.com/rIk61.png) The number of significant terms in the ACF is 17, and the number of significant terms in the PACF is 8. We are going to use an AR term in this model since the PACF has fewer significant terms than the ACF; the AR order would be 8 and the MA order 17. Since we only took the first difference to remove the trend, d is 1. p is the number of significant terms in the PACF of the differenced series, and q is the number of significant terms in its ACF. So ARIMA(8,1,17) is the most appropriate model for the residuals. Would this be a correct interpretation?
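One way to sanity-check an order read off the ACF/PACF is to fit a handful of candidate models and compare information criteria. A hedged R sketch is below; the simulated series `y` is a placeholder standing in for your residuals, and the small grid of orders is purely illustrative.

```r
set.seed(1)
# Placeholder series with a unit root, standing in for the residuals under study
y <- cumsum(arima.sim(n = 300, model = list(ar = 0.5)))

candidates <- expand.grid(p = 0:3, q = 0:3)   # small illustrative grid

aics <- apply(candidates, 1, function(ord) {
  fit <- try(arima(y, order = c(ord["p"], 1, ord["q"])), silent = TRUE)
  if (inherits(fit, "try-error")) NA else fit$aic
})

cbind(candidates, AIC = aics)[order(aics), ]  # lowest AIC first

# The forecast package automates the same idea:
# forecast::auto.arima(y, d = 1)
```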
Based on the ACF and PACF, identify the most appropriate ARIMA(p,d,q) model for the residuals and explain your reasoning
CC BY-SA 4.0
null
2023-05-03T16:39:01.943
2023-05-03T16:39:01.943
null
null
387147
[ "acf-pacf" ]
614793
1
null
null
0
21
If a stochastic process is generated by a vector autoregressive process of order $d$, can it be strictly stationary? I know that under the stability condition this is a weakly stationary process. $\begin{align} \mathbf{y}_t = A_1\mathbf{y}_{t-1} + \dots + A_d\mathbf{y}_{t-d} + \boldsymbol{\epsilon}_t, \quad t \in \mathbb{Z} \end{align}$ I was also wondering whether anyone has examples of stochastic processes that are strictly stationary, alpha-mixing, and heavy-tailed.
Can a VAR(d) process be strictly stationary?
CC BY-SA 4.0
null
2023-05-03T16:40:23.800
2023-05-03T18:55:28.667
2023-05-03T18:55:28.667
53690
283493
[ "time-series", "stochastic-processes", "stationarity", "vector-autoregression" ]
614794
1
null
null
1
31
I am trying to formally evaluate and visualize the satisfaction of the positivity assumption when estimating the ATT using R and I am having a tricky time figuring out how to do so. As [Greifer and Stuart](https://arxiv.org/abs/2106.10577) point out, the positivity assumption can be relaxed a bit for estimating the ATT because we would only need counterfactuals for the treated units. In their paper, along with others I have seen, positivity can be visually assessed with density plots overlaying the density of treated and non-treated units for some X variable. If we have several variables, what can be done to evaluate the distribution of treatment across a combination of covariates? Can the propensity score be used for these purposes? If so, is the solution as simple as: - Estimate propensity scores - Create a density plot of propensity scores for treated units - Overlay density plot of propensity scores for non-treated units - For ATT, check the degree to which the two density plots overlap
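Your four-step recipe is essentially the standard one. A minimal R sketch of it is below; the variable names and the simulated data are placeholders, and packages such as cobalt or WeightIt offer more polished versions of the same overlap plot.

```r
set.seed(42)
# Hypothetical data standing in for your covariates and treatment indicator
n  <- 1000
x1 <- rnorm(n)
x2 <- rbinom(n, 1, 0.4)
treat <- rbinom(n, 1, plogis(-0.5 + 0.8 * x1 + 0.6 * x2))
dat <- data.frame(treat, x1, x2)

# 1. Estimate propensity scores with a logistic regression
ps_model <- glm(treat ~ x1 + x2, data = dat, family = binomial)
dat$ps   <- fitted(ps_model)

# 2-4. Overlay the propensity-score densities of treated and control units.
#      For the ATT, the key check is that the control density covers the
#      region where the treated density has mass.
d_treat   <- density(dat$ps[dat$treat == 1], from = 0, to = 1)
d_control <- density(dat$ps[dat$treat == 0], from = 0, to = 1)

plot(d_treat, col = "red", ylim = range(d_treat$y, d_control$y),
     main = "Propensity-score overlap", xlab = "Propensity score")
lines(d_control, col = "blue")
legend("topright", c("treated", "control"), col = c("red", "blue"), lty = 1)

# A crude numeric check: range of treated scores vs. available controls
range(dat$ps[dat$treat == 1])
range(dat$ps[dat$treat == 0])
```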
How to Evaluate and Visualize the Positivity Assumption for the ATT
CC BY-SA 4.0
null
2023-05-03T16:45:48.393
2023-05-03T20:47:58.603
null
null
360805
[ "causality", "propensity-scores", "treatment-effect" ]
614795
1
null
null
0
60
If $\boldsymbol{X} \sim \mathcal{N}_N(\boldsymbol{\mu},\boldsymbol{\Sigma})$ is an $N$-dimensional Gaussian vector, where $\boldsymbol{\mu} \in \mathbb{R}^N$ and $\boldsymbol{\Sigma} \in \mathbb{R}^{N \times N}$, what is the distribution of $$Y=\lVert \boldsymbol{X} \rVert^2,$$ where $\lVert \cdot \rVert$ denotes the $L_2$-norm (Euclidean norm)? Note that the following question was answered [here](https://math.stackexchange.com/q/2723239), and it claims that $Y$ has a [generalised chi-squared distribution](https://en.wikipedia.org/wiki/Generalized_chi-squared_distribution), but I couldn't manage to find the parameters. If $Y \sim \tilde{\chi}^2(\boldsymbol{w},\boldsymbol{k},\boldsymbol{\lambda},m,s)$, how can I find its parameters from $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$?
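For what it's worth, here is a sketch of how the parameters fall out of the eigendecomposition of $\boldsymbol{\Sigma}$ (I am matching them to the $(\boldsymbol{w},\boldsymbol{k},\boldsymbol{\lambda},s,m)$ parameterisation on the Wikipedia page, so double-check the argument order against whatever source or implementation you use).

Write $\boldsymbol{\Sigma} = U D U^\top$ with $U$ orthogonal and $D=\operatorname{diag}(d_1,\dots,d_N)$, and set $\boldsymbol{W} = U^\top \boldsymbol{X} \sim \mathcal{N}_N(\boldsymbol{b}, D)$ with $\boldsymbol{b}=U^\top\boldsymbol{\mu}$. Since $U$ is orthogonal,
$$
Y=\lVert \boldsymbol X\rVert^2=\lVert \boldsymbol W\rVert^2=\sum_{j=1}^N W_j^2 ,
$$
and the $W_j$ are independent. For $d_j>0$ we have $W_j/\sqrt{d_j}\sim\mathcal N\!\big(b_j/\sqrt{d_j},\,1\big)$, so $W_j^2 = d_j\,Q_j$ with $Q_j\sim\chi^2_1\!\big(\text{ncp}=b_j^2/d_j\big)$; for $d_j=0$, $W_j^2=b_j^2$ is a constant. Therefore
$$
Y \;\stackrel{d}{=}\; \sum_{j:\,d_j>0} d_j\,\chi^2_1\!\big(b_j^2/d_j\big) \;+\; \sum_{j:\,d_j=0} b_j^2 .
$$
Grouping equal eigenvalues: the weights $\boldsymbol w$ are the distinct positive eigenvalues of $\boldsymbol\Sigma$, the degrees of freedom $\boldsymbol k$ are their multiplicities, the non-centralities are $\lambda_i=\sum_{j:\,d_j=w_i} b_j^2/w_i$, the coefficient of the extra normal term is $s=0$, and the constant offset is $m=\sum_{j:\,d_j=0} b_j^2$ (which is zero when $\boldsymbol\Sigma$ is non-singular).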
Distribution of Squared Euclidean Norm of Gaussian Vector
CC BY-SA 4.0
null
2023-05-03T16:59:30.747
2023-05-04T13:19:56.863
2023-05-03T18:28:11.517
387145
387145
[ "distributions", "mathematical-statistics", "multivariate-normal-distribution" ]
614796
1
null
null
2
25
I have some data that I have modeled as a function of two Gaussian process regression models $X_i(p_i)$, where $p_i$ is a parameter, so my regression model is: $y(p_1, p_2) = f(X_1, X_2) = X_1(p_1)\,(1 - \exp(a/X_2(p_2)))$. I know how to compute confidence intervals for $X_i(p_i)$ for a given value of the parameter $p_i$, but how do I compute a confidence interval for $y(p_1, p_2)$?
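One pragmatic route, if a delta-method expansion is more than you need, is to propagate the GP posteriors by simulation. The R sketch below assumes the two GPs are independent and that you can extract a posterior mean and standard deviation at the chosen $p_1$ and $p_2$; the numbers `mu1`, `sd1`, `mu2`, `sd2`, and `a` are placeholders, not values from your models.

```r
set.seed(7)
# Placeholder posterior summaries at the chosen p1, p2 (replace with your GP output)
mu1 <- 2.0; sd1 <- 0.15    # X1(p1) posterior mean / sd
mu2 <- 5.0; sd2 <- 0.40    # X2(p2) posterior mean / sd
a   <- -1.0                # known constant in the model

f <- function(x1, x2, a) x1 * (1 - exp(a / x2))

# Monte Carlo propagation, assuming the two GP posteriors are independent normals
B  <- 1e5
x1 <- rnorm(B, mu1, sd1)
x2 <- rnorm(B, mu2, sd2)
y  <- f(x1, x2, a)

quantile(y, c(0.025, 0.975))   # approximate 95% interval for y(p1, p2)
c(mean = mean(y), sd = sd(y))
```

If the two GPs share inputs or hyperparameters, the independence assumption should be replaced by sampling from their joint posterior.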
How to compute confidence interval of function of Gaussian Process Regression
CC BY-SA 4.0
null
2023-05-03T17:02:26.453
2023-05-03T17:02:26.453
null
null
256955
[ "regression", "confidence-interval", "nonlinear-regression", "gaussian-process" ]
614798
1
null
null
0
31
I'm conducting an experiment in which participants receive either an object A or an object B. Next, we ask participants if they want to trade their object for the alternative object (i.e., object B for participants who received object A, and object A for participants who received object B). I want to compare the proportion or frequency of participants that refuse to trade A for B with the proportion or frequency of participants that agree to trade B for A. Is it possible to conduct such an analysis, in particular a Bayesian analysis? Such an analysis looks like an A/B test, except that I want to compare non-occurrence in group A with occurrence in group B. Thanks
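Since the tags mention R, here is a simple conjugate sketch of the comparison described above: give each group its own Beta prior on the probability of its event of interest, update with the observed counts, and compare the two posteriors by simulation. The counts are made up for illustration.

```r
set.seed(123)
# Hypothetical counts (replace with your data)
n_A <- 50; refuse_A <- 32   # group A: received A, asked to trade for B; event = refuses
n_B <- 50; accept_B <- 21   # group B: received B, asked to trade for A; event = accepts

# Beta(1, 1) priors, conjugate update
draws <- 1e5
pA <- rbeta(draws, 1 + refuse_A, 1 + n_A - refuse_A)
pB <- rbeta(draws, 1 + accept_B, 1 + n_B - accept_B)

diff <- pA - pB
mean(diff)                         # posterior mean difference in proportions
quantile(diff, c(0.025, 0.975))    # 95% credible interval
mean(pA > pB)                      # posterior probability that refusing A is more common
```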
Compare two proportions with a Bayesian analysis
CC BY-SA 4.0
null
2023-05-03T17:14:24.223
2023-05-03T17:14:24.223
null
null
381165
[ "r", "proportion", "fishers-exact-test", "z-test", "bayes-factors" ]
614800
1
null
null
0
29
[](https://i.stack.imgur.com/Khevv.png) The above picture is an excerpt from a paper I am reading (Roads and Loans, Review of Financial Studies). Based on how the authors report their results, I think $\beta_1$ is their coefficient of interest. Is that correct? But even if so, what are the other variables doing here? Also, if one wants to include an interaction effect, shouldn't the authors also include the standalone main-effect term corresponding to $\beta_2$ as a control (similar to $\beta_3$ being the main-effect term for $\beta_4$)?
Can someone help me understand this regression model?
CC BY-SA 4.0
null
2023-05-03T17:40:44.470
2023-05-03T17:40:44.470
null
null
355204
[ "regression", "interaction", "interpretation", "regression-discontinuity" ]
614801
1
null
null
0
19
I am running a hypothesis test, and I am trying to share the statistics from this test with a business user. I have calculated the effect size using Cohen's d. Now I want to find a way to convert this effect size back into the original units of measurement, something more consumable for the business user. The means of the two groups are defect rates.
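Because Cohen's d is just the mean difference divided by the pooled standard deviation, you can multiply it back out to recover the difference in the original units. A quick R sketch with made-up defect-rate numbers:

```r
# Hypothetical summary statistics (replace with yours)
m1 <- 0.062; s1 <- 0.020; n1 <- 120   # defect rate, group 1
m2 <- 0.050; s2 <- 0.018; n2 <- 130   # defect rate, group 2

sd_pooled <- sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
d <- (m1 - m2) / sd_pooled            # Cohen's d

# Back to original units: d times the pooled SD is simply the raw difference
d * sd_pooled                         # equals m1 - m2, here 0.012 (1.2 percentage points)
```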
How to translate calculated Effect size into original units of measurement?
CC BY-SA 4.0
null
2023-05-03T18:00:29.760
2023-05-03T18:28:10.997
2023-05-03T18:28:10.997
378323
378323
[ "hypothesis-testing", "statistical-power", "effect-size", "cohens-d" ]
614802
1
null
null
0
13
I have a dataset containing geolocation and time of purchase information for buyers of a new product (Product X). I'd like to determine the unit of spatial provision when this product was offered to customers. In other words, I want to find out whether: - The product was provided to all potential customers at once, - The product was rolled out in a staggered manner at the city level, - The product was rolled out in a staggered manner at the neighborhood level, - The product was rolled out in a staggered manner at the postal district level How can I provide evidence or statistically analyze this data to determine which of these cases is more likely? Are there any specific methods or tests that I should consider using for this purpose? Any guidance on approaching this problem would be greatly appreciated!
Analyzing geolocation and time of purchase data to determine the spatial provision strategy for a new product
CC BY-SA 4.0
null
2023-05-03T18:01:55.163
2023-05-03T18:01:55.163
null
null
334187
[ "hypothesis-testing", "anova", "clustering", "spatial", "spatio-temporal" ]
614803
1
null
null
0
10
I'm new to econometrics and this might be a dumb question, but if I include an interaction term, do I also have to include the variables separately as control variables? For example, $Y = a + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2$ contains both variables separately as controls. Is this always the right way to go? I used to think it is, but sometimes I get confused because some papers have a model like $Y = a + \beta_1 X_1 + \beta_2 X_1 X_2$. Is this bad practice? When would someone include all of the variables separately as controls in some cases and not in others?
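If it helps, in R the two specifications described above correspond to two different model formulas. The sketch below uses simulated placeholder data just to show how the fitted terms differ.

```r
set.seed(1)
# Simulated placeholder data
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 + 0.3 * x2 + 0.8 * x1 * x2 + rnorm(n)

# Full specification: both main effects plus the interaction
coef(lm(y ~ x1 * x2))        # x1 * x2 expands to x1 + x2 + x1:x2

# Interaction without the x2 main effect (the pattern seen in some papers)
coef(lm(y ~ x1 + x1:x2))
```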
Control variables in interaction effects
CC BY-SA 4.0
null
2023-05-03T18:12:36.237
2023-05-03T18:12:36.237
null
null
355204
[ "regression", "interaction", "controlling-for-a-variable" ]
614804
1
null
null
0
16
I am writing a thesis now, and I ran into a problem: I understand the basic concepts of time-series econometrics, what models and tests exist and what exactly they check, but I cannot find good, structured information on how to choose a model for multivariate time-series regression. I'm looking for something like a practical guide / cheat sheet on this issue (ideally without much theory), just the plain logic of which tests to run and why for the various models (VAR/VECM/GARCH/ARIMA/HAR-RV/etc.). A very simplified example: I test with the ADF test whether the series are stationary or non-stationary. If stationary, I use a VAR. If not stationary, I check for cointegration using the Engle-Granger procedure or Johansen's ML estimator. If there is no cointegration, I take the first difference and then use a VAR. If there is cointegration, I use a VECM. I really need your recommendations.
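Not a reference, but the decision flow in the simplified example above maps onto a short R script. The sketch below uses the urca and vars packages on two simulated series that are cointegrated by construction, so take away the sequence of calls rather than the particular numbers.

```r
library(urca)   # unit-root and cointegration tests
library(vars)   # VAR estimation (VECM via ca.jo + vec2var)

set.seed(1)
# Two hypothetical I(1) series sharing a common stochastic trend (cointegrated)
trend <- cumsum(rnorm(300))
y1 <- trend + rnorm(300, sd = 0.5)
y2 <- 0.8 * trend + rnorm(300, sd = 0.5)
Y  <- cbind(y1, y2)

# Step 1: ADF test on each series
summary(ur.df(y1, type = "drift", selectlags = "AIC"))
summary(ur.df(y2, type = "drift", selectlags = "AIC"))

# Step 2 (if non-stationary): Johansen cointegration test
joh <- ca.jo(Y, type = "trace", ecdet = "const", K = 2)
summary(joh)

# Step 3a: cointegration found -> VECM (converted to its VAR representation here)
vecm_as_var <- vec2var(joh, r = 1)

# Step 3b: no cointegration -> difference the series and fit a VAR
dY      <- diff(Y)
var_fit <- VAR(dY, p = 2, type = "const")
```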
Practical guide or book for time series analysis
CC BY-SA 4.0
null
2023-05-03T18:13:55.967
2023-05-03T18:22:41.217
2023-05-03T18:22:41.217
22311
361080
[ "time-series", "references", "econometrics", "garch", "vector-error-correction-model" ]
614805
1
null
null
0
25
Based on my studies, Hoeffding's inequality is used for bounded or sub-Gaussian random variables, as explained on the [Wikipedia](https://en.wikipedia.org/wiki/Hoeffding%27s_inequality) page. I am reading the book [Introduction to Multi-Armed Bandits](https://arxiv.org/pdf/1904.07272.pdf) by Aleksandrs Slivkins. On page 161, he provides a general inequality without the assumption that the random variable is bounded or sub-Gaussian and calls it the Hoeffding inequality. I am wondering how it is possible to prove it. [](https://i.stack.imgur.com/LQIpa.png)
How to prove this version of Hoeffding inequality?
CC BY-SA 4.0
null
2023-05-03T18:14:22.807
2023-05-03T18:14:22.807
null
null
133197
[ "probability-inequalities", "hoeffdings-inequality" ]
614806
1
null
null
0
8
In the following, I am studying life expectancy as an example, where I have dummies for a person being male, female, smart, and wealthy. The base reference is a child who is neither smart nor wealthy. $$\text{Life expectancy} = \beta_1 \text{Male} + \beta_2 \text{Female} + \beta_3 \text{Wealthy} + \beta_4 \text{Smart} + \beta_5 (\text{Male} \times \text{Wealthy}) + \beta_6 (\text{Female} \times \text{Wealthy}) + \beta_7 (\text{Smart} \times \text{Wealthy}) + \beta_8 (\text{Male} \times \text{Smart}) + \beta_9 (\text{Female} \times \text{Smart}) + \beta_{10} (\text{Male} \times \text{Smart} \times \text{Wealthy}) + \beta_{11} (\text{Female} \times \text{Smart} \times \text{Wealthy})$$ How should $\beta_{10}$ and $\beta_{11}$ be interpreted?
Three-way interactions with two-way interactions consisting of only dummies :D
CC BY-SA 4.0
null
2023-05-03T18:25:03.450
2023-05-03T18:25:03.450
null
null
387150
[ "interaction", "interpretation" ]
614807
1
614859
null
2
24
In [4D U-Nets for Multi-Temporal Remote Sensing Data Classification](https://www.mdpi.com/2072-4292/14/3/634) they give the following formula for the traditional 2D CNN, but I'm confused about the $w_{i,j}$ in this formula: [](https://i.stack.imgur.com/VuBA8.png) From my knowledge I would say that if you have an $N \times N \times d$ input, your kernel will have shape $H \times W \times d$ in this case, so the 'depth' of the kernel matches the 'depth' of the input. In this formula it seems that they are re-using the same $H \times W$ kernel at every 'depth' level $d$. I would therefore say that it should be $w_{i,j,c}$ instead of $w_{i,j}$. Am I missing something here?
Traditional 2D CNN Formula
CC BY-SA 4.0
null
2023-05-03T18:40:17.003
2023-05-04T08:49:37.080
null
null
373727
[ "machine-learning" ]
614808
2
null
614678
0
null
If you have sufficiently long time series, you could run your DCC model on monthly data. Then you would have matching data frequencies for your dependent and independent variables. > However, there seems to be no ARCH effect when using monthly data, while this problem does not occur when using daily data. Even if there are no ARCH patterns in monthly data, there might still be DCC patterns there. Note that in principle the DCC model can take any univariate conditional variance model as an input; it does not have to be GARCH. However, some software implementations (such as the `rmgarch` package in R) only allow univariate GARCH inputs to DCC. In such a case you may either create a fake object of the appropriate GARCH class but with constant conditional variance or use a GARCH model with restricted parameter values, e.g. $\alpha=0.001$ and $\beta=0.999$ that will produce a virtually constant conditional variance.
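To make the last suggestion concrete, here is roughly how the restricted-parameter trick could look with rugarch/rmgarch. Treat it as an untested sketch: the argument names follow the package documentation as I recall it, and `returns` is a placeholder for your matrix of monthly returns.

```r
library(rugarch)
library(rmgarch)

# Univariate spec with the GARCH dynamics pinned down so the conditional
# variance is virtually constant (alpha near 0, beta near 1, alpha + beta < 1)
uspec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE),
  fixed.pars     = list(alpha1 = 0.001, beta1 = 0.998)
)

# Same restricted univariate model for every series, then DCC on top
mspec <- multispec(replicate(ncol(returns), uspec))
dspec <- dccspec(uspec = mspec, dccOrder = c(1, 1), distribution = "mvnorm")

fit <- dccfit(dspec, data = returns)   # 'returns' = your monthly return matrix
rcor(fit)                              # extract the dynamic conditional correlations
```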
null
CC BY-SA 4.0
null
2023-05-03T18:50:32.987
2023-05-03T18:50:32.987
null
null
53690
null
614809
1
null
null
0
22
Suppose you draw a random sample from a probability distribution, with the objective of gaining information about a parameter of that distribution. The inferential usefulness of the probabilistic information (Fisher information about the parameter) of the sample is obvious. Likewise, if we examine a set of messages sent over a noisy channel, we can quantify their entropy to draw inferences about the error source, the population of messages, and the messages as originally composed. Alternatively, we could use non-probabilistic measures in either scenario (or at least in some versions of them). These include exact, non-parametric, combinatorial, or algorithmic measures. These measures would yield, respectively, exact information, non-parametric Fisher information, combinatorial entropy, and algorithmic information or complexity. I can't recall ever seeing an analytical approach that uses the two types of information (probabilistic and non-probabilistic) in a complementary or supplementary way. They seem to be mutually exclusive. For example, we would never estimate a p-value when it is practical to compute an exact test. Non-parametric tests are inferior to (less informative than and redundant with) parametric tests when assumptions hold but are superior otherwise. Or else, non-parametric tests answer different classes of questions entirely (e.g., quantifying monotonic vs. linear relationships). Combinatorial entropy is equivalent to Shannon entropy under a discrete, uniform distribution, and the respective measures yield the same results in that case. Algorithmic information/Kolmogorov complexity describes individual strings exactly as they are, while Shannon entropy describes average/expected strings or the noise-generating distribution acting on a string. Are there ways that we could, and actually would prefer to, use both probabilistic and non-probabilistic information types to supplement one another, when answering a single question? I'm specifically interested in cases where the two information types are measured on a single data set, and where the question being answered is not multifaceted. It would be trivial to conceive of a question with several parts, some of which are probabilistic in nature and others not, or some requiring greater robustness than others. It's also easy to concoct a scenario where there are multiple data sources or multiple samples from one source, each with different data types or assumptions, respectively. In contrast, it is hard (at least for me) to think of a question as simple as "What is the mean of the population from which this single sample has been drawn?" that would benefit from using both.
Can probabilistic (e.g., Fisher, Shannon) and non-probabilistic (e.g., Hartley, Kolmogorov) information types be jointly useful?
CC BY-SA 4.0
null
2023-05-03T19:01:20.357
2023-05-03T19:17:07.857
2023-05-03T19:17:07.857
298128
298128
[ "nonparametric", "entropy", "information-theory", "fisher-information", "exact-test" ]
614810
2
null
614780
1
null
If your model specifies some change in the distribution (e.g. a level shift) at the time of the interruption and perhaps following it, you know the series is nonstationary. A more relevant question may be how to account for such changes when testing for stationarity or presence of unit roots (the hypothesis being that aside from the interruption, the series is stationary or has a unit root). There are some versions of the ADF and perhaps other tests that allow for specific changes such as level shifts. Also note that your time series are fairly short (36, 24 or 12 observations), so the tests will tend to have low power. You will thus only be able to detect very pronounced violations of the null hypothesis.
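As an illustration of that last point, the urca package in R offers tests along these lines. The sketch below applies the Zivot-Andrews test, which allows for a single endogenous break, to a made-up series with a level shift; with only 12 to 36 observations the small-sample caveats noted above apply even more strongly.

```r
library(urca)

set.seed(1)
# Hypothetical series: stationary noise with a level shift at the interruption
y <- c(rnorm(18, mean = 10), rnorm(18, mean = 7))

# Zivot-Andrews unit-root test allowing for a one-time shift in the intercept
za <- ur.za(y, model = "intercept", lag = 1)
summary(za)   # test statistic, critical values, and the estimated break point
```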
null
CC BY-SA 4.0
null
2023-05-03T19:09:16.227
2023-05-03T19:09:16.227
null
null
53690
null