Columns (name: type, observed range):
Id: string, length 1-6
PostTypeId: string, 7 classes
AcceptedAnswerId: string, length 1-6
ParentId: string, length 1-6
Score: string, length 1-4
ViewCount: string, length 1-7
Body: string, length 0-38.7k
Title: string, length 15-150
ContentLicense: string, 3 classes
FavoriteCount: string, 3 classes
CreationDate: string, length 23
LastActivityDate: string, length 23
LastEditDate: string, length 23
LastEditorUserId: string, length 1-6
OwnerUserId: string, length 1-6
Tags: list
614561
2
null
614501
7
null
This is to some extent similar to some other answers, but I feel it is still worth saying. What I teach (and have seen elsewhere) is to either test at a fixed level $\alpha$ or to use more graded "evidence language". If we fix a level, I'd just say "We do not reject at level $\alpha$" (or we do, of course), or maybe, if you want to bring the term evidence in, "there is no significant evidence at level $\alpha$" (unless there is). Alternatively, I'd interpret test results in a non-binary way, saying "There is very strong/strong/modest/weak/no evidence" for p < 0.001, p < 0.01, p < 0.05, p < 0.1, and p > 0.1, respectively. I don't like the term "insufficient", as it seems to suggest that we wanted to reject but failed to do so (same with the wording "fail to reject" in the question), whereas I think a scientist should be open to any result rather than hoping for significance. (Even though in many cases it may arguably be more honest to say something like "I wanted significance so much but didn't get it, boohoo", in which case the researcher had probably better say it this way, so that people know what to think of the researcher's neutrality...)
null
CC BY-SA 4.0
null
2023-05-01T00:18:45.613
2023-05-01T00:18:45.613
null
null
247165
null
614562
1
null
null
0
10
There is a substantial literature on what I don't want to do, which is analyzing the variance across studies of a treatment versus control group. What I want to do should be simpler: I have 11 samples of Type 1 people and 11 samples of Type 2 people. My dependent variable, Y, has more variance for Type 1 people, reflected both in the variation of the sample means of Type 1 people and in the variation within samples. I conducted a Fisher’s F ratio test of the variance in sample means between Type 1 and Type 2 samples, and it was statistically significant. However, I’m unsure if this is a good approach. (People worry about normality. The variable Y is not normally distributed, although of course sample means will be normally distributed for a given sample size. Samples vary widely in size.) Nor do I know how to compare the within-samples variance. Just a quick and dirty and probably completely incorrect t-test of standard deviations between the two types of samples is highly significant. Any advice appreciated.
Meta-analytic test of variances (but not variances of differences)
CC BY-SA 4.0
null
2023-05-01T00:50:17.077
2023-05-01T00:50:17.077
null
null
386936
[ "hypothesis-testing", "mixed-model", "meta-analysis" ]
614563
1
null
null
0
14
I'm a college student looking to compare housing prices between some control group of cities and my so-called "treatment" group of cities. More specifically, I'm looking to see if there is a significant difference between increases in housing prices in major Chinese cities versus similar, non-Chinese East Asian cities within a specific time period. I considered using a 2x2 diff-in-diff method to measure the difference; however, I'm not sure this approach would work particularly well since I don't exactly have a "treatment" I am looking to measure between the two periods. I am also considering creating two regression models between Chinese and non-Chinese housing prices and computing the difference between the slopes. Does anyone with a better statistical background than myself have any other suggestions for possible methods? Thanks!
Preferred statistics technique for housing price comparison
CC BY-SA 4.0
null
2023-05-01T00:58:10.177
2023-05-01T00:58:10.177
null
null
386937
[ "regression", "statistical-significance", "difference-in-difference", "social-science" ]
614564
2
null
614501
3
null
#### My preference is to use "no evidence"

The testing in a classical hypothesis test is a binary decision, so in this context I prefer to use "no evidence" vs "evidence". It is best not to conflate the decision to reject the null hypothesis (which is fixed by the data and has no uncertainty) with the underlying truth or falsity of the hypotheses (which is uncertain). For that reason I would recommend you avoid talking about "evidence to reject" and instead use wording that either refers to evidence in favour of the alternative hypothesis, or the actual rejection decision that was made:

- We found no evidence in favour of $H_A$ at the significance level $\alpha$.
- We reject $H_0$ in favour of $H_A$ at the significance level $\alpha$.
- We found evidence in favour of $H_A$ at the significance level $\alpha$.
- We do not reject $H_0$ in favour of $H_A$ at the significance level $\alpha$.

Alternatively, you can build in the "statistically significant" description:

- We found no statistically significant evidence in favour of $H_A$ (at the $\alpha$ level).
- We found statistically significant evidence in favour of $H_A$ (at the $\alpha$ level).

Alternatively, in many contexts it is more sensible to just state the relevant p-value and characterise the evidence without use of a specific significance level:$^\dagger$

- We found no evidence in favour of $H_A$ ($p=0.3255$).
- We found weak evidence in favour of $H_A$ ($p=0.0341$).
- We found strong evidence in favour of $H_A$ ($p=0.0076$).
- We found very strong evidence in favour of $H_A$ ($p=0.0008$).

The main reason I prefer not to use "insufficient evidence" is that it suggests some evidence in favour of the alternative hypothesis when that may not be the case. For example, if you have a p-value of $p=0.3255$, that means that if the null hypothesis is true, almost one-third of the time you would see a result that is at least that conducive to the alternative hypothesis. My view is that this is accurately characterised as "no evidence", not "insufficient evidence to reject".

---

$^\dagger$ Here I use my own assessments of the strength of evidence, to wit: "weak" for a p-value between 0.01 and 0.05, "strong" for a p-value between 0.001 and 0.01, "very strong" for a p-value of 0.001 or lower. Others may take a different view of the appropriate correspondence, but so long as you state the p-value, it should be fine.
null
CC BY-SA 4.0
null
2023-05-01T01:06:14.543
2023-05-01T11:24:20.677
2023-05-01T11:24:20.677
173082
173082
null
614565
2
null
555817
0
null
First of all, an ARMA(1,1) process can be rewritten as $$y_t = \epsilon_t + (\beta+\theta)\sum_{j=1}^{\infty}\beta^{j-1}\epsilon_{t-j}.$$ Therefore, $$A = E[y_{t-2}\epsilon_t] = E\left[\epsilon_{t-2}\epsilon_t + (\beta+\theta)\sum_{j=3}^{\infty}\beta^{j-3}\epsilon_{t-j}\epsilon_t\right] \tag{1}$$ Because $\epsilon_t$ is white noise, $E[\epsilon_{t-2}\epsilon_t]=0$, and similarly for the other terms in eq. (1). Overall, $A=0$. Likewise, if you calculate $B = E[y_{t-2}\epsilon_{t-1}]$, you will get zero, which means that $y_{t-2}$ is not correlated with the error term $u_t = \epsilon_t + \theta\epsilon_{t-1}$. We also know that $y_{t-2}$ is correlated with $y_t$ through $y_{t-1}$. This all points to $y_{t-2}$ as a reliable instrument for $y_{t-1}$.
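As a quick numerical sanity check of this argument, one can simulate an ARMA(1,1) process and verify both correlations; this is only a minimal sketch with illustrative values $\beta=0.5$, $\theta=0.3$.

```
# Minimal sketch: simulate an ARMA(1,1) with illustrative beta = 0.5, theta = 0.3
# and check the two correlations used in the argument above.
set.seed(1)
n <- 1e5; beta <- 0.5; theta <- 0.3
eps <- rnorm(n)
y <- numeric(n); y[1] <- eps[1]
for (t in 2:n) y[t] <- beta * y[t - 1] + eps[t] + theta * eps[t - 1]
u <- eps + theta * c(0, eps[-n])        # u_t = eps_t + theta * eps_{t-1}
cor(y[1:(n - 2)], u[3:n])               # ~ 0: y_{t-2} uncorrelated with u_t
cor(y[1:(n - 2)], y[2:(n - 1)])         # clearly non-zero: y_{t-2} correlated with y_{t-1}
```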
null
CC BY-SA 4.0
null
2023-05-01T01:45:46.873
2023-05-01T15:35:59.157
2023-05-01T15:35:59.157
386880
386880
null
614566
1
null
null
1
74
Let \begin{equation} g(\alpha,\beta) = \begin{cases} \frac{\alpha}{\beta}, & \text{if } \alpha > \beta \\ 0, & \text{if } \alpha \leq \beta \end{cases} \end{equation} I want to find the CDF of $$Z=\max\left\{g(X,Y),g(X,C)\right\}$$ where $X, Y \geq 0$ are independent random variables and $C>0$ is a constant. I am facing two difficulties:

- I could get the product of the CDFs inside the $\max$ if the two terms were independent. However, as they share $X$, they are dependent. I therefore tried conditioning on $X$ first and solving, but in that case $\frac{X}{C}$ becomes a constant, which confuses me.
- I have further complications in applying the conditions $X>Y$ and $X>C$ separately in the analysis.

Can someone help me write the CDF of $Z$ in terms of the PDFs/CDFs of $X$ and $Y$?
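Whatever analytic expression is proposed, it can be checked against a Monte Carlo estimate. The sketch below assumes, purely for illustration, that $X$ and $Y$ are exponential and that $C=2$; these choices are not part of the question.

```
# Monte Carlo sketch of the CDF of Z = max{g(X,Y), g(X,C)} under assumed
# exponential X and Y and an assumed constant C = 2 (illustrative only).
set.seed(1)
n <- 1e5
C <- 2
X <- rexp(n, rate = 1)
Y <- rexp(n, rate = 1.5)
g <- function(a, b) ifelse(a > b, a / b, 0)
Z <- pmax(g(X, Y), g(X, C))
mean(Z <= 3)                              # empirical estimate of P(Z <= 3)
plot(ecdf(Z), main = "Empirical CDF of Z")
```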
CDF of $\max$ under conditions
CC BY-SA 4.0
null
2023-05-01T01:50:04.397
2023-05-03T03:45:06.543
2023-05-01T03:54:49.353
124679
124679
[ "probability", "density-function", "cumulative-distribution-function" ]
614567
1
null
null
2
81
I understand that Maximum Likelihood Estimation (MLE) can sometimes result in biased estimates - for example, when estimating the parameters of the normal distribution, MLE is known to produce a biased estimate of $\sigma^2$:

\begin{aligned} \ell(\mu, \sigma^2 \mid \mathbf{y}) &= \log\left[\prod_{i=1}^{n}f(y_i; \mu, \sigma^2)\right] \\ &= \sum_{i=1}^{n}\log f(y_i; \mu, \sigma^2) \end{aligned}

\begin{aligned} \frac{d}{d\sigma^2}\ell(\mu, \sigma^2 \mid \mathbf{y}) = 0 \quad\Rightarrow\quad \hat{\sigma}^2_{\text{MLE}} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2 \end{aligned}

\begin{aligned} \text{Bias} = \mathbb{E}\left(\hat{\sigma}^2_{\text{MLE}}\right) - \sigma^2 = -\frac{\sigma^2}{n} \neq 0, \end{aligned}

unlike the unbiased estimator $\frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$.

To target this problem, a modification of MLE called Restricted Maximum Likelihood Estimation (REML) can be used, in which we calculate the likelihood for a transformed (random) variable $\mathbf{K}^\top \mathbf{y}$ such that $\mathbf{K}^\top X = 0$, e.g. $\mathbf{K}^\top = I - X(X^\top X)^{-1}X^\top$. For example, in the case of a linear mixed effects model:

$$y = X\beta + Z u + e$$
$$(u,e)^\top \sim \mathcal{N}\left(\begin{Bmatrix}0 \\ 0\end{Bmatrix}, \sigma^2\begin{Bmatrix} G & 0 \\ 0 & R \end{Bmatrix} \right)$$
$$\operatorname{Var}(y) = \sigma^2(ZGZ^\top + R) = \sigma^2 H$$

In this case, the regular log-likelihood for $y$ can be written as:

$$\ell_{\text{ML}}(\beta, \phi;y) = -\frac{1}{2}\left(n\log(2\pi) + n\log(\sigma^2) + \log|\mathbf{H}| + (\mathbf{y}-\mathbf{X}\boldsymbol{\beta})^\top \mathbf{H}^{-1} (\mathbf{y}-\mathbf{X}\boldsymbol{\beta})/\sigma^2\right)$$

However, variance estimates from this regular log-likelihood might be biased. On the other hand, the restricted log-likelihood for $\mathbf{K}^\top \mathbf{y}$ can be written as:

$$\mathbf{K}^\top \mathbf{y} \sim \mathcal{N}(0, \sigma^2 \mathbf{K}^\top \mathbf{H} \mathbf{K})$$
$$\ell_{\text{R}}(\phi; \mathbf{K}^\top \mathbf{y}) = -\frac{1}{2} \left((n-p)\log(2\pi) + (n-p)\log\sigma^2 + \log|\mathbf{K}^\top \mathbf{H}^{-1} \mathbf{K}| + \frac{1}{\sigma^2} \mathbf{y}^\top \mathbf{K} (\mathbf{K}^\top \mathbf{H}^{-1} \mathbf{K})^{-1} \mathbf{K}^\top \mathbf{y}\right)$$

This brings me to my question: I have read that terms such as $(\mathbf{K}^\top \mathbf{H}^{-1} \mathbf{K})^{-1}$ and $(X^\top X)^{-1}$ can sometimes be difficult to invert - for instance, the ranks of these matrices may make them non-invertible. In such instances, a "Generalized Inverse" ([https://en.wikipedia.org/wiki/Generalized_inverse](https://en.wikipedia.org/wiki/Generalized_inverse)) can be used to invert these matrices. However, I remember from my linear algebra class that the generalized inverse is not always "unique" - that is, for some matrix $A$, there might exist several possible generalized inverses. Thus, if a portion of the restricted log-likelihood is not unique, will this complicate the estimation of the variance components? For instance, if the variance components are non-unique because parts of the likelihood rely on non-unique generalized inverses - could this result in a non-identifiable model? If I have understood this correctly, restricted maximum likelihood was designed to target a problem of biased estimates - but ironically it might result in another problem of non-identifiable estimates? Thanks!

References:

- https://www.sciencedirect.com/science/article/pii/S002437951100320X
- Restricted maximum likelihood with less than full column rank of $X$
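The bias claim in the opening paragraph is easy to verify by simulation; a minimal sketch with my own illustrative numbers ($\sigma^2 = 4$, $n = 10$):

```
# Bias of the MLE of sigma^2 versus the unbiased (n-1) estimator, by simulation.
set.seed(1)
sigma2 <- 4; n <- 10
est <- replicate(1e4, {
  y <- rnorm(n, mean = 0, sd = sqrt(sigma2))
  c(mle = sum((y - mean(y))^2) / n,
    unbiased = sum((y - mean(y))^2) / (n - 1))
})
rowMeans(est)            # MLE averages about (n-1)/n * sigma2 = 3.6; unbiased about 4
sigma2 * (n - 1) / n
```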
Are Variance Estimates from the Restricted Maximum Likelihood Considered as Unique?
CC BY-SA 4.0
null
2023-05-01T02:36:20.703
2023-05-04T06:24:43.550
2023-05-04T06:24:43.550
77179
77179
[ "regression" ]
614568
2
null
614118
0
null
If I am understanding this correctly, a t-test with Bonferroni correction to adjust for multiple comparisons should work.
null
CC BY-SA 4.0
null
2023-05-01T04:06:22.697
2023-05-01T04:06:22.697
null
null
319471
null
614569
1
null
null
0
12
I'm just learning about extreme value distributions, more specifically the GEV. I still have a lot of questions about it, but my question for this post is whether there exists a discretized version of the GEV. When I did a Google Scholar search, I kept getting results about multiple discrete-continuous extreme value (MDCEV) models, but maybe I'm not doing a good enough search. I did see an article on a discretized Gumbel distribution, but nothing about the GEV. So is there a discretized version of the GEV? If not, is the continuous version enough to deal with discrete data? And what would be the benefits of having a standalone discrete version of the GEV?
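For what it's worth, one generic way to build a discrete analogue of any continuous distribution (not specific to the GEV) is to difference its CDF at integer points, $P(K=k)=F(k)-F(k-1)$. A sketch, writing the GEV CDF out directly and assuming an illustrative shape $\xi=0.2$ so the support is bounded below:

```
# Discretize a GEV by differencing its CDF at integers (illustrative sketch).
# GEV CDF for shape xi > 0: F(x) = exp(-(1 + xi*(x - mu)/sigma)^(-1/xi)) where the
# argument is positive, and 0 below the lower endpoint.
pgev_pos_shape <- function(x, mu = 0, sigma = 1, xi = 0.2) {
  z <- 1 + xi * (x - mu) / sigma
  ifelse(z > 0, exp(-z^(-1 / xi)), 0)
}
k <- 0:30
pmf <- pgev_pos_shape(k) - pgev_pos_shape(k - 1)   # P(K = k) for integer k >= 0
sum(pmf)                                           # close to 1 (tails truncated here)
barplot(pmf, names.arg = k)
```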
Discrete Extremes
CC BY-SA 4.0
null
2023-05-01T04:11:56.327
2023-05-01T04:11:56.327
null
null
366252
[ "extreme-value" ]
614570
1
615317
null
3
122
Recently, I was wondering about how to "restrict" a statistical model from making predictions beyond a certain range ([Preventing Illogical Interoperations of Models?](https://stats.stackexchange.com/questions/614128/preventing-illogical-interoperations-of-models)). For example, in this video ([https://www.youtube.com/watch?v=h5aPo5wXN8E&list=PLDcUM9US4XdNM4Edgs7weiyIguLSToZRI&index=3](https://www.youtube.com/watch?v=h5aPo5wXN8E&list=PLDcUM9US4XdNM4Edgs7weiyIguLSToZRI&index=3) @ 56:40), a Bayesian Model is created using the Log Normal Distribution when modelling human heights as heights can not take negative values. After spending some more time reading about this, I came across the idea of "Truncated Probability Distributions" ([https://en.wikipedia.org/wiki/Truncated_normal_distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution)). As I understand, a Truncated Probability Distribution is a Probability Distribution that is defined only on a "limited range" (i.e. "restricted"). For example, consider the Normal Distribution - we can "truncate" this distribution over the range $a - b$: $$f(x; \mu, \sigma, a, b) = \frac{1}{\sigma} \cdot \frac{\phi\left(\frac{x-\mu}{\sigma}\right)}{\Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)}$$ Where: $$\phi(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}x^2\right)$$ This leads me to my question: Suppose I collect some data on how long different people lived and the average amount of yearly income they earned in their life. Suppose I am interested in modelling (e.g. regression) the effect of income on life expectancy. In this problem, it is quite likely to observe an upwards trend in that people with higher incomes likely had the ability to access better quality healthcare and thus lived longer. However, it is also possible that if I use this model to predict the life expectancy of a billionaire, the life expectancy might be around 200 years - and we know that in modern history, no human has ever recorded to live that long. Thus, suppose if I found out the maximum age a human ever reached - to avoid making such illogical predictions, could I create a GLM Regression Model based on a Truncated Normal Probability Distribution between $a = 0$ and $b$ = max_age_ever_recorded and thus address this problem of illogical predictions? Is this a statistically valid approach? Or is this illogical or unnecessary? Thanks!
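For concreteness, the truncated-normal density written above is easy to evaluate with base R functions; a minimal sketch with illustrative numbers, bounding values between 0 and roughly the longest recorded human lifespan (taken here as 122 years):

```
# Truncated normal density f(x; mu, sigma, a, b) as given above.
dtnorm <- function(x, mu, sigma, a, b) {
  ifelse(x >= a & x <= b,
         dnorm(x, mu, sigma) / (pnorm(b, mu, sigma) - pnorm(a, mu, sigma)),
         0)
}
curve(dtnorm(x, mu = 85, sigma = 30, a = 0, b = 122), from = -20, to = 160,
      ylab = "density")
# The density integrates to 1 over [a, b]:
integrate(dtnorm, lower = 0, upper = 122, mu = 85, sigma = 30, a = 0, b = 122)
```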
Applications of Truncated Probability Distributions
CC BY-SA 4.0
null
2023-05-01T04:12:01.100
2023-05-11T00:36:57.603
null
null
77179
[ "regression" ]
614571
1
null
null
1
14
I have built several classification/identification models to identify cat behaviour using accelerometer data. I am comparing two modelling techniques, Random Forest (RF) and supervised Self-Organizing Maps (SOM), and two accelerometer mounting locations, collar and harness, resulting in four different models:

- Collar - Random Forest (CRF)
- Harness - Random Forest (HRF)
- Collar - SOM (CSOM)
- Harness - SOM (HSOM)

Information about the dataset

- Size of dataset: 40852 rows
- Train/test split: 70/30%
- SOM grid: 7x7
- Total of 32 identifier variables
- Total of 4 model-building rounds (MR): MR1 15 behaviours; MR2 8 behaviours; MR3 6 behaviours; MR4 3 behaviours (total of 16 models).

Results

I calculated the Kappa and overall accuracy for each model (see results below). As you can see, the values for the SOM models are higher than those of the RF models.

[](https://i.stack.imgur.com/17itU.png)

Because I still had 6 days' worth of accelerometer data that was not used in any way to train or test the models, I decided to put this data into all the models and have them identify the behaviour. I then calculated the daily proportions for each behaviour and ran a Dirichlet regression to compare the proportions for each behaviour between the models. To be able to test all 16 models, I merged all the behaviours into one of three main categories: active, inactive or maintenance. As you can see below, despite the lower performance values of the RF models, they identify much more consistently with each other than the SOM models do. The proportions identified by the RF models are also more similar to those reported in the literature for cats.

[](https://i.stack.imgur.com/UyrS9.png)

My question

I am not a statistician and have some trouble figuring out why the SOM models have such high performance values while their identifications vary so much between the SOM models. I would expect the SOM models to identify very similarly given their high performance values. One possible explanation I can think of is an overfitting issue for the (supervised) SOM models. If anyone has any other ideas that I could look into, I would be very grateful.
Supervised Self-Organizing Maps: Overfitting or something else?
CC BY-SA 4.0
null
2023-05-01T04:56:28.800
2023-05-01T04:56:28.800
null
null
386192
[ "machine-learning", "random-forest", "supervised-learning", "self-organizing-maps" ]
614572
1
614617
null
11
1460
It's been my understanding that the null hypothesis is never or rarely really true. Then, isn't the real point of statistical testing to detect if there's an effect size large enough for the test to show statistical significance at a given $\alpha$ level? (edit: I mean an effect size one is interested in, which depends on one's research question). If I'm correct, doesn't it mean that ideally we should always conduct some a priori power analysis? Or is my reasoning somehow incorrect? Are there situations where a priori power analysis isn't really necessary? I don't have a specific situation or problem in mind, that's why my question is really general.
If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis?
CC BY-SA 4.0
null
2023-05-01T06:39:37.800
2023-05-02T16:19:50.873
2023-05-01T18:59:37.833
44269
386949
[ "hypothesis-testing", "statistical-significance", "statistical-power" ]
614573
1
null
null
1
36
Referring to [Measures of Association How to Choose](https://journals.sagepub.com/doi/pdf/10.1177/8756479308317006#:%7E:text=2.-,CONTINUOUS%2DORDINAL,%2Dsub%2Db%2C%20%CF%84b%20.), the paper states the following about the association between a polychotomous nominal variable and a continuous variable:

> If the nominal variable has more than two levels, then one can calculate the point-biserial correlation between the continuous variable and all possible pairs of levels of the nominal variable; this would result in $k(k-1)/2$ such coefficients, where k represents the number of levels of the nominal variable. For further reading, see Tate.

However, the paper doesn't state what to do with the point-biserial correlations of all the pairs of levels. I want a single aggregated metric between the two variables. Would taking the arithmetic mean of the coefficients suffice?
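For illustration, here is a minimal sketch (with made-up data) of computing the point-biserial correlation for every pair of levels and one possible aggregate; whether that aggregate is appropriate is exactly what the question is asking.

```
# Point-biserial correlation for every pair of levels of a nominal variable grp
# with a continuous variable y (illustrative simulated data).
set.seed(1)
d <- data.frame(grp = factor(rep(c("A", "B", "C"), each = 30)),
                y   = rnorm(90, mean = rep(c(0, 0.5, 1), each = 30)))
pairs_lv <- combn(levels(d$grp), 2, simplify = FALSE)
r_pb <- sapply(pairs_lv, function(p) {
  sub <- d[d$grp %in% p, ]
  cor(sub$y, as.numeric(sub$grp == p[2]))  # point-biserial = Pearson with 0/1 coding
})
names(r_pb) <- sapply(pairs_lv, paste, collapse = "-")
r_pb
mean(abs(r_pb))  # one possible aggregate (mean absolute coefficient)
```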
Correlation/Association between a polychotomous nominal variable and a continuous variable
CC BY-SA 4.0
null
2023-05-01T07:32:51.147
2023-05-01T08:26:07.260
2023-05-01T08:26:07.260
22047
386952
[ "correlation" ]
614574
1
null
null
1
41
Given the following data:

```
Features: {20, 10, 10}, Classification: "One"
Features: {10, 20, 10}, Classification: "Two"
Features: {10, 20, 30}, Classification: "Two"
```

the question arises: to which classification does the following belong?

```
{10, 10, 20}
```

Is the correct answer:

```
One 0.33
Two 0.33
Unknown 0.33
```

or is the correct answer:

```
One 0.25
Two 0.5
Unknown 0.25
```

or maybe there is another correct answer?
What probability to assign to each classification?
CC BY-SA 4.0
null
2023-05-01T07:58:16.870
2023-05-01T16:47:28.360
2023-05-01T08:12:00.707
178768
178768
[ "classification" ]
614575
2
null
614572
4
null
I'm not sure how a power analysis would help. If you do a power analysis, which says you need N = 1000 and it is not significant, what do you know that you didn't know before you did the power analysis? (And how do you estimate the size of the effect to put into the power analysis?) This is the problem with over-relying on (or perhaps over-interpreting) p-values. A non-significant p-value tells you that you do not have confidence in knowing the direction of the effect. Andrew Gelman, in his blog, has popularized the idea of Type S and Type M errors instead of Type I and Type II errors. A Type S error is a Sign error - you have the wrong direction of effect, a type M error is a Magnitude error - you have not correctly estimated the magnitude of the effect.
null
CC BY-SA 4.0
null
2023-05-01T08:41:40.670
2023-05-01T08:41:40.670
null
null
17072
null
614576
1
null
null
2
32
I have created a multiclass predictive model; however, the target values are repeating. Why is one of the target values repeating, and how can it be removed?

[](https://i.stack.imgur.com/cjyhS.png)

The code for this is as follows:

```
# Imports implied by the code below
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

# Defining and importing the dataset
df = pd.read_excel("C:/Users/Ram Prakash/Downloads/Data.xlsx", sheet_name='Multiclass')
x = df.drop('Fault Type', axis=1)
y = df['Fault Type']
# y = df.iloc[0:, 10]
# y = le.fit_transform(y)

# Data pre-processing
le = LabelEncoder()
df["Fault Type"] = le.fit_transform(df["Fault Type"])

# fvalue_selector = SelectKBest(f_classif, k=7)
# X_kbest = fvalue_selector.fit_transform(x, y)

# Splitting the dataset
x_train, x_Combine, y_train, y_Combine = train_test_split(x, y, train_size=0.8)
x_val, x_test, y_val, y_test = train_test_split(x_Combine, y_Combine, test_size=0.5)
print(x_train.shape)
print(y_val.shape)
print(y_test.shape)
scaler = StandardScaler().fit(x_train)
x_train, x_test = scaler.transform(x_train), scaler.transform(x_test)

# Training the model
svclassifier = SVC(kernel='rbf', gamma=0.001, C=10000)

# Fitting the dataset
clf = OneVsOneClassifier(svclassifier).fit(x_train, y_train)
# clf.fit(X_train, Y_train)

# Predicting values
Y_pred = clf.predict(x_test)
X_train_pred = clf.predict(x_train)
```

[](https://i.stack.imgur.com/gLwba.png)
Repeating Target Values in output
CC BY-SA 4.0
null
2023-05-01T08:45:05.683
2023-05-01T16:29:23.240
2023-05-01T16:29:23.240
386957
386957
[ "machine-learning", "python", "svm", "multi-class" ]
614577
1
null
null
0
21
In conducting an impact analysis, how can we deal with a target variable that contains a unit root? In this case, an intervention is expected to change the level of the variable. But the impact then persists, and a new impact is added as the intervention continues. Does this mean that the impact accumulates, e.g. is doubled, tripled, quadrupled, etc.?
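A small simulation may make the "accumulation" intuition concrete: under a unit root, each intervention-induced shock shifts the level permanently, so repeated interventions pile up. This is only an illustrative sketch with arbitrary dates and impact size.

```
# Random walk with an extra impact delta added at three intervention dates.
set.seed(1)
n <- 300
delta <- 1
intervention <- as.numeric(1:n %in% c(100, 150, 200))
eps <- rnorm(n, sd = 0.5)
y <- cumsum(delta * intervention + eps)   # unit root: shocks never die out
plot(y, type = "l"); abline(v = c(100, 150, 200), lty = 2)
```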
impact analysis and a unit root
CC BY-SA 4.0
null
2023-05-01T08:57:42.900
2023-05-01T08:57:42.900
null
null
368378
[ "unit-root", "intervention-analysis" ]
614578
2
null
614548
3
null
The issue with the question is that the expression $$\mathbb E[\theta_1|x]$$ is not well-defined:

- either $(\theta_1,\theta_2)$ is considered a random vector with joint prior $\pi(\theta_1,\theta_2)$, in which case $$\mathbb E[\theta_1|x]=\int \theta_1 \pi(\theta_1,\theta_2|x)\,\text d(\theta_1,\theta_2)$$ only depends on $x$ and can be computed;
- or $\theta_1$ is considered a random variable with prior $\pi(\theta_1)$, while $\theta_2$ is an unknown fixed value, in which case $$\mathbb E_{\theta_2}[\theta_1|x]=\int \theta_1 \pi_{\theta_2}(\theta_1|x)\,\text d\theta_1$$ is indexed by $\theta_2$, hence cannot be computed when $\theta_2$ is unknown.
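A minimal concrete illustration of the second case (my own example, taking the unknown fixed value $\theta_2$ to be a variance in a normal model):

$$x \mid \theta_1, \theta_2 \sim \mathcal N(\theta_1, \theta_2), \qquad \theta_1 \sim \mathcal N(0, 1) \quad\Longrightarrow\quad \mathbb E_{\theta_2}[\theta_1 \mid x] = \frac{x}{1 + \theta_2},$$

which depends on $\theta_2$ and therefore cannot be evaluated when $\theta_2$ is unknown.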
null
CC BY-SA 4.0
null
2023-05-01T09:26:15.967
2023-05-01T09:26:15.967
null
null
7224
null
614579
1
null
null
1
18
I am using R to build a model. My outcome variable is the number of patients with a specific disease of interest (call it X) in health facilities over the past 7 days. I converted the count data to a proportion as: (number of patients with X in the past 7 days / total number of patients in the past 7 days) x 100%. I used Poisson regression, but since there is overdispersion, I moved to negative binomial regression. One point I should mention is that neither the count data nor the proportion is normally distributed. I read that converting count data to percentages isn't advisable, but we want to view the count of patients with the disease we are studying relative to the total number of patients visiting the facility for any illness. Is negative binomial regression the best approach? Do you suggest a better approach?
I have a count data converted to proportion which test to use?
CC BY-SA 4.0
null
2023-05-01T09:26:52.743
2023-05-01T09:37:17.187
null
null
386961
[ "r", "proportion" ]
614580
2
null
614579
0
null
This does not really make sense to me. Modelling a count of things (e.g. patients) as Poisson, negative binomial or a version of these with overdispersion makes sense, possibly with an offset for the number of days or the like (these are, after all, distributions for counts - you could also truncate them if you want to reflect that the total number of beds is limited). Modelling a percentage as Poisson (negative binomial, etc.) when you know the denominator makes no sense to me (your model doesn't even respect that a proportion needs to be in [0, 1], or a percentage in [0, 100]). With a known denominator, a percentage is better modelled as a binomial outcome (e.g. using a form of logistic regression), e.g. as X out of available bed-days (= number of beds times days). If you didn't know the denominator, something like beta regression would be an option. Also, there is presumably some correlation (patients rarely stay just one night), which you are not modelling but could try to. It is often a good idea to write down the data-generating process and describe all of it using distributions that make sense.
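A minimal sketch of the two modelling routes described above, with simulated data and made-up variable names (`x_count`, `total_visits`, `exposure_var`), purely for illustration:

```
library(MASS)  # for glm.nb
set.seed(1)
dat <- data.frame(total_visits = rpois(200, 150),
                  exposure_var = rnorm(200))
dat$x_count <- rbinom(200, dat$total_visits, plogis(-2 + 0.5 * dat$exposure_var))
# (a) proportion with a known denominator: binomial (logistic) model for "X out of N"
fit_bin <- glm(cbind(x_count, total_visits - x_count) ~ exposure_var,
               family = binomial, data = dat)
# (b) raw count with the log of the denominator as an offset (negative binomial)
fit_nb <- glm.nb(x_count ~ exposure_var + offset(log(total_visits)), data = dat)
summary(fit_bin)
summary(fit_nb)
```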
null
CC BY-SA 4.0
null
2023-05-01T09:37:17.187
2023-05-01T09:37:17.187
null
null
86652
null
614582
1
null
null
1
36
Consider random variables $X_i$ such that the following holds: $$ a_1 X_1 + \dots + a_n X_n - k \geq 0 + \epsilon $$ where the $a_i$ are constants and $\epsilon$ is random noise. How is it possible to uncover a relationship like this given realisations of the $X_i$?

Initial Thoughts

PCA

If the inequality were an equality, one way to do this would be via PCA, looking at the components with small eigenvalues. Of course, performance would be dependent (as above) on the signal-to-noise ratio determined by $\epsilon$.

Optimisation

Instead we could perform an optimisation to determine the optimal $a_1, \dots, a_n, k$ parameters. For a given realisation $x_i$, we could use a loss function of the form: $$ \exp(-h(a_1, \dots, a_n, k)) $$ where $h(a_1, \dots, a_n, k) = a_1 x_1 + \dots + a_n x_n - k$. This would penalise small values while remaining fairly indifferent to larger values.

Non-Uniqueness

This is all well and good, but it runs into the issue that with a finite dataset there are going to be infinitely many hyperplanes surrounding the data in all orientations. Even with an infinite amount of data, if the inequality holds then there will be an infinite number of looser inequalities holding with the same $a_i$ parameters but smaller $k$.

Reframing

Hence we want to find a 'tight' inequality, i.e. one where a lot of the points are close to the hyperplane itself. Note this would not generally be true given the initial inequality - e.g. imagine points distributed on a unit circle; the 'tight' hyperplanes would be tangents to it, where there would not be a lot of close points. However, I will focus on the cases where this assumption is valid. This motivated a loss function of the form: $$ \exp(-h) - \lambda \exp(-h^2) $$ therefore rewarding values of $h$ close to 0, where $\lambda$ is a hyperparameter.

Tuning

Now the problem is that the above is very sensitive to the $\lambda$ value, and I can't think of an obvious way to tune it. It essentially gives the 3 Goldilocks scenarios seen in the plots of $h$ below. Note that to generate these plots I used the following:

- Sample from $X_1, X_2, Z \sim MVN(0, I)$.
- Defining $X_3 = X_1 - X_2 + 5 - Z^2$, calculate the realisation of $X_3$ using the above values.
- Hence $X_1 - X_2 - X_3 + 5 = Z^2$ and there is a 'tight' hyperplane to discover.

Too small (we care too much about not breaking the constraint and get a very loose hyperplane): [](https://i.stack.imgur.com/PoIoh.png)

Too large (we care too much about getting values close to 0): [](https://i.stack.imgur.com/rC96P.png)

Just right (we uncover a tight hyperplane): [](https://i.stack.imgur.com/serMf.png)

I'd be interested in any improvements, for example hyper-parameter tuning in the above scenario, reframing of the objective function, or a different approach to the problem.
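A minimal sketch of the data-generating steps listed above, together with the proposed loss evaluated at the true hyperplane for a few values of $\lambda$ (the optimisation itself is left out):

```
# Reproduce the simulated example: X3 = X1 - X2 + 5 - Z^2, so X1 - X2 - X3 + 5 = Z^2 >= 0.
set.seed(1)
n <- 2000
X1 <- rnorm(n); X2 <- rnorm(n); Z <- rnorm(n)
X3 <- X1 - X2 + 5 - Z^2
h_true <- X1 - X2 - X3 + 5          # distances to the "tight" hyperplane (= Z^2)
hist(h_true, breaks = 50)           # mass piled up near 0, as intended
loss <- function(h, lambda) mean(exp(-h) - lambda * exp(-h^2))
sapply(c(0.1, 1, 10), function(l) loss(h_true, l))   # how lambda shifts the objective
```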
Finding inequality relationships in stochastic data
CC BY-SA 4.0
null
2023-05-01T10:12:17.643
2023-05-02T07:43:20.000
2023-05-02T07:43:20.000
29783
29783
[ "pca", "optimization", "linear", "non-independent" ]
614585
1
614612
null
2
29
In pg. 64-65 of "Foundations of Machine Leaning (2nd Ed.)" by Mohri et al. there is a discussion about structural risk minimization. The hypothesis class $\mathcal H$ is decomposed into a union of hypothesis classes $\mathcal H=\cup_{k\geq 1}\mathcal H_k$. The bound at the start of page 65 says that for all $h\in\mathcal H_k$, with probability $\geq 1-\delta$ over an iid sample of $m$ elements $S$, $$ R(h)\leq\hat R_S(h)+\mathfrak R_m(\mathcal H_k)+\sqrt{\frac{\log k}{m}}+\sqrt{\frac{\log 2/\delta}{2m}} $$ where $R$ denotes risk (expected 0-1 loss), $\hat R_S(h)$ denotes empirical risk, and $\mathfrak R_m$ is the Rademacher complexity. I am confused why there is a dependence on $k$ in the bound. As per the discussions in the textbook, we do not even assume the $\mathcal H_k$ are nested. Hence to me, the value of $k$ is totally arbitrary and serves only as an index; I can for example permute all the $\mathcal H_k$ around arbitrarily and hence reindex the $k$ arbitrarily. This doesn't seem right, so clearly I must be missing something about what $k$ means.
Upper bound on risk in structural risk minimization
CC BY-SA 4.0
null
2023-05-01T11:36:48.033
2023-05-01T18:09:13.047
null
null
386965
[ "machine-learning" ]
614586
1
null
null
0
29
I'm here to ask for some guidance on applying a statistical significance test to this type of data. The example shows the values of the peak areas of a given cocoa molecule for 4 defect-level groups: Group A: 100% defect; Group B: 10% defect; Group C: 1% defect; Group D: 0% defect.

[](https://i.stack.imgur.com/D1VEI.png)

I immediately thought of applying a one-way ANOVA with post-hoc Tukey HSD to evaluate the inter-group differences, and the result was:

- p-value < 0.05 = 4.5086e-13

Tukey's HSD test:

- no significant difference for C (1%) vs D (0%) => p-value = 0.6423331 (significant differences for all other pairs)

So I then ran a one-way ANOVA only between C and D, and this test did highlight significant differences (p-value < 0.05 = 2.0511e-07) between these 2 groups. Therefore, I fear that the statistical tool used in this case is incorrect, because it cannot be assumed that the variances between the groups are homogeneous: the groups differ by orders of magnitude, which affects the result. I would like to kindly ask what might be the most valid alternative for carrying out a test of this type on this experiment. Or could a transformation of the data, i.e. a logarithm, help? Thank you so much for your support, Francesco
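For reference, here is a rough sketch of the kinds of checks and alternatives touched on above: a formal test of variance homogeneity, a Welch-type ANOVA that drops the equal-variance assumption, and the log transform mentioned in the question. The data frame `d` and its numbers are entirely made up.

```
library(car)   # for leveneTest
set.seed(1)
d <- data.frame(group = rep(c("A", "B", "C", "D"), each = 6),
                area  = c(rnorm(6, 1e6, 2e5), rnorm(6, 1e4, 2e3),
                          rnorm(6, 1e3, 2e2), rnorm(6, 9e2, 2e2)))
d$group <- factor(d$group)
leveneTest(area ~ group, data = d)           # check homogeneity of variances
oneway.test(area ~ group, data = d)          # Welch ANOVA (unequal variances allowed)
summary(aov(log(area) ~ group, data = d))    # one-way ANOVA after a log transform
TukeyHSD(aov(log(area) ~ group, data = d))
```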
Is one-way ANOVA + Tukey HSD a good choice for significant test for data with different orders of magnitude?
CC BY-SA 4.0
null
2023-05-01T11:52:37.083
2023-05-01T12:21:37.270
2023-05-01T12:21:37.270
364501
364501
[ "statistical-significance", "anova", "p-value", "tukey-hsd-test" ]
614587
1
null
null
1
17
The random vector $X = (Y, X, Z)^\top$ has a Gaussian distribution with mean $\mu = (1, 2, 4)^\top$ and covariance matrix $\Sigma$ equal to \begin{pmatrix} 2 & 3 & 1\\ 3 & 5 & 2\\ 1 & 2 & 6 \end{pmatrix} How can I calculate the regression functions $E(Z|Y)$ and $E(Z|X,Y)$, and the conditional variances $D(Z|Y)$ and $D(Z|X,Y)$?
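For reference, these are the standard conditional-distribution formulas for a multivariate Gaussian; the question then reduces to plugging in the relevant entries and blocks of $\mu$ and $\Sigma$ (writing $W=(Y,X)^\top$ for the conditioning pair):

$$\mathbb E(Z \mid Y=y) = \mu_Z + \frac{\operatorname{Cov}(Z,Y)}{\operatorname{Var}(Y)}\,(y - \mu_Y), \qquad D(Z \mid Y) = \operatorname{Var}(Z) - \frac{\operatorname{Cov}(Z,Y)^2}{\operatorname{Var}(Y)},$$
$$\mathbb E(Z \mid W=w) = \mu_Z + \Sigma_{ZW}\Sigma_{WW}^{-1}(w - \mu_W), \qquad D(Z \mid W) = \Sigma_{ZZ} - \Sigma_{ZW}\Sigma_{WW}^{-1}\Sigma_{WZ}.$$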
Having difficulty finding regression function and conditional variance
CC BY-SA 4.0
null
2023-05-01T12:08:44.877
2023-05-01T12:08:44.877
null
null
386970
[ "regression", "covariance-matrix", "conditional-expectation", "conditional-variance" ]
614588
1
null
null
1
17
I have the following plot of the training and validation loss from a deep neural network. The signature U-curve of the validation loss can be noticed. I want to use the validation loss as the metric for saving the weights of my model, and I am currently using the weights with the lowest validation loss. However, a question came to me. Suppose I list the top 5 checkpoints with the following losses:

- Epoch 120: 1.62
- Epoch 150: 1.65
- Epoch 190: 1.68
- Epoch 230: 1.73
- Epoch 270: 1.78

As you can see, their losses are very close to each other, while the best validation loss occurs at the earliest epoch. In particular, how can I quantify which of the top 1 and top 2 checkpoints is better when their losses differ by less than 0.03? I am using data augmentation, and I think that generalizability would be better when trained for more epochs; however, selecting on validation loss does not pick, for example, the top-3 checkpoint even when its difference from the top 1 is quite small. Any suggestions will do, thank you.

[](https://i.stack.imgur.com/DXo8e.png)
On the reliability of validation loss as a metric
CC BY-SA 4.0
null
2023-05-01T12:19:54.457
2023-05-01T12:19:54.457
null
null
338644
[ "neural-networks", "validation" ]
614590
1
null
null
1
34
Loss functions I've seen always return a positive value so that gradient descent can minimize them. Let's take a regression problem. Suppose my NN underestimates the result by x; then the loss would be the same as if it overestimated the result by x. Intuitively, shouldn't the optimizing algorithm do the opposite thing in the two cases? Are (1) making the math and implementation simpler, and (2) the fact that it has worked so far, the reasons we keep doing this? I asked this on Stack Overflow, but this seems to be a more suitable community.
Minimizing the loss function seems counterintuitive
CC BY-SA 4.0
null
2023-05-01T12:32:16.327
2023-05-01T12:32:16.327
null
null
134568
[ "neural-networks", "gradient-descent" ]
614591
1
null
null
0
31
The data are for areas. How would I interpret these ACF and PACF graphs, and what model could I use? [](https://i.stack.imgur.com/abYig.png) To me, this is a non-stationary series and therefore an ARIMA would be suitable, but I don't know which parameters to choose. It would be nice if someone could explain, I'm struggling with these new notions. Thanks.
ACF and PACF graphs - MA, AR, ARMA, ARIMA?
CC BY-SA 4.0
null
2023-05-01T12:38:25.470
2023-05-02T07:01:10.993
2023-05-01T17:00:19.970
53690
386968
[ "time-series", "arima", "model-selection", "acf-pacf" ]
614592
1
null
null
0
53
In the game of chess, a white advantage is defined as a positive number, a black advantage as a negative number, and an equal position as 0. A chess engine displays a list of the n best variations of a certain position on the board, starting from the best one, e.g. (from White's perspective):

|Variation |Evaluation |
|---------|----------|
|Best One |0 |
|Second Best |-50 |
|Third Best |-120 |

How would I be able to find the relative dispersion of the various variations over the best one? I'm leaning towards using a coefficient of variation; my pain points are:

- Would zero here be considered "meaningful"? From Wikipedia:

> The coefficient of variation should be computed only for data measured on scales that have a meaningful zero (ratio scale) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other).

- The dispersion would be more significant as values get closer to 0 (e.g. a list like [0, 1, 2] would be much more dispersed than a list [998, 999, 1000]). How would I be able to account for this?

Sorry if these questions sound naive, but I'm a newbie in the field. Thanks for your time.
How to find the relative dispersion of a list of numbers over a number in it?
CC BY-SA 4.0
null
2023-05-01T12:58:31.587
2023-05-01T19:54:05.533
2023-05-01T19:54:05.533
386973
386973
[ "coefficient-of-variation", "dispersion" ]
614593
1
null
null
0
10
I have two models that have fairly good support, to wit: A clustered AFT model where time is right-censored (trial failure), regressed off genotype, age, and day of trial, clustered by specific animal. Variable of interest is slope of response (time/days). A mixed-level model where response also regresses off genotype, age, and day of trial, clustered by specific animal. Response is a "strategy", which can be considered multinomial or ordinal (technically ordinal but some controversy over ordering). The overall response would be time and the mediator would be strategy. I am presuming partial mediation. How would I do this in R, for example?
Mediation analysis with interactions and an ordinal mediator produced by mixed level modeling
CC BY-SA 4.0
null
2023-05-01T13:42:13.980
2023-05-01T13:42:13.980
null
null
28141
[ "mixed-model", "survival", "mediation", "ordered-logit", "multinomial-logit" ]
614595
1
null
null
1
43
Given observational data $X$ and knowledge of the true causal graph structure $\mathcal{G}$. How does abduction of the exogenous noise ($U$) for computing counterfactuals work? We don't have data about the unobserved noise variables $U$ and we do not know the structural equations $\mathcal{F}$ - i.e. $f_i$ for each endogenous variable $X_i$. So how can we ever infer the exogenous noise $U_i$ for any particular $X_i$? For more context about this topic: This question refers to contents similar to, for example, the following Structural Causal Model (SCM) in Section 4.2 of this paper: [https://arxiv.org/pdf/2301.02499.pdf](https://arxiv.org/pdf/2301.02499.pdf) or more general Section 4.1.4 in the book Elements of Causal Inferene: [https://library.oapen.org/bitstream/id/056a11be-ce3a-44b9-8987-a6c68fce8d9b/11283.pdf](https://library.oapen.org/bitstream/id/056a11be-ce3a-44b9-8987-a6c68fce8d9b/11283.pdf)
Noise abduction for computing counterfactuals
CC BY-SA 4.0
null
2023-05-01T13:49:54.147
2023-05-23T08:20:26.727
2023-05-22T17:18:25.960
17072
368208
[ "causality", "graphical-model", "causal-diagram", "counterfactuals" ]
614597
2
null
614531
1
null
One assumption is that the regression model is correctly specified. This assumption implies that (1) treatment and outcome are linear and (2) confounding variables and outcome are linear if only main effect terms are included. For (2), you could include quadratic, spline, or other nonlinear terms. You just need to assume you have correctly modeled the relationship for all those confounding variables. See [this paper](https://arxiv.org/abs/2006.11754) for a full discussion of the assumptions, including correct model specification.
null
CC BY-SA 4.0
null
2023-05-01T14:32:19.157
2023-05-01T14:32:19.157
null
null
247479
null
614599
1
null
null
0
13
Every year since 2016 all of the nurses (n~100/year) are asked to complete a survey on job satisfaction and wellbeing. I'm interested in conducting a correlation analysis (Spearman's) to determine which job aspects the survey asks about are correlated, as well as a logistic regression to predict the response to one of the questions (job satisfaction). The data has been deidentified, and there is minimal turnover, so I've got years of survey data without a way to pair responses or weight them. I don't know for sure, but considering the response rate (~70%) and the minimal amount of turnover, most of the responses are coming from the same people every year. I've received advice to pool all of the data together and just acknowledge the limitation of multiple respondents. This doesn't sit well with me, because it seems like the results will be diluted by individuals who have completed the survey as many as 7 times. What are my options?
Pooling annual staff survey data for analysis
CC BY-SA 4.0
null
2023-05-01T15:34:39.293
2023-05-01T15:34:39.293
null
null
386991
[ "correlation", "survey", "pooling", "cross-section" ]
614600
1
null
null
0
39
I want to analyze flight times of an insect (as a function of several weather parameters), but these were measured over different distances. One option could be to calculate the speed and analyze that; what my professor proposed instead is to use flight time with distance as an offset. (It's also a mixed model with individual as a random factor.) My model looks like this:

```
lmer(Flighttime ~ Temperature + (1|IndividualID), offset=Distance, na.action=na.omit, data=KMIdata)
```

Can someone help me interpret the outcome? I have parameter estimates between 300 and 600, but my flight times only reach between 0 and 8 minutes maximum. Should I divide them by something? Can someone explain the offset in a normal linear regression? I can only find videos with count data and Poisson regression, but that is not the case for my data.
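A small illustration of what an offset does in a Gaussian (identity-link) model: its coefficient is fixed at 1, so `lm(y ~ x, offset = z)` estimates the same coefficients as `lm(I(y - z) ~ x)`. The data below are made up.

```
# Offset in a plain linear model: the coefficient on z is fixed at 1.
set.seed(1)
d <- data.frame(x = rnorm(50), z = runif(50, 1, 5))
d$y <- d$z + 2 + 0.5 * d$x + rnorm(50, sd = 0.2)
coef(lm(y ~ x, data = d, offset = z))
coef(lm(I(y - z) ~ x, data = d))   # same estimates: the model is for y minus the offset
```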
Parameter estimate interpretations in lm with offset
CC BY-SA 4.0
null
2023-05-01T14:53:43.067
2023-05-02T21:55:03.933
2023-05-01T15:54:56.193
2126
386989
[ "r", "offset", "lme4-nlme" ]
614601
2
null
239166
0
null
This is only a guess, but I suspect the regularization is interacting with the logistic regression optimizer. In principle, if you can find optimal loss-minimizing parameters, regularization won't increase performance, and instead is likely to lower it (on the training set). However, for large data sets, stochastic or iterative solvers are typically used to learn the regression parameters, and these will not generally find an optimal solution. For example, the sklearn default in Python is LBFGS, a low-memory variant of a quasi-Newton iterative solver. Intuitively, when you add regularization, you may be restricting the optimization path to a smaller, "better behaved" region of the parameter space, making the optimizer work better in practice.
null
CC BY-SA 4.0
null
2023-05-01T16:39:00.810
2023-05-01T16:39:00.810
null
null
366672
null
614602
1
null
null
0
28
I want to perform 3-split walk-forward cross-validation with an expanding training set for the DeepAR model from the pytorch-forecasting framework. When I do walk-forward validation, I also want to do hyperparameter optimization using Optuna. Currently, the setup is normal validation (i.e. split1 only). After every epoch, the validation loss is calculated and passed to Optuna, which uses it to prune the trial if necessary. A trial is one of the many possible combinations of hyperparameters. This is a kind of early stopping done by Optuna to prevent training of further epochs in that trial, to save time. If my understanding of walk-forward cross-validation is correct, I would like to use 3 folds of validation that are temporally increasing. The picture below shows the splits.

[](https://i.stack.imgur.com/njbUT.png)

For every epoch, I would like to synchronously perform trainer.fit() for all three splits, then get their average validation loss after each epoch and pass it to Optuna to decide whether to prune this trial or not. Do I have to do some kind of multithreading, or is there any setting in DeepAR that can achieve this?

Is there another way to perform walk-forward validation? I mean doing it sequentially, one split at a time. So first, for split1, many epochs are tried until the early-stopping condition is met (i.e. the validation loss does not improve). Next, split2 is trained and validated like split1 to obtain the minimum validation loss for split2. The same happens with split3. Finally, all three minimum validation losses are averaged. Note that all 3 splits might have stopped at different epochs, but we just ignore that and only take the minimum validation loss in each split. In this case, how do I use Optuna to prune the trial?

Currently, the objective function in Optuna looks like below:

```
def objective(trial,):
    neu = trial.suggest_int(name="neu", low=600, high=800, step=25, log=False)
    lay = trial.suggest_int(name="lay", low=1, high=3, step=1, log=False)
    bat = trial.suggest_int(name="bat", low=4, high=12, step=4, log=False)
    lr = trial.suggest_float(name="lr", low=0.000001, high=0.01, log=True)
    num_ep = trial.suggest_int(name="num_ep", low=20, high=30, step=2, log=False)
    enc_len = encoder_length
    pred_len = 1
    drop = trial.suggest_float(name="dropout", low=0, high=0.4, step=0.1, log=False)

    train_dataset = TimeSeriesDataSet(
        train_data,
        time_idx="time_idx",
        target=Target,
        categorical_encoders=cat_dict,
        group_ids=["group"],
        min_encoder_length=enc_len,
        max_encoder_length=enc_len,
        min_prediction_length=pred_len,
        max_prediction_length=pred_len,
        time_varying_unknown_reals=[Target],
        time_varying_known_reals=num_cols_list,
        time_varying_known_categoricals=cat_list,
        add_relative_time_idx=False,
        randomize_length=False,
        scalers={},
        target_normalizer=TorchNormalizer(method="identity", center=False, transformation=None)
    )
    val_dataset = TimeSeriesDataSet.from_dataset(train_dataset, val_data,
                                                 stop_randomization=True, predict=False)

    train_dataloader = train_dataset.to_dataloader(train=True, batch_size=bat)
    val_dataloader = val_dataset.to_dataloader(train=False, batch_size=bat)

    ######### Load DATA #############
    """
    Machine Learning predictions START
    1) DeepAR
    """
    metrics_callback = MetricsCallback()
    trainer = pl.Trainer(
        max_epochs=num_ep,
        gpus=-1,  # -1 auto
        auto_lr_find=False,
        gradient_clip_val=0.1,
        limit_train_batches=1.0,
        limit_val_batches=1.0,
        logger=True,
        val_check_interval=1.0,
        callbacks=[lr_logger, metrics_callback]
    )
    # print(f"training routing:\n \n {trainer}")
    deepar = DeepAR.from_dataset(
        train_dataset,
        learning_rate=lr,
        hidden_size=neu,
        rnn_layers=lay,
        dropout=drop,
        loss=Loss,
        log_interval=20,
        log_val_interval=6,
        log_gradient_flow=False,
        # reduce_on_plateau_patience=3,
    )
    torch.set_num_threads(10)
    trainer.fit(
        deepar,
        train_dataloaders=train_dataloader,
        val_dataloaders=val_dataloader,
    )

    metrics_list = [metrics["val_RMSE"].item() for metrics in metrics_callback.metrics[1:]]
    min_val_rmse = metrics_list[-1]
    trial.report(min_val_rmse)

    # Handle pruning based on the intermediate value.
    if trial.should_prune():
        raise optuna.exceptions.TrialPruned()

    return min_val_rmse
```
Walk forward cross-validation with Optuna and deepar in pytorch forecasting
CC BY-SA 4.0
null
2023-05-01T16:46:03.783
2023-05-01T16:46:03.783
null
null
360272
[ "time-series", "forecasting", "cross-validation", "hyperparameter", "tuning" ]
614603
1
null
null
0
29
I would very much appreciate some help regarding how to interpret different robust measures of scale (Inter-quartile range or IQR, biweight midvariance, and median absolute deviation or MAD). Thus, comments or pointers to relevant documents will be really welcomed. I am aware that none of these statistics can probably be considered "better" than the others but I would like to know the possible pros and cons of each of these statistics (or when the use of each of them is more/ less recommendable). For example, I assume that IQR may be less sensitive to effects occurring at central locations of a group's distribution (and, therefore, less interesting in cases we suspect dispersion is not uniform), but I do not know if biweight midvariances and mad differ in this (or any other relevant) regard. Thanks in advance for any possible help!
pros and cons of different robust measures of scale/ dispersion
CC BY-SA 4.0
null
2023-05-01T16:47:14.467
2023-05-01T16:47:14.467
null
null
222456
[ "robust", "dispersion", "mad" ]
614604
2
null
614574
1
null
There is no way to answer this question, because it is similar to asking "what is the next number in the sequence 1,2,5,10...?" There is no correct way to extrapolate to new data without some assumption (i.e. a model) that relates the data's features to the desired outcome, and a metric by which you fit that model to past data. Otherwise, your past data gives you no information that would help you assign classification probabilities to new points.
null
CC BY-SA 4.0
null
2023-05-01T16:47:28.360
2023-05-01T16:47:28.360
null
null
366672
null
614605
2
null
614591
0
null
This looks like an AR(1) process: gradually decaying ACF and a sharp cutoff in PACF after lag 1. While the ACF of an AR(1) should not become negative at distant lags, this is not uncommon in small and medium samples, or even ones as large as 1000 observations - check out the following simulation:

```
T=1000; set.seed(0); eps=rnorm(T); x=rep(NA,T); x[1]=eps[1]
for(t in 2:T) x[t]=0.8*x[t-1]+eps[t]
acf(x); pacf(x)
```

[](https://i.stack.imgur.com/XBHhK.png)
null
CC BY-SA 4.0
null
2023-05-01T16:59:55.787
2023-05-02T07:01:10.993
2023-05-02T07:01:10.993
53690
53690
null
614606
2
null
613924
0
null
Bootstrapping involves randomly resampling a dataset with replacement to generate multiple bootstrap samples. These samples can be used to estimate the sampling distribution of a statistic or to evaluate the stability of a model. Here are some general steps you can follow to evaluate the stability of your model using bootstrapping:

- Randomly sample your data with replacement to create multiple bootstrap samples.
- For each bootstrap sample, fit your model and evaluate its performance on a validation set or using cross-validation.
- Calculate a performance metric for each bootstrap sample.
- Examine the distribution of the performance metric across the bootstrap samples. If the distribution is narrow and symmetric, it indicates that the model is stable. If the distribution is wide or skewed, it indicates that the model is unstable.
- Optionally, you can calculate confidence intervals for the performance metric using the bootstrap samples. This will give you an estimate of the range of values within which the true population parameter is likely to fall.
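A minimal sketch of these steps with made-up data: a logistic model refit on bootstrap resamples, scored on the out-of-bag rows, with the spread of the resulting metric summarising stability (all names and numbers are illustrative).

```
set.seed(1)
n <- 300
dat <- data.frame(x = rnorm(n))
dat$y <- rbinom(n, 1, plogis(0.8 * dat$x))
B <- 200
auc_boot <- replicate(B, {
  idx <- sample(n, replace = TRUE)
  fit <- glm(y ~ x, family = binomial, data = dat[idx, ])
  oob <- dat[-unique(idx), ]                     # rows not drawn act as a validation set
  p <- predict(fit, newdata = oob, type = "response")
  mean(outer(p[oob$y == 1], p[oob$y == 0], ">")) # simple AUC via pairwise comparisons
})
quantile(auc_boot, c(0.025, 0.5, 0.975))         # narrow spread = stable; interval as in the last step
```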
null
CC BY-SA 4.0
null
2023-05-01T17:03:36.793
2023-05-01T17:03:36.793
null
null
204397
null
614608
1
null
null
0
18
If an image is composed of three categorical variables with fixed proportions (such as 0.2, 0.3, and 0.5), how could I ensure that a generated image (e.g. from a GAN model) satisfies these proportions?
How to make the generated machine learning model satisfied with specified proportion
CC BY-SA 4.0
null
2023-05-01T17:25:02.213
2023-05-01T17:25:02.213
null
null
252802
[ "machine-learning", "neural-networks", "conv-neural-network", "gan" ]
614609
1
null
null
1
99
I am building OCR software, for this purpose I trained a model on many types of fonts, the model is `SVM` but it is not principled. Now I want the users of the software to be able to improve the model they received from me, when the users come across a font that the model does not recognize accurately, they will want to improve the model, how can they do this? For this purpose, do I need to bring them the model together with all the training data, and all the pictures of the letters, and will they have to do the whole long training from the beginning for every letter that the users want to add to the model? Or is there a way to bring users a ready-made model and on the other hand give users the option to improve the model easily? Is there a type of model that you can add letters to without rebuilding it?
An OCR model that can be easily improved on the user side
CC BY-SA 4.0
null
2023-05-01T17:43:19.407
2023-05-13T02:05:26.100
null
null
178768
[ "machine-learning", "modeling", "svm", "libsvm", "optical-character-recognition" ]
614610
1
null
null
1
22
I'm assessing whether the relative abundance of bacterial communities is different across two treatments. I have a control (n=3), a food compost treatment (n=3) and a biosolid compost treatment (n=3) that were added at three different timepoints (spring21, summer21, spring22). I want to determine if there are changes in the microbial community across timepoints, treatments, or an interaction of the two. When I run a PERMANOVA there are significant differences for the timepoint and treatment factors but not for their interaction. I can also identify differences between particular timepoints and treatments (food compost is different from controls, spring21 is different from summer21, etc.) using pairwise comparisons. However, I also want to know if there is a particular sample (not which bacterium) that is responsible for these differences, one that is hugely different from the others. Which statistical test should I use?
Which statistical test - which sample is driving differences
CC BY-SA 4.0
null
2023-05-01T17:50:00.893
2023-05-01T17:50:00.893
null
null
387000
[ "post-hoc", "manova" ]
614611
1
null
null
0
9
I'm trying to figure out a way to test for a difference in the means (for instance) over two somewhat unequal periods in an AR(1) time series. For data without any AR structure, I might do something like the example below using a permutation test. But, knowing that the data aren't ~iid, how can I modify my approach? E.g., fit an AR(1) model for each sample and then add it back into each random sample? ``` library(ggplot2) set.seed(123) # an ar1 ts with an offset n <- 120 phi <- 0.5 y <- c(arima.sim(model=list(ar=phi),n=n)) + 10 # add a slight diff for illustration y[51:120] <- y[51:120] + 0.5 # take a look ggplot() + geom_line(aes(x=1:n,y=y),color="grey40") + geom_vline(xintercept = 50,linetype="dashed") + geom_line(aes(x=1:50,y = mean(y[1:50]))) + geom_line(aes(x=51:120,y = mean(y[51:120]))) + theme_minimal() # time period 1 is observations 1:50, time period 2 is 51:120 sampOne <- y[1:50] sampTwo <- y[51:120] # diff in means testStat <- mean(sampTwo) - mean(sampOne) #0.32 # simple permutation test m <- 1e3 testStatPermute <- numeric() for(i in 1:m){ indexSampOne <- sample(1:n,length(sampOne)) tmpOne <- y[indexSampOne] tmpTwo <- y[-indexSampOne] testStatPermute[i] <- mean(tmpTwo) - mean(tmpOne) } # take a look ggplot() + geom_density(aes(x=testStatPermute),fill="grey40",alpha=0.2) + geom_vline(xintercept = testStat,color="grey40",linetype="dashed") + labs(y="Density",x="Test Statistic") + theme_minimal() + theme(legend.position="none") # prob sum(testStatPermute > testStat) / m #0.048 ``` ```
permutation test for a difference in means of an autocorrelated time series over two unequal time periods
CC BY-SA 4.0
null
2023-05-01T18:02:01.133
2023-05-01T18:02:01.133
null
null
111024
[ "time-series", "permutation-test" ]
614612
2
null
614585
2
null
You (slightly) misquoted the learning bound on page 65 : the actual bound is given by (emphasis mine) > $$R(h)\leq\hat R_S(h)+\mathfrak R_m(\mathcal H_{\color{red}{k(h)}})+\sqrt{\frac{\log \color{red}{k}}{m}}+\sqrt{\frac{\log 2/\delta}{2m}} $$ The crucial thing that this notation highlights is that $k$ depends on the hypothesis $h$. More specifically, the author defines $k(h)$ as follows > For any $h\in\mathcal H$, we will denote by $\mathcal H_{k(h)} $ the least complex hypothesis set among the $\mathcal H_k$s which contain $h$. The idea is that, although the hypotheses sets $\mathcal H_k$ may not be nested, the indices $k$ are a measure of complexity for each $\mathcal H_k$, hence picking a more complex hypothesis will hurt the generalization error accordingly. Figure 4.4 in the book and following discussion should make it clearer.
null
CC BY-SA 4.0
null
2023-05-01T18:09:13.047
2023-05-01T18:09:13.047
null
null
305654
null
614613
2
null
614572
3
null
Yes! If we have done a priori power calculations to figure out the sample size we'd need to consistently detect an effect of the size we care about, and we've actually collected that amount of data, then a significant p-value is confirmatory and meaningful. You made a deliberate effort to collect enough data to rule out the straw-man argument of "What if your results are just sampling variation?" and you appear to have overcome that hurdle. [In Deborah Mayo's words](http://bactra.org/reviews/error/), you subjected your hypothesis to "severe testing," using a test with "an overwhelmingly good chance of revealing the presence of a specific error, if it exists --- but not otherwise." But if we haven't done a priori power calculations, and we chose a sample size in other ways (convenience, or budget constraints, or a mistaken belief that "n=30 is big enough" for everything)... then our test was not "severe.". So, what's the use of hypothesis testing without an a priori power analysis? Sometimes we're in a situation where we simply couldn't have collected more data. (Maybe we are looking back at historical records and there's only a small sample left in existence. The population was larger than this sample, but there's no way to sample more data from that population any longer.) Then hypothesis testing isn't ideal, but might still be useful in a limited way: Although a significant p-value wouldn't tell us much, an insignificant p-value would tell us that we definitely should be worried about sampling variation as we interpret our findings.
null
CC BY-SA 4.0
null
2023-05-01T18:29:23.393
2023-05-01T18:29:23.393
null
null
17414
null
614614
2
null
614572
4
null
> If the null hypothesis is never really true, is there a point to using a statistical test? If we already know that the null hypothesis is not true, then the point is not to prove that the null hypothesis is not true. The point of the null hypothesis test is to show that a test is sensitive enough to be able to exclude certain hypotheses. The quality of a test is not its ability to show which values are most likely true, but rather its ability to show which values are likely not true, and to show this with high significance. --- In addition, there are some issues with continuous distributions assigning zero probability to any specific value: no single value is ever exactly true when we consider a continuous distribution for some parameter. What remains relevant are the distribution densities, and whether the density in the region around a particular hypothesis, such as the null hypothesis, is low or not.
null
CC BY-SA 4.0
null
2023-05-01T18:59:27.163
2023-05-01T18:59:27.163
null
null
164061
null
614615
1
null
null
0
20
## Introduction Consider a univariate circularly symmetric complex Gaussian (CSCG) mixture $Y$ with pdf $$p_Y(y) = \sum_i c_i p_i(y) = \sum_i c_i \frac{\exp(-\lvert y \rvert^2/\sigma_i^2)}{\pi \sigma_i^2},$$ where $c_i$ and $\sigma_i^2$ are the weight and variance of component $i$. Its differential entropy can be decomposed as \begin{align} h(Y) & = h(Y|C) + I(C;Y) \\ & = \sum_i c_i \log(\pi e \sigma_i^2) - \int \sum_i c_i p_i(y) \log \frac{\sum_j c_j p_j(y)}{p_i(y)} dy. \end{align} The first term is trivial but the second term involves non-elementary integral. --- ## Some research There are many bounds/approximations for differential entropy of general mixture distributions, for example: - Lower bound by Jensen's inequality [1] $$I_{\text{Jensen}}(C;Y) = -\sum_i c_i \log \sum_j c_j \frac{\sigma_i^2}{\pi (\sigma_i^2 + \sigma_j^2)} - 1$$ - Approximation by 1st order Taylor's expansion at $y=0$ [2] $$I_{\text{Taylor1}}(C;Y) \approx - \log \sum_i c_i \frac{1}{\pi \sigma_i^2} - \pi$$ - Estimation over arbitrary distribution-distance function $D(p_i || p_j)$ [3] $$I(C; Y) \approx - \sum_i c_i \log \sum_j c_j \exp(-D(p_i || p_j)) \tag{1} \label{1}$$ Specifically, Bhattacharyya distance $- \log \int \sqrt{p(x) q(x)} dx$ gives an lower bound $$I_{\text{Bhattacharyya}}(C; Y) = - \sum_i c_i \log \sum_j c_j \frac{2 \sigma_i \sigma_j}{\sigma_i^2 + \sigma_j^2}$$ KL divergence $\int p(x) \log \frac{p(x)}{q(x)} dx$ gives an upper bound $$I_{\text{KL}}(C; Y) = - \sum_i c_i \log \sum_j c_j \frac{\sigma_{i}^2}{\sigma_{j}^2} \exp \Bigl(1 - \frac{\sigma_{i}^2}{\sigma_{j}^2}\Bigr)$$ Those are all based on fantastic generic results but I want a tighter, optimization-friendly bound for CSCG mixture. --- ## Thoughts In terms of [Renyi divergence](https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy#R%C3%A9nyi_divergence), we can write Bhattacharyya distance as $0.5 D_{0.5}(p||q)$ and KL divergence as $D_1(p||q)$. Heuristically, what if we use $D_{0.5}(p||q)$ in \eqref{1}? For CSCG mixture that is, $$I_{\text{Renyi0.5}}(C;Y) = -\sum_i c_i \log \sum_j c_j \frac{4 \sigma_i^2 \sigma_j^2}{(\sigma_i^2+\sigma_j^2)^2}$$ Initial simulation results suggest it is not only much tighter than other bounds/approximations, but also works as an upper bound in all tested zero-mean Gaussian mixture cases (but not true when some components are with non-zero mean). Here is an example: ![](https://i.stack.imgur.com/jGfyl.jpg) Each column is $I(C;Y_k)$ for $k=1,2,3$. The first column is $I(C;Y_k)$ by Monte Carlo simulation while the last column is $I_{\text{Renyi0.5}}(C;Y_k)$. For zero-mean Gaussian mixture, the latter appears a finer upper bound than all others. --- ## Question For zero-mean Gaussian mixture, it is possible to prove $I_{\text{Renyi0.5}}(C;Y)$ is an upper bound for $I(C;Y)$? Explicitly, can we prove the inequation $$- \sum_i c_i \int \frac{\exp(-{\lvert y \rvert^2}/{\sigma_i^2})}{\pi \sigma_i^2} \log \frac{\sum_j c_j \exp(-{\lvert y \rvert^2}/{\sigma_j^2})}{\exp(-{\lvert y \rvert^2}/{\sigma_i^2})} dy \le - \sum_i c_i \log \sum_j c_j \frac{4 \sigma_i^2 \sigma_j^2}{(\sigma_i^2 + \sigma_j^2)^2}$$ Thank you very much for your attention and help. --- [1] Huber, Marco F., et al. "On entropy approximation for Gaussian mixture random vectors." 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems. IEEE, 2008. [2] Gu, Yujie, Nathan A. Goodman, and Amit Ashok. "Radar target profiling and recognition based on TSI-optimized compressive sensing kernel." 
IEEE Transactions on Signal Processing 62.12 (2014): 3194-3207. [3] Kolchinsky, Artemy, and Brendan D. Tracey. "Estimating mixture entropy with pairwise distances." Entropy 19.7 (2017): 361.
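(For what it's worth, a rough Monte Carlo check of the conjecture can be written in a few lines of R; the weights and variances below are arbitrary, and this is of course a numerical sanity check, not a proof.)

```r
# sanity check: Monte Carlo I(C;Y) vs the Renyi-0.5 expression, arbitrary mixture
set.seed(1)
cw <- c(0.3, 0.5, 0.2)                   # weights c_i
s2 <- c(0.5, 1.0, 4.0)                   # variances sigma_i^2
nSim <- 2e5
comp <- sample(seq_along(cw), nSim, replace = TRUE, prob = cw)
y <- complex(real = rnorm(nSim, sd = sqrt(s2[comp] / 2)),
             imaginary = rnorm(nSim, sd = sqrt(s2[comp] / 2)))
pY <- sapply(s2, function(v) exp(-Mod(y)^2 / v) / (pi * v)) %*% cw   # mixture density
I_mc <- -mean(log(pY)) - sum(cw * log(pi * exp(1) * s2))             # h(Y) - h(Y|C)
rat <- outer(s2, s2, function(a, b) 4 * a * b / (a + b)^2)           # 4 s_i^2 s_j^2 / (s_i^2 + s_j^2)^2
I_renyi05 <- -sum(cw * log(rat %*% cw))
c(MonteCarlo = I_mc, Renyi05 = I_renyi05)   # conjecture: the second value >= the first
```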
Differential Entropy of Zero-Mean Gaussian Mixtures
CC BY-SA 4.0
null
2023-05-01T20:06:09.400
2023-05-01T20:07:01.827
2023-05-01T20:07:01.827
359470
359470
[ "entropy", "information-theory", "gaussian-mixture-distribution", "distance-functions", "probability-inequalities" ]
614617
2
null
614572
5
null
If we focus on medical research; performing a study involves taking a risk and potentially harming people. This is acceptable within bounds defined by the Principle of Equipoise as outlined in the [Declaration of Helsinki](https://en.wikipedia.org/wiki/Declaration_of_Helsinki). Prior to recruiting even a single subject to a study, the protocol must be reviewed and approved by an ethics board, usually an institutional review board (IRB). Many medical centers include a statistician or epidemiologist on such boards, and they consider the statistical feasibility of the study. That is to say, the protocol statistician has outlined the assumptions and the anticipated effects and applied the necessary formulas to provide rationale for the specified sample size(s). There are a number of questions to consider subsequently: are the assumptions reasonable? Is the analysis well powered? Does it make sense to recruit this many people without additional preliminary research? Will the potential benefits in the population after the study outweigh the risks in the study participants? And so on... The constitution and mission of an IRB is outlined in the Belmont report. Just a plug, IRBs within medical institutions often have difficulty recruiting and retaining statisticians. If you are a biostatistician within an academic medical center, ask whether there is a seat for a biostatistician to participate. The result of a successful medical trial is that standard practice can be updated based on what is known. Typically, this does fall down to a trial showing a significant result. One can hope based on the input of IRBs, and the natural limitation of cost, that the design feature under study has a reasonable profound impact on health so that the significance is compelling in its own right. There is a flipside to this. Much less can be said of non-experimental, large EHR based studies which often show significant effects that can't and shouldn't be translated into practice. Open data sources and semi-closed data sources often do not have a steering committee to review the ethics of proposed research. Conversely, many languishing areas of healthcare continue to hem and haw over results due to the failure of trials to show unequivocal results, such as sodium reduction, cognitive behavioral therapy, fish oil supplementation, low fat diets, some vaccines, and so on. In summary, for any confirmatory study, no there is no point to conducting a hypothesis test unless a power/sample size calculation has been performed - and the primary endpoint(s) is/are formally powered and secondary endpoints are reasonably powerful or important. In any other case, the analysis should be treated as exploratory, and a "hypothesis test" in this framework can be viewed as yet another method to identify research topics or detect effects - in that case, the statistician should be completely transparent in the reporting of their results.
null
CC BY-SA 4.0
null
2023-05-01T20:51:02.323
2023-05-02T16:19:50.873
2023-05-02T16:19:50.873
8013
8013
null
614618
1
null
null
2
66
I have data showing the number of cause-specific deaths in a cohort. I want to use Poisson regression to estimate the cause-specific rate by age. The complication is that I want the predicted cause-specific rates to add-up to the predicted all-cause rate. Here is some data in R format, showing the total duration of follow-up in the cohort by age (`person_years`), the number of all-cause deaths (`all_cause`), and the number of deaths due to three causes: accidents (`accidents`), non-communicable diseases such as heart disease (`ncd`), and other (`other`): ``` d <- structure(list(age = 18:90, person_years = c(11, 213, 174, 155, 53, 151, 75, 121, 198, 274, 205, 148, 286, 178, 256, 199, 345, 223, 242, 319, 301, 401, 350, 272, 303, 457, 467, 387, 364, 470, 346, 426, 424, 388, 377, 415, 540, 378, 433, 391, 410, 435, 314, 441, 399, 286, 300, 334, 320, 371, 381, 288, 288, 375, 236, 190, 316, 212, 306, 250, 321, 290, 309, 178, 137, 225, 113, 163, 237, 175, 204, 111, 23), all_cause = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 2L, 0L, 1L, 0L, 1L, 0L, 1L, 5L, 1L, 1L, 2L, 3L, 2L, 4L, 5L, 7L, 6L, 2L, 7L, 4L, 7L, 9L, 9L, 10L, 17L, 16L, 17L, 10L, 17L, 17L, 17L, 30L, 20L, 17L, 22L, 17L, 25L, 27L, 26L, 29L, 34L, 41L, 29L, 30L, 60L, 36L, 58L, 56L, 94L, 50L, 90L, 58L, 45L, 88L, 36L, 59L, 113L, 75L, 139L, 78L, 21L), accidents = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), ncd = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 2L, 0L, 1L, 0L, 1L, 0L, 1L, 5L, 0L, 1L, 1L, 3L, 0L, 3L, 4L, 5L, 6L, 1L, 4L, 1L, 3L, 6L, 4L, 4L, 13L, 15L, 16L, 10L, 12L, 13L, 12L, 26L, 17L, 11L, 20L, 10L, 17L, 19L, 20L, 21L, 28L, 26L, 25L, 23L, 44L, 29L, 45L, 43L, 72L, 31L, 73L, 50L, 36L, 69L, 29L, 36L, 94L, 56L, 118L, 69L, 17L), other = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 1L, 1L, 0L, 0L, 3L, 3L, 4L, 2L, 4L, 6L, 4L, 1L, 1L, 0L, 5L, 4L, 4L, 4L, 3L, 5L, 2L, 6L, 8L, 8L, 6L, 8L, 6L, 15L, 4L, 7L, 16L, 7L, 13L, 13L, 22L, 18L, 17L, 8L, 9L, 19L, 7L, 23L, 19L, 19L, 21L, 9L, 4L)), class = "data.frame", row.names = c(NA, -73L)) ``` We can use Poisson regression to fit a curve on the all-cause rate. In this model, I've used age and age squared: ``` model1 <- glm(all_cause ~ poly(age, 2) + offset(log(person_years)), data = d, family = 'poisson') ``` We can predict the age-specific mortality rate and compare this to the observed rates graphically: ``` model1 <- glm(all_cause ~ poly(age, 2) + offset(log(person_years)), data = d, family = 'poisson') newdata <- data.frame(age = d$age, person_years = 100000) d$predicted_all_cause <- predict(model1, type = 'response', newdata = newdata) d$observed_all_cause <- d$all_cause / d$person_years * 100000 plot(d$age, d$observed_all_cause, ylab = 'mortality rate per 100,000', xlab = 'age') lines(d$age, d$predicted_all_cause) ``` [](https://i.stack.imgur.com/FoypQ.png) So that works fine, but if we did something similar with each cause of death, the predicted rates would not add up to the all-cause rate we just estimated. If we fit a separate model for each cause of death ... 
``` m_accidents <- glm(accidents ~ poly(age, 2) + offset(log(person_years)), data = d, family = 'poisson') m_ncd <- glm(ncd ~ poly(age, 2) + offset(log(person_years)), data = d, family = 'poisson') m_other <- glm(other ~ poly(age, 2) + offset(log(person_years)), data = d, family = 'poisson') ``` ... Then predict the age-specific rates ... ``` d$predicted_accidents <- predict(m_accidents, type = 'response', newdata = newdata) d$predicted_ncd <- predict(m_ncd, type = 'response', newdata = newdata) d$predicted_other <- predict(m_other, type = 'response', newdata = newdata) ``` ... When we add them up, they don't equal the all-cause rate. The head of the table comparing the two is shown below. ``` d$sum_predicted_cause_specific <- d$predicted_accidents + d$predicted_ncd + d$predicted_other head(d[, c('age', 'predicted_all_cause', 'sum_predicted_cause_specific')]) age predicted_all_cause sum_predicted_cause_specific 1 18 102.2936 108.3501 2 19 111.8447 118.4173 3 20 122.2923 129.3918 4 21 133.7212 141.3514 5 22 146.2239 154.3805 6 23 159.9019 168.5714 ``` The fact they are different is not surprising since the curves are estimated separately. But I want them to add up exactly, so that the modelled all-cause rate 'decomposes' into the modelled cause-specific rates. The two approaches I've considered are: - a multivariate poisson model, for example using vgam::vglm, but this model will not force the sum of cause-specific rates to add up to the all-cause rate. - estimating the all-cause rate using glm and then using multinomial logistic regression to estimate the proportion at each age that are due to each cause of death. This seems like it might work, but feels complicated. I can't figure out the best approach. I'm happy to use any common statistical software.
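For what it's worth, one way to make the cause-specific curves sum to the all-cause curve by construction is the multinomial idea mentioned above: keep the all-cause `glm` as before, and split its predictions with a multinomial model for "which cause, given that a death occurred". A rough sketch using `nnet::multinom` (illustrative only, and not necessarily the best parameterization):

```r
# split the fitted all-cause rate into causes with a multinomial model
# (rows where all three cause counts are zero contribute no information)
library(nnet)
m_split <- multinom(cbind(accidents, ncd, other) ~ poly(age, 2), data = d)
prop <- predict(m_split, newdata = newdata, type = "probs")  # n x 3 matrix of cause shares
cause_rates <- prop * d$predicted_all_cause                  # scale shares by the all-cause rate
rowSums(cause_rates) - d$predicted_all_cause                 # zero (up to rounding) by construction
```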
How do you estimate multiple poisson rates simultaneously?
CC BY-SA 4.0
null
2023-05-01T20:53:56.160
2023-05-01T20:53:56.160
null
null
40961
[ "multivariate-analysis", "poisson-regression", "multinomial-distribution" ]
614619
2
null
306929
0
null
I thought adding an answer might be useful for anyone else stumbling across this question. As is mentioned in the comments, both the estimators for the standard deviation you have mentioned in the question are biased. See [this](https://stats.stackexchange.com/questions/11707/why-is-sample-standard-deviation-a-biased-estimator-of-sigma) question for great explanations of this. ## Motivation for my answer In the original question a simulation is mentioned. Simulations can be a good way to check if we are unsure about things. ## Simulation I take $10,000$ samples of $10$ points from a normal distribution with mean $=0$ and standard deviation $=1$. Then for each of these $10,000$ samples apply the estimators $s$ and $s_N$ (defined in the original question, $s$ has $N-1$ in the denominator $s_N$ has $N$ in the denominator) to estimate the standard deviation. [](https://i.stack.imgur.com/bnD7Y.png) The results of both of these are plotted in the histograms above. The black lines show the mean of the estimates for the standard deviation. The orange lines shows us the true value of the standard deviation. We see that in both cases the means of the estimates are less than the true value. Based on this simulation we might conclude that the expected value for both $s$ and $s_N$ is strictly less than the true value of $\sigma = 1 $. ## Exercises/Things to explore It might be fun/useful to experiment with the code below. For example use the biased and unbiased estimators for the variance instead and see how the plots above change. ## Python Code ``` import numpy as np import matplotlib.pyplot as plt mean=0 standard_deviation=1 #create a 10000 samples containign 10 points data_samples = np.random.normal(loc=mean, scale=standard_deviation, size=(10000,10)) #For each of our 1000 samples, apply each of our estimators. sn_results = np.std(data_samples, axis=1) s_results = np.std(data_samples, axis=1, ddof=1) #mean of each estimator mean_sn = np.mean(sn_results) mean_s = np.mean(s_results) #plotting fig, axs = plt.subplots(2, sharex=True, figsize=(15,7)) for a in fig.axes: a.tick_params( axis='x', # changes apply to the x-axis which='both', # both major and minor ticks are affected bottom=True, top=False, labelbottom=True) fig.suptitle('Comparison of the two estimators for the standard deviation (true value σ=1)') axs[0].hist(sn_results) axs[0].axvline(mean_sn, c="black", label="Mean of estimator s_n") axs[0].axvline(1, c="orange", label="Actual") axs[0].legend(loc='upper left') axs[1].hist(s_results) axs[1].axvline(mean_s, c="black", label="Mean of estimator s") axs[1].axvline(1, c="orange", label="Actual") axs[1].legend(loc='upper left') ```
null
CC BY-SA 4.0
null
2023-05-01T20:59:55.287
2023-05-01T20:59:55.287
null
null
358991
null
614620
2
null
414672
0
null
If you recall the KL Divergence between two distributions is given by: $\mathcal{D}_{KL} ( P \mid \mid Q) = \underset{x \in X}{\sum} P(x) \ln \left[ \frac{P(x)}{Q(x)} \right] $. This is interpreted as the Expected Value of the Log of the Density Ratios of the Distributions P and Q, with respect to the variable X, which is distributed as $P(x)$. If you need further convincing, by expanding the previous expression we obtain: $$\mathcal{D}_{KL} ( P \mid \mid Q) = \left[ \underset{x \in X}{\sum} P(x) \cdot \left( \ln P(x) - \ln Q(x) \right) \right] $$ $$ \underset{x \sim p(x)}{\mathbb{E}} \left[ \ln P(x) - \ln Q(x) \right]$$ By substituting $P(x)$ for the variational posterior, $q_{\phi} (z \mid x)$ and $Q(x)$ for the prior, $p(z)$ we obtain an expression for the divergence term , formulated as an Expectation below: $$ \underset{z \sim q(z \mid x)}{\mathbb{E}} \left[ \ln q (z \mid x) - \ln p(z) \right]$$ $$ \underset{z \sim q(z \mid x)}{\mathbb{E}} \ln \left[ \frac{q(z \mid x)}{p(z)} \right]$$ If you recall, the second version of the SGVB estimator actually calls for the negative of the KL Divergence: $$\overset{\sim}{\mathcal{L}^{B}} (\theta, \phi, x^{(i)}) = -D_{KL} (q_{\phi} (z \mid x^{(i)}) \mid \mid p_{\theta}(z)) + \frac{1}{L} \sum\limits_{l=1}^{L} (\log p_{\theta} (x^{(i)} \mid z^{(i, l)}) )$$. For each of the latents in the hierarchy, $-D_{KL}$ is computed using the log of the Gaussian Distribution, the constants are omitted from the expression, as they will be cancelled out when the terms are subtracted.
null
CC BY-SA 4.0
null
2023-05-01T21:21:03.817
2023-05-01T21:29:49.297
2023-05-01T21:29:49.297
385271
385271
null
614621
1
null
null
2
35
I am trying to compare the paired pre- to post-responses in a student survey to see if gender plays a role. I was told to do a Wilcoxon signed-rank test, and I can get the mean and p-value for each gender, but it won't compare the genders, just the pre-post data. Am I using the wrong test?
Wilcoxon Rank Test Pre/Post By gender
CC BY-SA 4.0
null
2023-05-01T22:15:45.270
2023-05-02T02:57:37.447
null
null
387011
[ "wilcoxon-signed-rank", "pre-post-comparison" ]
614622
1
null
null
1
30
Given a table of 5 lines with X values and corresponding Y1, Y2, ..., Y5: how can I calculate the approximate X value given the corresponding Y's? How can I tweak the formula if I want to weight the calculation so that it is biased towards a particular line, say Y1? Right now, I am just looking at the line chart to estimate X. I am using Excel, so a solution that works as an Excel formula or in VBA is preferable, or I can translate it myself. [](https://i.stack.imgur.com/eMlvH.png)
Approximate X given 5 function values and y values
CC BY-SA 4.0
null
2023-05-01T23:00:46.743
2023-05-02T03:12:04.707
2023-05-02T03:12:04.707
362671
387012
[ "approximation", "excel" ]
614623
1
null
null
1
24
I am currently stuck on one line in the paper ["Divide-and-Conquer Reinforcement Learning" (Ghosh et al., 2018, ICLR)](https://arxiv.org/pdf/1711.09874.pdf). It is equation (1) on page 4, shown below. $$E_\pi[D_{KL}(\pi \Vert \pi_c)] \leq \sum_{i,j}\rho(\omega_i)\rho(\omega_j)E_{\pi_i}[D_{KL}(\pi_i\Vert \pi_j)]$$ Here, the policy in the augmented MDP (with context information added to the state), $\pi$, is the family of context-wise policies (which I assume means it simply uses the context-specific policy when given the context info), and $\pi_c$ is the policy in the original MDP without the context info, represented as the weighted sum (the weights being the context belief distribution given the state) of the context-wise policies in the family $\pi$. Although I understand the basic concept, I still cannot understand how this inequality is derived. In particular, when defining $\pi_c$ there were no comments on $\rho(\omega)$ — and there should not be, since it is for the original MDP without any contextual information... More info: - $\omega$ : context, which determines the initial state distribution - $\rho$ : initial state distribution - $\pi$ : $(\pi_i)^n_{i=1}$, where $\pi_i$ = context-wise policy - $\pi_c$ : $\sum_{\omega \in \Omega} p(\omega |s)\pi_\omega$, where $p(\omega|s)$ = belief distribution of what context the state is in.
Derivation of the target upper-bound with Jensen's inequality in 'Divide-and-Conquer RL'
CC BY-SA 4.0
null
2023-05-02T00:21:44.350
2023-05-02T01:52:45.610
2023-05-02T01:52:45.610
362671
163466
[ "reinforcement-learning", "kullback-leibler", "inequality" ]
614624
1
null
null
0
48
I have n Poisson random variables with the same parameter $\lambda$: $x_i\sim POI(\lambda)$. I know that $\frac{\sum(x_i)}{n}$ is a MLE of $\lambda$. My question is, observing the weighted sum of these Poisson variables $X=\sum \alpha_i x_i$, is the MLE of $\lambda$ equal to $\frac{X}{\sum \alpha_i}$? If no, only knowing the sum $X$ and these weights $\alpha_i$, how do we get the MLE for $\lambda$?
MLE of a weighted sum of Poisson variables
CC BY-SA 4.0
null
2023-05-02T00:58:36.887
2023-05-02T15:40:58.537
null
null
387013
[ "self-study", "mathematical-statistics", "maximum-likelihood", "binomial-distribution", "poisson-distribution" ]
614625
2
null
614621
0
null
At least as an initial answer: yes, you are using the wrong test. A simple approach would be to use the differences between the paired observations as the dependent variable, and compare genders using a test for independent samples (t-test or Wilcoxon-Mann-Whitney, if there are two genders). But this won't directly address whether there is a difference between pre and post. You could do a separate test to determine whether the differences calculated above are statistically different from zero (one-sample t-test, one-sample Wilcoxon test, one-sample sign test). A more sophisticated --- and probably preferred --- approach would be to create a single model which includes both Time and Gender as independent variables. This would use repeated measures, since the same subject has both a pre and a post response.
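For instance, with two genders the first approach might look like this (the data frame and column names below are hypothetical):

```r
# 'dat' is a hypothetical data frame with columns pre, post and gender (two levels)
dat$change <- dat$post - dat$pre
wilcox.test(change ~ gender, data = dat)   # do the pre-post changes differ by gender?
wilcox.test(dat$change, mu = 0)            # is the overall pre-post change different from zero?
```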
null
CC BY-SA 4.0
null
2023-05-02T02:57:37.447
2023-05-02T02:57:37.447
null
null
166526
null
614626
1
null
null
1
27
Doing a work project where I created a simple multivariate linear regression model in python with sklearn. My model performs well, now I'd like to discuss some takeaways with my team, namely the "weight" or "strength" of each input variable. I have the coefficient matrix from sklearn, and I understand the coefficients represent change in output per unit change (1) in input but my concern is my input variables are not normalized. Some have range [0,1], others have range [0,10000]. To me (and maybe I am mistaken), a variable with a coefficient of 100 with range [0,1] is more "important" to study than a coefficient of 0.1 with range [0,100]. It may be worth mentioning that the input variables are fairly uniformly distributed in their ranges. Is there a standard method for what I am describing here, or am I trying to fix a non existent problem and the coefficients are telling enough? Thanks
Standard way to measure linear regression input variable "strength"?
CC BY-SA 4.0
null
2023-05-02T02:59:36.167
2023-05-02T08:51:26.230
2023-05-02T03:50:00.573
313763
313763
[ "regression", "feature-selection", "regression-coefficients", "weights" ]
614627
1
null
null
2
36
I have read a paper entitled "Attention is all you need" by [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762). This paper use the so-called position-wise feedforward neural network, where the input of this network is a matrix $\mathbf{X} \in \mathbb{R}^{n \times d_\mathrm{model}}$ (not a vector $\mathbf{X} \in \mathbb{R}^{d_\mathrm{model}}$). If I am not mistaken, the meaning of position-wise is that the (same) feed-forward layer applies to every vector $\mathbf{X}_{i*}$ ($i$th row of $\mathbf{X}$) for $i = 1, \dots, n$. Thus, the weights are shared. I want to do backpropagation for a position-wise network consisting only a linear layer with no activation. Let the output dimensionality is $d_\mathrm{model}$. Applying this network yields $\mathbf{Z} \in \mathbb{R}^{n \times d_\mathrm{model}}$ where each row $\mathbf{Z}_{i*},\ i=1, \dots, n$ is given by $\mathbf{Z}_{i*} = \mathbf{X}_{i*} \mathbf{W} + \mathbf{b}^\intercal$. Here, $\mathbf{W} \in \mathbb{R}^{d_\mathrm{model} \times d_\mathrm{model}}$ and $\mathbf{b} \in \mathbb{R}^{d_\mathrm{model}}$ are weight and bias, respectively. Let $L$ be the loss function. For $i$th row I get: $\dfrac{\partial L}{\partial \mathbf{W}_{pq}} = \dfrac{\partial L}{\partial \mathbf{Z}_{i1}} \cdot \dfrac{\partial \mathbf{Z}_{i1}}{\partial \mathbf{W}_{pq}} + \dfrac{\partial L}{\partial \mathbf{Z}_{i2}} \dfrac{\partial \mathbf{Z}_{i2}}{\partial \mathbf{W}_{pq}} + \dots + \dfrac{\partial L}{\partial \mathbf{Z}_{id_\mathrm{model}}} \dfrac{\partial \mathbf{Z}_{id_\mathrm{model}}}{\partial \mathbf{W}_{pq}} = \dfrac{\partial L}{\partial \mathbf{Z}_{ip}} \mathbf{X}_{iq}$, for $p, q = 1, \dots, d_\mathrm{model}$. Thus, I end up with $\dfrac{\partial L}{\partial \mathbf{W}} = \left(\dfrac{\partial L}{\partial \mathbf{Z}_{i*}}\right)^\intercal \mathbf{X}_{i*}$. My question: is $\left(\dfrac{\partial L}{\partial \mathbf{Z}_{1*}}\right)^\intercal \mathbf{X}_{1*} = \left(\dfrac{\partial L}{\partial \mathbf{Z}_{2*}}\right)^\intercal \mathbf{X}_{2*} = \dots = \left(\dfrac{\partial L}{\partial \mathbf{Z}_{d_\mathrm{model}*}}\right)^\intercal \mathbf{X}_{d_\mathrm{model}*}$ holds?
Backpropagation of position-wise feedforward neural network
CC BY-SA 4.0
null
2023-05-02T03:34:44.213
2023-05-03T03:33:30.733
2023-05-03T03:33:30.733
387019
387019
[ "neural-networks", "backpropagation" ]
614629
1
null
null
0
13
If the data is time series, and the null hypothesis, assuming the feature has the same distribution in both sets, is rejected, would it be valid to perform feature selection or engineering based on this result (e.g., removing the feature or applying a transformation)? Or would it introduce bias and compromise the model's integrity due to decision-making using the test set?
Is it appropriate to use a statistical test to compare the distribution of a time series data feature between the train set and the test set?
CC BY-SA 4.0
null
2023-05-02T05:57:58.547
2023-05-02T05:57:58.547
null
null
276238
[ "machine-learning", "time-series", "data-transformation", "feature-selection" ]
614630
1
null
null
1
56
In Bishop's "Pattern Recognition and Machine learning" in chapter 1.2.2, the author introduces the concept of expectations of functions of two random variables. In pg 20, introduces the following term, $$ E_x[f(x,y)] $$ And goes on to remark on it, saying that it- > denotes the average of the function f(x, y) with respect to the distribution of x. Note that Ex[f(x, y)] will be a function of y. Why is the expectation a function of y?
Expected value of a function of two random variables
CC BY-SA 4.0
null
2023-05-02T06:36:23.553
2023-05-03T10:05:08.113
null
null
358344
[ "probability" ]
614631
1
null
null
1
16
I have conducted a PCA and identified that the principal components (PC) are not driven by a single environmental parameter but are affected by several for each PC. I was then advised to retrieve the coordinates of PC1 and PC2 for each site and plot them onto my NMDS of species assemblages to visualize the potential effects of the environmental composites, instead of plotting the environmental parameters individually like usual with envfit from the vegan package in R. I now have an NMDS biplot with PC1 and PC2 as the arrows, which show there is some separation of sites from the environmental composite of PC1. The aim of this is an exploratory analysis to visualize how species assemblages are affected by the environmental parameters. I understand the reason behind doing this, but I am not certain it makes sense statistically, nor have I been able to find any references to justify this method. I would very much appreciate some help in determining if this is an appropriate method or if there are better alternatives to this. Thank you very much! [](https://i.stack.imgur.com/apvn4.png)
Plotting PCA coordinates as composites of environmental parameters into NMDS ordination of species assemblages
CC BY-SA 4.0
null
2023-05-02T06:38:08.607
2023-05-10T08:16:48.320
null
null
387025
[ "pca", "multidimensional-scaling", "vegan", "environmental-data" ]
614632
1
null
null
0
12
Let's say I have a table with 10 categorical features for each customer, and these features are recorded at a daily grain. This means that I have 10 categorical time series for each customer. Now I need to find whether there are data quality issues in the table, i.e., I want to create a model for outlier detection. How do I approach this problem?
Anomaly Detection for Categorical Data
CC BY-SA 4.0
null
2023-05-02T07:14:08.380
2023-05-02T07:14:08.380
null
null
382761
[ "time-series", "categorical-data", "anomaly-detection" ]
614635
2
null
614630
3
null
The notation is ambiguous because (a) it is using lower cases and (b) $X$ and $Y$ are both random variables, possibly dependent random variables. One then wonders whether it should be $$\mathbb E_{X|Y=y}[f(X,y)]=\dfrac{\int_\mathfrak X f(x,y) p_{X,Y}(x,y)\,\text d x}{\int_\mathfrak X p_{X,Y}(x,y)\,\text d x}$$ or $$\mathbb E_{X}[f(X,y)]=\int_\mathfrak X f(x,y) \int_\mathfrak Y p_{X,Y}(x,z)\,\text d z\,\text d x$$ where $p_{X,Y}$ denotes the joint density of $(X,Y)$.
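As a toy illustration of why the result depends on $y$: if $f(x,y)=xy$ and $X$ is uniform on $\{1,2\}$ and independent of $Y$, both readings reduce to $\mathbb E_X[f(X,y)]=\tfrac12(1\cdot y)+\tfrac12(2\cdot y)=1.5\,y$, which is a number for each fixed $y$ and hence a function of $y$.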
null
CC BY-SA 4.0
null
2023-05-02T07:54:47.050
2023-05-03T10:05:08.113
2023-05-03T10:05:08.113
7224
7224
null
614637
1
null
null
0
12
I try to distinguish under which group a specific instance is over-represented, if any. My data has 3 groups (columns), and 150+ instances (rows), and the values are counts each instance is present in a group. I would like to perform a chi-squared test for each row to conclude if an instance is over-represented (i.e., greater prevalence than others) in a specific group, and which group. First, I wonder whether this is the right test for such question. Another natural option is Fisher exact t-test. Which one is better for such case? Second, From looking at coding examples, I didn't manage to run the test for each row, but only for the entire dataset (as a contingency table) which made me think I may be missing something. An example of the data I use: ``` import pandas as pd df = pd.DataFrame(np.random.randint(1,100,(10,3)), columns=['g1', 'g2', 'g3']) df.index = ['var' + str(i+1) for i in range(len(df.index))] df.head(5) Out[12]: g1 g2 g3 var1 23 24 18 var2 88 15 70 var3 24 42 86 var4 39 55 62 var5 62 86 59 ``` The final aim is to determine for each instance (row), whether it is over-represented, with respect to the statistical test, and significance. Something like this: ``` g1 g2 g3 final_score var1 23 24 18 NA var2 88 15 70 g1 var3 24 42 86 g3 var4 39 55 62 NA var5 62 86 59 g2 ``` The final_score column indicate whether a variable is at all over-represented or not (NA for insignificant results), and for which group it is if so. Any help regarding this would be much appreciated!
multiple chi-squared tests for concluding divergence over representation in a group
CC BY-SA 4.0
null
2023-05-02T08:19:41.203
2023-05-02T09:06:25.127
2023-05-02T09:06:25.127
369240
369240
[ "python", "chi-squared-test", "fishers-exact-test" ]
614638
2
null
614626
1
null
You should have normalised/standardised the variables before fitting the model. This gives better interpretability of the coefficients and also helps the parameters converge faster. "To me (and maybe I am mistaken), a variable with a coefficient of 100 with range [0,1] is more "important" to study than a coefficient of 0.1 with range [0,100]" — this statement of yours is right: that feature has less variability ([0,1]) but still has a high coefficient.
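For illustration, here is the idea with hypothetical variable names in R (in scikit-learn the analogue would be passing the features through a standard scaler before fitting):

```r
# refit with standardized predictors so each coefficient is the change in y
# per one standard deviation of that predictor (df, y, x1, x2 are hypothetical)
fit_std <- lm(y ~ scale(x1) + scale(x2), data = df)
coef(fit_std)
```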
null
CC BY-SA 4.0
null
2023-05-02T08:51:26.230
2023-05-02T08:51:26.230
null
null
382761
null
614639
1
null
null
0
21
I am training three models, ANN, LSTM and RNN to predict the PV energy production. The data contains two features which are the irradiation which is normalised (highest value 1) and the historical production values which are also normalised. The data is available in a 15min interval which correspond to 94 observations per hour. However, since in the night the production as well as the irradiation is zero, I decided to only take 52 observations per day (6am to 19pm). The input data consist of two days in the past for both features. Therefore, 2x2x52 = 208. The output is the energy production for 52 timesteps. The data is always shifted by one day. This means: input 01.Jan - 02.Jan. output 03.Jan input 02.Jan - 03.Jan. output 04.Jan input 03.Jan - 04.Jan. output 05.Jan [](https://i.stack.imgur.com/cPSch.png) My output layer of the NNs is always a Dense(52). The problem is that the predictions do never reach the peak values of the true values. [](https://i.stack.imgur.com/P3sDH.jpg) Here is the code for my ANN: ``` # x_train_r: (28595, 104, 2) # y_train: (28595, 52) # x_test_r: (12255, 104, 2) # y_test: (12255, 52) learning_rate = 0.001 def lr_decay(epoch, initial_lr=learning_rate): drop = 0.98 epochs_drop = 10.0 lr = initial_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop)) return lr filepath = f'models/ANN_Klimazone02_{datetime.datetime.now().date()}.h5' checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min') tf.keras.backend.clear_session() lr_scheduler = LearningRateScheduler(lr_decay, verbose=1) model = Sequential() model.add(Flatten(input_shape=(x_train_r.shape[1], 2))) model.add(Dense(32)) model.add(Dense(32)) model.add(Dense(16)) model.add(Dense(52)) model.build() opt = Adam(learning_rate=learning_rate) model.compile(optimizer=opt, loss='mse') history = model.fit(x_train_r, y_train, epochs=100, batch_size=52, validation_data=(x_test_r, y_test), callbacks=[lr_scheduler, checkpoint]) y_pred_ann = model.predict(x_test_r) ``` I am not sure if the input shape, in my case (104,2), is correct or if it should be (52,2)? I also read about to specify the `input_shape` as `batch_input_shape` as well as the `predict` as `predict_on_batch`. However I do not know if I should chance that, or what is wrong with my code, or my method. The RNN and LSTM are very similar. Does somebody know what I could try to improve my predictions?
Models do not predict peak values
CC BY-SA 4.0
null
2023-05-02T08:56:36.387
2023-05-02T08:56:36.387
null
null
386676
[ "neural-networks", "predictive-models", "model-evaluation", "lstm", "recurrent-neural-network" ]
614640
1
null
null
0
5
I have a two-part question about expressing the variability (as a standard deviation) of a dataset. Let's say I want to give an average value of the nitrogen content in "fruits" (units: percentage over dry matter). I have the nitrogen content of apples, oranges and bananas. |apples |oranges |bananas | |------|-------|-------| |1.15 |3.15 |5.15 | |1.16 |3.16 |5.16 | |1.18 |3.18 |5.18 | The within-group variability is the same in all the groups (SD = 0.015). I think that the correct way to express the within-group variability is the average of the SDs of the groups. This is equivalent to the ANOVA approach for within-group variation (the square root of the mean square within groups). The problem comes when I try to express the between-group variability. I can only think of 3 solutions: - Calculating the mean for each group and using these 3 values, ignoring the within-group variability (with this approach we obtain a mean of 3.16 and an SD of 2). - Calculating the mean and SD for each group and applying an error-propagation formula for averages (see link), which gives me an SD of 0.0088. Clearly, this is not expressing the between-group variability. (https://www.dummies.com/article/academics-the-arts/science/biology/simple-error-propagation-formulas-for-simple-expressions-149357/) - Using the ANOVA approach for between-group variation, which gives a sum of squares of 3*(-2)^2 + 3*0^2 + 3*2^2 = 24. Then I divided by the degrees of freedom (2) and took the square root, obtaining an SD of 3.46. This value seems too high (where does this variability come from?). Are any of these approaches valid? Why?
How to express within- and between-group variability?
CC BY-SA 4.0
null
2023-05-02T09:04:52.383
2023-05-02T09:04:52.383
null
null
353633
[ "anova", "error-propagation", "variability" ]
614642
2
null
614382
2
null
Interestingly, there is no single equation for a weighted standard error. Multiple versions have been proposed in the literature though. See for example: Donald F. Gatz and Luther Smith (1995). "The Standard Error of a Weighted Mean Concentration - I: Bootstrapping Vs Other Methods". In: Atmospheric Environment 29.11, pp. 1185-1193 I implemented some of these method in R to use as an (unexported) function in the `adjustedCurves` R-package I developed, here is the code: ``` weighted.se <- function(x, w, se_method, na.rm=FALSE) { if (na.rm) { miss_ind <- !is.na(x) w <- w[miss_ind] x <- x[miss_ind] } n <- length(x) mean_Xw <- stats::weighted.mean(x=x, w=w, na.rm=na.rm) ## Miller (1977) if (se_method=="miller") { se <- 1/n * (1/sum(w)) * sum(w * (x - mean_Xw)^2) ## Galloway et al. (1984) } else if (se_method=="galloway") { se <- (n/(sum(w)^2)) * ((n*sum(w^2 * x^2) - sum(w*x)^2) / (n*(n-1))) ## Cochrane (1977) } else if (se_method=="cochrane") { mean_W <- mean(w) se <- (n/((n-1)*sum(w)^2))*(sum((w*x - mean_W*mean_Xw)^2) - 2*mean_Xw*sum((w-mean_W)*(w*x-mean_W*mean_Xw)) + mean_Xw^2*sum((w-mean_W)^2)) ## As implemented in Hmisc } else if (se_method=="Hmisc") { se <- (sum(w * (x - mean_Xw)^2) / (sum(w) - 1)) / n } return(sqrt(se)) } ``` where `x` is your vector of interest, `w` is a vector of weights with equal length, `se_method` specifies which method to use and `na.rm` specifies whether to remove missing values before performing calculations. I understand that this doesn't fully answer your questions, but it might still be helpful to you.
null
CC BY-SA 4.0
null
2023-05-02T09:18:48.917
2023-05-02T09:18:48.917
null
null
305737
null
614643
1
614658
null
0
26
Suppose I have many sensors recording a certain measure at discrete times (not too many times, something around 20/30 max). I want to get an idea of the average trend of the measure over time without necessarely fixing a functional form. Would it make sense to simply have a param for each time and have partial pooling? Something along the lines of $$ Y_t \sim \beta_t + \text{other covariates}$$ With the betas coming, for example, from the same normal distribution. Is there any major flaw/problem with this kind of simple model? Any good reason not to do this?
Modeling simple longitudinal data, unknown trend
CC BY-SA 4.0
null
2023-05-02T09:22:01.163
2023-05-02T13:46:08.810
null
null
387031
[ "generalized-linear-model", "stan" ]
614649
1
null
null
0
16
I have quite a generic problem. I have a collection of videos, and a collection of tags that identify an action on a specific timestamp. I wish to be able to classify the correct neighborhoods of such events using these videos (which consist of both frames images and audio). Even though there are many great sources for audio analysis and computer vision, I find it hard to come by a good ("for dummies") tutorial for this kind of generic problem. [This](https://www.youtube.com/watch?v=1E3mOTtKtCo&list=PLtGXgNsNHqPRAscIi6dMUuPCfDQ1Vho2U&index=3&ab_channel=TalhaAnwar) YouTube collection is almost the only one I found, that demonstrates how to work with videos and utilize `torchvideo`. I am sure there are other resources as well. Any tutorial will be greatly appreciated. p.s.: Surprisingly, just to give an example of the scarcity there is of video analysis discussions (and hence tools and tutorials, that come usually to meet the demand of people working with videos), there is no "video" tag.
Looking for resources for supervised learning on video classification
CC BY-SA 4.0
null
2023-05-02T11:45:07.657
2023-05-02T11:53:26.190
2023-05-02T11:53:26.190
285927
285927
[ "time-series", "neural-networks", "references", "computer-vision", "audio" ]
614650
2
null
606806
2
null
This is a problematic approach. If you just want to know about general relationships between two variables, not restricting to linear (Pearson correlation) or even monotonic (Spearman correlation) relationships, you could use a value like mutual information. The `R` function `JMI::JMI` is one way to calculate mutual information between two variables. (My experience with this function is that it is slow.) That is probably the answer to the posted question: use mutual information to calculate/estimate the overall relationships between pairs of variables so you are not restricted to the particular relationships detected by, for instance, Pearson or Spearman correlation. Then, a flexible model like a random forest will figure out the nonlinear, nonmonotonic relationships in the regression. However, a flexible model like a random forest also looks at interactions between variables and their nonlinear transformations. By only considering one feature at a time in the mutual information calculations, you miss all of those. In fact, any kind of feature-by-feature screening is going to miss these interactions. In terms of information theory, two variables can be independent (zero mutual information) yet be conditionally dependent, conditional on the value of a third variable. Considering just two variables at a time will always miss that conditional dependence, yet a random forest model will be able to discover such relationships and use them to make accurate predictions (subject to the usual concerns about overfitting).
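If `JMI::JMI` is too slow for a quick screen, a crude binned estimate can be rolled by hand; this is only a rough sketch with arbitrary binning and simulated data, not a substitute for a proper estimator:

```r
# crude binned mutual information estimate (base R, arbitrary number of bins)
binned_mi <- function(x, y, bins = 10) {
  p  <- table(cut(x, bins), cut(y, bins)) / length(x)   # joint binned distribution
  px <- rowSums(p); py <- colSums(p)
  keep <- p > 0
  sum(p[keep] * log(p[keep] / outer(px, py)[keep]))
}
# example: a U-shaped relationship that Pearson correlation essentially misses
set.seed(1)
x <- runif(1000, -2, 2); y <- x^2 + rnorm(1000, sd = 0.2)
c(pearson = cor(x, y), binned_mi = binned_mi(x, y))
```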
null
CC BY-SA 4.0
null
2023-05-02T11:47:42.657
2023-05-02T11:47:42.657
null
null
247274
null
614651
1
null
null
2
27
Data and background of the study: The data is a repeat measurement of metabolite levels in responses to 2 types of stimuli. The data measured in 4 sessions— `sess1, sess2, sess3, sess4.` `Sess1` and `sess3` are the baseline blocks of no stimuli, so one level of possible stimulus type- `rest`, while `sess2, sess4` blocks are blocks with stimuli presentation that has two possible types of stimulus presentation `(A/B)`. The snippet of the data as the following: ``` ID session stim metabolite time 1 sess1 rest 0.072 1 1 sess1 rest 0.073 2 1 sess2 A 0.084 3 1 sess2 B 0.092 4 1 sess3 rest 0.068 5 1 sess3 rest 0.071 6 1 sess4 A 0.75 7 1 sess4 B 0.069 8 2 sess1 rest 0.072 1 2 sess1 rest 0.073 2 2 sess2 A 0.084 3 2 sess2 B 0.092 4 2 sess3 rest 0.068 5 2 sess3 rest 0.071 6 2 sess4 A 0.75 7 2 sess4 B 0.069 8 ... ``` I’m interested in how the session of measurement and stimulus type affect metabolite level. And here’s the proposed model: ``` mod1 <- lmer(metabolite ~ session * time + stim*time + (1 | ID), data = df, REML=FALSE) ``` And of course, for mod1 I got a warning of: `fixed-effect model matrix is rank deficient so dropping 6 columns / coefficients`, with the only interaction between `A*time` but not any others. Here I suspected it to be due to the `sess1` and `sess3` having only one level that is not in any other sessions. So, I drop the interaction term to: ``` mod2 <- lmer(metabolite ~ session * time + stim + (1 | ID), data = df, REML=FALSE) ``` But I still got a warning: `fixed-effect model matrix is rank deficient so dropping 1 column / coefficient`, with the only interaction between A*time showed in summary(mod2) but not any others. So my questions are: - What causes the rank deficiency here, is it because not enough information to estimate the specified model, or because REST stim is not coded in any other sessions, or both? - What I can do to deal with the rank deficiency, should I simply drop the baseline sess1 and sess3 to look for the stimulus effect? It'd be preferable to include all sessions since I'm interested in whether the metabolite levels will change without any stimulus as well (sess1/sess3). Apologies if this has been asked elsewhere; any help or pointers would be much appreciated!
How to deal with the rank deficiency in lmer for nested variable
CC BY-SA 4.0
null
2023-05-02T11:54:25.133
2023-05-02T13:53:55.543
2023-05-02T13:53:55.543
374794
374794
[ "r", "lme4-nlme", "repeated-measures", "nested-data" ]
614652
2
null
507357
0
null
When you include additional features, you put yourself at greater risk of overfitting to coincidences rather than modeling the true relationship. The logic about dropping one of two correlated variables seems to be that this allows you to decrease the overfitting risk while retaining most of the information (in a loose sense) available in both features, since the correlation means there is some level of redundancy; that is, you lower the risk of overfitting without having to sacrifice much. The trouble is that, just based on the correlations, you do not know how much you are sacrificing by dropping one of the two correlated variables. It might be that each variable has a unique effect, in which case, you will struggle to make up for having dropped one of the variables. In my answer [here](https://stats.stackexchange.com/a/579484/247274) I discuss why I see feature selection as overrated (using some arguments that will look familiar) and link to additional material on the topic. Particularly if you allow yourself to use some kind of regularization to constrain the problem, your best bet might be to use all of the features, despite the correlations between them.
null
CC BY-SA 4.0
null
2023-05-02T12:14:05.270
2023-06-02T14:04:15.813
2023-06-02T14:04:15.813
247274
247274
null
614653
2
null
606806
2
null
The bootstrap can help in these types of problems and is a valuable procedure in exposing the true difficulty of the task by taking a lot of uncertainties into account if you do one-at-a-time feature selection or a joint model (e.g. elastic net or better ridge regression). The idea is to get bootstrap confidence intervals for the importance ranking of all candidate features simultaneously. Importance can be measured in the one-at-a-time case by Wilcoxon statistics (or its equivalent Somers' $D_{xy}$ rank correlation or concordance probability $c$), Spearman $\rho$, $\chi^2$, ordinary correlation, or anything you want. Simulating data like yours is another approach, where you'll see that for non-large samples there is not even a correlation between the true and estimated gene-specific effects across genes and show a scatterplot. These methods are described and exemplified [here](https://hbiostat.org/bbr/hdata.html) and [here](https://hbiostat.org/rmsc/validate.html#sec-val-bootrank). We are mainly fooling ourselves about the ability to answer gene-level questions in the $N << p$ case. Aside: not sure that age in general is a meaningful target feature, depending on what even causes age to be assessed.
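A toy version of the bootstrap-ranking idea (simulated data, and absolute Spearman correlation as an arbitrary importance measure) might look like:

```r
# bootstrap confidence intervals for the importance *rank* of each feature
set.seed(1)
n <- 50; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- 0.5 * X[, 1] + 0.3 * X[, 2] + rnorm(n)    # only features 1 and 2 matter
rank_once <- function(idx) {
  rho <- apply(X[idx, ], 2, function(col) abs(cor(col, y[idx], method = "spearman")))
  rank(-rho)                                    # rank 1 = apparently most important
}
boot_ranks <- replicate(500, rank_once(sample(n, replace = TRUE)))
t(apply(boot_ranks, 1, quantile, probs = c(0.025, 0.5, 0.975)))  # wide intervals = unstable ranking
```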
null
CC BY-SA 4.0
null
2023-05-02T12:19:56.047
2023-05-02T12:19:56.047
null
null
4253
null
614655
1
null
null
0
45
As far as CrossValidated allows users to ask for reference, - does anyone know (top-notch, authoritative) books, resources, or references about ML, DL, or AI for social scientists? If there are still " [The Two Cultures: statistics vs. machine learning?](https://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning) ", perhaps reflected in StackExchange [Cross-Valited and Data Science](https://meta.stackexchange.com/questions/266067/difference-between-the-cross-validated-and-data-science-se-sites); where to begin learning the ML culture starting from the perspective of statistics/social science? I am not asking specifically for [Resources on Explainable AI](https://stats.stackexchange.com/questions/349319/resources-on-explainable-ai/355970) but this may be relevant. You may know if you are more familiar with the fields than I.
entries into machine learning, deep learning, artificial intelligence for social scientists
CC BY-SA 4.0
null
2023-05-02T12:49:47.207
2023-05-02T20:17:09.377
2023-05-02T20:17:09.377
207649
207649
[ "machine-learning", "self-study", "references", "artificial-intelligence", "social-science" ]
614657
1
614667
null
0
45
First of all, apologies but my stats knowledge is limited and based on very isolated topics! From what I understand, regression splines are used when the covariates "aren't linear". What does that mean? Interaction of the covariate with what needs to be linear (if you could provide an example based on survival models in R it would be great)? I understand that the "regression splines" smooth the interaction of the covariate with something over knots and that they're smooth at the joints- but when to use them, how to pick the number of knots (because the position doesn't matter in cubic splines?) and what's the overall impact on a model? I feel like I'm missing something really important so apologies if i completely misinterpreted their use!
Regression splines- how to use them inside a model?
CC BY-SA 4.0
null
2023-05-02T12:59:22.503
2023-05-02T14:35:37.670
null
null
265390
[ "r", "regression", "survival", "modeling", "splines" ]
614658
2
null
614643
0
null
What you describe is very similar to an [MMRM](https://cran.r-project.org/web/packages/mmrm/vignettes/introduction.html), which would additionally account for the correlation of measurements over time (and do some implicit imputation of missing timepoints, which you would otherwise ignore). You may also want to consider whether there's an interaction between timepoint and covariates (e.g. the importance of a pre-experiment measurement will often decline over time). You may indeed believe that the times are not too different, or perhaps that the size of a visit-to-visit change cannot be too large (forward-difference parameterization). Additionally, it may often be plausible that covariates have similar effects at different times, which you could reflect by having main effects and being skeptical of timepoint-by-covariate interactions being too large. One way of encoding such things would be in a Bayesian fashion, which you could e.g. do with the `brms` R package (see [here](https://github.com/paul-buerkner/brms/pull/1435)).
null
CC BY-SA 4.0
null
2023-05-02T13:46:08.810
2023-05-02T13:46:08.810
null
null
86652
null
614659
1
null
null
0
15
I would like to validate a 30-item questionnaire. There is a lot of missing data. How do I handle the missing data? My questions are the following: a) Do I have to use only complete-case data to validate the questionnaire? b) Can I use multiple imputation to perform a confirmatory factor analysis? c) If yes to b), can you give me an explanation of how to do this?
Questionnaire validity with imputed data, possible?
CC BY-SA 4.0
null
2023-05-02T14:00:25.570
2023-05-02T14:08:21.090
2023-05-02T14:08:21.090
44269
387051
[ "survey", "data-imputation", "validity" ]
614660
1
null
null
0
18
I have 2 very unbalanced data sets: the first is the one that the model was trained on (old training data), and the second is newly collected data. I want to see if the model performance dropped due to a shift in the data patterns before considering retraining the model with new data. Both datasets have the exact same features and the same feature order (the only difference between them is the number of data points, where the old training data has significantly more data, almost 10 times the new data sample). The data is tabular, the model task is binary classification, the ground truth column exists in both datasets, and all features are numerical (int or float only). What are the best drift-detection metrics/approaches to use for this case, please?
What are the best metrics to use for data drift detection of a binary classification model on tabular imbalanced numerical data
CC BY-SA 4.0
null
2023-05-02T14:10:28.617
2023-05-02T15:00:20.660
2023-05-02T15:00:20.660
363384
363384
[ "classification", "dataset", "metric", "data-drift" ]
614661
1
null
null
1
21
I am new on this site, so excuse me, if I made any mistakes writing this post. I want the describe the results of a linear regression of a variable "cost_difference" which has some positive values, some negative values and a lot of 0 based on the variable "days". As I'm not allowed to share further information about my data, I made a little simulation ``` > cost_difference <- c(-5,0,-6,46,0,0,0,0,0,8,0,13,-24,0,0,0,0,00,0,0,-9) > days <- c(20,12,26,14,22,24,17,10,69,21,16,30,23,25,11,15,29,50,13,28,19) > test <-data.table(cost_difference,days) > plot(test$cost_difference ~ test$days, + xlab = "days", + ylab = "Cost difference") > abline(lm(test$cost_difference ~ test$days)) > summary(cost_difference) Min. 1st Qu. Median Mean 3rd Qu. Max. -24.000 0.000 0.000 1.095 0.000 46.000 > my_lm <- lm(cost_difference ~days, data=test) > summary(my_lm) Call: lm(formula = cost_difference ~ days, data = test) Residuals: Min 1Q Median 3Q Max -25.140 -2.155 -1.647 -0.632 44.099 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.08651 5.56115 0.555 0.585 days -0.08465 0.20551 -0.412 0.685 Residual standard error: 12.6 on 19 degrees of freedom Multiple R-squared: 0.008851, Adjusted R-squared: -0.04331 F-statistic: 0.1697 on 1 and 19 DF, p-value: 0.685 ``` My question: - Can lm be used to describe the relationship between those 2 variables? - is this method okay with so many 0 and negative values? - is there a better model to do that? - what does a negative t-value mean? - what does a negative adj. R^2 mean? - would it help to ignore the case with 0? or any methodological issues with that? - do the results read as follows (English not my main language): starting with 3.09 ± 5.56 at day 0, for each additional day the Cost-Difference reduces by 0.085 ± 0.21
Interpretation and validity of linear regression when dependent variables has negative, positive and lot of 0 values
CC BY-SA 4.0
null
2023-05-02T14:12:01.267
2023-05-02T15:09:05.827
2023-05-02T15:09:05.827
56940
387052
[ "r", "regression", "interpretation", "linear" ]
614662
1
null
null
0
6
My data set consists of time series of consumption of 4 countries, and the average temperature of each country during the time period. I know that consumption is largely dependent on temperature, so for each country I have a linear model with consumption as the dependent, and temperature as the independent variable. I haven't been able to test it yet, but I heavily suspect some other, unobserved factor to influence consumption in all countries, causing the errors to be correlated. I want to use Pesaran's common correlated effects (CCE) approach to circumvent this and obtain estimators for my model. Is this possible with N=4? The idea: So for each country i, with i=1,...,N and N=4, I have a linear model: $y_{i,t}=\beta_ix_{i,t}+e_{i,t}$. As you can see, I allow for heterogeneous coefficients. Since there is cross-sectional dependence in the error terms, I remodel them as $e_{i,t}=\gamma_i\textbf{f}_t+\epsilon_{i,t}$ where $\gamma$ is the factor loading, $f$ the unobserved common factor and $\epsilon$ the individual error. As I understand, the trick here is to find an estimator for the unobserved common factor, and to show that it asymptotically converges to this estimator for $N,T\rightarrow\infty$. I skipped a few steps here, but Pesaran basically proposes the cross-sectional average as an estimator. Now I read somewhere that for the approach to make sense, you need to have a sample/factor ratio of N/f~5. So 5 observations per factor. Of course I only have 4, so I wonder to what extent this is only a rule of thumb. But apart from that, I don't see how an average of only 4 (or 5) observations per time step could be representative. Should I worry about this? Should I try to collect data for one more country, or should I abandon this approach?
When is the sample size of the cross-sectional units N large enough to use a CCE approach in panel data?
CC BY-SA 4.0
null
2023-05-02T14:21:04.217
2023-05-02T14:21:04.217
null
null
384768
[ "panel-data", "cross-correlation", "unobserved-components-model" ]
614663
1
null
null
0
5
I am building a super learner ensemble model using the classifiers SVM, kNN, AdaBoost, XGBoost, and Random Forest. However, I am not sure of the logic behind what classifier to use for the meta learner. I have seen many applications using logistic regression, but I am not using this algorithm since I am doing a classification problem. Is there any logic behind choosing the meta learner?
How to choose the meta learner for the super learner model?
CC BY-SA 4.0
null
2023-05-02T14:24:13.433
2023-05-02T14:24:13.433
null
null
387058
[ "ensemble-learning" ]
614664
1
null
null
2
28
I am currently working on intraclass correlation in the context of inter-rater reliability. There are different forms of ICC ([Shrout and Fleiss, 1979](http://rokwa.x-y.net/Shrout-Fleiss-ICC.pdf); [McGraw and Wong, 1999](https://www.semanticscholar.org/paper/Forming-inferences-about-some-intraclass-McGraw-Wong/f98590ed4967d6bb64a7a6e495419d1fd91cc0ce)) and according to the literature three questions need to be answered to make the right choice between these different forms: - Which model to choose (one-way model, two-way random model, two-way mixed model)? - Are you interested in consistency or agreement? - Are you interested in the single or average ICC? I have a design and question that suggests a two-way random model. In the study, different raters have rated several objects on the same construct. Accordingly, I would set up the population model for the single ICC(2,1) ([McGraw and Wong, 1999](https://www.semanticscholar.org/paper/Forming-inferences-about-some-intraclass-McGraw-Wong/f98590ed4967d6bb64a7a6e495419d1fd91cc0ce)) as follows. Model: $$ y_{ij} = \mu + o_i + r_j + e_{ij}\,,$$ with $o = $ object (random factor) and $r = $ rater (also random factor). ICC: $$ ICC(2,1) = \frac{\sigma_0^2}{\sigma_0^2+\sigma_r^2+\sigma_e^2}$$ As far as I know, the ICC can be calculated using both the MS of an ANOVA model and the variance components of a mixed model. The mixed model approach seems simpler to me and also seems to handle missing values better. Therefore, I am following this approach. My current question relates to the difference between single and average ICC. Question 1 – Formula of average ICC: - Calculating the average ICC To calculate the average ICC (e.g. $ICC(2,k)$), the formula is adapted as follows: $$ ICC(2,k) = \frac{\sigma_0^2}{\sigma_0^2+(\sigma_r^2+\sigma_e^2)/k}$$ $k$ corresponds to the number of raters in this formula. Can anyone explain this formula to me? Why is the variance of the assessors and the residual variance divided by k? Does this represent the expected value of the variance of the average? Question 2 – Interpretation of ICC In the above example, is it correct for me to interpret the ICC as follows: - Simple: the expected correlation of the scores when two randomly selected raters rate the same item. - Average: The expected correlation of the average values from two random samples of size k.
Some questions about the intraclass correlation in inter-rater reliability
CC BY-SA 4.0
null
2023-05-02T14:26:16.977
2023-05-02T14:58:09.283
2023-05-02T14:58:09.283
119261
338919
[ "anova", "multilevel-analysis", "reliability", "intraclass-correlation" ]
614665
1
null
null
2
27
I have two continuous variables and would like to determine which of them is more strongly associated with a third continuous variable. I realize I could run a multiple regression and see whether both remain significant in that model, but my two independent variables are fairly strongly correlated with each other, so collinearity would be a problem. Is there another statistical way to establish a difference? Can the two simple regressions be compared somehow, or is there a completely different test?
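One generic option, sketched below with simulated data (everything in the snippet is made up for illustration), is to bootstrap the difference between the two correlations with the third variable: resample rows, compute both correlations on each resample, and inspect the interval for their difference. This respects the dependence between the two predictors. Dedicated tests for comparing dependent, overlapping correlations (e.g. Steiger's test) are another route.
```
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)    # deliberately correlated with x1
y  = 0.5 * x1 + 0.2 * x2 + rng.normal(size=n)

def corr_diff(idx):
    return (np.corrcoef(x1[idx], y[idx])[0, 1]
            - np.corrcoef(x2[idx], y[idx])[0, 1])

boot = np.array([corr_diff(rng.integers(0, n, n)) for _ in range(5000)])
print("95% bootstrap CI for r(x1,y) - r(x2,y):", np.percentile(boot, [2.5, 97.5]))
```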
Method for determining which of two related continuous variables has a stronger association with a third continuous variable
CC BY-SA 4.0
null
2023-05-02T14:30:10.607
2023-05-02T17:25:39.760
2023-05-02T17:24:48.490
44269
387059
[ "regression", "association-measure" ]
614666
1
null
null
0
11
I've been running Granger causality tests on stationary time series in Python. Testing the same time series with itself gives me p-values of 1 for all tests (i.e. no causality), except for the parameter F test, which returns `p = 0`. However, when testing the relative change of a time series against the log change of the same time series (which obviously also doesn't have a causal relationship), I get p-values of 0 for all four tests, in either direction. For small variances, on the other hand, I get large discrepancies between the tests' p-values. My questions are:

- Why does the parameter F test return a p-value of zero for two identical time series?
- Why do all Granger causality tests indicate a bidirectional causal relationship between relative difference and log difference?
- Why are there such large discrepancies between the p-values of the various tests when the variance is small?

To reproduce the result where all p-values are zero, run:
```
import pandas as pd
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

time_series = pd.DataFrame(np.random.normal(10, 2, 100))
time_series_relative_change = (time_series / time_series.shift(1) - 1)[1:]
time_series_log_difference = np.log(time_series / time_series.shift(1))[1:]

dataframe = pd.DataFrame()
dataframe['time_series_relative_change'] = time_series_relative_change
dataframe['time_series_log_difference'] = time_series_log_difference

grangercausalitytests(dataframe, 10)
```
To reproduce the result where the discrepancy between test results is large, run:
```
time_series = pd.DataFrame(np.random.normal(10, 0.1, 100))
time_series_relative_change = (time_series / time_series.shift(1) - 1)[1:]
time_series_log_difference = np.log(time_series / time_series.shift(1))[1:]

dataframe = pd.DataFrame()
dataframe['time_series_relative_change'] = time_series_relative_change
dataframe['time_series_log_difference'] = time_series_log_difference

grangercausalitytests(dataframe, 10)
```
Why does relative difference granger cause log difference? (time series)
CC BY-SA 4.0
null
2023-05-02T14:30:17.850
2023-05-02T14:30:17.850
null
null
379910
[ "time-series", "python", "granger-causality" ]
614667
2
null
614657
1
null
Frank Harrell explains the principles in Chapter 2 of [Regression Modeling Strategies](https://hbiostat.org/rmsc/genreg.html#sec-relax.linear). The "nonlinearity" in question is the association between outcome and the continuous predictor in ordinary least squares, between a function of outcome and the predictor in a generalized linear model, and between the log-hazard of an event and the predictor in Cox models. For the number of knots, decide beforehand how many degrees of freedom that you want to devote to the predictor, and choose the number of knots accordingly. After that choice, you typically place them at evenly spaced quantiles of the distribution of the predictor unless you suspect there is some range of the predictor where the association changes rapidly. I prefer having the outermost knots somewhat within the extreme values, which is the default in the `rcs()` function in the [rms package](https://cran.r-project.org/package=rms) but not in the basic R `splines::ns()` function. [Chapter 21](https://hbiostat.org/rmsc/coxcase.html) illustrates the application of restricted cubic splines to a Cox survival model. There are other ways to model a flexible association between (a function of) outcome and continuous predictors, summarized and compared on [this page](https://stats.stackexchange.com/q/558759/28500).
null
CC BY-SA 4.0
null
2023-05-02T14:35:37.670
2023-05-02T14:35:37.670
null
null
28500
null
614668
1
null
null
0
14
I am interested in what determinants may drive a car manufacturing firm to sell its products only in specific countries. The same car can be available for sale in countries A and B but not in country C, and these choices by the firm might be strongly correlated. That's why I'm thinking of a multivariate probit as one possible approach. The choice to sell a car in a specific market (A, B or C) might depend on the car itself (the mass of the vehicle as a proxy for the type), on firm characteristics (the size of the firm as measured by the total number of sales) and on the market itself (the total number of cars/drivers in country A, B or C). My problem is that I'm not sure whether a multivariate probit is the right approach when I want to add a country-level variable, such as the number of cars in the country, since it appears as just one scalar in each probit equation. Can anyone confirm that a multivariate probit actually works for regressing the 3 binary choices (sell or not in countries A, B and C) simultaneously on the 3 independent variables I mentioned? Otherwise, would you have suggestions as to what modelling framework could suit, and how it would translate into econometric software (R, Stata, EViews)? Additional question: as manufacturers produce different cars, how can I add firm effects for different cars from the same manufacturer? Thanks for the help, D.
Is multivariate probit the right approach?
CC BY-SA 4.0
null
2023-05-02T14:46:44.757
2023-05-02T14:59:57.887
2023-05-02T14:59:57.887
386247
386247
[ "econometrics", "probit", "multivariate-regression" ]
614669
1
614898
null
0
68
I developed an ordinal model where the outcome (high, middle or low) is predicted from socioeconomic status (low, middle or high), the child/adult relationship (family type A, B or C), and some other variables. I fitted the interaction between the family's socioeconomic status and the adult-child relationship, but I am not sure how to interpret the effects of this interaction. If there were no interaction I would say that middle family status increases the odds of the outcome by 50% and high status by 130%, that family_B has no significant effect, and that family_C increases the odds by 24%. I think that in the presence of the interaction, the above odds are only interpretable this way when the other variable is at its reference level, and at the other levels of both variables there is effect modification. But considering that the effect for the family_B level and many levels of the interaction are not significant, I do not know by how much, and if at all, the effects are modified. For example:

- In the case of high socioeconomic status and family_C, would the actual effect be exp(beta(high) + beta(family_C) + beta(high*family_C)) = exp(log(2.32) + log(1.24) + log(0.98)), or should I ignore beta(high*family_C) as it is not significant?
- In the case of middle socioeconomic status and family_B, would the actual effect be exp(beta(middle) + beta(family_B) + beta(middle*family_B))? In this case both family_B and middle*family_B are not significant. Are they included or are they both ignored?

I want to write two sentences: "The odds of the outcome high vs middle or low when growing up with middle SES in family_B are XX higher/lower than when growing up in a family with low SES and family_A." "The odds of the outcome high vs middle or low when growing up with high SES in family_C are XXX higher/lower than when growing up in a family with low SES and family_A." How do I calculate what XX and XXX are equal to?

[](https://i.stack.imgur.com/NBvf4.png)

The problem of interactions and non-significant terms has been discussed many times, but I have not found an answer that explains how to report the actual size of the modification when dealing with non-significant variables. Thanks for any help.

Update: I used the emmeans package as recommended in the answer and these are the results:

[](https://i.stack.imgur.com/EJcso.png)

The interaction between high SES and family_C is insignificant, but the pairwise comparisons of high SES family_C to high SES family_A, to middle SES family_A and to low SES family_A are all significant. Does this mean that there is an interaction at some levels but not at others? But why is the interaction in the main model output not significant? Are these pairwise effects spurious? Additionally, when comparing the model with the interaction to the one without it, the adjusted R² values are the same, but the Wald test is significant. I expected the interaction to be significant based on previous evidence from the literature, and I do not want to misinterpret the results I got.
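A small arithmetic sketch of the first bullet's computation, using the odds ratios visible in the question (2.32, 1.24, 0.98). The usual practice when forming such combined estimates is to keep every estimated coefficient, significant or not, since the point estimates remain the model's best guesses:
```
import numpy as np

or_high, or_famC, or_interaction = 2.32, 1.24, 0.98
combined = np.exp(np.log(or_high) + np.log(or_famC) + np.log(or_interaction))
print(combined)   # ~2.82: odds ratio for high SES + family_C vs low SES + family_A
```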
Interpreting an interaction term in the context of study
CC BY-SA 4.0
null
2023-05-02T14:47:59.273
2023-05-06T12:25:58.067
2023-05-05T19:55:31.980
303717
303717
[ "statistical-significance", "interaction", "regression-coefficients" ]
614670
1
null
null
0
28
I'm working on a project to predict the category of music segments in an audio file (represented in pianoroll format with an additional column for the corresponding class). Each row represents the state of the notes after 50 ms, and the last column is the class with which this state was labelled. I've got 150 such labelled music files and want to train a supervised model to classify new music into segments. However, I have multiple problems:

- The problem of temporal dependence. The commonly used algorithms work with spatial data, as far as I know. So how should one capture the temporal aspect?
- The problem of data representation. How can I train a method on 80% of my 150 files and test on the remaining 20%? The states within one matrix depend on one another, but concatenating the matrices of the 150 files is a bit much. Also, any two matrices differ in length and have no musical connection to one another.

I cannot come up with a method to train one network on multiple files, nor with a way to capture the temporal aspect of this task. Any advice is welcome.
Deep learning classification with multiple temporal data
CC BY-SA 4.0
null
2023-05-02T14:48:02.950
2023-05-02T17:41:14.293
2023-05-02T15:12:43.860
56940
387036
[ "time-series", "neural-networks", "classification", "unsupervised-learning", "data-preprocessing" ]
614671
2
null
614670
1
null
- RNNs, GRUs and LSTMs are all common choices for modeling sequential data with neural networks.
- If you have 150 songs and you want to measure how well your model generalizes to new songs, hold out some portion (you mentioned 20%) of the 150 songs. Sequences of different lengths can be padded and masked; there are standard methods for this in widely used NN libraries (TensorFlow, PyTorch, etc.). An example is `torch.nn.utils.rnn.pack_padded_sequence`, sketched below.
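A minimal, hypothetical PyTorch sketch of the padding/packing step for variable-length pianoroll files feeding an LSTM; the dimensions, number of classes and names are placeholders, not taken from the question's data:
```
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# e.g. three files of different lengths, 128 pitch columns each (placeholder sizes)
seqs = [torch.randn(400, 128), torch.randn(250, 128), torch.randn(310, 128)]
lengths = torch.tensor([s.shape[0] for s in seqs])

padded = pad_sequence(seqs, batch_first=True)               # (batch, max_len, 128)
packed = pack_padded_sequence(padded, lengths,
                              batch_first=True, enforce_sorted=False)

lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
packed_out, (h_n, c_n) = lstm(packed)

# h_n[-1] gives one vector per file; for per-time-step segment labels you would
# instead unpack packed_out and apply the linear layer at every step.
logits = nn.Linear(64, 5)(h_n[-1])                          # 5 = number of segment classes
print(logits.shape)                                         # torch.Size([3, 5])
```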
null
CC BY-SA 4.0
null
2023-05-02T14:55:46.373
2023-05-02T17:41:14.293
2023-05-02T17:41:14.293
22311
22311
null
614673
1
null
null
1
17
The current model we are working on is a multilevel SEM with an 18-item scale (Mplus, ML estimation); the between level (the country level) is saturated by letting all items correlate with one another. At the within level, we've specified a six-factor structure (3 items per factor) and fixed the variance of each factor at 1. The model seems to provide the correct kind of output and the fit indices look alright, but the run reports errors (a saddle point, or a point where the observed and expected information matrices do not match, has been reached). There were some huge modification indices in the output; based on these, we tried correlating the item residuals within factors, but this did not help with the convergence issues. Do you have any idea how to address the convergence issues here? The dataset has N = 18000 observations in about 30 clusters.
Convergence in multilevel SEM in Mplus / N = 18000, about 30 clusters
CC BY-SA 4.0
null
2023-05-02T15:02:55.983
2023-05-03T08:49:19.183
null
null
387064
[ "structural-equation-modeling", "mplus" ]
614674
1
null
null
0
39
Let's assume that we have a training set and a test set with unseen data. I perform Monte Carlo cross-validation (MCCV) on the training set to tune hyper-parameters. The question is related to feature selection. The standard approach is to implement feature selection inside the MCCV, so that different features are selected in different MCCV iterations. I want to know whether it is legitimate to perform a completely independent feature selection on the whole training set before model training, so that it is completely disconnected from the rest of the algorithm. This feature selection would have its own MCCV and end with a selection of features based on the cumulative frequency of occurrence across MCCV rounds. The reason is that I want to tune my ML model with a fixed subset of predictors. Finally, the model is re-trained using the best hyper-parameters and then tested once on the unseen data of the test set to get a reliable evaluation of model performance. Thanks a lot.
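For contrast, here is a hedged scikit-learn sketch of the standard set-up described above, with feature selection nested inside the Monte Carlo CV via a Pipeline (dataset, selector and settings are placeholders):
```
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=0)

# selection happens inside every split, so each MCCV round picks its own features
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("model", SVC()),
])

mccv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=0)  # Monte Carlo CV
print(cross_val_score(pipe, X, y, cv=mccv).mean())
```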
Locking the predictor subset before final model building
CC BY-SA 4.0
null
2023-05-02T15:05:04.033
2023-05-04T20:58:05.230
2023-05-04T20:58:05.230
387063
387063
[ "machine-learning", "cross-validation", "feature-selection", "blocking", "tuning" ]
614676
1
null
null
0
4
I have a research question where I argue that X1 and X2 affect M, which ultimately affects Y. My problem is that I expect the effect of X1 on M to be positive and the effect of X2 on M to be negative. In this case, how do I interpret the b path? A further complication is that X1 and X2 are mutually exclusive and do not appear in the same individual. I am inclined towards using SEM in R at the moment, but I am open to ideas. Any suggestion is greatly appreciated.
Mediation model for two predictor variables with opposing effects on the mediator
CC BY-SA 4.0
null
2023-05-02T15:10:46.963
2023-05-02T15:13:18.500
2023-05-02T15:13:18.500
56940
162099
[ "structural-equation-modeling", "mediation" ]
614678
1
null
null
0
29
After using a DCC-GARCH model to estimate the correlations between two time series, I would like to run a linear regression with these daily correlations as my dependent variable and test the significance of several independent variables (e.g. CPI, long-term interest rate, ...). The problem is that the correlations are at daily frequency, while the independent variables are at monthly and yearly frequency. A suggestion from my professor is to aggregate the daily correlations to monthly and yearly frequency and then run the regression, but I cannot figure out how to do it. What is an effective way of performing this aggregation?
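A minimal pandas sketch of the aggregation step (the series below is random placeholder data; in practice it would be the daily DCC correlation series with a DatetimeIndex):
```
import numpy as np
import pandas as pd

idx = pd.date_range("2015-01-01", "2020-12-31", freq="D")
daily_corr = pd.Series(np.random.uniform(-1, 1, len(idx)), index=idx)

monthly_corr = daily_corr.resample("M").mean()   # newer pandas may prefer "ME"
yearly_corr  = daily_corr.resample("Y").mean()   # or .median(), .last(), ...
print(monthly_corr.head())
```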
Aggregation of daily correlations to different frequencies
CC BY-SA 4.0
null
2023-05-02T15:29:49.370
2023-05-03T18:50:32.987
2023-05-03T08:29:47.237
53690
385593
[ "regression", "time-series", "correlation", "aggregation", "midas" ]
614679
1
null
null
0
30
Context: Suppose that I run a Sequential Monte Carlo simulation with likelihood tempering to perform parameter inference on a filtering problem. This takes me from my (unspecified) prior distribution to a particle approximation of the posterior distribution on the model parameters. Question: Can I use this posterior distribution as the prior for a new simulation? How? What are the pitfalls? Motivating example: Suppose that a first SMC simulation uses uniform priors to estimate credible model parameters. Suppose that a second SMC simulation then takes this parameter distribution and conditions it on a new batch of data. The marginal likelihood of the first simulation would have little value, coming from uniform priors, but the marginal likelihood of the second simulation might represent goodness-of-fit for the model. Thoughts: Generally speaking it seems valid to reuse a posterior distribution as the prior for further inference. However, the specific particle approximation produced by SMC seems problematic: once the likelihood tempering process has completed the particles seem unlikely to produce good proposals for further inference e.g. based on random-walk MCMC rejuvenation steps. The particles will be too sparsely distributed to detect changes in the distribution. If that is accurate then I wonder how one can better rejuvenate the particles? Do you need to rerun the earlier SMC simulation(s) to ensure a smooth series of bridging distributions? Or do you introduce a new proposal distribution that covers more of the parameter space in between particles from the prior simulation? Or...?
Do Sequential Monte Carlo simulations degenerate when you chain them together?
CC BY-SA 4.0
null
2023-05-02T15:35:37.860
2023-05-02T15:35:37.860
null
null
167476
[ "bayesian", "sequential-monte-carlo" ]
614680
1
null
null
2
88
I fit a Poisson regression model where I used $x$ and $x^2$ to predict $y$. Let's say the coefficients of $x$ and $x^2$ are $\beta_1 = 0.8$ and $\beta_2 = -0.1$. Exponentiating the coefficient ${\beta_1}$ gives us the multiplicative factor by which the mean count changes when we increase $x$ by one unit: $e^{\beta_1} = 2.23$. So increasing $x$ by one unit changes the mean of $y$ by a factor of 2.23, and this factor is constant over all $x$. According to [this answer](https://stats.stackexchange.com/a/230008/282256), "$e^{\beta_2}$ would be called a ratio of ratio of rates comparing groups differing by 1 unit differing by 1 unit of $X$."

I tried this out with the toy example below, using the parameters from above (${\beta_1} = 0.8$ and ${\beta_2} = -0.1$), and indeed the ratio is not constant because of the nonlinear term, but the ratio of the ratios is constant. However, the ratio of ratios is not equal to $e^{\beta_2}$ but to $e^{2\beta_2}$. I think this relates to this part of the referenced answer: "But if you do a difference in differences for $(E[Y|X=x+2] - E[Y|X=x+1]) - (E[Y|X=x+1] - E[Y|X=x]) = 2\beta_2$. So basically the $\beta_1$ is the tangent slope of the quadratic curve at the origin, and $\beta_2$ is a quadratic slope."

With all this, I am not quite sure how to interpret a one-unit change of $x$ on the mean of $y$. Would it be correct to say that for each unit increase in $x$, the linear term ${\beta_1}$ changes the mean of $y$ by a factor of 2.23, while the nonlinear term ${\beta_2}$ decreases this factor by $e^{2\beta_2} = 0.8187$ for each increase in $x$? I don't think this is correct, because the effect depends on the value of $x$. Is there a way to express the effect of a one-unit increase in $x$ on the mean of $y$ that is independent of $x$, or is this not possible? And if it is not possible, what is the correct way to describe a change from, let's say, $x = 1$ to $x = 2$ for this toy example? "Increasing $x$ from 1 to 2 increases the mean of $y$ by a factor of 1.6487"?

|x |predicted y |ratio |ratio of ratio |
|-|-----------|-----|--------------|
|0 |1 | | |
|1 |2.01375271 |2.01375271 | |
|2 |3.32011692 |1.64872127 |0.81873075 |
|3 |4.48168907 |1.34985881 |0.81873075 |
|4 |4.95303242 |1.10517092 |0.81873075 |
|5 |4.48168907 |0.90483742 |0.81873075 |
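A short numerical sketch reproducing the table's ratios from the stated coefficients; it confirms that the constant ratio of ratios equals $e^{2\beta_2} \approx 0.8187$:
```
import numpy as np

b1, b2 = 0.8, -0.1
x = np.arange(0, 6)
mu = np.exp(b1 * x + b2 * x**2)      # predicted mean of y

ratio = mu[1:] / mu[:-1]             # effect of a +1 step; it depends on x
ratio_of_ratio = ratio[1:] / ratio[:-1]

print(ratio)                         # 2.0138, 1.6487, 1.3499, 1.1052, 0.9048
print(ratio_of_ratio)                # constant 0.81873... = exp(2 * b2)
print(np.exp(2 * b2))
```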
Interpretation of quadratic term in Poisson regression
CC BY-SA 4.0
null
2023-05-02T15:38:56.863
2023-05-08T11:43:19.713
2023-05-02T17:36:05.443
8013
282256
[ "generalized-linear-model", "nonlinear-regression", "poisson-regression" ]
614681
2
null
614624
1
null
$X/\sum{\alpha_i}$ is the method of moments estimator of $\lambda$ and (usually) would be a good starting value in an iterative algorithm to find the maximum likelihood estimator. For the maximum likelihood estimator one, of course, would need to find the likelihood. That requires finding all combinations of $(x_1,x_2,\ldots,x_n)$ such that $\sum_{i=1}^n \alpha_i x_i$ equals $X$. I'm sure this can be done in R, but for me it's easier done in Mathematica. Suppose $n=3$:
```
alpha = {12, 3, 1};
x = {x1, x2, x3};
X = 16;
```
Now find all of the possible triplets of `x` that satisfy $\sum_{i=1}^n \alpha_i x_i=X$:
```
sol = x /. Solve[X == alpha . x, x, NonNegativeIntegers]
(* {{0, 0, 16}, {0, 1, 13}, {0, 2, 10}, {0, 3, 7}, {0, 4, 4}, {0, 5, 1}, {1, 0, 4}, {1, 1, 1}} *)
```
Find the likelihood by summing up all of the triplet probabilities:
```
likelihood = Total[Exp[-Length[alpha] lambda] lambda^Total[#]/(#[[1]]! #[[2]]! #[[3]]!) & /@ sol] // Simplify
```
$$\frac{e^{-3 \lambda } \lambda ^3 \left(\lambda ^{13}+3360 \lambda ^{11}+2882880 \lambda ^9+691891200 \lambda ^7+36324288000 \lambda ^5+174356582400 \lambda ^3+871782912000 \lambda ^2+20922789888000\right)}{20922789888000}$$
Find the value of $\lambda$ that maximizes the likelihood:
```
mle = FindMaximum[Log[Total[likelihood]], {{lambda, X/Total[alpha]}}, WorkingPrecision -> 30]
(* {-2.94728411728903585719176910466, {lambda -> 1.04081006682956426205089024036}} *)
```
The value 1.04081006682956426205089024036 is pretty close to the method of moments estimate of $16/(12+3+1)=1$.
null
CC BY-SA 4.0
null
2023-05-02T15:40:58.537
2023-05-02T15:40:58.537
null
null
79698
null
614682
1
null
null
0
26
Using the house price data from [kaggle](https://www.kaggle.com/datasets/harlfoxem/housesalesprediction) and `xicorpy`, I have experimented with how Chatterjee correlation coefficients behave in comparison to Pearson. Since Chatterjee's coefficient is a non-symmetric measure of correlation strength and ranges from 0 to 1, I have taken the absolute value of the Pearson coefficient and the lowest Xi value for the {x,y} pairs for comparison.

[](https://i.stack.imgur.com/bsgEZ.png)

As you can see from the plot above, they broadly select the same variables as significantly correlated. However, Xi seems to pick out false correlations with binary/discrete variables, most notably {waterfront, yr_renovated}.

[](https://i.stack.imgur.com/ZpqUu.png)

So my question is: why does the Chatterjee coefficient Xi preferentially pick out correlations with binary variables? Am I missing something?

Code for taking the minimum Chatterjee Xi:
```
import numpy as np
from functools import wraps
from xicorpy import compute_xi_correlation

@wraps(compute_xi_correlation)
def symmetric_xi(x, y=None, *args, **kwargs):
    xis = compute_xi_correlation(x, y, *args, **kwargs)
    xis[:] = np.where(np.abs(xis) < np.abs(xis.T), xis, xis.T)
    return xis
```
For a better example:
```
import matplotlib.pyplot as plt
import xicorpy
from numpy.lib.stride_tricks import sliding_window_view

n = 10000
us = np.random.uniform(0, 1., n//2)
x, y = np.linspace(0, 1, n), np.append(np.zeros(n//2) + us, np.ones(n//2) - us)
y = np.where(y > 0.5, 1, 0)

window = 1000
sliding = sliding_window_view(y, window_shape=window).mean(axis=1)

plt.scatter(x, y)
plt.plot(x[window-1:], sliding, '-')

xicorpy.compute_xi_correlation(x, y)
```
This gives us a coefficient of 0.25 when it should be near 0.
The behaviour of the Chatterjee correlation coefficient with binary data
CC BY-SA 4.0
null
2023-05-02T16:08:37.597
2023-05-03T17:06:14.857
2023-05-03T17:06:14.857
35774
35774
[ "correlation", "pearson-r" ]
614683
1
614736
null
1
13
I have a strange and maybe stupid question about cross-correlation. Let's imagine two time series, for example `asset A` and `asset B`, both with enough history. Suppose I've calibrated an ARFIMA-GARCH (or maybe just a GARCH) model for each of them, and I want to calculate the cross-correlation between the series based on these calibrated models. Is this possible to do, and how? More generally, if we have `N` calibrated univariate ARFIMA-GARCH / GARCH models, can we calculate the covariance matrix based on these models? Sorry if this question is stupid. Thank you.
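As a hedged illustration (not necessarily the full answer to the question), one simple thing the fitted univariate models allow is a constant-conditional-correlation style estimate: correlate the standardized residuals from each GARCH fit and combine that correlation with the conditional volatilities. The sketch below uses the Python `arch` package with placeholder data; a genuinely time-varying correlation would require a multivariate model such as DCC-GARCH rather than combining univariate fits.
```
import numpy as np
import pandas as pd
from arch import arch_model

# placeholder returns; in practice these would be the two assets' return series
returns = pd.DataFrame(np.random.normal(0, 1, (1000, 2)), columns=["A", "B"])

std_resid, cond_vol = {}, {}
for col in returns:
    res = arch_model(returns[col], vol="GARCH", p=1, q=1).fit(disp="off")
    cond_vol[col] = res.conditional_volatility
    std_resid[col] = res.resid / res.conditional_volatility

rho = np.corrcoef(std_resid["A"], std_resid["B"])[0, 1]
cov_t = rho * cond_vol["A"] * cond_vol["B"]   # time-varying covariance under a constant rho
print(rho)
```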
Cross-correlation for univariate GARCH models
CC BY-SA 4.0
null
2023-05-02T16:17:50.227
2023-05-03T06:39:54.880
2023-05-03T06:38:54.747
53690
112733
[ "estimation", "covariance-matrix", "garch" ]
614685
1
null
null
0
7
I have two data sets, one with dimensions 100000x8 and the other 30000x20000, and I want to understand, if possible, how these can be mapped onto each other. The 20000 variables in the second dataset include the same 8 variables contained in the first. There are a few things I'd like to understand:

- whether those 8 of the 20000 variables do indeed correlate with the 8 in the first dataset
- whether I can map the first dataset, using the 8 shared variables, onto the 20000

The datasets come from the same sample origin but from different instruments, so one gives more information than the other. I'm grateful for any advice -- I've looked into canonical correlation analysis, but I don't think this works as the numbers of rows are unequal. I had thought about sampling the first dataset to contain the same number of rows and running CCA, or potentially multiple regression? Thanks in advance!
Methods for comparing 2 independent datasets with some overlapping variables
CC BY-SA 4.0
null
2023-05-02T17:20:54.017
2023-05-02T17:20:54.017
null
null
322577
[ "correlation" ]