Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
614303
|
1
| null | null |
1
|
37
|
I have a classification problem with a dataset where the number of variables is very large and the number of observations is small: approximately 200 observations and 10000 variables. I am using an SVC model to make the predictions and I want to use SHAP to measure the importance of each variable, to try and see if there is a subset of the 10000 variables that is more important for the predictions.
However, when I try to use SHAP with the `algorithm='auto'` parameter, it says that it "cannot allocate a vector of size 149 GiB". And when I try to use other algorithms like `algorithm='partition'`, the code simply crashes.
So my question is: is there a way to compute SHAP values for this kind of high-dimensional dataset?
I leave here a minimal code example to show my issue with some fake data:
```
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import shap
X, y = make_classification(n_samples=200, n_features=10000, n_informative=100, scale=1, random_state=999)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
# Build Support Vector Classifier on the train set
model = SVC(C=0.5, kernel='sigmoid', gamma=0.01, probability=True)
model.fit(X_train, y_train)
# Compute SHAP values for the test data
explainer = shap.Explainer(model.predict_proba, X_train)
shap_values = explainer(X_test, max_evals=20001) # max_evals must be at least 2 * num_features + 1
# MemoryError: Unable to allocate 149. GiB for an array with shape (2000100, 10000) and data type float64
explainer = shap.Explainer(model.predict_proba, X_train, algorithm='partition')
# This simply kicks me out of the python terminal.
```
|
How can I compute the SHAP values in a high dimensional dataset?
|
CC BY-SA 4.0
| null |
2023-04-27T16:08:51.327
|
2023-04-27T16:08:51.327
| null | null |
313773
|
[
"python",
"high-dimensional",
"shapley-value"
] |
614306
|
2
| null |
526158
|
0
| null |
I dispute that the classes are so inseparable. For instance, in perplexity 4, observations around $(50, -20)$ are almost certainly going to be red, yet points around $(40, 30)$ seem to be an even mix of red and green. Since green is so outnumbered by red, a $50/50$ chance of green in a region is quite remarkable!
Now, t-SNE can create groupings that do not exist just as it can miss groupings that do exist, so that perplexity 4 plot is not necessarily indicative of how the real data look in many dimensions. Nonetheless, having clusters like this strikes me as at least a positive sign.
One of the issues perhaps leading you to have poor results is the concern with a threshold-based, improper scoring rule like $F_1$ score. Among the issues, $F_1$ score does not evaluate the XGBoost model. The $F_1$ score evaluates the XGBoost model along with a decision rule based on a threshold that might be wildly inappropriate for your task. A standard software default is a threshold of probability $0.5$. In all four of the plots, there are few regions where there will be a probability of $0.5$ of the point being green.
You might find yourself having better luck evaluating the raw outputs of your model instead of applying a software-default threshold. At the very least, you can tune the threshold to change your $F_1$ score.
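To make that concrete, here is a minimal R sketch of evaluating the raw probabilities with a proper scoring rule and sweeping the threshold; `p_hat` (predicted probabilities) and `y` (0/1 labels) are placeholders for your own model output and test labels:
```
# Proper scoring rule (Brier score) on the raw predicted probabilities: lower is better
brier <- mean((p_hat - y)^2)

# Sweep thresholds instead of accepting the software default of 0.5
thresholds <- seq(0.05, 0.95, by = 0.05)
f1 <- sapply(thresholds, function(t) {
  pred <- as.integer(p_hat >= t)
  tp <- sum(pred == 1 & y == 1)
  fp <- sum(pred == 1 & y == 0)
  fn <- sum(pred == 0 & y == 1)
  2 * tp / (2 * tp + fp + fn)
})
thresholds[which.max(f1)]  # threshold that maximises F1 on this particular data set
```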
Finally, it might be that the classes are simply quite similar and cannot be separated to a large extent with your data.
I will leave some links on class imbalance and the drawbacks of threshold-based performance metrics.
[Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he)
[Profusion of threads on imbalanced data - can we merge/deem canonical any?](https://stats.meta.stackexchange.com/questions/6349/profusion-of-threads-on-imbalanced-data-can-we-merge-deem-canonical-any)
[Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models)
[Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp)
| null |
CC BY-SA 4.0
| null |
2023-04-27T16:30:46.880
|
2023-04-27T16:30:46.880
| null | null |
247274
| null |
614307
|
1
| null | null |
2
|
88
|
I'm trying to fit a simple hierarchical model using simulated data.
The model is
$$y_{ij} \sim N(\mu_i, \sigma^2)$$
$$\mu_i \sim N(\mu, \tau^2)$$
The log joint distribution of parameters and data is:
$$\log p(\boldsymbol y, \mu,\tau,\sigma,\boldsymbol \mu) = \sum_{i=1}^m \log f(\mu_i;\mu,\tau) + \sum_{i=1}^m\sum_{j=1}^{n_i} \log f(y_{ij};\mu_i,\sigma)$$
where $f$ is the univariate normal pdf, and I have $m$ groups, with $n_i$ observations from group $i$.
It's my understanding that typically when fitting these kinds of models, you would do it in two stages:
- integrate out the $\mu_i$ to get the likelihood of the data, maximise it to estimate $\mu, \sigma, \tau$, then fix them
- maximise the full log joint, with $\mu, \sigma, \tau$ fixed, to estimate the random effects $\mu_i$
You can also estimate the parameters using MCMC. The thing I'm trying to do is just find the MAP estimate by directly optimising the log joint in one go.
I'm finding that the first two approaches agree with each other on simulated data: the point estimates I get from the two-stage empirical Bayes approach are practically identical to the posterior modes I get from sampling, the coverage of the confidence intervals I construct using the Fisher information is good, and it all matches up with the posterior distributions I get from sampling.
If I just maximise the log joint directly, I end up underestimating the residual variance $\sigma^2$ and the CIs I construct for the random effects $\mu_i$ are too narrow.
However, the log joint is larger when I optimise everything at once rather than using the two-stage approach, so I don't think it's just that my optimiser is converging to a worse local maximum.
I don't understand why this is the case. The parameters found by the optimiser should be the mode of the posterior, and they should be the parameters that maximise the log joint (at least away from the boundary of $\tau=0$), right? So how am I finding "wrong" parameters with higher log joint than the posterior mode?
EDIT:
Another framing of this question is "when and how can MAP estimates differ from the modes of the marginal posteriors I obtain from sampling, assuming both (appear to) have converged, and how does this relate to hierarchical models in particular?"
|
Underestimation of residual variance in a hierarchical model
|
CC BY-SA 4.0
| null |
2023-04-27T16:39:10.690
|
2023-06-01T10:10:56.540
|
2023-06-01T10:10:56.540
|
134445
|
134445
|
[
"mixed-model",
"optimization",
"markov-chain-montecarlo",
"bias"
] |
614309
|
1
| null | null |
0
|
9
|
How can I reduce dimensions using Partial Least Squares (PLS)?
My goal is to reduce the dimension in the independent-component way.
|
How can I reduce dimensions using Partial Least Squares (PLS)?
|
CC BY-SA 4.0
| null |
2023-04-27T16:52:23.290
|
2023-04-27T16:52:23.290
| null | null |
275488
|
[
"dimensionality-reduction"
] |
614310
|
1
| null | null |
0
|
54
|
I have proportion data (0-100%, or it can be count data 0-20) that is not normally distributed. The histogram in R shows:
I have tried many ways to transform the data (log, Box-Cox, rank transform), but none of them works.
Is there any other way that I can transform these data to a normal distribution?
If not, I think the only way is to use a non-parametric test or a generalized linear model to analyze my data, am I right?
Then the question is: if I want to use glm in R, which distribution family should I choose?
|
How to deal with non-normally distributed data?
|
CC BY-SA 4.0
| null |
2023-04-27T16:23:37.400
|
2023-05-15T16:05:12.070
|
2023-05-15T16:05:12.070
|
11887
|
386710
|
[
"r",
"distributions",
"proportion",
"count-data"
] |
614311
|
1
|
614316
| null |
0
|
41
|
Consider two dependent random variables $X$ and $Y$, i.e. $E(X|Y) = g(Y)$, where $g$ is the conditional mean function.
From the conditioning theorem we know that $E(X|X)=X$.
Can we similarly simplify the expression $E(X|X,Y)$ in some way?
|
Conditional expectation
|
CC BY-SA 4.0
| null |
2023-04-27T17:21:37.513
|
2023-04-27T17:39:43.337
| null | null |
171162
|
[
"conditional-expectation"
] |
614312
|
1
| null | null |
1
|
13
|
I'm experimenting with quantifying uncertainty in data from a Bernoulli distribution by measuring the likelihood of the p parameter using the beta distribution. Specifically, I'd like to show how uncertainty changes over time as you update the alpha and beta priors of the beta distribution with data from a Bernoulli distribution (let's say p=0.75).
It seems that entropy will always decrease as new data is incorporated into the alpha and beta params, and that makes sense to me. However, I want to show entropy increasing if I start updating alpha and beta with data from a Bernoulli distribution with a different p parameter (let's say p=0.33). Just updating alpha and beta with data from the new distribution will shift my posterior, but will also compress it. I want the posterior to widen as I incorporate data from a different distribution. What would be the best way to accomplish that? I'm not a stats expert, so let me know if I'm way off in what I'm trying to accomplish.
|
Capturing increased uncertainty as data changes
|
CC BY-SA 4.0
| null |
2023-04-27T17:21:56.530
|
2023-04-27T17:21:56.530
| null | null |
197975
|
[
"bayesian",
"entropy",
"beta-distribution",
"bernoulli-distribution",
"uncertainty"
] |
614313
|
1
| null | null |
0
|
7
|
Here are a few cases I run into while building a time-series model.
Case 1: Two time series, both monthly, different release dates.
Say I want to take the quotient of monthly time series A which is released on the 15th of each month and time series B which is released on the 1st of each month, then take the 3 month and 12 month moving averages of this quotient. Do I:
- resample to daily, forward fill, take the quotient, then compute MA windows with 21 * 3 days and 260 days
- shift the older time series so that instead of "being released" on day 1 of the month, my model assumes it is released on the 15th of the month, then take the quotient, then compute MA windows with 3 months and 12 months
- something else?
Case 2: Two time series, one weekly + one monthly, (obviously) different release dates.
Similar to case 1, I want to take the quotient between the two time series and then compute moving averages. How should I go about it this time?
Discussion
I tried looking, but couldn't find answers to this kind of offset-date question. My biggest issue is that I don't want to introduce any lookahead bias into my model, while also trying to remain as close as possible to the business logic (in Case 1, I want the MA to be sensitive to the changes from month to month and not slowed down by the daily forward fills).
Thank you in advance for any and all help with this subject!
|
How to combine time series which have differing release dates every week or month?
|
CC BY-SA 4.0
| null |
2023-04-27T17:33:14.243
|
2023-04-27T17:33:14.243
| null | null |
386714
|
[
"time-series"
] |
614314
|
1
| null | null |
1
|
29
|
I am working on a study where I have species presence/absence data for 80 independent sites, as well as the soil carbon values for each of the sites. I want to test whether the presence/absence of each species is related to the site's soil carbon value. In other words, which species coincide with sites that have high soil carbon, and which species coincide with sites that have low soil carbon.
Since I have a binary predictor variable and a continuous response variable, what would be the best approach for this analysis?
|
Presence/absence data as predictor variable
|
CC BY-SA 4.0
| null |
2023-04-27T17:35:34.923
|
2023-04-27T18:53:57.770
|
2023-04-27T18:47:59.593
|
7290
|
386715
|
[
"generalized-linear-model",
"t-test"
] |
614315
|
1
| null | null |
0
|
22
|
I am running a linear mixed model on hierarchical data. I have 500 samples from 3 regions. Within each region, samples were taken from 3 urban and 3 nonurban sites. I am interested in how the classification of the site as urban or nonurban impacts trait y.
This is my model:
y ~ class + (1|region) + (1|site) + (1|sex)
I am running my model using lme4 in R with the following code:
lmer(y ~ class + (1|city) + (1|site) + (1|sex),
data=meas,
REML=TRUE)
And I am given the message: boundary (singular) fit: see help('isSingular')
I believe that my singular fit is because the variance of the random effect of site is estimated at 0. When I remove this random effect, I no longer receive a message about the singular fit; however, the denominator degrees of freedom obtained from running an anova on the model are extremely high (463.92). Is this an error in my model?
I appreciate any help. Thank you!
|
Singular fit with high degrees of freedom in LMM
|
CC BY-SA 4.0
| null |
2023-04-27T17:36:08.090
|
2023-04-27T17:36:08.090
| null | null |
386716
|
[
"r",
"mixed-model",
"lme4-nlme"
] |
614316
|
2
| null |
614311
|
0
| null |
Yes, $E[X|X, Y] = X$, which is a direct application of the "pulling known factors out" property of conditional expectation (the fifth item in [this link](https://en.wikipedia.org/wiki/Conditional_expectation#Basic_properties)).
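Spelled out as a one-line derivation (using the $\sigma$-field notation introduced below):
$$E[X \mid X, Y] = E[X \cdot 1 \mid \sigma(X, Y)] = X \, E[1 \mid \sigma(X, Y)] = X,$$
because $X$ is $\sigma(X, Y)$-measurable and can therefore be pulled out of the conditional expectation.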
In general, formally, suppose $X$ is defined on the probability space $(\Omega, \mathscr{F}, P)$ and $\mathscr{G}$ is a sub-$\sigma$-field of $\mathscr{F}$, then if $X$ is $\mathscr{G}$-measurable, then $E[X|\mathscr{G}] = X$. Heuristically, $X$ is $\mathscr{G}$-measurable means that for any real value $x$, we know whether the event $\{\omega: X(\omega) = x\}$ occurs or not. Consequently, given the "partial information" of $\mathscr{G}$, $X$ is completely determined, whence $E[X|\mathscr{G}] = X$.
Also note that "$E[X|X, Y]$" is a shorthand for $E[X|\mathscr{G}]$ with $\mathscr{G} = \sigma(X, Y)$, which by definition is the smallest $\sigma$-field in $\mathscr{F}$ with respect to which $(X, Y)$ is measurable. If $(X, Y)$ is $\mathscr{G}$-measurable, certainly $X$ is $\mathscr{G}$-measurable (in other words, $\sigma(X) \subset \sigma(X, Y)$ always holds).
| null |
CC BY-SA 4.0
| null |
2023-04-27T17:39:43.337
|
2023-04-27T17:39:43.337
| null | null |
20519
| null |
614317
|
2
| null |
599919
|
0
| null |
I believe the VAE Recognition Model (Encoder) uses a Neural Network to take a data point $x_{i}$ and output the mean and variance vectors for a normal distribution, N: $$q_{\phi} (z_{i} \mid x_{i} ) = N (z_{i} \mid \mu_{\phi} (x_{i}), \Sigma_{\phi} (x_{i}))$$
- The same mapping is used to output the mean and variance for the latents of each data point.
- Since the neural network uses a global function to parameterize each data point, the inference is considered Amortized.
If my understanding is correct, Mean Field and Amortized Variational Inference are not mutually exclusive, but when we use a Neural Network, we cannot enforce the Mean Field Assumption $$q(z_{1} \dots z_{m}) = \prod\limits_{j=1}^{m} q(z_{j})$$
since the neural network is learning the distribution over latent variables for us. TL;DR: The Variational Autoencoder performs Amortized Variational Inference but not necessarily Mean-Field Variational Inference.
| null |
CC BY-SA 4.0
| null |
2023-04-27T17:44:31.303
|
2023-04-29T00:18:50.813
|
2023-04-29T00:18:50.813
|
385271
|
385271
| null |
614318
|
2
| null |
329675
|
0
| null |
Perhaps the easiest possible approximation is to use a Gaussian equivalent for the denominator and divide component wise,
\begin{align}
h(x) = \frac{f(x)}{g(x)} \approx \frac{f(x)}{\mathcal{N}(\mathbf{x}|\mathbf{v}_{eq}, \mathbf{S}_{eq})}
\end{align}
where $(\mathbf{v}_{eq}, \mathbf{S}_{eq})$ are respectively, the mean and covariance of the Gaussian approximation.
Note that the division of two Gaussian has a closed form solution, which, in this case, results in,
\begin{align}
h(x) = \sum_{i=1}^N \alpha_i .\frac{|\mathbf{S}_{eq} - \mathbf{\Sigma}_i|}{|\mathbf{\Sigma}_i|}.\frac{1}{\mathcal{N}(\mathbf{\mu}_i|\mathbf{v}_{eq}, \mathbf{S}_{eq} - \mathbf{\Sigma}_i)}.\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{n,i}, \mathbf{\Sigma}_{n,i})
\end{align}
where $(\mathbf{\mu}_{n,i}, \mathbf{\Sigma}_{n,i})$ are,
\begin{align}
\mathbf{\Sigma}_{n,i} &= (\mathbf{\Sigma}^{-1}_{i} - \mathbf{S}^{-1}_{eq})^{-1} \\
\mathbf{\mu}_{n,i} &= \mathbf{\Sigma}_{n,i}[\mathbf{\Sigma}^{-1}_{i}\mathbf{\mu}_{i} - \mathbf{S}^{-1}_{eq}\mathbf{v}_{eq} ]
\end{align}
Note that for proper probability density functions, it is required that $(\mathbf{S}_{eq} - \mathbf{\Sigma}_i)$ is positive definite.
Alternatively you can approximate the individual fraction $\frac{f(x,i)}{g(x)}$ as a Gaussian by computing first and second moments using Monte-Carlo methods.
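As a rough illustration of that Monte-Carlo route, here is a minimal one-dimensional R sketch based on self-normalised importance sampling; the densities `f_i` and `g` and the broad proposal are assumptions chosen only for the example:
```
set.seed(1)
f_i <- function(x) dnorm(x, mean = 1, sd = 1)  # one mixture component of the numerator
g   <- function(x) dnorm(x, mean = 0, sd = 2)  # Gaussian equivalent of the denominator

q_pdf <- function(x) dnorm(x, 0, 5)            # broad proposal density
x <- rnorm(1e5, 0, 5)                          # draws from the proposal
w <- f_i(x) / (g(x) * q_pdf(x))                # unnormalised weights of the ratio

m1 <- sum(w * x) / sum(w)                      # matched first moment
m2 <- sum(w * x^2) / sum(w)
c(mean = m1, var = m2 - m1^2)                  # Gaussian approximation of f_i / g
```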
| null |
CC BY-SA 4.0
| null |
2023-04-27T17:53:23.030
|
2023-04-27T17:53:23.030
| null | null |
306513
| null |
614319
|
1
| null | null |
0
|
14
|
Let $X$ be a real-valued random variable with density $f(x) = (2\theta x + 1 - \theta) \mathbb{1}(x \in [0,1])$ where $1$ here is the indicator function and $-1 < \theta < 1$. I am trying to derive the most powerful size $\alpha$ hypothesis test for the hypotheses $\begin{cases} H_0 : \theta = 0 \\ H_1 : \theta = 1 \end{cases}$.
I have resorted to trying to derive the likelihood ratio test, which by the Neyman Pearson lemma is the most powerful. The likelihood ratio is $\Lambda = f_1/f_0 = 2x1(x\in [0,1])$. Therefore I'd expect the hypothesis test to be reject $H_0$ if $2X > k$ for some $k$. We can determine $k$ using the size $\alpha$. I get $\alpha = \mathbb{P}_{H_0}(2X > k)\Rightarrow k = \sqrt{1 - \alpha}$.
Now I want to derive a uniformly most powerful test for the hypotheses $\begin{cases} H_0 : \theta = 0 \\ H_1 : \theta > 0\end{cases}$.
The Neyman-Pearson lemma (as proved in our statistics class) does not apply to the case of composite hypotheses. So I think we need to adapt our solution from the simple-hypothesis case to get a uniformly most powerful test for this new case, but I'm unsure precisely of the argument to use.
I've been stuck on a couple of other problems of this type ("given a most powerful test with simple hypotheses, show that if we alter the hypotheses to become composite then the composite test is actually uniformly most powerful as well") and would therefore appreciate some help through this example.
|
Show a composite test is the most powerful after deriving a similar most powerful simple test
|
CC BY-SA 4.0
| null |
2023-04-27T18:02:58.617
|
2023-04-27T21:09:39.020
| null | null |
331303
|
[
"hypothesis-testing",
"statistical-power",
"likelihood-ratio",
"neyman-pearson-lemma"
] |
614320
|
1
| null | null |
1
|
60
|
I was reading [this article](https://www.researchgate.net/publication/276293294_Utilizing_Ordered_Statistics_in_Lifetime_Distributions_Production_A_New_Lifetime_Distribution_and_Applications), which introduces families of distributions for the median of a random sample. However, the writer did not state why one would be interested in modelling the distribution of the median of a random sample.
I found related articles [here](https://www.jstor.org/stable/2236761) and [here](https://www.jstor.org/stable/2311631), but they still did not mention why. I do know that the median, unlike the mean, is insensitive to outliers, and that in the case of the Laplace distribution the median is the MLE, but I'm not sure whether that is related to my question. Any insight why?
|
Why is one interested in modelling the distribution of the median of random samples?
|
CC BY-SA 4.0
| null |
2023-04-27T18:20:31.923
|
2023-04-28T21:24:01.727
|
2023-04-28T05:32:56.617
|
386718
|
386718
|
[
"distributions",
"mathematical-statistics",
"sample",
"median"
] |
614322
|
1
|
614331
| null |
1
|
55
|
If a distribution belongs to a certain class, then the distribution with the largest entropy in that class is typically referred to as the least-informative distribution. To me, this is highly confusing. We have the following definition of self-information (or information content):
$$I(X) = -\text{log}(P(X))$$
More often than not, this is referred to as surprise, but I prefer the term information (after all, it's called information theory not surprise theory).
Entropy, then, is the expected surprise or expected information of a random variable
$$H(X) = \mathbb{E}[I(X)] = \mathbb{E}[-\text{log}(P(X))]$$
So, the distribution which has the maximum entropy in a given class is the distribution that is, on average, "the most surprising", but that should mean, given our setup, that this is the distribution which has the "most information" on average by the definition that we've given of self-information.
EDIT In the [Wikipedia article](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy#Information_entropy_as_a_measure_of_%27uninformativeness%27) on this topic, there is the following phrase:
>
Consider a discrete probability distribution among $m$ mutually
exclusive propositions. The most informative distribution would occur
when one of the propositions was known to be true. In that case, the
information entropy would be equal to zero.
My understanding is that, in information theory, a sure event has zero information content. So, when we say that the maximum entropy distribution is the least informative are we using a definition of information that is the exact opposite of the definition set out in information theory?
All this boils down to the following question: when we say that "a maximum entropy distribution is the least informative distribution in a given class" is this in fact using the opposite definition set out in information theory? Or is the definition of information actually compatible with this and I am missing something?
|
Is the principle of maximum entropy misleading?
|
CC BY-SA 4.0
| null |
2023-04-27T18:26:21.607
|
2023-04-27T20:32:40.950
|
2023-04-27T19:03:28.970
|
332763
|
332763
|
[
"entropy",
"information-theory",
"maximum-entropy"
] |
614323
|
2
| null |
614314
|
1
| null |
The "best" analysis depends on what your question is and how you are thinking about the relationships. For example, what do you think is a (possible) function of what? You state that presence/absence is a predictor of carbon in the soil (presumably a continuous variable). As stated, determining if a continuous variable differs between two groups would be a $t$-test. If you had a bunch of different species that could be present or not, it might be a factorial ANOVA. This is not my field, but I would guess the thinking is something like 'this species generates carbon that then becomes incorporated into the soil (or leaches it out of the soil)'. On the other hand, if you wanted to see if the presence of carbon predicted whether a given species were present, you would probably want logistic regression. In that case, the thinking would be something like 'this species needs carbon to survive, if there is very little in the soil, this species won't be there'.
| null |
CC BY-SA 4.0
| null |
2023-04-27T18:53:57.770
|
2023-04-27T18:53:57.770
| null | null |
7290
| null |
614324
|
1
| null | null |
3
|
354
|
In R, suppose you generate $10^7$ random variates of the standard Cauchy distribution. Then you plot a histogram and density of this simulated data with the actual standard Cauchy density on top of it. For some reason, the charts don't match:
```
library(tibble)
library(dplyr)
library(ggplot2)

n = 10^7
tibble(cauchy_real = rcauchy(n)) %>%
filter(cauchy_real >= -10, cauchy_real <= 10) %>%
ggplot() +
geom_density(aes(x = cauchy_real)) +
geom_function(fun = dcauchy, colour = 'red') +
geom_hline(yintercept = 1/pi)
```
R output:
[](https://i.stack.imgur.com/FRhgG.png)
Observations:
1. We filter the sample generated via `rcauchy` because it can generate some extreme outliers.
2. We plot the horizontal bar at $y = 1/\pi$ because, generally, in a Cauchy distribution, the peak of the distribution is at $y = 1/(\pi \gamma)$.
3. This discrepancy doesn't go away even if you increase `n`.
Why does this happen?
|
Difference between charts of rcauchy(10000) and geom_function(fun = dcauchy)
|
CC BY-SA 4.0
| null |
2023-04-27T19:09:12.893
|
2023-04-28T16:34:48.707
| null | null |
163908
|
[
"r",
"distributions"
] |
614327
|
1
|
614337
| null |
1
|
44
|
Suppose I have the following hierarchical distribution:
$$\mathbf{y} \sim \text{Normal}(\mathbf{X}\boldsymbol{\beta} + \mathbf{K}\boldsymbol{\alpha}, \sigma^2\boldsymbol{\Sigma}_y),$$
$$\boldsymbol{\alpha} \sim \text{Normal}(\boldsymbol{0}, \sigma^2_\alpha\mathbf{I}),$$
$$\boldsymbol{\beta} \sim \text{Normal}(\boldsymbol{\mu}_\beta, \boldsymbol{\Sigma}_\beta),$$
$$\sigma^2_\alpha \sim \text{IG}(\alpha_\alpha, \beta_\alpha),$$
$$\sigma^2 \sim \text{IG}(\alpha_\sigma, \beta_\sigma),$$
where $\boldsymbol{\Sigma}_y$ is a diagonal matrix, $\mathbf{X}$ and $\mathbf{K}$ are known, and $\boldsymbol{\mu}_\beta$, $\boldsymbol{\Sigma}_\beta$, $\alpha_\alpha$, $\beta_\alpha$, $\alpha_\sigma$, and $\beta_\sigma$ are fixed hyperparameters. Further, suppose that $\mathbf{K}$ acts as a mapping matrix, where it selects two values in $\boldsymbol{\alpha}$ for each response $y_i$ (i.e., $y_i$ may have mean $\mathbf{X}_i^T\boldsymbol{\beta} + \alpha_3 + \alpha_8$).
I would like to marginalize over $\boldsymbol{\alpha}$. Is the resulting hierarchical model simply
$$\mathbf{y} \sim \text{Normal}(\mathbf{X}\boldsymbol{\beta}, 2\sigma^2_\alpha\mathbf{I} + \sigma^2\boldsymbol{\Sigma}_y),$$
$$\boldsymbol{\beta} \sim \text{Normal}(\boldsymbol{\mu}_\beta, \boldsymbol{\Sigma}_\beta),$$
$$\sigma^2_\alpha \sim \text{IG}(\alpha_\alpha, \beta_\alpha),$$
$$\sigma^2 \sim \text{IG}(\alpha_\sigma, \beta_\sigma)?$$
This seems a bit too trivial, so I'm wondering if I messed up somewhere. My thinking is that, regardless of which response we have, there will be an addition of two normally distributed random variables with variance $\sigma^2_\alpha$.
|
How can I marginalize $\boldsymbol{\alpha}$ out of my hierarchical model?
|
CC BY-SA 4.0
| null |
2023-04-27T19:29:50.013
|
2023-04-27T20:56:17.473
| null | null |
257939
|
[
"hierarchical-bayesian",
"marginal-distribution"
] |
614328
|
2
| null |
614324
|
16
| null |
There is nothing surprising here since you're comparing a kernel density estimate based on a sample from a [truncated](https://en.wikipedia.org/wiki/Truncated_distribution) Cauchy distribution with the theoretical pdf of an untruncated Cauchy.
If what you're trying to do is to plot a kernel density estimate based on a sample from an untruncated Cauchy over a range not including the whole sample and compare that to the theoretical pdf, this can be done straightforwardly in base R:
```
n <- 10^6
x <- rcauchy(n)
f <- density(x, from = -10, to = 10)
plot(f, col="red", main="")
curve(dcauchy, add=TRUE, lty=2)
```
From what I can tell (but I'm not an expert on tidyverse), there is no simple way to specify this range in the [geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html) function in the tidyverse similar to how the `to` and `from` arguments of the `density` function are used in the above code. Adding `coord_cartesian(xlim = c(-10, 10))` (as suggested by @LukasLohse) and something like `n = 1e+6` in the calls to `geom_density` and `geom_function` would potentially work but would make the code painfully slow and memory intensive.

| null |
CC BY-SA 4.0
| null |
2023-04-27T19:46:55.957
|
2023-04-28T12:57:42.830
|
2023-04-28T12:57:42.830
|
77222
|
77222
| null |
614329
|
1
|
614400
| null |
0
|
33
|
This follows up on a previous [question][1], where [EdM][2] suggested how to do this in R.
So far, this is what I have done.
`dataset6` is my complete dataset, which includes all my genes and the scores of the two modules, which I have already calculated based on the regression coefficients obtained earlier.
```
dataset6 = inner_join(surv_obj1,dataset5) %>% cbind(M1[c(32)]) %>% cbind(M2[c(42)])
dim(dataset6)
```
## Modules: all the genes and the module scores
```
M1 <- read.csv("Module_turq.txt",sep = "\t")
names(M1)[32] = "turquoise"
M2 <- read.csv("module_red.txt",sep = "\t")
names(M2)[42] = "red"
```
As an example the M1 looks something like this
```
head(M1)
ENSG00000006659 ENSG00000008853 ENSG00000070808 ENSG00000084734 ENSG00000086570 ENSG00000091490 ENSG00000100228 ENSG00000105672 ENSG00000106772 ENSG00000108179
1 -0.6790895 9.838845 3.8723699 2.0334803 7.408578 11.63426 5.388844 3.403330 8.402316 12.10071
2 -0.7223128 9.216762 1.9638908 0.2942262 6.407777 11.65133 4.751099 2.862639 8.371253 10.56136
3 -0.6791654 8.901460 1.9595495 1.1313903 8.457811 11.66191 4.814653 2.939381 9.843061 11.04008
4 -0.7388880 9.249982 2.8520751 0.9272291 7.241343 11.37203 4.344087 3.680633 9.009311 11.64196
5 -0.6837360 10.312524 1.3708872 1.3626295 7.800494 11.00893 5.046930 3.630544 7.975663 11.56495
6 0.2083037 8.045902 0.9177488 0.7799186 9.234063 11.16630 5.218017 4.294078 6.153577 11.39511
ENSG00000116985 ENSG00000124196 ENSG00000125740 ENSG00000127412 ENSG00000127946 ENSG00000128578 ENSG00000138622 ENSG00000140836 ENSG00000149635 ENSG00000153208
1 8.337326 0.6028046 14.95574 3.676078 12.195097 6.266345 0.7590498 10.134101 1.07506924 9.153512
2 7.050741 0.1714888 13.19622 3.044690 11.475854 7.258213 0.9300694 11.619774 0.38973280 10.189657
3 7.675785 0.8536641 15.04202 3.275292 11.331764 6.495812 3.3460528 10.565562 -0.06284153 9.095599
4 8.114534 0.4120677 13.72461 2.954629 11.965255 6.803049 0.6115331 9.842173 0.67297629 9.193858
5 8.852637 0.2637794 14.80603 3.040592 11.733995 6.085176 1.3058149 11.195194 -0.07398686 8.680911
6 8.150247 1.5651071 13.70370 3.678386 9.989683 6.363244 1.2811180 10.964695 0.88423334 10.820407
ENSG00000159228 ENSG00000160999 ENSG00000164086 ENSG00000166866 ENSG00000166922 ENSG00000171885 ENSG00000177138 ENSG00000185338 ENSG00000186510 ENSG00000205856
1 9.890364 8.853479 10.935791 3.737294 4.531792 0.9634707 0.2378286 6.563547 2.408185 1.1550716
2 7.085259 7.590380 9.670493 4.879658 4.407784 0.4286926 2.2786092 4.555640 2.336665 0.6929619
3 8.294819 8.290893 9.657836 5.134109 3.646726 2.3294132 2.1727927 5.381531 3.271168 0.8168658
4 9.153176 7.964677 10.564849 3.630441 4.645231 1.3016413 0.5430753 5.313173 2.586458 1.1699673
5 7.814855 8.173153 11.026781 4.127289 4.826615 0.4942343 0.6896831 6.481789 1.758393 3.5270260
6 6.933262 8.273962 9.209629 4.554088 4.788554 2.2729740 0.9930626 6.183071 2.182287 0.3210672
ENSG00000213214 turquoise
1 0.9801103 8.135249
2 0.8521615 6.033414
3 1.5692256 6.050110
4 0.4066649 7.211567
5 2.3361480 6.996665
6 0.4935823 5.172616
```
## Extract colnames to create formula
```
# Generate formula string
predictor_names1 <- colnames(M1) # exclude the first two columns (time and status)
formula_str1 <- paste("Surv(OS_MONTHS, Status) ~", paste(predictor_names1, collapse = " + "))
# Fit the Cox proportional hazards regression model
my_formula1 <- as.formula(formula_str1)
predictor_names2 <- colnames(M2) # exclude the first two columns (time and status)
formula_str2 <- paste("Surv(OS_MONTHS, Status) ~", paste(predictor_names2, collapse = " + "))
# Fit the Cox proportional hazards regression model
my_formula2 <- as.formula(formula_str2)
```
## Running bootstrap
```
for (i in 1:1000) {
  # Generate a bootstrap sample
  bootSample <- dataset6[sample.int(nrow(dataset6), replace = TRUE), ]
  # Calculate the IQR for each module
  IQR1 <- IQR(M1$turquoise)
  IQR2 <- IQR(M2$red)
  # Fit the Cox proportional hazards regression model for each module
  bootFit1 <- coxph(my_formula1, data = bootSample)
  bootFit2 <- coxph(my_formula2, data = bootSample)
  # Calculate the logHR difference between the two modules
  logHR1 <- coef(bootFit1)[[2]] * IQR1
  logHR2 <- coef(bootFit2)[[2]] * IQR2
  logHRdiff <- logHR2 - logHR1
  # Store the result
  if (i == 1) {
    resultStore <- logHRdiff
  } else {
    resultStore <- c(resultStore, logHRdiff)
  }
}
# Sort the results
resultStore <- resultStore[order(resultStore)]
# Calculate confidence limits for logHR difference
conf_limits <- quantile(resultStore, probs = c(0.025, 0.975))
# Print the confidence limits
conf_limits
```
In the above code, to find the IQR for both modules I used the module scores, which are `turquoise` for the first module and `red` for the second module.
Before that, I also tried to find the IQR of each column by running this:
```
IQR1 <- M1 %>%
select_if(is.numeric) %>%
summarise(across(everything(), IQR))
IQR2 <- M2 %>%
select_if(is.numeric) %>%
summarise(across(everything(), IQR))
```
It didn't work.
As suggested in my previous question:
>
if you did 999 bootstrap samples then the 25th and 975th in order give
you 95% confidence limits for the difference between the modules. If a
difference of 0 is outside those confidence limits, then you have
evidence that one of the modules is superior.
To do the above, I ran the following (also shown in the code above):
```
# Calculate confidence limits for logHR difference
conf_limits <- quantile(resultStore, probs = c(0.025, 0.975))
# Print the confidence limits
conf_limits
```
## Output
```
2.5% 97.5%
-1.814380 2.446325
```
So what would be the interpretation of this output? Since the interval contains 0, does that mean the two models are basically the same or similar?
## Coxph model output
```
Call:
coxph(formula = my_formula1, data = bootSample)
coef exp(coef) se(coef) z p
ENSG00000006659 -0.08480 0.91869 0.18335 -0.463 0.643702
ENSG00000008853 -0.71237 0.49048 0.45977 -1.549 0.121284
ENSG00000070808 0.22058 1.24680 0.18767 1.175 0.239853
ENSG00000084734 0.15579 1.16858 0.25058 0.622 0.534113
ENSG00000086570 -0.78152 0.45771 0.33290 -2.348 0.018894
ENSG00000091490 0.30301 1.35392 0.26720 1.134 0.256792
ENSG00000100228 0.82649 2.28528 0.32949 2.508 0.012128
ENSG00000105672 -0.57979 0.56001 0.22165 -2.616 0.008901
ENSG00000106772 0.21472 1.23952 0.21201 1.013 0.311153
ENSG00000108179 0.59817 1.81879 0.42130 1.420 0.155654
ENSG00000116985 -0.16206 0.85039 0.28520 -0.568 0.569867
ENSG00000124196 0.69766 2.00905 0.33448 2.086 0.036994
ENSG00000125740 -0.26467 0.76746 0.21396 -1.237 0.216094
ENSG00000127412 -0.09850 0.90620 0.22435 -0.439 0.660640
ENSG00000127946 0.33344 1.39575 0.26910 1.239 0.215323
ENSG00000128578 -0.33829 0.71299 0.21157 -1.599 0.109822
ENSG00000138622 0.02457 1.02488 0.37422 0.066 0.947643
ENSG00000140836 0.03179 1.03230 0.28102 0.113 0.909943
ENSG00000149635 0.71572 2.04565 0.23854 3.000 0.002696
ENSG00000153208 0.22391 1.25095 0.20699 1.082 0.279368
ENSG00000159228 0.18013 1.19737 0.22749 0.792 0.428466
ENSG00000160999 -0.56201 0.57006 0.28997 -1.938 0.052600
ENSG00000164086 1.17011 3.22234 0.44175 2.649 0.008078
ENSG00000166866 -0.34015 0.71166 0.28358 -1.200 0.230328
ENSG00000166922 0.48040 1.61673 0.30426 1.579 0.114354
ENSG00000171885 -0.37376 0.68814 0.27954 -1.337 0.181199
ENSG00000177138 0.11922 1.12661 0.30421 0.392 0.695140
ENSG00000185338 0.76505 2.14910 0.22498 3.400 0.000673
ENSG00000186510 0.47312 1.60500 0.23003 2.057 0.039711
ENSG00000205856 0.22150 1.24794 0.32990 0.671 0.501960
ENSG00000213214 0.06063 1.06250 0.22885 0.265 0.791064
turquoise NA NA 0.00000 NA NA
Likelihood ratio test=152.9 on 31 df, p=< 2.2e-16
n= 146, number of events= 92
```
Why is the `turquoise` coefficient NA?
[1]: [How to compare two signature for prediction models?](https://stats.stackexchange.com/questions/613879/how-to-compare-two-signature-for-prediction-models)
[2]: [https://stats.stackexchange.com/users/28500/edm](https://stats.stackexchange.com/users/28500/edm)
|
Bootstrapping method survival model interpretation
|
CC BY-SA 4.0
| null |
2023-04-27T20:13:41.147
|
2023-04-28T13:29:06.857
|
2023-04-27T20:29:15.960
|
334559
|
334559
|
[
"survival"
] |
614330
|
1
| null | null |
1
|
32
|
I am trying to understand the proof of Theorem 16.14 of Probability Theory by A. Klenke (3rd version) about the Levy-Khinchin formula.
I would like to know how to prove this:
$$E[X]=\int x e^{-v(\mathcal{R})}\sum_{n=0}^{\infty}\frac{v^{*n}(dx)}{n!}=\int xv(dx)$$
Where:
- $X$ is distributed as a Compound Poisson Distribution with intensity $v$
- $v$ is a $\sigma$-finite measure on $(0,\infty)$.
My attempt
What am I missing in the following steps?
$$\int x \textrm{CPoi}_{v}(dx)=
\int x e^{-v(\mathcal{R})}\sum_{n=0}^{\infty}\frac{v^{*n}(dx)}{n!}
\\=e^{-v(\mathcal{R})}\left[\int x v^{*0}(dx)+\int x v(dx)+\int x \frac{1}{2!}(v*v)(dx)+\dots\right]
\\=e^{-v(\mathcal{R})}\left[0+\int x v(dx)+\iint(s+z)\frac{1}{2!}v(ds)v(dz)+\dots \right]
\\=e^{-v(\mathcal{R})}\left[0+\int x v(dx)+\frac{1}{2!}v(\mathcal{R})\int(s)v(ds)+\dots\right]
\\=e^{-v(\mathcal{R})}\int x v(dx)\left[1+\frac{1}{2!}v(\mathcal{R})+\dots \right ]$$
Doubts
- Is my way of calculating the integral involving the convolution correct?
- I was thinking about using the Taylor expansion of $e^x$ with the terms in the square brackets but it doesn't seem to work.
Edit
I think I found the error in the convolution integral. By exploiting the linearity of the integral, I did not see that for $n=2$ I have 2 identical terms, for $n=3$ I have 3 and so on. This is key to obtain the right factorial in the denominator. Thus I could apply the Taylor expansion above to get the desired result.
$$\int x \textrm{CPoi}_{v}(dx)=
\int x e^{-v(\mathcal{R})}\sum_{n=0}^{\infty}\frac{v^{*n}(dx)}{n!}
\\=e^{-v(\mathcal{R})}\left[\int x v^{*0}(dx)+\int x v(dx)+\int x \frac{1}{2!}(v*v)(dx)+\dots\right]
\\=e^{-v(\mathcal{R})}\left[0+\int x v(dx)+\iint(s+z)\frac{1}{2!}v(ds)v(dz)+\dots \right]
\\=e^{-v(\mathcal{R})}\left[0+\int x v(dx)+v(\mathcal{R})\int(s)v(ds)+\dots\right]
\\=e^{-v(\mathcal{R})}\int x v(dx)\left[1+v(\mathcal{R})+\dots \right ]$$
Thanks for the help.
|
Expectation of a Compound Poisson Distribution
|
CC BY-SA 4.0
| null |
2023-04-27T20:32:02.193
|
2023-04-30T16:12:19.997
|
2023-04-30T16:12:19.997
|
141594
|
141594
|
[
"expected-value",
"integral",
"compound-distributions"
] |
614331
|
2
| null |
614322
|
1
| null |
The "information" referred to in the definition of entropy is not the information contained in the distribution, but the information that is contained in an observation from the distribution relative to the distribution, writing informally. Consider the maximum entropy distribution on $(0,1)$, which is the Uniform distribution, and compare it to, say, $x \sim \text{Beta}(100,100)$ distribution. With the latter distribution, we know the random variable will be very close to $0.5$ with considerable confidence! So... observing $x$ tells us very little because we already know pretty much where in $(0,1)$ $x$ will be. But we have no idea where in $(0,1)$ the r.v. will be if we only have the Uniform distribution to hand, so observing $x$ updates our information quite a lot.
Entropy is related to "surprise" in that the expected increase in information due to observing something is maximized for the maximum entropy distribution. In an anthropomorphic sense, the more we've learned from a single observation, the more likely we are to have been surprised by it.
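A quick numerical check of the comparison above (a sketch in R, computing the differential entropy $-\int_0^1 f \log f \, dx$ by quadrature):
```
entropy <- function(dens) {
  integrate(function(x) {
    fx <- dens(x)
    ifelse(fx > 0, -fx * log(fx), 0)
  }, 0, 1)$value
}
entropy(dunif)                           # 0: the maximum-entropy distribution on (0, 1)
entropy(function(x) dbeta(x, 100, 100))  # clearly negative: draws are far less "surprising"
```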
| null |
CC BY-SA 4.0
| null |
2023-04-27T20:32:40.950
|
2023-04-27T20:32:40.950
| null | null |
7555
| null |
614332
|
1
| null | null |
1
|
8
|
I have 24hrs of temperature measurements taken in one minute intervals for a set of individuals. Thus, there is a non-independent distribution of temperatures for each individual as each measurement is correlated with the previous and subsequent measurement within the individual (repeated measures).
I want to compare the overall distribution of temperatures (at the population level) to a fixed reference value.
A one-way Z-test seems like the most appropriate approach for comparing a distribution to a fixed value, but how can I account for the repeated measures design?
In case it is useful - I am performing the analysis in R.
|
Comparing multiple distributions from a repeated measures design to a reference value
|
CC BY-SA 4.0
| null |
2023-04-27T20:36:20.493
|
2023-04-27T20:44:38.783
|
2023-04-27T20:44:38.783
|
71306
|
71306
|
[
"r",
"repeated-measures",
"non-independent",
"z-test"
] |
614333
|
1
| null | null |
0
|
19
|
I am not used to working with machine learning models, and am currently sitting with an issue I hope you can help me with.
I am working on a multi-class classification problem, where I try to predict whether bitcoin will 'increase', 'decrease' or stay 'neutral' (neutral is a change in price within -0.5% to 0.5%). The data I use is based on titles and subtitles from news articles from various media sources. This is definitely a hard machine learning task and I am not expecting a high accuracy.
I'm doing data preprocessing where I do a number of NLP tasks such as removing stopwords, stemming, etc. I am creating a CountVectorizer with n-grams, which results in a sparse matrix that I train the model on. I end up with 1,095 instances. I am training Random Forest, KNN, logistic regression and MLP models, and I am using GridSearchCV on all the different models.
Most of the models I train are overfitted on the training data and get very bad results on the test data.
I am trying to prevent the models from overfitting, but no matter which hyperparameters I use, they seem to overfit.
Can I lower the accuracy of the model when predicting on the training data? Or can I simply conclude that the overfitting is caused by the data and the target variable not having any correlation? Or would that rather show up as low results on both the training and the test data?
|
Huge overfit on prediction model -due to data with low predictive power or can this be fixed? (Python)
|
CC BY-SA 4.0
| null |
2023-04-27T20:40:51.990
|
2023-04-27T20:40:51.990
| null | null |
386728
|
[
"python",
"overfitting",
"multi-class",
"language-models"
] |
614334
|
1
| null | null |
0
|
30
|
I have weekly sales data for a restaurant, with 113 observations, and every year in January, sales drop significantly and then recover after two or four weeks. I applied a SARIMA model with auto.arima in R, but when I evaluate my model, I realize that the residuals are not constant and not normally distributed. A Box-Cox transformation was already used (lambda was 1.99) but it did not improve the model.
Upon reviewing a time series text, I learned that I need to create a dummy variable. I looked at the observations where the pattern breaks in January and classified them as "1", with the others as zero. I estimated my model again and it resulted in a SARIMA with external regressors (`xreg`), but when I reviewed the residuals, I still have extremely high residuals in those periods. Residuals used to be -10000, now they are mostly -5000, so they were reduced. Do you have any suggestions? What types of interventions exist? I placed "1" for the first few weeks of January, then "0" when I see that sales have reached their normal level.
To test for stationarity, I used the Augmented Dickey-Fuller test and found that the data was not stationary. Therefore, I differenced the time series and checked for stationarity again using the ADF test.
Next, I decomposed the time series into its seasonal, trend, and residual components and plotted the results. I then fit a SARIMA model using auto.arima and checked its diagnostic tests for accuracy. Additionally, I performed a Box-Ljung test to check for autocorrelation in the residuals and a Jarque-Bera test for normality. Box-Ljung was fine, but Jarque-Bera was not.
I then added a dummy variable for intervention analysis to account for outliers or extreme values in the data. I fit a second SARIMA model using the dummy variable and checked its diagnostic tests for accuracy. However, I found that the variance was still high, and upon further investigation, I found that the highest residuals corresponded to the same period that I already included in the dummy variable for intervention. What else could I do? Any suggestions? Thanks!
|
How to apply intervention analysis to time series?
|
CC BY-SA 4.0
| null |
2023-04-27T20:43:33.627
|
2023-04-27T20:43:33.627
| null | null |
238029
|
[
"r",
"time-series",
"intervention-analysis"
] |
614335
|
2
| null |
18418
|
0
| null |
I am answering this very old question just so it has an answer.
## Formula and numeric answer
$$ \prod_{i=0}^{1999} \frac{9955-i}{10000-i} \approx 0.00004 $$
## Detailed Explanation
Let's think about what it would look like for you to take this random sample and not get any of the people you are interested in.
For brevity, let's call the subpopulation of 45 people you are interested in $S$.
When you choose your first person, you fail to get somebody from $S$. There are $10000$ people in total and $9955$ are not in $S$, so the probability of doing this is $9955/10000$.
Now you make the next random choice, but now there are only 9999 people overall; you still have all $45$ of the people in $S$, and the other $9954$ are people you are not interested in. You make your choice, and once again get somebody you are not interested in; the probability of this is $9954/9999$. (You have found the probability of failing to select someone from $S$ given that you have had one previous attempt which was a failure.)
...
Repeating this, eventually you get to the last, 2000th selection.
By this point there are $8001$ people in the whole population, and still $45$ in $S$ as you have failed to get any so far. The probability of failing to get any of $S$ on this final selection is $7956/8001$.
Finally, you want the probability of all of these things happening - i.e. failing on each selection.
So you would multiply each of the probabilities (look up conditional probability). We can write this as the formula at the top.
## Code
For reference, here is the Python code I used to calculate the product.
```
product = 1
numerator = 9955
denom = 10000
for i in range(0, 2000):
    product *= numerator / denom
    numerator -= 1
    denom -= 1
print(product)
```
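The same number can be cross-checked with the hypergeometric distribution (here in R), which describes exactly this sampling-without-replacement setup:
```
# Probability of drawing 0 of the 45 people of interest in a sample of 2000 from 10000
dhyper(0, m = 45, n = 9955, k = 2000)  # roughly 4e-05, matching the product above
```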
| null |
CC BY-SA 4.0
| null |
2023-04-27T20:44:18.920
|
2023-04-27T20:44:18.920
| null | null |
358991
| null |
614337
|
2
| null |
614327
|
2
| null |
Your solution does not account for the fact that responses $y_i$ and $y_j$ will be positively correlated if they depend on the same value $\alpha_k$.
The correct variance for $\boldsymbol{y}$ in the marginalised model is $\sigma^2_\alpha\mathbf{K}\mathbf{K}^T + \sigma^2\boldsymbol{\Sigma}_y$.
As the design matrix $\mathbf{K}$ always picks out two values from $\boldsymbol{\alpha}$, the diagonal terms in $\mathbf{K}\mathbf{K}^T$ are all equal to 2.
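A tiny numerical illustration of that structure (a sketch with an assumed $3 \times 4$ design matrix $\mathbf{K}$ whose rows each pick out two entries of $\boldsymbol{\alpha}$):
```
K <- rbind(c(1, 0, 1, 0),
           c(1, 0, 0, 1),
           c(0, 1, 0, 1))
K %*% t(K)  # diagonal entries are all 2; off-diagonals count the alphas shared by y_i and y_j
```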
| null |
CC BY-SA 4.0
| null |
2023-04-27T20:56:17.473
|
2023-04-27T20:56:17.473
| null | null |
238285
| null |
614338
|
2
| null |
614319
|
0
| null |
Your MPT is fine, with one small correction: under $H_0$, we have $X \sim \mathrm{Unif}(0,1)$, so $\alpha=\mathbb{P}_{H_0}(2X > k)=1-k/2$, which implies that $k=2(1-\alpha)$.
To find a UMPT, start by picking some (arbitrary) value $\theta_1>0$ and construct the MPT for $H_0:\theta=0$ vs. $H_1':\theta=\theta_1$. If this MPT is the same for any value of $\theta_1$, then it is the UMPT when the alternative is the composite hypothesis $H_1:\theta>0$.
| null |
CC BY-SA 4.0
| null |
2023-04-27T21:09:39.020
|
2023-04-27T21:09:39.020
| null | null |
238285
| null |
614339
|
1
| null | null |
1
|
21
|
I'm trying to figure out the best way to analyze a dataset on 48 youth who were part of an 8-week summer program. These youth were given time-outs and then their behaviors were recorded during time-out as counts. The number of time-outs differs across children (some have 1 time-out, others have 25 time-outs). The dataframe is in LONG format (i.e., each time-out has its own line). I'm interested in examining whether affect predicts behavioral counts during time-out. So:
---
DV = behavioral counts
IV = affect, tx week, meds
Offset = Log of # of time-outs in a day
---
I originally ran this in SAS using PROC GENMOD:
```
PROC GENMOD DATA = DATA;
Class Meds; *yes or no
Model behavior = meds time x1 x2 x3 / link = log dist = zinb offset = log_numTOs
ZEROMODEL = meds time x1 x2 x3;
RUN;
```
I ran this and got results that made sense but then realized that REPEATED statements don't work with zero-inflated models, nor are RANDOM statements recognized in this model. I'm not inherently interested in within-person effects, just interested in whether affect predicts behavioral counts. I assume it's important to have the REPEATED effect to note that observations are correlated within child. So, I think I've analyzed this incorrectly.
This brings me to my questions:
- Am I correct that this has been analyzed incorrectly?
- I'm seeing that NLMIXED is another way to go, but what do others recommend? I'm not familiar with NLMIXED or R all that much but open to any ideas.
Thanks much.
|
Options for repeated measures ZINB
|
CC BY-SA 4.0
| null |
2023-04-27T21:12:22.630
|
2023-04-27T22:24:05.827
| null | null |
351054
|
[
"panel-data",
"zero-inflation"
] |
614340
|
1
| null | null |
0
|
26
|
[Link to data](https://drive.google.com/file/d/1Y9hLXbkFdw-qf5ofZQXWgugjuXHhgYxN/view?usp=sharing)
Background story: I'm trying to find out how being a grandparent affects working hours. The endogenous variable is `grandparent` (1 if the individual is a grandparent and 0 if not), and its instrument is `child1_female` (1 if the individual's first child is female). The outcome is log(`work_hour`), weekly working hours. I'm trying to use instrumental 2SLS with year fixed effects added, controlling for age. Here is my code:
```
iv <- feols(log(workhour)~age|year|grandparent~child1_female, sample, panel.id=~id+year)
```
However, the coefficient for `grandparent` is horrendously large: $-1.818$.
I'm trying to find out why. Please help me explain this given the context from the data. How can I locate the problem in the data?
|
Why is the coefficient very large (>1) when log(outcome)?
|
CC BY-SA 4.0
| null |
2023-04-27T21:31:13.943
|
2023-04-27T21:31:13.943
| null | null |
336679
|
[
"fixed-effects-model"
] |
614341
|
2
| null |
614339
|
1
| null |
You didn't say whether this was a randomized (cross-over?) experiment. If so, the model you describe makes sense.
In that case, an obvious implementation option (as long as you don't mind Bayes) is the `brms` R package, that lets you pretty flexibly combine things like random effects and [zero-inflated models](https://cran.r-project.org/web/packages/brms/vignettes/brms_families.html#zero-inflated-and-hurdle-models). Something like a [model specification](http://paul-buerkner.github.io/brms/reference/brmsformula.html) like:
```
library(brms)
brm(data = mydata,
    formula = bf(behavior ~ 1 + (1|p|subject) + meds + time + x1 + x2 + x3 + offset(log_numTOs),
                 zi ~ 1 + (1|p|subject) + meds + time + x1 + x2 + x3,
                 family = zero_inflated_poisson(link = "log", link_zi = "logit")),
    prior = prior(normal(0, 1), class = b) +
      prior(normal(0, 3.14), class = Intercept, dpar = zi) +
      prior(normal(0, 1), class = b, dpar = zi))
```
with suitable priors might be an option. The priors I wrote are somewhat vague, and you'd also want to do something for the intercept of the main model, but what makes sense there really depends on the scale you are working on.
If this is not a randomized experiment, you'd need to also model/account for the process of being assigned to interventions (see the [causal inference literature](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)).
| null |
CC BY-SA 4.0
| null |
2023-04-27T21:35:13.467
|
2023-04-27T21:35:13.467
| null | null |
86652
| null |
614343
|
1
| null | null |
1
|
55
|
I have synthetic data which is sampled from a non-central chi distribution (similar to what is obtained experimentally). I am fitting a non-linear model to this data to extract three parameters of interest. I have tried both least squares and maximum likelihood, and based on my simulations I have been getting slightly higher mean-squared error from the maximum likelihood method over the least squares method. I find this result very surprising, since MLE has a bunch of nice properties that least squares does not.
(I would provide the code to reproduce it but it's quite lengthy due to the model / synthetic data, but I could work on getting a minimum reproducible example if need be.)
My question: Does this make sense? Most results I've seen online show that MLE is more efficient (i.e., lower MSE) than least squares, except under certain conditions where they're equivalent. Those conditions are not fulfilled here. In order to maximize the likelihood (or minimize least squares) I am using the basinhopping routine in `scipy.optimize`. I have checked extensively and both methods are converging to what I believe is a global optima. Any other ideas?
|
Least squares more efficient than maximum likelihood?
|
CC BY-SA 4.0
| null |
2023-04-27T22:08:31.257
|
2023-04-28T02:08:11.453
|
2023-04-28T02:08:11.453
|
173082
|
60403
|
[
"maximum-likelihood",
"least-squares",
"estimators"
] |
614344
|
1
| null | null |
1
|
12
|
doing a research project and have been trying to use combination of google/chatgpt but am still not sure. the first research question is testing whether or not kids have increased in empathy following an intervention. it would be
- one group (i.e. all participants), repeated measures, categorical data (before the intervention they choose between options A and B, then intervention happens, then they choose again, if statistically significant increase in people choosing option A, then null hypothesis is rejected).
there are technically three groups in the experiment, as the second research question is looking at differences between each group in intervention efficacy, and I know that would require mixed design ANOVA - but since for this research question I'm just seeing if, on the whole, there is a difference, I don't think it's relevant that there are three different groups.
I've found a range of answers - McNemar's, Sign Test etc. But am not sure which is most relevant?
Thank you so much!!!!!
|
What statistical test to use for repeated measures, categorical data?
|
CC BY-SA 4.0
| null |
2023-04-27T22:19:30.177
|
2023-04-27T22:19:30.177
| null | null |
386731
|
[
"categorical-data",
"repeated-measures"
] |
614345
|
2
| null |
614339
|
0
| null |
`brms` is a fine solution. If you want to do something more NLMIXED-like, you can use `glmmTMB`. Advantages: don't have to mess around with priors, MCMC diagnostics, etc. Disadvantages: if your model is too complex for the data to handle, you have fewer options for using priors to regularize the solution (i.e., you'll probably have to simplify the model by throwing things away rather than using a continuous prior to downweight unlikely parameter combinations).
```
glmmTMB(data = mydata,
        formula = behavior ~ 1 + meds + time + x1 + x2 + x3 + offset(log_numTOs)
          + (1 + meds + time | subject),
        ziformula = ~ 1 + meds + time + x1 + x2 + x3,
        family = nbinom2)
```
The `(1|p|subject)` terms in the `brms` formula are doing something that `glmmTMB` can't (currently) do, i.e. modeling correlations between the random intercept in the conditional (count) model and the random intercept in the zero-inflation model (e.g., do children who have higher behavioral counts than the population average also have higher (or lower) zero-inflation probabilities than average?).
I've made a few other modeling choices here:
- assuming that meds can vary within children (i.e., individual children go on and off meds over the course of the observation period), I've allowed for a random-slope model where, in addition to allowing each child to vary in their baseline expected count, the model also allows for different time trends (the `time` in `(1 + meds + time | subject)`) and different responses to medication for each child
- I did not add a subject-level random effect to the zero-inflation component of the model
In general, it is a good idea to identify the maximal model, i.e. the model that allows all possibly observable effects (i.e., all effects that varied within children over the observation period) to vary among children, and for all of those random effects to be correlated with each other. If we assume temporarily that `x1`, `x2`, `x3` vary only across subjects (i.e. they're constant for each child over the observation period), then the maximal model is that the intercept (`1`), effect of medication (`med`), and effect of time (`time`) of both the count model and the zero-inflation model all vary across children. This would mean each child had a vector of 6 latent variables, all potentially correlated with each other, so estimating a 6x6 covariance matrix. (In `glmmTMB` you have to model this as two separate 3x3 covariance matrices, with the count effects independent of the z-i effects; in `brms` I think you would write this as `(1+med+time|p|subject)` in both formulas.)
Unless you have a very large data set, this complex a model isn't likely to work out of the box. If you're using `brms` you can choose between simplifying the model (leaving out some random effects terms, using `||` instead of `|` to specify uncorrelated random effects, etc.) or using narrow/more informative priors to keep the model out of trouble; if using `glmmTMB` simplification is your only option.
I second Björn's caution about causal effects if (for example) meds are given non-randomly, to children who show more severe behavioural problems ...
| null |
CC BY-SA 4.0
| null |
2023-04-27T22:24:05.827
|
2023-04-27T22:24:05.827
| null | null |
2126
| null |
614346
|
1
|
614408
| null |
2
|
28
|
I am working with a colleague's data from a research project comparing two groups of learners on a test of language ability. Data are repeated measures, between-subjects. Participants in each group (M or C) took the same test at time A and later at time B.
Possible scores on tests could be 0-5, but in reality ranged from 2-5. Scores had to be one of 2, 3, 4, or 5 (which means this is probably more appropriately an ordinal variable, but that's a separate issue.)
Our goal is to test whether there are significant differences between the groups on learning, as measured by any gains on the test score time A to time B. As such I fit a multilevel regression model with an interaction term between time and group, with random intercepts for subjects.
Below I simulate our actual data with the same number of participants per group and approximate probabilities of the score distributions per group / time. The result is a very similar distribution of scores for the two groups and a similar model with significant interaction in the same direction as our data.
```
library(tibble)    # for tibble()
library(lmerTest)  # loads lme4 and provides the p-values shown below
library(emmeans)   # for the pairwise comparisons further down

set.seed(100)
group <- c(rep('M', 18), rep('C', 23), rep('M', 18), rep('C', 23))
time <- c(rep('A', 18), rep('A', 23), rep('B', 18), rep('B', 23))
subject <- c(1:18, 19:41, 1:18, 19:41)
MA <- sample(c(2,3,4,5), prob = c(.43,.47,.08,0), size = 18, replace = T)
MB <- sample(c(2,3,4,5), prob = c(0,.17,.69,.13), size = 18, replace = T)
CA <- sample(c(2,3,4,5), prob = c(.28,.39,.33,0), size = 23, replace = T)
CB <- sample(c(2,3,4,5), prob = c(0,.23,.79,0), size = 23, replace = T)
sim_dat <- tibble(group, time, score = c(MA, CA, MB, CB), subject)
m1 <- lmer(score ~ group*time + (1|subject), data = sim_dat)
summary(m1)
```
For brevity here are the fixed effects from the model:
```
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 2.9565 0.1210 77.9767 24.438 < 2e-16 ***
groupM -0.4010 0.1826 77.9767 -2.196 0.0311 *
timeB 0.8261 0.1696 38.9999 4.871 1.88e-05 ***
groupM:timeB 0.6739 0.2560 38.9999 2.633 0.0121 *
```
My understanding of how to interpret the coefficients is that the intercept reflects the baseline levels (in this case, Group C at Time A).
The term `groupM` then considers the difference between the intercept and the change from group C to M. So, practically, the comparison of Group M to Group C at time A. This indicates Group M had a lower score at time A than Group C.
This makes sense when looking at the pairwise comparisons, which also indicate no differences between the groups at time B (which is not shown in the model summary):
```
pairs(emmeans(m1, ~group | time))
time = A:
contrast estimate SE df t.ratio p.value
C - M 0.401 0.183 78 2.196 0.0311
time = B:
contrast estimate SE df t.ratio p.value
C - M -0.273 0.183 78 -1.495 0.1390
Degrees-of-freedom method: kenward-roger
```
The term `timeB` then considers the difference between the intercept and the change from time A to time B. So, practically, the comparison of Group C Time A to Group C Time B. This indicates Group C had a higher score at time B than time A.
This also makes sense when looking at the pairwise comparisons, which also show that Group M had a higher score at Time B versus Time A (again, this contrast is not shown in the model summary):
```
pairs(emmeans(m1, ~time | group))
group = C:
contrast estimate SE df t.ratio p.value
A - B -0.826 0.170 39 -4.871 <.0001
group = M:
contrast estimate SE df t.ratio p.value
A - B -1.500 0.192 39 -7.824 <.0001
Degrees-of-freedom method: kenward-roger
```
The interaction term is where I lose the script. My understanding is that `groupM:timeB` is compared against the baseline, so would that mean Group M Time B against Group C Time A?
However, what it seems to be reporting is the overall magnitude of the difference between Time A and Time B for Group M versus the magnitude of the difference between Time A and Time B for Group C (which, of course, is exactly what we want to test in this data!). If that is the case, then does the estimate mean that Group M was predicted, on average, to have a larger increase from Time A to Time B (by .67 score units) than Group C?
Doing a "pairwise of pairwise" comparison seems to suggest that is the case, since I get the same terms from the model:
```
contrast(emmeans(m1, ~group*time), interaction = 'pairwise')
group_pairwise time_pairwise estimate SE df t.ratio p.value
C - M A - B 0.674 0.256 39 2.633 0.0121
Degrees-of-freedom method: kenward-roger
```
So, my question is, am I just fumbling the interpretation of the labels for the interaction term model? Is my understanding of the contrasts incorrect for interacting factors? I think I understand now what the model is showing (i.e., increased gains for group M versus group C), but I still don't fully understand how the model is doing this and how I am to interpret the coefficient straight from the model (in fact, I didn't think that was possible!).
|
Interpreting coefficient of interacting factors to compare difference in scores from two time points
|
CC BY-SA 4.0
| null |
2023-04-27T22:24:50.080
|
2023-04-28T15:19:31.913
| null | null |
386732
|
[
"r",
"interaction",
"pre-post-comparison"
] |
614347
|
2
| null |
611303
|
2
| null |
This is a way of doing it with the Kolmogorov–Smirnov test. To plug in your actual data you'd have to change `y_obs`. I think this is what you wanted, but please let me know if you have any questions.
```
% Define the Zipf distribution
N = 100;
alpha = 1; % Shape parameter (values around 1-1.5 are common starting points)
x = 1:N-1;
y_exp = 1./(x.^alpha); % A correct Zipf distribution
% Create a noisy signal
rng(42)
noise = normrnd(0, 0.01, [1, N-1]);
y_obs = y_exp + noise;
% Plot our empirical frequency distribution alongside the Zipf distribution
plot(x, y_obs, 'r--')
hold on
plot(x, y_exp, 'b--')
xlabel('Rank')
ylabel('Frequency')
legend('Observed', 'Zipf')
hold off
% Compute the goodness of fit using the Kolmogorov-Smirnov test
[H, p_value, D] = kstest2(y_obs, y_exp);
% Display the results
fprintf('Kolmogorov-Smirnov statistic = %.4f\n', D)
fprintf('p-value = %.4f\n', p_value)
if p_value < 0.05
    fprintf('Conclusion: The observed frequencies deviate significantly from the Zipf distribution.\n')
else
    fprintf('Conclusion: No significant evidence against the Zipf distribution.\n')
end
```
[EDIT]
You could also try to fit a power-law function of the form y = a*x^b (the functional form behind Zipf's law) to your data. In MATLAB, you can use the `anova1` function to perform an analysis of variance (ANOVA) on the residuals of that regression, which can give you a p-value that indicates whether the model is a good fit for the data.
```
% Define the power-law function to fit (as an anonymous function so the
% script can be run top to bottom)
func = @(x, p) p(1) .* x .^ p(2);
% Define the dataset
xdata = [1, 2, 3, 4, 5];
ydata = [2.7, 7.5, 16.6, 30.2, 48.6];
% Fit the exponential function to the dataset using lsqcurvefit
p0 = [1, 1]; % Initial parameter values
popt = lsqcurvefit(@(p, xdata) func(xdata, p), p0, xdata, ydata);
% Extract the fitting parameters
a = popt(1);
b = popt(2);
% Calculate the residuals and perform an ANOVA test
yfit = func(xdata, popt);
resid = ydata - yfit;
[pval, tbl, stats] = anova1(resid);
% Print the fitting parameters and p-value
fprintf('a = %f\n', a);
fprintf('b = %f\n', b);
fprintf('p-value = %f\n', pval);
% Check significance against alpha = 0.05
if pval < 0.05
disp('The exponential regression model is not a good fit for the data.');
else
disp('The exponential regression model is a good fit for the data.');
end
% Plot the original data and the fitted curve
plot(xdata, ydata, 'ko', 'DisplayName', 'Original data')
hold on
plot(xdata, func(xdata, popt), 'r-', 'DisplayName', 'Fitted curve')
xlabel('x')
ylabel('y')
legend('Location', 'best')
hold off
```
| null |
CC BY-SA 4.0
| null |
2023-04-27T23:00:01.057
|
2023-04-28T11:03:45.730
|
2023-04-28T11:03:45.730
|
386734
|
386734
| null |
614348
|
1
| null | null |
1
|
13
|
I am trying to construct a natural cubic spline interpolation using R and test it with the Runge test function. I have implemented the following code; however, the interpolation is not passing through the original y-values of the knots. Based on my understanding of cubic splines, the first condition is that $S_j(x_j) = f(x_j)$ and $S_j(x_{j+1}) = f(x_{j+1})$ for $j = 0, 1, \ldots, n-1$, meaning the spline should interpolate all the given knots.
[](https://i.stack.imgur.com/Jk2Hr.png)
```
library(ggplot2)
library(splines)

# Runge's test function (definition assumed here; it was not shown in the original post)
runge <- function(x) 1 / (1 + 25 * x^2)

# Create data
x <- seq(-1, 1, length.out = 201)
y <- runge(x)
df <- data.frame(x, y)
# Create knots
knots_3 <- quantile(x, probs = c(1/4, 2/4, 3/4))
knots_5 <- quantile(x, probs = c(1/5, 2/5, 3/5, 4/5))
knots_10 <- quantile(x, probs = seq(0.1, 0.9, by = 0.1))
# Construct natural spline with 3 knots
spline_fit_3 <- lm(y ~ ns(x, knots = knots_3))
pred_3 <- predict(spline_fit_3, data.frame(x = x))
error_3 <- max(abs(pred_3 - y))
cat("Maximum Absolute Error with 3 Knots:", error_3, "\n")
# Construct natural spline with 5 knots
spline_fit_5 <- lm(y ~ ns(x, knots = knots_5))
pred_5 <- predict(spline_fit_5, data.frame(x = x))
error_5 <- max(abs(pred_5 - y))
cat("Maximum Absolute Error with 5 Knots:", error_5, "\n")
# Construct natural spline with 10 knots
spline_fit_10 <- lm(y ~ ns(x, knots = knots_10))
pred_10 <- predict(spline_fit_10, data.frame(x = x))
error_10 <- max(abs(pred_10 - y))
cat("Maximum Absolute Error with 10 Knots:", error_10, "\n")
# Plot Runge function and natural splines
ggplot(df, aes(x = x, y = y)) +
  geom_line(color = "navy", size = 1) +
  geom_line(aes(y = pred_3), color = "green", size = 1) +
  geom_line(aes(y = pred_5), color = "red", size = 1) +
  geom_line(aes(y = pred_10), color = "orange", size = 1) +
  ylim(0, 1) +
  labs(title = "Natural Splines with 3, 5, and 10 Knots", x = "x", y = "y") +
  theme_bw()
```
|
Natural Spline Interpolation in R
|
CC BY-SA 4.0
| null |
2023-04-27T23:17:52.050
|
2023-04-27T23:17:52.050
| null | null |
386733
|
[
"splines",
"interpolation"
] |
614349
|
2
| null |
614271
|
0
| null |
The [MCMC-MH algorithm](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) requires the ratio of the PDFs at the proposal value $x'$ and the existing value $x_t$. Let $K$ be the dimension of $X$.
We can approximate the ratio by looking at the ratio of probabilities that $X$ is in a neighbourhood of $x'$ or $x_t$
$$
\frac{f(x')}{f(x_t)} = \frac{\lim_{e \to 0}P[X \in x' \pm e]/e^K}{
\lim_{e \to 0}P[X \in x_t \pm e]/e^K}
= \lim_{e \to 0}
\frac{P[X \in x' \pm e]}{
P[X \in x_t \pm e]}
\approx
\frac{P[X \in x' \pm \epsilon]}{
P[X \in x_t \pm \epsilon]},
$$
where the approximation should hold for small $\epsilon$.
However, it's not obvious how to use that approximation in log-space, which will likely be important given that both the numerator and the denominator will be very close to 0. In particular,
$$
\log P[X \in x' \pm \epsilon] = \log\left(P[X < x' + \epsilon]
- P[X < x' - \epsilon] \right) \\ \neq \log\left(P[X < x' + \epsilon]\right)
- \log \left(P[X < x' - \epsilon] \right).
$$
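As a rough numerical illustration (an R sketch in which a standard normal density stands in for $f$; that choice is an assumption for illustration only), the CDF-difference approximation works well on the raw scale but the naive log-space version falls apart in the tails:
```
eps <- 1e-6

# Moderate values: the CDF-difference approximation to the density ratio is accurate
x_prop <- 1.0; x_curr <- 2.0
dnorm(x_prop) / dnorm(x_curr)                    # exact ratio
(pnorm(x_prop + eps) - pnorm(x_prop - eps)) /
  (pnorm(x_curr + eps) - pnorm(x_curr - eps))    # very close to the exact ratio

# Tail value: the true probability of the interval 8 +/- eps is about 1e-21,
# far below the rounding error of doubles near 1, so the computed difference
# (and hence its log) is unreliable -- often exactly 0, giving -Inf
log(pnorm(8 + eps) - pnorm(8 - eps))
dnorm(8, log = TRUE) + log(2 * eps)              # roughly what it should be
```
Any workable log-space version therefore has to combine log-scale CDF evaluations (e.g. `pnorm(..., log.p = TRUE)`) with a numerically careful log-difference, rather than taking `log()` of an already-underflowed difference.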
| null |
CC BY-SA 4.0
| null |
2023-04-27T23:24:16.067
|
2023-04-27T23:24:16.067
| null | null |
161943
| null |
614350
|
1
| null | null |
1
|
52
|
This question is motivated by the need to do vector-valued discrete time series forecasting with some guarantees of "continuity" (or rather, a discretized analog of continuity expressed in terms of uniform bounds on the magnitude of coordinate-wise first derivatives).
In what follows, let $h,f,k$ be integers, where $h\geqslant 0$, and $f,k\geqslant 1$.
Consider a $k$-dimensional vector time series $v_0, v_1, \ldots, v_t, \ldots$ where the element $v_t \in {\mathbb R}^{k}$ at time $t$ is expressed as
$$
v_t = (x_t^1, x_t^2, \ldots, x_t^{k-1}, x_t^k).
$$
Fix integer $T\gg 0$. We seek a flexible family of mathematical forecasting models that can be fitted to the sequence $v_0, v_1, \ldots, v_T$, such that the fitted model ${\cal M}$ generates forecasts $f$ time steps into the future. ${\cal M}$ is to satisfy the two conditions given below, of which the first is "standard".
- The model ${\cal M}$ can be applied at any time $t\in \{h, h+1, \ldots, T-1,T\}$ and computes its forecast $\tilde{v}_{t+f}$ as a function of the $h+1$ most recent series elements up to and including time step $t$, i.e.
$$
\tilde{v}_{t+f} \stackrel{def}{=} {\cal M}(t) \stackrel{def}{=} {\cal M}(v_t, v_{t-1}, v_{t-2}, \ldots, v_{t-h}).
$$
Here the forecasted vector is of the form
$$
\tilde v_{t+f} = (\tilde x_{t+f}^1, \tilde x_{t+f}^2, \ldots, \tilde x_{t+f}^{k-1}, \tilde x_{t+f}^k)
$$
The fitted model can thus be seen as a function ${\cal M}: {\mathbb R}^{(h+1)k} \rightarrow {\mathbb R}^{k}$.
- Choose $L\in {\mathbb R}^{+}$ such that the coordinates of the vector series up to time $T$ satisfy the Lipschitz conditions
$$
|x_t^j - x_{t-1}^j| \leqslant L
$$
for $0 \lt t \leqslant T$ and $1 \leqslant j \leqslant k$. Then if we apply our model ${\cal M}$ at the $f$ time points $t = T\!\!-\!\!f\!\!+\!\!1,\; T\!\!-\!\!f\!\!+\!\!2,\; \ldots,\; T\!\!-\!\!1,\; T,$ the model will generate $f$ corresponding forecasts, $\tilde{v}_{T+1}, \tilde{v}_{T+2}, \ldots \tilde{v}_{T+f}$. Prepending these $f$ forecasts with the most recent observed value $v_{T}$, we obtain the length $f+1$ sequence of $k$-dimensional vectors
$$
v_T, \tilde{v}_{T+1}, \tilde{v}_{T+2}, \ldots \tilde{v}_{T+f}.
$$
The modeling technique should guarantee that this sequence of $f+1$ vectors also satisfies the same Lipschitz conditions, for the same value of $L$. (Or, if this statement (2) is too severe a requirement, I will settle for any justifiable definition of "continuity".)
Condition (1) is easy to achieve via, e.g. lagged vector autoregression. Can such modeling techniques be extended (or are there other techniques) by which one can ensure that the flexible model ${\cal M}$ also satisfies requirement (2)?
If a reader feels they follow the requirement definition, but knows that the standard techniques (e.g. VARMAX or other variants of VAR) cannot be made to achieve condition (2), I would greatly appreciate a comment to this effect.
|
What discrete vector timeseries modeling (e.g. autoregression) methods support "continuity" requirements?
|
CC BY-SA 4.0
| null |
2023-04-27T23:30:59.270
|
2023-04-29T13:55:19.070
|
2023-04-29T13:55:19.070
|
386729
|
386729
|
[
"time-series",
"forecasting",
"autoregressive",
"vector-autoregression"
] |
614351
|
1
| null | null |
2
|
35
|
I'm currently using R to run a linear regression with 3 predictors. I am analysing the outcome likelihood of electoral success by descriptive identity of election candidates in Germany.
My aim is to find the interaction between female, electoral tier and political party as well as between female and electoral tier. I see three possible ways to do this.
Option 1 (estimating both interaction terms and predictor variables in one model)
```
lm(formula = elected ~ female + estier + PartyID + female:estier + female:estier:PartyID)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.18082 0.02501 7.230 6.40e-13 ***
female -0.02850 0.02374 -1.201 0.23000
estier 0.02393 0.02189 1.093 0.27456
PartyIDCDU 0.07542 0.02704 2.789 0.00533 **
PartyIDGREEN 0.02731 0.02867 0.953 0.34088
PartyIDSPD 0.16666 0.02871 5.804 7.29e-09 ***
female:estier -0.12361 0.07676 -1.610 0.10742
female:estier:PartyIDCDU 0.24182 0.08886 2.721 0.00655 **
female:estier:PartyIDGREEN -0.01744 0.08233 -0.212 0.83223
female:estier:PartyIDSPD 0.11122 0.08399 1.324 0.18556
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4232 on 2498 degrees of freedom
Multiple R-squared: 0.0353, Adjusted R-squared: 0.03183
F-statistic: 10.16 on 9 and 2498 DF, p-value: 1.704e-15
```
Option 2 (estimating interaction terms in separate models and keeping predictor variables)
```
lm(elected ~ female + estier + female:estier)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.25070 0.01613 15.543 <2e-16 ***
female -0.02006 0.02376 -0.844 0.399
estier 0.01702 0.02205 0.772 0.440
female:estier -0.04123 0.03586 -1.150 0.250
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4298 on 2504 degrees of freedom
Multiple R-squared: 0.002427, Adjusted R-squared: 0.001232
F-statistic: 2.031 on 3 and 2504 DF, p-value: 0.1075
lm(elected ~ female + estier + PartyID + female:estier:PartyID)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.18082 0.02501 7.230 6.40e-13 ***
female -0.02850 0.02374 -1.201 0.23000
estier 0.02393 0.02189 1.093 0.27456
PartyIDCDU 0.07542 0.02704 2.789 0.00533 **
PartyIDGREEN 0.02731 0.02867 0.953 0.34088
PartyIDSPD 0.16666 0.02871 5.804 7.29e-09 ***
female:estier:PartyIDAfD -0.12361 0.07676 -1.610 0.10742
female:estier:PartyIDCDU 0.11820 0.05840 2.024 0.04307 *
female:estier:PartyIDGREEN -0.14106 0.04724 -2.986 0.00286 **
female:estier:PartyIDSPD -0.01240 0.05054 -0.245 0.80630
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4232 on 2498 degrees of freedom
Multiple R-squared: 0.0353, Adjusted R-squared: 0.03183
F-statistic: 10.16 on 9 and 2498 DF, p-value: 1.704e-15
```
Option 3 (including interaction terms without predictor variables in separate models)
```
lm(elected ~ female:estier)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.251522 0.009303 27.037 <2e-16 ***
female:estier -0.045088 0.024123 -1.869 0.0617 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4299 on 2506 degrees of freedom
Multiple R-squared: 0.001392, Adjusted R-squared: 0.0009936
F-statistic: 3.494 on 1 and 2506 DF, p-value: 0.06173
lm(elected ~ female:estier:PartyID)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.251522 0.009231 27.247 < 2e-16
female:estier:PartyIDAfD -0.198891 0.069807 -2.849 0.00442
female:estier:PartyIDCDU 0.118341 0.050769 2.331 0.01983
female:estier:PartyIDGREEN -0.189022 0.036724 -5.147 2.85e-07
female:estier:PartyIDSPD 0.078986 0.040337 1.958 0.05032
(Intercept) ***
female:estier:PartyIDAfD **
female:estier:PartyIDCDU *
female:estier:PartyIDGREEN ***
female:estier:PartyIDSPD .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4265 on 2503 degrees of freedom
Multiple R-squared: 0.01789, Adjusted R-squared: 0.01632
F-statistic: 11.4 on 4 and 2503 DF, p-value: 3.622e-09
```
I have been running my analysis under the logic of Options 3, estimating interactions without the predictor variables. Given that the DF is approximately the same between the models, what is the substantive difference in the interaction coefficients between the models?
Is there something inherently wrong with excluding the predictor variables when estimating the interaction? Which of the three options would be best for my aim?
For reference, female and estier are binary variables, and PartyID is categorical, with four categories.
|
Should I Include the predictor variables alongside the interaction coefficient within a linear regression?
|
CC BY-SA 4.0
| null |
2023-04-27T23:45:44.827
|
2023-04-28T13:04:24.873
| null | null |
386735
|
[
"r",
"regression",
"multiple-regression",
"interaction",
"regression-coefficients"
] |
614353
|
2
| null |
614168
|
4
| null |
I think you may be overcomplicating the situation here, by failing to recognise the kinds of optimisation problems and standard methods that arise in these cases. As presently written, your optimisation problem is a standard problem that can be solved using standard calculus methods and numerical methods. The partial derivatives of your objective function are:
$$\begin{align}
\frac{\partial ML_{x,t}}{\partial \theta_k}(\theta)
&= \frac{t-1}{F(x|\theta)} \cdot \frac{\partial F}{\partial \theta_k}(x|\theta) - \frac{1}{1-F(x|\theta)} \cdot \frac{\partial F}{\partial \theta_k}(x|\theta) \\[12pt]
&= \bigg[ \frac{t-1}{F(x|\theta)} - \frac{1}{1-F(x|\theta)} \bigg] \frac{\partial F}{\partial \theta_k}(x|\theta) \\[12pt]
&= \frac{1}{F(x|\theta) (1-F(x|\theta))} \bigg[ (t-1) (1-F(x|\theta)) - F(x|\theta) \bigg] \frac{\partial F}{\partial \theta_k}(x|\theta) \\[12pt]
&= \frac{1}{F(x|\theta) (1-F(x|\theta))} \bigg[ (t-1) - t F(x|\theta) \bigg] \frac{\partial F}{\partial \theta_k}(x|\theta) \\[12pt]
&= \frac{t}{F(x|\theta) (1-F(x|\theta))} \bigg[ \frac{t-1}{t} - F(x|\theta) \bigg] \frac{\partial F}{\partial \theta_k}(x|\theta), \\[12pt]
\end{align}$$
which means that the gradient of the objective function is:
$$\nabla_\theta ML_{x,t} (\theta) = \frac{t}{F(x|\theta) (1-F(x|\theta))} \bigg[ \frac{t-1}{t} - F(x|\theta) \bigg] \nabla_\theta F(x|\theta).$$
The critical points of the function occur when:
$$F(x|\hat{\theta}) = \frac{t-1}{t}
\quad \quad \text{or} \quad \quad
\nabla_\theta F(x|\hat{\theta}) = 0.$$
In this particular case the first condition gives you an explicit solution for the critical points, and the second condition can be solved using standard iterative root-finding methods like Newton-Raphson. There is no need for any novel iterative method, and it is not clear that your proposed methods would work properly, or would outperform existing methods even if they did.
---
Let's step back from your specific problem and look at general forms of optimisation problems that you might encounter. In the linked answers to your related question, we find that your problem leads to an optimisation of the form:
$$\text{Maximise } f(\mathbf{x})
\quad \quad \quad \text{for } 0 < x_1 \leqslant \cdots \leqslant x_n < 1.$$
This is an example of a constrained optimisation; there are a number of ways you can solve it.
In one of the answers in the linked question, there is an analytical solution derived that first computes the unconstrained solution (assuming that the required ordering holds in this solution) and then asks what would happen if the ordering does not hold. This leads to a solution form where the user finds that there are certain conditions under which adjacent parameters are equal, and derives an explicit form for the solution using standard calculus methods.
In another of the answers in the linked question (my answer), an alternative method is deployed, whereby the input parameter for the optimisation is written as a smooth function of an underlying unconstrained parameter vector. This then allows the optimisation to be rewritten as an unconstrained optimisation, using the chain rule to find the relevant derivatives. The unconstrained optimisation is then solved using ordinary optimisation methods (including numerical methods to iterate towards the solution) and the optimum is then converted back to be stated in terms of the constrained parameter. This latter method is less accurate than an explicit solution, but it is a more general method that can be deployed on a wide range of objective functions.
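To make the second approach concrete, here is a toy sketch in R (the objective below is made up purely for illustration; it is not the objective function from the linked question). It maps an unconstrained vector to an ordered vector in $(0,1)$ and hands the composite function to a standard optimiser:
```
# Map unconstrained z in R^n to 0 < x_1 <= ... <= x_n < 1
to_constrained <- function(z) plogis(cumsum(c(z[1], exp(z[-1]))))

# Toy objective to maximise (a stand-in for the real objective)
f <- function(x) -sum((x - seq(0.15, 0.85, length.out = length(x)))^2)

fit <- optim(par = rep(0, 5),
             fn  = function(z) -f(to_constrained(z)),  # optim() minimises
             method = "BFGS")
to_constrained(fit$par)   # optimum mapped back to the constrained scale
```
Here the chain rule is handled implicitly by the numerical gradient that `optim()` computes; with an analytic gradient you would simply multiply by the Jacobian of `to_constrained`.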
| null |
CC BY-SA 4.0
| null |
2023-04-28T00:18:43.797
|
2023-04-28T00:18:43.797
| null | null |
173082
| null |
614355
|
1
|
614528
| null |
1
|
51
|
Let's say a professor gives different versions of an exam to her two sections. Despite attempts to make the exams equal in difficulty, the minimum score, median, and mean are all significantly higher for section 1, while the standard deviation is lower. Is there a way to normalize the scores for section 2 to approximate how the students would have scored had they taken section 1's version of the exam? Other assessments (which were the same across sections) showed the two sections to have roughly the same performance.
Update
Here are the numbers:
Final 1:
- Median: 92.5
- Mean: 86.66
- Standard deviation: 17.51
Final 2:
- Median: 85.0
- Mean: 74.3
- Standard deviation: 26.24
About 45 students took each version.
|
How to normalize scores on different versions of an exam?
|
CC BY-SA 4.0
| null |
2023-04-28T00:57:43.120
|
2023-05-02T12:10:53.513
|
2023-05-01T20:49:26.907
|
339876
|
339876
|
[
"standard-deviation",
"normalization"
] |
614356
|
1
| null | null |
0
|
44
|
I have the following data, which produced the following plot.
```
Predictor_Variable <- c(rnorm(1000, 0, 25), rnorm(1000, 20, 2), rnorm(100, 20, 10))
Response_Variable <- c(rnorm(1000, 0, 2), rnorm(1000, 0, 25), rnorm(100, 0, 10))
Data_Frame <- data.frame(Predictor_Variable = Predictor_Variable, Response_Variable = Response_Variable)
plot(Response_Variable ~ Predictor_Variable, Data_Frame, main = 'Example Plot', xlab = 'Predictor Variable', ylab = 'Response Variable')
```
[](https://i.stack.imgur.com/UOEjC.jpg)
I need to fit a vertical line to the points on the plot that extend vertically up and down from the horizontal axis. This vertical portion is not perfectly centered, so I can't simply take the average of the values.
Is there a nice, programmatic way of finding this vertical line?
|
Fitting a Vertical Line to Points on a Plot
|
CC BY-SA 4.0
| null |
2023-04-28T01:04:44.930
|
2023-04-28T01:04:44.930
| null | null |
315722
|
[
"r",
"regression",
"least-squares",
"modeling"
] |
614357
|
1
| null | null |
0
|
16
|
I'm performing a DoD (difference of differences) hypothesis test, using a Bayesian model.
I analyzed the data post-experiment and found that the mean posterior lift in the metric attributable to treatment exposure was 1.53%, and the proportion of the posterior > 0 was 94% (a 94% chance the treatment had a real effect on the metric).
A project manager asked me to re-analyze the experiment using a subset of data. He believed that some samples were 'more representative' of treatment exposure.
I did so and opened a real can of worms. The trend reversed. So now the question is: what minimum sample size do we need for the posterior to be credibly greater than 0?
I'm curious what power analysis analogs are available?
In my use case, exactly which samples are used is quite noisy. For example, we switched from 38 individuals to 19, inducing the trend reversal. We're keen to know: at what sample size does a trend reversal become highly unlikely?
The obvious answer is that my sample size is simply too small. But I need to know by how much and why so. I don't think that a heuristic would be very persuasive in my organization.
I've read that Bayes Factor is applicable- but isn't that a comparison of priors? Here I'm interested in the likelihood and posterior as well.
I'm considering using a bootstrap scheme to sample x individuals from each group, compute the posterior and record whether it was > 0 (or not.) Then consider the average of this binary outcome as 'coherence'. If I repeat this for varying sample sizes, I suspect that an elbow plot will emerge that would guide me to a reasonably choice of n.
But this feels handwavy rather than proof driven.
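For what it's worth, a rough sketch of the resampling scheme described above might look like this (an R sketch with strong simplifying assumptions: a plain normal model with a flat-prior posterior for the difference in means stands in for the real Bayesian DoD model, and `lift_trt` / `lift_ctl` are hypothetical per-individual values):
```
set.seed(1)
lift_trt <- rnorm(38, 0.015, 0.05)   # hypothetical per-individual lifts, treatment
lift_ctl <- rnorm(38, 0.000, 0.05)   # hypothetical per-individual lifts, control

coherence <- function(n, B = 500, draws = 4000) {
  mean(replicate(B, {
    a <- sample(lift_trt, n, replace = TRUE)
    b <- sample(lift_ctl, n, replace = TRUE)
    post <- rnorm(draws, mean(a) - mean(b), sqrt(var(a) / n + var(b) / n))
    mean(post > 0) > 0.9             # "credibly greater than 0" criterion
  }))
}

sizes <- c(10, 19, 38, 75, 150)
data.frame(n = sizes, coherence = sapply(sizes, coherence))
```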
|
Power analysis equivalent for Bayesian models?
|
CC BY-SA 4.0
| null |
2023-04-28T01:05:25.560
|
2023-04-28T01:05:25.560
| null | null |
288172
|
[
"bayesian",
"bootstrap",
"statistical-power"
] |
614358
|
1
| null | null |
0
|
12
|
I apologize that my knowledge in the field of statistics is limited, as I am a newcomer to this area. I am trying to expand my understanding to concepts beyond simple linear regression and simple mixed-effects models.
I have a situation that requires the application of tobit (Tobin) models to a dataset that has different left-censored values depending on the lab that analyzed the blood sample.
I came across this example from `survival` package, [https://cran.r-project.org/web/packages/survival/survival.pdf](https://cran.r-project.org/web/packages/survival/survival.pdf) where the author describes the use of this model using tobin dataset
```
tobinfit <- survreg(Surv(durable, durable>0, type='left') ~ age + quant,
data=tobin, dist='gaussian')
```
My question is: how do I expand this example to a random-effects scenario where a random intercept represents the lab evaluating the sample, and each lab has 2-3 different left-censoring values? My covariates are age, education and BMI. I appreciate your patience and help in expanding my grasp on this subject.
|
r tobit regression with random intercept
|
CC BY-SA 4.0
| null |
2023-04-28T01:20:34.893
|
2023-04-28T01:20:34.893
| null | null |
154080
|
[
"mixed-model",
"survival",
"tobit-regression"
] |
614359
|
1
| null | null |
0
|
22
|
I am trying to understand how an xgboost prediction evolves as more trees are added (of course the ROC AUC will increase). But to my surprise, when there is a small number of trees in the model, the average prediction over the entire training sample is always around 0.5, although the average of the target variable in my training data is 0.1. (I also switched the 0s and 1s of my target variable so that the average becomes 0.9; the average prediction is still 0.5.)
Could you explain why the average prediction does not start somewhere close to the average of the target variable? I understand this is not regression, and we don't have an intercept, but this result still seems unintuitive.
In addition, adding more trees (or increasing the learning rate) will shift the average prediction towards the observed average. Does it mean there is a minimum number of trees needed in the model? Otherwise, the average prediction is not aligned with the observed average.
Any comments are very well appreciated. Thank you in advance.
```
mkwargs = {'learning_rate':0.01, 'reg_alpha':1.0, 'reg_lambda':100.0, 'gamma':1.0,
'objective':'binary:logistic', 'subsample':0.8, 'colsample_bytree':0.8}
```
|
XGBoost average prediction when there are just 2 trees
|
CC BY-SA 4.0
| null |
2023-04-28T01:32:43.623
|
2023-04-28T01:32:43.623
| null | null |
44356
|
[
"predictive-models",
"boosting"
] |
614361
|
2
| null |
347823
|
0
| null |
Actually, when you enable the intercept in most Lasso implementations, what the algorithm does behind the scenes is simply center both X and y before the regression. So if you enable the intercept, you do not need to center y yourself.
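A quick way to check this empirically (a sketch using `glmnet`, with `standardize = FALSE` so that centering is the only thing that differs between the two fits):
```
library(glmnet)
set.seed(1)
n <- 100; p <- 5
x <- matrix(rnorm(n * p), n, p)
y <- as.numeric(2 + x %*% c(1, -1, 0, 0, 0.5) + rnorm(n))

lam <- 0.1
fit_int  <- glmnet(x, y, alpha = 1, lambda = lam,
                   intercept = TRUE, standardize = FALSE)
fit_cent <- glmnet(scale(x, scale = FALSE), y - mean(y), alpha = 1, lambda = lam,
                   intercept = FALSE, standardize = FALSE)

cbind(with_intercept = as.numeric(coef(fit_int))[-1],
      pre_centered   = as.numeric(coef(fit_cent))[-1])  # slopes agree (up to tolerance)
```
The intercept returned by the first fit is just `mean(y) - colMeans(x) %*% slopes`, which is why centering y by hand is redundant once the intercept is enabled.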
| null |
CC BY-SA 4.0
| null |
2023-04-28T02:04:49.343
|
2023-04-28T02:04:49.343
| null | null |
68424
| null |
614362
|
1
| null | null |
0
|
11
|
I am doing GWAS analyses using twins data. After quality control in PLINK, over 6 million variants and 2400 individuals are left for the association analyses. I divided the variants into 22 subsets based on chromosomes and ran parallel analyses for each variant in each subset in `R/4.2.0`. I used the `lme4`
package; C1 to C10 are principal components. My model is as follows:
`lmm = lmer(phenotype ~ variant_i + C1 + C2 + C3 + C4 + C5 + C6 + C7 + C8 + C9 + C10 + (1|FamilyID) + (1|zygosity), data = chr1)`
But there is no p-value for variant_i in the `summary(lmm)` output. How can I get the p-values and identify the genome-wide significant variants for the phenotype?
|
How to know the p-value of the association between a phenotype and a variant (fixed effect) when using a linear mixed-effects model
|
CC BY-SA 4.0
| null |
2023-04-28T02:12:46.037
|
2023-04-28T02:13:54.553
|
2023-04-28T02:13:54.553
|
386743
|
386743
|
[
"r",
"mixed-model",
"lme4-nlme",
"p-value",
"gwas"
] |
614363
|
2
| null |
585860
|
0
| null |
- 'LSTM are well known that they overcome the "long term dependency"': this is incorrect. See the Copy Memory experiments in https://arxiv.org/abs/1901.08428 for evidence that LSTMs suffer from long-term memory problems at L=1000 and beyond.
- 'thus $x_t$ is "decently reconstructable" until time t+N': if the RNN is stable (vanishing gradients are inevitable), the best case you can hope for is not a gradient that suddenly vanishes at step t+N+1, but gradients that become smaller and smaller as the sequence length increases.
- Generally, stacking layers means there are more pathways for the signal between $y_{t_2}$ and $x_{t_1}$. For example, there are $(t_2-t_1)(t_2-t_1-1)+1$ pathways for 2 layers. This reduces the chance of the gradient vanishing.
P.S.: In (3), the worst-case scenario can still happen when the gradient vanishes early (near $t_1$).
| null |
CC BY-SA 4.0
| null |
2023-04-28T02:25:49.053
|
2023-04-28T02:25:49.053
| null | null |
309089
| null |
614364
|
2
| null |
614351
|
2
| null |
When running a multiple regression model with interacting terms, you will want to include all of the main effects comprising those interactions in your model. Furthermore, if you have 3-way interactions, you should include all the main effects along with all of the sub-2-way interactions.
I recommend running the model with the three-way interaction and all sub interactions: lm(elected ~ female * estier * PartyID). Because you have a categorical variable, I would look at the anova(·) of this lm(·) output to determine if the 3-way interaction is indeed statistically significant. If not, then you can remove it and focus on just the 2-way interactions.
Happy to elaborate more if needed.
---
Update #1
Like polynomial predictors, interacting predictors really only depend on their highest-order term or terms. In fact, with a point-centering transformation, you can change the parameters of the lower-order terms to just about anything you might want. Thus, the focus is the highest-order term.
However, the behavior of that highest-order term may be obscured if the lower-order terms aren't actually included. Said another way, what you might be seeing with an interaction-only model is just the linear effect of one or both of the predictors "sneaking" through.
Here is a small simulation that demonstrates this:
```
set.seed(1234)
n <- 25
eps <- rnorm(n,0,15)
x <- rnorm(n,50,10)
y <- 100 - 1.2 * x + eps
z <- 10 + 2 * x - 3 * y + 0.00 * x*y + rnorm(n,0,10)
summary(lm(z ~ x * y)) # proper model, intxn n.s.
summary(lm(z ~ x : y)) # incorrect model, int'n stat.sig.
```
So, you want to include all the main-effects whenever you include an interaction.
Hope this helps.
| null |
CC BY-SA 4.0
| null |
2023-04-28T02:44:20.227
|
2023-04-28T13:04:24.873
|
2023-04-28T13:04:24.873
|
199063
|
199063
| null |
614365
|
1
| null | null |
1
|
37
|
Is it possible to find the mean difference between different conditions using regression as well? I have a bit of confusion here. I am trying to see if there are differences between 6 street conditions based on the participants' perception ratings (the dependent variable).
Regression is all about predicting the outcome variable based on the model, but does it also give information on group differences?
|
ANOVA or Multiple Regression?
|
CC BY-SA 4.0
| null |
2023-04-28T03:31:04.037
|
2023-04-28T03:31:04.037
| null | null |
386748
|
[
"regression",
"anova",
"spss"
] |
614366
|
1
| null | null |
1
|
26
|
I have some confusion regarding Bayes rule for joint posterior (I think primarily due to the context in which notations are used)
Let $P(\theta_{1}, \theta_{2} | X)$ be a joint posterior over the parameter space $\theta_{1}, \theta_{2}$.
Then, the application of Bayes rule should yield:
$P(\theta_{1}, \theta_{2} | X) = \frac{L(\theta_{1}, \theta_{2} | X) . P(\theta_{1}, \theta_{2})}{P(\theta_{2})} = \frac{L(\theta_{1} | \theta_{2}, X).P(\theta_{2} | X).P(\theta_{1}, \theta_{2})}{P(\theta_{2})}$
But $P(\theta_{1}, \theta_{2}) = P(\theta_{1} | \theta_{2}) P(\theta_{2})$
This gives
$\frac{L(\theta_{1} | \theta_{2}, X).P(\theta_{2} | X).P(\theta_{1} | \theta_{2}) P(\theta_{2})}{P(\theta_{2})}$ = $L(\theta_{1} | \theta_{2}, X).P(\theta_{2} | X).P(\theta_{1} | \theta_{2})$
I wonder if the above expression is correct?
|
Bayes rule and conditional - marginal factorisation (joint posterior)
|
CC BY-SA 4.0
| null |
2023-04-28T03:46:03.563
|
2023-04-28T03:46:03.563
| null | null |
109101
|
[
"self-study",
"bayesian"
] |
614367
|
1
| null | null |
2
|
36
|
I am trying to understand if it's possible to estimate conditional effect in a GLM using an approach similar to how we would in a linear model.
Specifically in a linear model, I can use a residual method to estimate the conditional effect ($\beta^\ast$) and standard error of variable $g$ in this linear model $E(y) = X \beta + g \beta^\ast$, where $X$ is a matrix of covariates and intercept. The method is described as follows:
- Fit a covariate-only model $E(y) = X \beta$ and get estimates $\hat{\beta}$
- Calculate the residuals $E(\tilde{y}) = y - X \hat{\beta}$.
- Project out the covariates from g by $\tilde{g} = g - X (X^T X)^{-1}X^T g $. Or equivalently regressing out $X$ from $g$ in linear model.
- Refit a model $E(\tilde{y}) = \tilde{g} \beta^\ast $, which should give the same slope and (very close) standard error estimate as the estimates from fitting the original full model.
I attempted to apply this approach to GLM model using offset but found this residual method gives different estimates compared to the full model result. For example, for logistic regression, I tried the following using `statsmodel` in python:
- Fit $logit(\mu) = X \beta$, get estimates $\hat{\beta}$ and the weight in logistic regression at convergence, where the weight is $\text{weight}_i = \hat{\mu_i} (1-\hat{\mu_i})$.
- Calculate the linear predictor $\eta = X \hat{\beta}$.
- Project out the covariates from g by $\tilde{g} = g - X (X^T W X)^{-1}X^T Wg $, where $W$ is a diagonal matrix of individual weight.
- Fit the logistic model $logit(y) = \tilde{g} \beta^\ast$ with the linear predictor $\eta$ as offset. However, I observed very different results from the correct estimates in the full model.
Given that we regress out the MLE estimates from $g$ and include its prediction to offset $y$, I would have thought that the effect-size estimate for $g$ would match its estimate under the joint model. What am I misunderstanding?
See the example result output below:
```
import statsmodels.api as sm
import numpy as np
import numpy.linalg as npla
from numpy import random
# simulate under logistic model
n = 100
random.seed(1)
X1 = random.normal(size=n).reshape((n,1)) # covariate
g = random.normal(size=n).reshape((n, 1)) # variable of primary interest
intercept = 0.5
X = np.hstack([np.ones_like(X1), X1, g])
beta = np.array([intercept, 1.0, 2.0]).reshape([3,1])
eta = X @ beta
prob = 1 / (1 + np.exp(-eta))
y = random.binomial(n=1, p=prob)
# fit full model y ~ X1 + X2
mod_full = sm.GLM(y, X, family=sm.families.Binomial()).fit()
# covariate only model: y ~ X1
covar = X[:, 0:2]
mod_covar = sm.GLM(y, covar, family=sm.families.Binomial()).fit()
mod_null_eta = mod_covar.get_prediction(covar, which="linear").predicted
mod_null_mu = mod_covar.get_prediction(covar, which="mean").predicted
glm_weight = mod_null_mu * (1 - mod_null_mu)
w_half_X = np.diag(np.sqrt(glm_weight)) @ covar
w_X = np.diag(glm_weight) @ covar
# regress out covar from X2
projection_covar = covar @ npla.inv(w_half_X.T @ w_half_X) @ w_X.T # nxn
X2_resid = g - projection_covar @ g
# fit model y ~ X2_resid with offset
mod_X2 = sm.GLM(y, X2_resid, family=sm.families.Binomial(), offset=mod_null_eta).fit()
```
The result from the full model is:
```
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0779 0.285 0.274 0.784 -0.480 0.636
x1 1.3113 0.417 3.146 0.002 0.494 2.128
g 2.5512 0.552 4.622 0.000 1.469 3.633
==============================================================================
```
The result under the residual method is:
```
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
g 2.1360 0.433 4.934 0.000 1.288 2.984
==============================================================================
```
|
Estimating conditional effect in generalized linear regression model (GLM)
|
CC BY-SA 4.0
| null |
2023-04-28T03:51:03.427
|
2023-05-03T17:00:31.290
|
2023-05-03T17:00:31.290
|
386730
|
386730
|
[
"regression",
"generalized-linear-model",
"conditional-expectation",
"statsmodels"
] |
614368
|
2
| null |
319085
|
0
| null |
We have generally observed that nonlinear input-output dependencies captured by ML lead to better predictions than regression models developed using a CCD DOE. We used identical data pairs for both. The RMS errors were generally lower for ML when overtraining was carefully avoided. A good data-collection strategy appears to be a grid design.
| null |
CC BY-SA 4.0
| null |
2023-04-28T04:16:16.530
|
2023-04-28T04:16:16.530
| null | null |
386749
| null |
614371
|
1
| null | null |
0
|
40
|
So I have a dataset on the behavior of a certain animal (n = 36 individuals), with observations taken over 4 months. The data has lots of zeroes in it and is right-skewed, and I want to measure the effects of a certain set of variables on the response variable (the behavior). But I am unable to decide which model I should go for: LME, GLM or GLMM.
What have I tried? I tried using lme and was getting the desired results, but then I read that it is for normally distributed data.
|
Confused between GLM,LME and GLMM. What should i use for my non normal data of animal behaviour?
|
CC BY-SA 4.0
| null |
2023-04-28T06:05:41.930
|
2023-05-01T07:22:09.387
|
2023-05-01T07:22:09.387
|
11887
|
386838
|
[
"r",
"generalized-linear-model",
"lme4-nlme",
"biostatistics"
] |
614372
|
2
| null |
74091
|
1
| null |
I'm interested in this.
While I doubt there is an analytical solution, it's easy with Monte Carlo.
So if you want to apply this to the races (human, animal, snail ...) just use Monte Carlo.
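For example, a minimal Monte Carlo sketch in R (assuming independent normal finishing times, with made-up means and standard deviations):
```
set.seed(1)
mu    <- c(60, 61, 61.5)    # hypothetical mean finishing times
sigma <- c(2.0, 1.5, 3.0)   # hypothetical standard deviations
nsim  <- 1e5

times  <- sapply(seq_along(mu), function(i) rnorm(nsim, mu[i], sigma[i]))
winner <- max.col(-times)   # index of the fastest time in each simulated race
prop.table(table(factor(winner, levels = seq_along(mu))))   # estimated win probabilities
```
The same skeleton works for Laplace or any other finishing-time distribution by swapping out `rnorm()` for the corresponding sampler.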
But for some purposes such as if you are trying to work out the standard deviation model of your runners, you may need to do this integration many times and it may slow you down.
You can however always use Simpson's method to compute the integrals.
I used Laplace distributions instead of normal and worked out the integral for two runners, in analytical form.
Further than that I don't know. In (5) above, what is Q?
| null |
CC BY-SA 4.0
| null |
2023-04-28T07:04:51.253
|
2023-04-28T07:06:34.353
|
2023-04-28T07:06:34.353
|
253133
|
253133
| null |
614375
|
2
| null |
614265
|
4
| null |
(This answer ignores the issue with low sample size.)
I'd like to add a bit of nuance to the answers here, as it's tempting to read them and come away with this rule:
>
If a single p-value is observed, then correction is unnecessary; the type I error of the testing procedure is not inflated.
Type I error is inflated if any part of the testing procedure, i.e., any part of the pipeline from observed dataset -> observed p-value, depends on the data. Read [The garden of forking paths](http://stat.columbia.edu/%7Egelman/research/unpublished/forking.pdf)1 paper for more info.
For example, if—
- (in another universe) the observations turn out to be so skewed that after plotting them you opt to test for the equality of medians rather than means, and
- you do not want to inflate the type I error
—then a p-value adjustment needs to be made. For example, if you were to use a Bonferroni correction, then the number of null hypotheses here is 2: one for the mean test, and one for the median test. Note that you did not have to observe 2 p-values; you just had to use a testing procedure which allowed for 2 different hypotheses (whether you knew it or not!).
Adjustment is unnecessary if universes like (1) don't exist, or you don't care about inflating type I error. To destroy universes like (1), the entire testing procedure needs to be pre-specified, e.g., force yourself to test for the equality of means regardless of the observed distribution (such a decision should usually be based on the science—not the data—anyway). In practical terms, this means implementing the entire data analysis code without looking at any data from your experiment.
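As a quick illustration of why this counts as two hypotheses, here is a small simulation (an R sketch under the worst-case assumption that the analyst ends up reporting whichever of the two tests gives the smaller p-value; the inflation from a purely skewness-triggered choice would be milder):
```
set.seed(1)
n_sim <- 20000
n <- 30
reject <- replicate(n_sim, {
  x <- rnorm(n); y <- rnorm(n)                           # null: same distribution
  p_mean   <- t.test(x, y)$p.value                       # path 1: test of means
  p_median <- wilcox.test(x, y, exact = FALSE)$p.value   # path 2: rank-based test
  min(p_mean, p_median) < 0.05                           # follow whichever path "worked"
})
mean(reject)   # typically a bit above the nominal 0.05
```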
## References
- Gelman, Andrew, and Eric Loken. "The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time." Department of Statistics, Columbia University 348 (2013): 1-17.
| null |
CC BY-SA 4.0
| null |
2023-04-28T08:36:16.607
|
2023-05-23T11:06:36.197
|
2023-05-23T11:06:36.197
|
337906
|
337906
| null |
614377
|
1
| null | null |
0
|
19
|
I have matched paired nominal data, transmissions and non-transmissions of a risk allele M1 from parents with genotypes M1M2, M1M1, M2M2 to affected children. We can build a 2x2 contingency table and apply the McNemar test to test whether the risk allele M1 and the non-risk allele M2 are equally transmitted to affected children:
|Non-transmitted |Transmitted M1 |Transmitted M2 |
|----------------|---------------|---------------|
|M1 |a |b |
|M2 |c |d |
Additionally, according to Breslow and Day (1980, p. 166), we can also use the discordant pairs in that contingency table to get an MLE of the odds ratio (b/c). What is the full explanation of that formula for the odds ratio? What is the relationship between b/c and the odds ratio for unconditional data?
|
Relationship odds for conditional data and odds ratio for unconditional data
|
CC BY-SA 4.0
| null |
2023-04-28T09:19:23.753
|
2023-04-28T09:19:23.753
| null | null |
380719
|
[
"conditional-probability",
"paired-data",
"odds-ratio",
"mcnemar-test"
] |
614378
|
1
| null | null |
0
|
10
|
I have a few relatively simple questions on a two-part mixed effects model for semi-continuous data that I am running using `GLMMadaptive`.
First of all, I would like to check if I am selecting the appropriate analysis.
The design is a repeated-measures design in which I test the effect of treatment (2 levels; between-subject) over time. My outcome is continuous, but 2 out of 3 data points are 0. By using the function `mixed_model` with the family `hurdle.lognormal`, I understand that I run two models: a linear mixed model to test how strong the change in outcome is as a function of the predictors, considering only the non-zero data points, and a logistic mixed regression model to estimate the probability that the outcome is non-zero (versus zero) as a function of the predictors. Is that correct? Is it the correct approach for my data?
Second, are the estimates of the logistic mixed model log odds or odds ratios?
Third, on what basis do you change the `n_phis` parameter?
Thank you in advance
|
Is a two-part mixed effects model for semi-continuous data (GLMMadaptive) the correct analysis for my data?
|
CC BY-SA 4.0
| null |
2023-04-28T09:37:48.170
|
2023-04-28T09:37:48.170
| null | null |
386764
|
[
"mixed-model",
"glmmadaptive"
] |
614379
|
1
| null | null |
0
|
29
|
I have two dataframes with the same columns but with varying sample sizes. I want to compare corresponding columns for homogeneity (i.e., do they come from the same distribution?). There are different types of columns (variables) - I have categorical (nominal and ordinal) and numeric (discrete and continuous). I want to quantify the dissimilarity between the corresponding variables, and I would like for this dissimilarity measure to be comparable across all the different types of variables. For example, when comparing two dataframes with different samples that come from the same distributions, I want, on average, similar dissimilarity across all types of variables.
For more context, this is for quantifying the covariate shift.
How to accomplish the task? I was thinking about [first Wasserstein distance](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html), but I'm not sure if that's the right choice. I don't understand much of the math behind it.
I've read from [On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests](https://arxiv.org/pdf/1509.02237.pdf) that I might need to compare
>
differences between quantities that completely specify a distribution
— (a) cumulative distribution functions (CDFs), (b) quantile functions
(QFs), and (c) characteristic functions (CFs).
Below are the two dataframes that we might wish to compare. Both sample independently from the same distributions but differ in their sample sizes.
```
import pandas as pd
from scipy.stats import norm, gamma, bernoulli, randint, poisson
df1 = pd.DataFrame({
'Normal': norm.rvs(loc=0, scale=1, size=100),
'Gamma': gamma.rvs(a=3, loc=1, scale=1, size=100),
'Bernoulli': bernoulli.rvs(p=0.6, size=100),
'Uniform Discrete': randint.rvs(low=1, high=50, size=100),
'Poisson': poisson.rvs(mu=5, size=100),
})
df2 = pd.DataFrame({
'Normal': norm.rvs(loc=0, scale=1, size=40),
'Gamma': gamma.rvs(a=3, loc=1, scale=1, size=40),
'Bernoulli': bernoulli.rvs(p=0.6, size=40),
'Uniform Discrete': randint.rvs(low=1, high=50, size=40),
'Poisson': poisson.rvs(mu=5, size=40),
})
```
|
How to quantify the dissimilarity across different types of variables?
|
CC BY-SA 4.0
| null |
2023-04-28T09:39:40.657
|
2023-04-28T10:33:26.567
|
2023-04-28T10:33:26.567
|
347904
|
347904
|
[
"hypothesis-testing",
"distance",
"mixed-type-data",
"wasserstein",
"covariate-shift"
] |
614380
|
1
| null | null |
0
|
34
|
### Problem
Hey, I need some help with a power calculation for a three-condition counterbalanced linear mixed model with an additional within-subject factor. This is a fictitious example which doesn't make a lot of sense, but the design we want to use is the same. We assume that stimuli are nested within participants, since all participants are exposed to the same 5 stimuli (photos of 5 people's faces) apart from the experimental variation (neutral or aggressive face expression) of these stimuli. I want to include random intercepts for participants and stimuli.
### Variables
##### Factor "condition": Three Between-Subject Levels
- Medication = "med"
- Psychotherapy = "psych"
- Control Group = reference group
##### Factor Emotional Expression in Stimulus "expression": 2 Within-Subject Levels
- Aggresive = 1
- Neutral = 0
##### Stimulus "stim"
Pictures of faces from 5 different people in 2 different emotional expressions 'Neutral' or 'Aggressive'. Randomized Order, replacement = FALSE. 5 people * 2 expressions = 10 pictures.
##### Dependent Variable "startle"
Metric Variable: 5 measurement points of measuring an individuals startle reflex
### Design
|Condition |t1 |t2 |t3 |t4 |t5 |
|---------|--|--|--|--|--|
|Medication |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |
|Psychotherapy |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |
|Control Group |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |stim 1/2/3/4 OR 5 & neu/agg |
### Model
The model i want to fit should look like this:
```
lmer(startle ~ med + psych + expression + med*expression +
       (1 | participant) + (1 | stim), data = dat)
```
## Power Parameters
condition: eta squared = 0.06;
expression: d = 0.4;
Power = 0.8;
How could I do a power analysis for this counterbalanced design? What would be an appropriate power analysis?
I am aware of [Westfall and Judd's](https://jakewestfall.shinyapps.io/two_factor_power/) online power analysis for Counterbalanced Linear Mixed Model. But it doesn't include a further between-subject factor and it includes random slopes by default.
Tools like G*Power are also not flexible enough for designs like this.
|
Power calculation for a three condition between-subject counterbalanced linear mixed model with an additional within-subject factor
|
CC BY-SA 4.0
| null |
2023-04-28T09:39:44.910
|
2023-04-28T09:58:16.983
|
2023-04-28T09:58:16.983
|
386762
|
386762
|
[
"r",
"mixed-model",
"statistical-power",
"counterbalancing"
] |
614382
|
1
| null | null |
4
|
73
|
I have a group of individuals for which I would like to report a mean and a weighted error. The data that I observe on a daily basis are two independent $iid$ random variables with unknown distributions: the 'number of individuals per day', which I call $W$, and some measure of 'revenue' called $R$ that represents the sum of the revenue brought by all the individuals of that specific day.
Each day, those two variables are updated as they are observed. Using this data, I can compute the thing that I am intereseted in reporting which is the 'revenue per individual per day' called $X$ by computing $x_i = r_i/w_i$ on each day ($i.e.$ for each day $i$, I divide the sum of revenue for all the individuals by the number of individuals). This leaves me with the following dataframe:
|W |R |X |
|-|-|-|
|12 |100 |8.33 |
|15 |110 |7.33 |
|65 |740 |11.38 |
Using this data, I would like to finally report my KPI which is the mean revenue per individual per day over this data collection period. Since not all days are equally represented with the same number of individuals bringing revenue ($e.g.$ day $3$ in the data above has a much higher individual count of $65$ compared to day $1$ which contains only $12$), I would like to represent my KPI as a weighted mean where the individual weights are the number of individuals $w_i$ such that days with higher individual count are better represented in the final mean I report like so:
$$
\bar{x}_w = \frac{\sum_i w_i \times x_i}{\sum_i w_i}
$$
Then, I can calculate the weighted sample standard deviation like so (according to [this](https://www.analyticalgroup.com/download/weighted_mean.pdf) note):
$$
s_w = \sqrt{\frac{ \sum_iw_i (x_i - \bar{x}_w)^2}{ \sum_i w_i -1}}
$$
Finally, I can compute the standard error of the weighted mean which is defined as $se = s_w/ \sqrt{n}$.
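For concreteness, here is a direct computation of these quantities on the example table above (an R sketch; the choice of $n$ in the standard error is deliberately left open, which is exactly what the first question below is about):
```
w <- c(12, 15, 65)     # individuals per day
r <- c(100, 110, 740)  # revenue per day
x <- r / w             # revenue per individual per day

xbar_w <- sum(w * x) / sum(w)
s_w    <- sqrt(sum(w * (x - xbar_w)^2) / (sum(w) - 1))

c(weighted_mean = xbar_w, weighted_sd = s_w,
  se_if_n_is_days    = s_w / sqrt(length(x)),
  se_if_n_is_weights = s_w / sqrt(sum(w)))
```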
First question
What is $n$ in the standard error calculation in my case? Is it the number of observations, which is $3$ in the example dataframe outlined above? Or is it the sum of the weights used to compute the weighted standard deviation, $\sum_i w_i$, which is equal to $92$? If we use the sum of the weights, the error may get quite small, which is something I do not typically observe since my data has high variability. If it is the number of observations, the error will systematically decrease as I gather more data, which also seems unusual. So how may I correctly estimate this empirically with the data I have? Is there any documentation anyone can point me to for reference?
Second question
My second question is somewhat trickier. In a production environment, when I report the KPI alongside its standard error, I do not have access to the full dataframe. I only have access to the first two data points to start the calculation procedure of the weighted mean and weighted standard error deviation (and the standard error).
Given that the current observation count is $n$, I would like to iteratively calculate the weighted mean and weighted standard deviation and the standard error of the weighted mean in computer code knowing only the previous weighted mean $\bar{x}_{w,n}$, previous weighted standard deviation $s_{w,n}$, the cumulative sum of previous weights $\sum_i^n w_i$, and of course the new row of data points that contains the new observation $x_{n+1}$ and its new weight $w_{n+1}$ where $n+1$ indicates the new row.
Thank you for the help!
|
How to iteratively calculate weighted standard error to report alongside a weighted mean
|
CC BY-SA 4.0
| null |
2023-04-28T10:04:59.173
|
2023-05-31T14:45:11.277
|
2023-05-31T14:45:11.277
|
346672
|
346672
|
[
"mathematical-statistics",
"standard-deviation",
"standard-error",
"computational-statistics",
"weighted-mean"
] |
614383
|
1
| null | null |
1
|
21
|
I am using Hamiltonian Monte Carlo (HMC) to sample the posterior of a continuous-time Markov Chain (CTMC).
However, after running 10 parallel chains with 100 draws each, the effective sample size (ESS) rises in a certain pattern, way beyond the actual sample size (which should be 10 * 100 = 1000 samples).

Has anyone seen this pattern before? What could be the cause of this misbehavior?
To calculate the ESS I am using the `ess()` function from the Arviz Python package with the parameters left to default (i.e. `method="bulk"`).
The trace plot of one of the chains for this parameter looks like this:

I figured the oscillations come from the convexity of my target distribution and the fact that I am using HMC to sample it.
|
What could lead to this misbehavior of the effective sample size (ESS)?
|
CC BY-SA 4.0
| null |
2023-04-28T10:27:32.073
|
2023-05-22T12:20:20.770
| null | null |
386706
|
[
"sampling",
"sample-size",
"markov-chain-montecarlo",
"hamiltonian-monte-carlo"
] |
614384
|
1
| null | null |
0
|
61
|
Let's say that some quantity is modelled by a time-dependent Poisson distribution,
$$
y(t) \sim \text{Pois}(\mu(t))
$$
where
$$
\mu(t) = \alpha_0 \exp(-\alpha_1 e^{-\alpha_2 t})
$$
and $\alpha_k > 0$ for $k = 0, 1, 2$
For given parameters $\theta_k = \log(\alpha_k)$ and a data point $\mathcal{D}_i$, I computed the log-likelihood and obtained the following result,
$$
l(\vec{\theta}|\mathcal{D}_i) = -e^{\theta_0 - \eta(\theta_1, \theta_2)} + y_i \theta_0 - y_i \eta(\theta_1, \theta_2)
$$
where $\eta(\theta_1, \theta_2) \equiv \exp{(\theta_1 - e^{\theta_2} t_j)}$
Now I would like to compute the log-posterior for large variance priors. I use the equation for $\mu(t)$ as the prior but I'm not so sure it is what I should use. With that, I get the following result for the prior,
\begin{align*}
P(\vec{\theta})
&= e^{\theta_0} \exp(-e^{\theta_1} e^{-e^{\theta_2} t_j}) \\
&= e^{\theta_0} \exp(e^{\theta_1 - e^{\theta_2} t_j}) \\
&= e^{\theta_0} \exp(\eta(\theta_1, \theta_2))
\end{align*}
I find then the log-posterior,
\begin{align*}
p(\vec{\theta}|\mathcal{D}_i)
&\propto l(\vec{\theta}|\mathcal{D}_i) \cdot P(\vec{\theta}) \\
&=\left( -e^{\theta_0 - \eta(\theta_1, \theta_2)} + y_i \theta_0 - y_i \eta(\theta_1, \theta_2) \right) \cdot e^{\theta_0} \exp(\eta(\theta_1, \theta_2))
\end{align*}
Does that make sense?
|
Computing log-posterior for large variance priors
|
CC BY-SA 4.0
| null |
2023-04-28T11:00:33.600
|
2023-04-28T20:11:33.307
|
2023-04-28T20:11:33.307
|
355645
|
355645
|
[
"bayesian",
"likelihood",
"prior",
"posterior",
"quasi-likelihood"
] |
614385
|
1
| null | null |
1
|
36
|
Let's say I have a dataframe such as:
```
image_id image_group group x y nn
```
I have 20 images identified by image_id; each image belongs to a group, G1 or G2, defined in the image_group column.
Each image is a list of points from different groups (e.g. A, B and C) defined by their coordinates x and y. nn represents the number of neighbours of the point within a predefined radius, e.g. r = 100.
My goal is to compare two groups of points by their minimal distances. As an example, A vs B: for each point of A I compute the distance to all points of B and report the minimal distance. Thus, if I have 100 A points, this yields 100 distances.
Using these distances I want to compare group G1 vs G2 and see if the A vs B distances are different.
My idea was to use a linear mixed model with nn as a covariate, but I am not sure whether it should be used as a fixed or a random effect.
In R (for the example I generate a dist column to simulate the distance between points; here group G1 has a lower number of neighbours compared to G2):
```
require(lme4)
require(lmerTest)
set.seed(123)
dat <-
data.frame(
image_id = c(rep(1:5,1000),rep(6:10,1000)),
image_group = c(rep("G1",1000),rep("G2",1000)),
dist = rnorm(2000,mean=50,sd=10),
nn = c( round(rnorm(1000,mean = 20,sd=5)),round(rnorm(1000,mean=30,sd=5))))
# version without using nn as covariate
mixed.lmer <- lmer(dist ~ image_group + (1|image_id), data = dat)
# version with nn as random effect
mixed.lmer2 <- lmer(dist ~ image_group + (1|image_id) + (1|nn), data = dat)
# version with nn as fixed effect
mixed.lmer3 <- lmer(dist ~ image_group + nn + (1|image_id) , data = dat)
anova(mixed.lmer)
anova(mixed.lmer2)
anova(mixed.lmer3)
```
Hereby the results
```
> anova(mixed.lmer)
Type III Analysis of Variance Table with Satterthwaite's method
Sum Sq Mean Sq NumDF DenDF F value Pr(>F)
image_group 173.41 173.41 1 9998 1.733 0.1881
> anova(mixed.lmer2)
Type III Analysis of Variance Table with Satterthwaite's method
Sum Sq Mean Sq NumDF DenDF F value Pr(>F)
image_group 90.472 90.472 1 1793.8 0.9183 0.3381
> anova(mixed.lmer3)
Type III Analysis of Variance Table with Satterthwaite's method
Sum Sq Mean Sq NumDF DenDF F value Pr(>F)
image_group 385.83 385.83 1 9997 3.8563 0.04959 *
nn 213.12 213.12 1 9997 2.1300 0.14447
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Which one would be the most suitable? In this simulated example, the results can differ substantially depending on the model, and the number of neighbours can obviously bias the minimal-distance calculation (more points in the vicinity -> more chance of having a point of group B close by).
Thank you for the help and comments.
EDIT
Following @Doctor Milt's advice I used a GAM.
Here's the plot of the model
```
library(mgcv)
dat$image_id <- factor(dat$image_id)  # bs = "re" needs a factor to give a random intercept per image
model4 <- gam(dist ~ image_group + s(nn) + s(image_id, bs = "re"), data = dat)
plot(model4)
```
[](https://i.stack.imgur.com/Y3gJs.png)
|
Fixed or random effect in linear mixed effect
|
CC BY-SA 4.0
| null |
2023-04-28T11:25:52.463
|
2023-04-28T13:43:07.923
|
2023-04-28T13:43:07.923
|
7835
|
7835
|
[
"r",
"mixed-model",
"linear-model"
] |
614386
|
1
| null | null |
0
|
8
|
I am looking into assessing the effect of a condition. I have a population of several individuals that are not affected by this condition and a smaller proportion that is. Observations of a single variable are made at several points in time. I would like to assess how early (in time) I can find a significant difference between groups. The idea is that at any time of observation, I would use all the available data to make the assessment. My current idea is to run a test for the difference between groups (let’s say t-test for sake of argument), at several points in time, using all data available until then.
But somehow this doesn’t feel right (using the same data multiple times?) and get the feeling there must be an elegant way of addressing this problem. Any comments or suggestions?
|
Assessing significance of a difference between two groups using different sample sizes
|
CC BY-SA 4.0
| null |
2023-04-28T11:29:18.480
|
2023-04-28T11:29:18.480
| null | null |
386770
|
[
"statistical-significance",
"sample-size"
] |
614387
|
2
| null |
614385
|
1
| null |
Deciding whether to include a covariate using fixed or random effects is relevant when the variable is categorical. In this situation, `nn` is numeric (an integer).
For example, here are the estimates of the random effects and their standard errors from your second model (`mixed.lmer2`).
[](https://i.stack.imgur.com/vRsRU.png)
Clearly not what we want!
As I mentioned in my comment, your argument about the number of neighbours having an effect on the minimal distance is sensible, though the relationship between the two might be nonlinear. You could use the mgcv package to fit a GAM where you include `nn` as a spline.
mgcv allows you to add random effects, so you could keep `image_id` in there. The syntax would be something like
```
library(mgcv)
dat$image_id <- factor(dat$image_id)  # convert to factor so bs = "re" gives a per-image random intercept
model4 <- gam(dist ~ image_group + s(nn) + s(image_id, bs = "re"), data = dat)
```
| null |
CC BY-SA 4.0
| null |
2023-04-28T11:52:54.350
|
2023-04-28T11:52:54.350
| null | null |
238285
| null |
614388
|
2
| null |
98987
|
0
| null |
It sounds like your reference (link is broken) is doing a hypothesis test of whether their classification accuracy is higher than $50\%$, as a naïve model should be able to get half of the classifications right (just guess one class every time).
For your problem, a naïve model should be able to predict the correct outcome $60\%$ of the time by predicting the majority class every time. Therefore, the hypothesis test would not be whether your model achieves an accuracy above $50\%$ but above $60\%$. It sounds like your link would have proposed a one-sided, one-sample proportion test for your model's proportion classified correctly (accuracy as a proportion, e.g., $0.8$ instead of $80\%$). Call your proportion classified correctly $p$.
$$
H_0: p = 0.6\\
H_a: p > 0.6
$$
A z-test might be the most straightforward way to do this. Example S.6.2 at the Penn State University link [here](https://online.stat.psu.edu/statprogram/reviews/statistical-concepts/proportions) gives an example.
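For example, a minimal R sketch of that test with made-up numbers (say the model classified $170$ of $200$ test cases correctly):
```
correct <- 170   # hypothetical number of correct classifications
total   <- 200   # hypothetical number of test cases
p_hat   <- correct / total

# one-sided z-test of H0: p = 0.6 against Ha: p > 0.6
z <- (p_hat - 0.6) / sqrt(0.6 * 0.4 / total)
pnorm(z, lower.tail = FALSE)

# or the continuity-corrected chi-squared version of the same idea
prop.test(correct, total, p = 0.6, alternative = "greater")
```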
Classification accuracy has its issues [[1](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models), [2](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp)], but if you insist on using it, comparing to the accuracy of a model that always predicts the majority category seems like the reasonable baseline. My answer [here](https://stats.stackexchange.com/a/595958/247274) gives a justification for this (and another answer to that question gives a reasonable explanation for why a different baseline might be reasonable, under the assumption that classification accuracy is not quite the right measure of model performance).
| null |
CC BY-SA 4.0
| null |
2023-04-28T11:57:15.033
|
2023-05-03T11:03:03.940
|
2023-05-03T11:03:03.940
|
247274
|
247274
| null |
614390
|
1
| null | null |
1
|
27
|
I have a dataset of photos of dogs with both black and white fur. One person assessed the images by two criteria:
Criterion 1: how much white fur the dog has:
- More than 0% and less than 25%
- More than 25% but less than 50%
- More than 50%
Criterion 2: how the white fur is distributed:
- The white fur forms small spots scattered through the body of the dog.
- The white fur is concentrated in bigger, not too sparse spots on the dog's coat.
- The white fur is mainly concentrated in one spot.
I want to test two things. The main one is:
- Are the two categories essentially different?
That might sound like a weird question, since the definitions of the categories are clearly different. But I want to test if, when brought into practice, these two assessments are redundant.
For this I am thinking of performing a McNemar-Bowker test, since I have paired ordinal data. However, I am not sure, since this scenario is slightly different from the examples I've found. So my first question is: is the McNemar-Bowker test appropriate in my case?
The second thing I want to test is if there is a relationship between the categories. My second question is: is the Wilcoxon signed-rank test appropriate for this paired data?
|
Using McNemar-Bowker test to see if two metrics are measuring the same characteristic
|
CC BY-SA 4.0
| null |
2023-04-28T12:06:27.510
|
2023-04-28T12:06:27.510
| null | null |
386771
|
[
"categorical-data",
"ordinal-data",
"mcnemar-test"
] |
614391
|
1
| null | null |
0
|
42
|
This is really basic, but I'm having trouble interpreting the `summary` output of a multiple regression, including one or more interactions with a categorical variable and I couldn't find satisfactory explanations elsewhere. Consider an example based on the `mpg` data set:
```
easypackages::libraries("tidyverse", "nlme", "car")
mod1 <- lm(hwy ~ displ + drv + displ:drv, data = mpg)
Anova(mod1, type = 3)
#> Anova Table (Type III tests)
#>
#> Response: hwy
#> Sum Sq Df F value Pr(>F)
#> (Intercept) 7211.6 1 783.660 < 2.2e-16 ***
#> displ 1096.0 1 119.102 < 2.2e-16 ***
#> drv 204.4 2 11.105 2.499e-05 ***
#> displ:drv 86.0 2 4.673 0.01026 *
#> Residuals 2098.2 228
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
This suggests a significant interaction between `displ` and `drv`, i.e. the effect of `displ` will differ, depending on the level of `drv`. Let's look at this in more detail:
```
summary(mod1)
#>
#> Call:
#> lm(formula = hwy ~ displ + drv + displ:drv, data = mpg)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.489 -1.895 -0.191 1.797 13.467
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 30.6831 1.0961 27.994 < 2e-16 ***
#> displ -2.8785 0.2638 -10.913 < 2e-16 ***
#> drvf 6.6950 1.5670 4.272 2.84e-05 ***
#> drvr -4.9034 4.1821 -1.172 0.2422
#> displ:drvf -0.7243 0.4979 -1.455 0.1471
#> displ:drvr 1.9550 0.8148 2.400 0.0172 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.034 on 228 degrees of freedom
#> Multiple R-squared: 0.746, Adjusted R-squared: 0.7405
#> F-statistic: 134 on 5 and 228 DF, p-value: < 2.2e-16
```
This tells me that the relationship between `displ:drv` for the `drv`-level `r` is significant, while this is not the case for the `drv`-level `f`. Do I understand it correctly that the p-value associated with the `drv`-level `4`, which is the reference level in this model is given in the line of the intercept, i.e. significant in this case? Let's plot the interaction:
```
mpg %>%
ggplot(aes(x = displ, y = hwy, col = drv)) +
geom_point(size = 3, alpha = 0.4) +
geom_smooth(method = "lm", lwd = 1.5)
```
[](https://i.stack.imgur.com/A06v2.png)
Hmmm, judging by the regression lines and associated scatter, I'm somewhat surprised that the results for `drv`-level `r` are significant, while those for `drv`-level `f` are not. What happens, if we add another term plus associated interaction?
```
mod2 <- lm(hwy ~ displ + drv + cyl + displ:drv + cyl:drv, data = mpg)
summary(mod2)
#>
#> Call:
#> lm(formula = hwy ~ displ + drv + cyl + displ:drv + cyl:drv, data = mpg)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.2265 -1.8421 0.0316 1.4962 13.3950
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 31.7350 1.2197 26.018 < 2e-16 ***
#> displ -1.8893 0.6279 -3.009 0.00292 **
#> drvf 8.2000 1.8886 4.342 2.14e-05 ***
#> drvr 9.6730 6.3080 1.533 0.12657
#> cyl -0.7720 0.4481 -1.723 0.08627 .
#> displ:drvf 0.3036 1.0626 0.286 0.77533
#> displ:drvr 3.3796 1.2243 2.761 0.00625 **
#> drvf:cyl -0.8073 0.7415 -1.089 0.27743
#> drvr:cyl -2.8897 1.2139 -2.381 0.01812 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.923 on 225 degrees of freedom
#> Multiple R-squared: 0.7674, Adjusted R-squared: 0.7591
#> F-statistic: 92.78 on 8 and 225 DF, p-value: < 2.2e-16
```
OK, so we now have significant interactions for some levels of both `displ:drv` and `cyl:drv`, but what happened to the results for the reference-level of `drv`, i.e. `4`? Does the p-value shown for the intercept give the significance for both interactions of `displ:drv` and `cyl:drv`, when `drv` equals `4`?
To sum this up:
- Is it correct to interpret p-values for the different levels of an interaction term with a categorical variable separately?
- Is the p-value for the reference level of the categorical variable (in this case drv-level 4) that shown in the line of the intercept of the summary output?
- Is that still the case in the presence of multiple interactions?
|
Interpreting multiple regression summary output with (multiple) interactions
|
CC BY-SA 4.0
| null |
2023-04-28T12:09:21.873
|
2023-04-28T15:42:16.650
| null | null |
251270
|
[
"r",
"mixed-model",
"multiple-regression",
"p-value",
"interaction"
] |
614392
|
2
| null |
614227
|
1
| null |
The answers to your questions depend on your problem and the distribution of your data. First, you should have an idea of how often those edge cases occur (which maybe you already do). If those cases are, let's say, <1% of the examples, then MAYBE they are not so important to consider. However, if for your specific application misclassifying those (potentially rare) edge cases results in a high cost for your business or is highly impactful for the problem you want to solve, then it could be important to approach the classification task differently. For instance, as you said, by upsampling those cases.
Would upsampling add bias? If your data is not very noisy, your model should generalize better on those edge cases when upsampling. Also, consider that your model might become better at recognizing those edge cases at the cost of reducing its capabilities on non-edge cases, but this will not always happen.
In any case, especially if edge examples are not very infrequent and they concern you, I believe it could be worth experimenting with a training set in which those edge cases are slightly upsampled.
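If you do try that, a minimal sketch of the idea in R (the `edge_case` flag and the duplication factor are made-up placeholders, not a recommendation):
```
set.seed(1)
train <- data.frame(y = rbinom(1000, 1, 0.5),
                    x = rnorm(1000),
                    edge_case = runif(1000) < 0.02)  # hypothetical flag for the rare edge cases

idx_edge <- which(train$edge_case)
train_up <- rbind(train, train[rep(idx_edge, 2), ])  # append two extra copies of each edge case
```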
| null |
CC BY-SA 4.0
| null |
2023-04-28T12:10:10.047
|
2023-04-28T12:10:10.047
| null | null |
386777
| null |
614393
|
2
| null |
614162
|
0
| null |
Yes, there are [error propagation](https://en.wikipedia.org/wiki/Variance#Linear_combinations) formulae available. This is technically just a version of the variance formula for linear combinations of random variables.
First, to clarify some language. We are trying to calculate the estimate of the mean, and the standard error on the mean (not the SD), by aggregating the means of each experiment.
The simplest method is assuming your $n$ experiments are independent. Let's say we have experimental means $\bar Y_1, \cdots, \bar Y_n$, each deriving from $X$ observations. Then the variance of the overall mean $\bar Y = \frac{1}{n}\sum_{i=1}^{n} \bar Y_i$ is
$$
\text{var}(\bar Y) = \frac{1}{n^2}\sum_{i=1}^n \text{var}(\bar Y_i)
$$
where $\text{var}(\bar Y_i) = s^2_i/X$.
The standard error of $\bar Y$ is the square root of the variance $\sqrt{\text{var}(\bar Y)}$.
Note - taking the overall mean $\bar Y = \frac{1}{n}\sum_{i=1}^{n} \bar Y_i$ is only valid/useful, in my opinion, if you have the same number of observations in each experiment. Which you do in this case.
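A minimal R sketch of this calculation, with made-up per-experiment summaries:
```
ybar <- c(10.2, 9.8, 10.5)  # hypothetical means of the n experiments
s    <- c(1.1, 0.9, 1.3)    # hypothetical SDs within each experiment
X    <- 20                  # observations per experiment
n    <- length(ybar)

var_each <- s^2 / X                    # variance of each experimental mean
ybar_all <- mean(ybar)                 # overall mean
se_all   <- sqrt(sum(var_each) / n^2)  # standard error of the overall mean
```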
| null |
CC BY-SA 4.0
| null |
2023-04-28T12:12:29.483
|
2023-04-28T12:12:29.483
| null | null |
369002
| null |
614394
|
2
| null |
518981
|
-1
| null |
The ELO ratings apply to chess.
If player 1 has the rating R1 and player 2 has the rating R2 then the probability of player 1 beating player 2 is as you suggest above.
They also apply to soccer, despite soccer matches frequently ending in a draw.
You can use archives of past soccer results in which the respective elos before the match are included to work out a formula for P(home win), P(draw), P(away win).
In soccer the team playing at home is usually awarded a bonus.
I have a ready reckoner somewhere in Java.
But in chess a draw is also a possibility, so that too can be worked out.
However what you are really interested in is multiplayer events.
Such as discus throwing.
A discus throwing event between 3 athletes can be arranged without loss of generality by making it into a series of 2 person contests.
So if the three guys are A,B and C, we can for example ask C to wait and then face the winner of the contest between A and B.
So with this rule you can work out the probabilities of A, B and C for the three-way event.
Then, as there are 3 possible orderings, you take the average over them.
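A rough R sketch of that three-athlete procedure, using the usual Elo expected-score formula (the formula and the ratings below are my own illustrative assumptions):
```
p_beat <- function(r1, r2) 1 / (1 + 10^((r2 - r1) / 400))  # usual Elo expected score

# one ordering: 'waiter' sits out, then faces the winner of the other two
three_way <- function(r, waiter) {
  others <- setdiff(1:3, waiter)
  i <- others[1]; j <- others[2]
  p <- numeric(3)
  p[i] <- p_beat(r[i], r[j]) * p_beat(r[i], r[waiter])
  p[j] <- p_beat(r[j], r[i]) * p_beat(r[j], r[waiter])
  p[waiter] <- p_beat(r[i], r[j]) * p_beat(r[waiter], r[i]) +
               p_beat(r[j], r[i]) * p_beat(r[waiter], r[j])
  p
}

r <- c(1600, 1500, 1450)                            # hypothetical ratings for A, B, C
rowMeans(sapply(1:3, function(w) three_way(r, w)))  # average over the three orderings
```
Each column of the `sapply()` result sums to one, so the averaged vector is a proper set of win probabilities for A, B and C under this pairing rule.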
What happens when there are N contestants, though?
There are N! orderings, and N! can get big.
Is there some simplification?
| null |
CC BY-SA 4.0
| null |
2023-04-28T12:19:57.787
|
2023-04-28T12:19:57.787
| null | null |
253133
| null |
614395
|
1
| null | null |
0
|
11
|
I am estimating the effect of certain bank characteristics on the bank lending in monetary policy in the euro area. Therefore I am looking at different bank characteristics (size, liquidity and capitalization), the evolution of net loans and some control variables like GDP and inflation for the euro zone. This is my regression equation:
[](https://i.stack.imgur.com/2cVbz.png)
I am using Panel data since I have quarterly data for every bank I am looking at. My question is: how would you start estimating? So far, I have used panel data regression in R with pooled and fixed effects models, like the following example:
```
## Fixed effects
# Factor size
MOD_FE_9 <- plm(x ~ x_1 + x_2 + x_3 + x_4 + MP_1 + MP_2 + MP_3 + MP_4 + M_S_1 +
M_S_2 + M_S_3 + M_S_4 + size + HICP_1 + HICP_2 + HICP_3 +
HICP_4 + GDP_1 + GDP_2 + GDP_3 + GDP_4, data = df,
model = "within", effect = "twoways")
summary(MOD_FE_9)
```
At first I even tried the GMM estimation technique - however, this was more than just complicated. Am I on the right path with the plm function, or would you do anything differently? Is this how you analyse panel data? And am I right that for the GMM method I have to create 21 moment functions, since I am estimating 21 parameters? Furthermore, I decided to go for the fixed effects model because the F-test always shows a small p-value.
|
Estimation of panel data
|
CC BY-SA 4.0
| null |
2023-04-28T12:41:13.760
|
2023-04-28T12:41:13.760
| null | null |
386780
|
[
"regression",
"panel-data",
"generalized-moments",
"plm"
] |
614396
|
2
| null |
392387
|
0
| null |
A two-sample t-test is an option here and is likely to perform decently. While the t-test does have assumptions about the two groups each having normal distributions, the test is rather robust to violations of that assumed normality.
However, you know your variables to have binomial (Bernoulli, even) distributions. Consequently, you can use a test specific to such distributions, and you do not have to rely on the t-test being rather robust.
A proportion chi-squared test is one of the popular ways to test this. I like the G-test but not enough to get into a heated debate about G vs chi-squared.
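For example, a minimal R sketch with made-up counts (number of successes out of the number of trials in each group):
```
successes <- c(34, 21)   # hypothetical counts of "successes" in the two groups
trials    <- c(100, 95)  # hypothetical group sizes

prop.test(successes, trials)  # chi-squared test of equal proportions
```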
| null |
CC BY-SA 4.0
| null |
2023-04-28T12:45:46.823
|
2023-04-28T12:45:46.823
| null | null |
247274
| null |
614397
|
1
| null | null |
1
|
19
|
I have some percentages calculated as follows:
8/51 = 15.69
12/91 = 13.19
12/167 = 7.19
Is there a way to calculate a mean percentage taking into account that the denominators differ?
Thank you in advance
|
Calculate a mean percentage
|
CC BY-SA 4.0
| null |
2023-04-28T12:58:04.927
|
2023-04-28T12:58:04.927
| null | null |
14270
|
[
"descriptive-statistics",
"percentage"
] |
614398
|
1
| null | null |
1
|
18
|
I understand the (IMHO simple) concept of using weights to fit a sample to another (real) population. Calculating the weight factors is not hard. Regarding my MWE, there may be a more elegant R way, but it works in my case.
But what I don't get is how to apply the calculated weight factors to my real data.
In the example below there are 50 people in the sample: 36% female and 64% male.
The "real" population is 60% female and 40% male. So weight factors I calculated are
```
female male
1.666667 0.625000
```
In this MWE I have two types of result variables. `foo` is a categorical variable with `TRUE` and `FALSE`. How do I apply the weight factors to this? Imagine a survey question that people can answer with "Yes" or "No".
The second, more complicated (for me), is a kind of Likert scale from 0 to 4. How do I weight there? Imagine this as a survey question like "How do you feel on a scale from 0 (absolutely bad) to 4 (awesome)?".
```
set.seed(0)
k = 50
df <- data.frame(
gender = sample(c("female", "male", "male"), k, replace = TRUE),
foo = sample(c(TRUE, FALSE), k, replace = TRUE),
bar = sample(c(0, 1, 2, 3, 4), k, replace = TRUE)
)
# Frequency
# tab = xtabs(~ gender + foo, data=df)
tab = table(df$gender)
print(tab)
# Fraction
frac = addmargins(prop.table(tab))
print(frac)
# Real population gender fraction
real_female_frac = 0.6
real_male_frac = 0.4
# Weights
weight_female = real_female_frac / frac["female"]
weight_male = real_male_frac / frac["male"]
print(c(weight_female, weight_male))
print(table(df$foo))
print(table(df$bar))
```
# Example with foo
```
> table(df$gender, df$foo)
FALSE TRUE
female 9 8
male 20 13
```
This is the frequency in the sample. Now do I have to multiply the `female` row by `1.666667` and the `male` row by `0.625`?
```
> t["female",] * weight_female
FALSE TRUE
15.88235 14.11765
> t["male",] * weight_male
FALSE TRUE
12.121212 7.878788
```
# Example with bar
```
> addmargins(table(df$gender, df$bar), 1)
0 1 2 3 4
female 3 5 6 1 3
male 7 6 8 7 4
Sum 10 11 14 8 7
```
|
Where do weight factors come into play with real data (in R)?
|
CC BY-SA 4.0
| null |
2023-04-28T13:11:12.107
|
2023-05-02T10:15:30.800
|
2023-05-02T10:15:30.800
|
126967
|
126967
|
[
"r",
"weights",
"survey-weights"
] |
614399
|
1
| null | null |
0
|
26
|
I am having difficulty understanding the difference between interaction terms and a test for interaction. I am using logistic regression models, so maybe the concept is different compared to linear regression?
As far as I understood from [this](https://www.theanalysisfactor.com/interpreting-interactions-in-regression/#:%7E:text=Adding%20interaction%20terms%20to%20a%20regression%20model%20can,is%20located%20in%20partial%20or%20full%20sun%20%28Sun%29.) explanation, interaction terms let you know how the relationship between two variables changes when a third variable assumes different values.
For example: how the relationship between death and age changes according to sex (e.g. if sex=male or sex=female, how the OR changes).
But from the interaction terms I can also know if there is a significant difference between the changes due to sex; is this correct? [This](https://stats.stackexchange.com/questions/331244/how-to-test-if-an-interaction-is-significant-interaction-terms-or-model-compari) is the source from which I understood this.
So what, then, is the point of performing a test for interaction? For instance, I run a logistic regression in males to obtain the OR for death and age, then I run the logistic regression in females to obtain the OR, and then I compare the two ORs using a test for interaction.
What is the correct method to conclude that two subgroups (e.g. males and females) have a significantly different relationship between age and death? What are the differences between interaction terms and a test for interaction?
|
Difference between interaction terms and test for interaction
|
CC BY-SA 4.0
| null |
2023-04-28T13:17:30.057
|
2023-04-28T16:20:59.477
| null | null |
384938
|
[
"hypothesis-testing",
"statistical-significance",
"interaction"
] |
614400
|
2
| null |
614329
|
1
| null |
The failure to get coefficients for the "turquoise"
module itself shows the inherent problem in your Cox model. You are double-counting the gene-expression values that go into calculating each case's module score.
A "module" of the type you describe is a linear combination of a set of gene-expression values. That linear combination might come, for example, from a LASSO model based on expression of all genes. For each case you then get a "module" score by taking the sum of the expression values for the genes retained by the LASSO model, with each gene-expression value weighted by its corresponding (penalized) LASSO regression coefficient.
What you have done for your survival model is to use not only the module score but also the individual gene-expression values that go into the module. The module score is a linear combination of the individual gene-expression values, so including both the individual values and their combination score leads to a linearly dependent set of predictors. R will let you try to set up such a regression, but it will omit coefficients for those that are linearly dependent on predictors that appear earlier in the formula.
That's why you aren't getting a coefficient for the module.
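Here is a tiny illustration of that aliasing, using `lm()` for brevity (`coxph()` drops a linearly dependent column in the same way and reports `NA` for it):
```
set.seed(1)
gene1 <- rnorm(60)
gene2 <- rnorm(60)
module <- 0.7 * gene1 - 0.4 * gene2  # "module" score: a linear combination of the genes
y <- gene1 + gene2 + rnorm(60)

coef(lm(y ~ gene1 + gene2 + module)) # the coefficient for 'module' comes back NA
```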
What you are doing with the Cox model is double-counting the set of selected genes in a way that undercuts the whole point of constructing the module in the first place. If you want to proceed with this type of analysis, then for the Cox model use only the module score and omit all of the individual gene-expression values that go into creating it.
In terms of your problems with `IQR()`, I'm not fluent in the "tidyverse" and coding-specific matters are off-topic on this site.
| null |
CC BY-SA 4.0
| null |
2023-04-28T13:29:06.857
|
2023-04-28T13:29:06.857
| null | null |
28500
| null |
614401
|
1
| null | null |
0
|
18
|
I want to apply a filter to my data that assumes normality. The data come from different sensors measuring the same quantity over the same period of time.
Using a simple histogram plot, it seems that the data of most of the sensors follow a lognormal distribution, so my idea was to apply the following process:
- fit a lognormal distribution to the data to estimate the shift of the distribution (as I asked in a previous post).
- shift the data according to the estimated parameter, and apply the logarithm to transform the data to normal
- apply the filter to data that should now better follow a normal distribution.
What I would like to do is:
- to have a way to quantify and compare the "improvement" of normality of the data before and after the transformation, to see if the transformation is significant (at least in some cases).
- test for log-normality of the data to see whether to apply this transformation (could I transform and test for normality?).
- test for log-normality at different time aggregations/time intervals and compare them
Looking at a [previous question](https://stats.stackexchange.com/questions/132652/how-to-determine-which-distribution-fits-my-data-best), I identified three different statistical tests, but I'm not sure whether these can be meaningfully applied to my case and used for making these kinds of comparisons:
- Kolmogorov–Smirnov test
- Lilliefors test (normality testing without assuming distribution parameters)
- Akaike information criterion
|
Statistical tests for normality and log-normality testing?
|
CC BY-SA 4.0
| null |
2023-04-28T13:41:58.803
|
2023-04-28T13:41:58.803
| null | null |
379568
|
[
"goodness-of-fit",
"normality-assumption",
"lognormal-distribution",
"kolmogorov-smirnov-test"
] |
614402
|
1
| null | null |
0
|
18
|
I'm fairly new to univariate time series forecasting and deep learning.
My task is to forecast energy consumption values.
My data looks like:
[](https://i.stack.imgur.com/kPkyL.png)
I'm using a simple RNN, because I have tested linear regression, XGBoost, an LSTM and an RNN, and the RNN looks the best.
I'm using a min-max scaler to scale my data, and after prediction my actual-vs-predicted chart and training-loss chart look like below:
[](https://i.stack.imgur.com/9YRdY.png)
[](https://i.stack.imgur.com/6Xa9f.png)
Epoch is 1000.
But there is a big problem with the r2 metric.
It's -14.617
By the way my data is stationary. I've checked with ADF test.
When I check the chart it looks very good, but the r2 is very bad.
I couldn't find an understandable explanation for my problem.
Can anyone help me understand?
Thanks.
|
Negative RSquare Value but Good View on Chart for RNN Time Series
|
CC BY-SA 4.0
| null |
2023-04-28T13:44:07.707
|
2023-04-28T14:08:17.607
|
2023-04-28T14:08:17.607
|
53690
|
189959
|
[
"time-series",
"forecasting",
"r-squared",
"recurrent-neural-network",
"univariate"
] |
614404
|
1
| null | null |
1
|
50
|
I am conducting regression analysis and my dependent variable is derived from dividing the number of candidates standing in an election by the number of vacancies. This gives me a discrete dependent numeric variable, which is throwing up issues in the diagnostics of my linear regression model.
Are there alternative types of regression I could use - or is it generally fine to use linear regression?
|cand/vac |count |
|--------|-----|
|1 |1098 |
|1.3333333 |159 |
|1.5 |659 |
|1.6666667 |316 |
|2 |7552 |
|2.3333333 |952 |
|2.5 |1933 |
|2.6666667 |902 |
|3 |14244 |
|3.25 |3 |
|3.3333333 |1291 |
|3.5 |1416 |
|3.6666667 |847 |
|4 |15528 |
|4.3333333 |458 |
|4.5 |445 |
|4.6666667 |222 |
|5 |7777 |
|5.3333333 |31 |
|5.5 |51 |
|5.6666667 |11 |
|6 |2048 |
|6.3333333 |2 |
|6.5 |3 |
|7 |336 |
|7.5 |1 |
|8 |35 |
|9 |3 |
|
Is linear regression suitable for discrete numeric dependent variable when responses are bunched?
|
CC BY-SA 4.0
| null |
2023-04-28T13:57:33.417
|
2023-04-28T19:42:18.597
| null | null |
386784
|
[
"r",
"regression"
] |
614405
|
1
| null | null |
3
|
32
|
From what I already know about the Johansen test, it tests the rank of the VAR matrix (in error correction form) through steps testing whether every eigenvalue is significantly different from 0 (first the highest eigenvalue, then the second highest one and so forth), so the number of eigenvalues statistically different from 0 would give an estimation of the rank of the matrix.
I take this as if the Johansen test was establishing an implied equivalence: n. of eigenvalues distinct from 0 = rank of the matrix, which would imply at the same time that the number of variables minus the algebraic multiplicity of 0 (number of eigenvalues not different from 0) is equal to the rank.
However, the number of eigenvalues distinct from 0 could be lower than the rank, since the rank-nullity theorem establishes a direct relationship between the nullity (geometric multiplicity of 0) and the rank, but not between the algebraic multiplicity of 0 (number of times 0 appears as an eigenvalue) and the rank, because the algebraic multiplicity of 0 can be higher than the nullity.
Why is then the Johansen test based on the algebraic multiplicity of 0 (number of eigenvalues not different from 0) to estimate the rank of the matrix? Why not doing a test based on the geometric multiplicity (nullity) instead?
and not less important: wouldn't the fact that the Johansen test estimates the rank of the matrix through the algebraic multiplicity of 0 make it biased to underestimate the rank of the VAR matrix (expressed in error correction form) since the algebraic multiplicity can be higher than the nullity?
|
Johansen test: why testing for the algebraic multiplicity of 0 and not for the nullity?
|
CC BY-SA 4.0
| null |
2023-04-28T14:19:22.787
|
2023-04-28T14:29:03.217
|
2023-04-28T14:29:03.217
|
272203
|
272203
|
[
"time-series",
"matrix",
"linear-algebra",
"cointegration",
"eigenvalues"
] |
614406
|
2
| null |
410297
|
1
| null |
We have published an R package FrequentistSSD [https://cran.r-project.org/web/packages/frequentistSSD/index.html](https://cran.r-project.org/web/packages/frequentistSSD/index.html) and an accompanying paper: Wu, Jianrong, Haitao Pan, and Chia‐Wei Hsu. "Two‐stage screened selection designs for randomized phase II trials with time‐to‐event endpoints." Biometrical Journal 64, no. 7 (2022): 1207-1218.
| null |
CC BY-SA 4.0
| null |
2023-04-28T14:28:47.943
|
2023-04-28T14:28:47.943
| null | null |
251506
| null |
614407
|
2
| null |
614198
|
3
| null |
First, as you are using `survfit()` to fit your `lung1` data, your simulations aren't using any information about a Weibull fit to those data. Second, the "standard" Weibull parameterization used by [Wikipedia](https://en.wikipedia.org/wiki/Weibull_distribution) and by `dweibull()` in R differs from that used by `survreg()` or `flexsurvreg()` as you try in [another question](https://stats.stackexchange.com/q/614304/28500), providing a good deal of potential confusion. Third, if you want to get smooth estimates over time, then you have to ask for them. It seems that your simulations here and in related questions ask for some type of point estimate or random sample from the distribution rather than a smooth curve.
Random samples from the event distribution are OK and are used for things like power analysis in complex designs. For your application you would need, however, a lot of random samples from each set of new random Weibull parameters to put together to get the estimated survival curves you want. That's unnecessary, as with a parametric fit (unlike the time-series estimates you've used in other work) there is a simple closed form for the survival curve, providing the basis for the continuous predictions that you want.
In the "standard" parameterization used by Wikipedia and by `dweibull()` in R, the Weibull survival function is:
$$ S(x) = \exp\left( -\left(\frac{x}{\lambda} \right)^k \right),$$
where $\lambda$ is the standard "scale" and $k$ is the standard "shape."
Neither `survreg()` nor `flexsurvreg()` (which calls `survreg()` for this type of model) fits the model based on that parameterization. Although `flexsurvreg()` can report coefficients and standard errors in that parameterization, the internal storage that you access with functions like `coef()` and `vcov()` uses a different parameterization.
To get the "standard" scale, you need to exponentiate the linear predictor returned by a fit based on `survreg()`. If there are no covariates, then that's just `exp(Intercept)`.
To get the "standard" shape, you need to take the inverse of the `survreg_scale`. The coefficient stored by `survreg()` or `flexsurvreg` is the log of `survreg_scale`, so you can get the "standard shape" via `exp(-log(survreg_scale))`.
Further complicating things is that `survreg()`, unlike `flexsurvreg()`, doesn't return `log(scale)` via the `coef()` function. You can, however, get that along with the other coefficients by asking for `model$icoef`, which returns all coefficients in the same order that they appear in `vcov()`.
The following function returns the survival curve for a Weibull fit from `survreg()`. The `survregCoefs` argument should be a vector with the first component the linear predictor and the second the `log(scale)` from `survreg()`.
```
weibCurve <- function(time, survregCoefs) {
exp(-(time/exp(survregCoefs[1]))^exp(-survregCoefs[2]))
}
```
Fit a Weibull distribution to the data and compare the fit to the raw data:
```
## fit Weibull
fit1 <- survreg(Surv(time1, status1) ~ 1, data = lung1)
## plot raw data as censored
plot(survfit(Surv(time1, status1) ~ 1, data = lung1),
xlim = c(0, 1000), ylim = c(0, 1), bty = "n",
xlab = "Time", ylab = "Fraction surviving")
## overlay Weibull fit
curve(weibCurve(x, fit1$icoef), from = 0, to = 1000, add = TRUE, col = "red")
```
Then you can sample from the distribution of coefficient estimates and repeat the following as frequently as you like to see the variability in estimates (assuming that the Weibull model is correct for the data). I set a seed for reproducibility.
```
set.seed(2423)
## repeat the following as needed to add randomized predictions for late times.
## I did both 5 times to get the posted plot.
newCoef <- MASS::mvrnorm(n = 1, fit1$icoef, vcov(fit1))
curve(weibCurve(x, newCoef), from = 500, to = 1000, add = TRUE, col = "blue", lty = 2)
```
That leads to the following plot.
[](https://i.stack.imgur.com/tSgwo.png)
Another approach to getting the variability of projections into the future from the model is to get a distribution of "remaining useful life" values for multiple random samples of Weibull coefficient values, conditional upon survival to your last observation time (500 here). [This page](https://stats.stackexchange.com/q/519773/28500) shows the formula.
| null |
CC BY-SA 4.0
| null |
2023-04-28T14:50:53.360
|
2023-04-28T14:50:53.360
| null | null |
28500
| null |
614408
|
2
| null |
614346
|
2
| null |
>
The interaction term is where I lose the script. My understanding is that groupM:timeB is compared against the baseline, so would that mean Group M Time B against Group C Time A?
As you discovered, that is NOT what the interaction term represents. With this type of data coding, when neither predictor is at its reference level, it's the extra difference beyond what you would have estimated based simply on the values of lower-level coefficients. That interpretation has the advantage of extending to even higher-level interactions.
Trying to interpret the coefficients directly from a model summary that includes interactions can lead to lots of difficulties. Besides the one that you found, simply changing the reference level of one predictor can change the estimate of the coefficient for an interacting predictor and thus the apparent "significance" of what some call that interacting predictor's "main effect" coefficient.
Your approach, proceeding to post-modeling tools like those in `emmeans`, leads to much less confusion. That was a good choice.
| null |
CC BY-SA 4.0
| null |
2023-04-28T15:19:31.913
|
2023-04-28T15:19:31.913
| null | null |
28500
| null |
614409
|
1
| null | null |
0
|
6
|
I'm trying to run a logit model for transport mode choice between 3 modes. Like other transport choice models, I have case-specific variables like the income of the passenger and alternative-specific variables like travel time. For each respondent I present 6 choice sets. I've got 2 further variables, namely trip purpose (leisure, business) and party size (alone or with a group). How should I enter these variables into the model? Can these variables be treated as case-specific in a multinomial logit?
Thank you
|
Is trip purpose a case-specific variable in a multinomial logit for transport?
|
CC BY-SA 4.0
| null |
2023-04-28T15:26:14.723
|
2023-04-28T15:26:14.723
| null | null |
386790
|
[
"logistic",
"multinomial-logit",
"transportation"
] |
614411
|
2
| null |
614391
|
2
| null |
Trying to interpret individual coefficients reported by the initial summary of a regression model with interactions tends to lead to confusion. Several similar questions appear on this site each week. My general advice: "Don't try this at home."
With interactions, each higher-level coefficient is a difference beyond what you would predict based on the lower-level coefficients. The value of a lower-level coefficient and thus its "p-value" and the "statistical significance" of its difference from 0 depends on how its interacting predictors are coded. Yes, you can work those details out. Doing it once or twice does help fix the issues in your mind.
For practical applications, it's better to use well-vetted post-modeling tools rather than trying to figure out the individual coefficients from the model summary. The `Anova()` function in the R [car package](https://cran.r-project.org/package=car) provides estimates of statistical significance that consider all levels of a predictor and its interactions, in a way that isn't limited by an imbalanced design. The R [emmeans package](https://cran.r-project.org/package=emmeans) provides useful tools for evaluating any combinations of predictor values that you want.
| null |
CC BY-SA 4.0
| null |
2023-04-28T15:42:16.650
|
2023-04-28T15:42:16.650
| null | null |
28500
| null |
614414
|
2
| null |
614324
|
3
| null |
If it is too difficult to scale the black line you can scale the red one and the horizontal line instead. In the following code you get overlap:
```
n = 10^7
tibble(cauchy_real = rcauchy(n)) %>%
filter(cauchy_real >= -10, cauchy_real <= 10) %>%
ggplot() +
geom_density(aes(x = cauchy_real)) +
geom_function(fun = function(x){dcauchy(x) / (pcauchy(10) - pcauchy(-10))}, colour = 'red') +
geom_hline(yintercept = 1/pi / (pcauchy(10) - pcauchy(-10)))
```
[](https://i.stack.imgur.com/5FY4p.png)
| null |
CC BY-SA 4.0
| null |
2023-04-28T16:11:56.983
|
2023-04-28T16:11:56.983
| null | null |
257886
| null |
614415
|
1
| null | null |
0
|
10
|
I've got a linear regression SAS output, so I know the standard error from there. I need to work out the 95% confidence interval for the change in y as x increases by 1, which is basically the confidence interval of the slope, I think. According to some research I have done, I need to work out the critical t value first. I have a sample size of 31, so df = 31 - 2 = 29, and I found the critical value for 95% to be 2.045. So I took the parameter estimate of x +/- the critical t value (2.045) * the standard error of x, with the parameter estimate and standard error taken from the aforementioned SAS output for the regression. I am just not sure if I have done this correctly, or whether I should use a critical z value instead, and if so, how?
The second scenario is a chi-squared test: 3 treatments, with yes or no recorded for disease. I worked out the expected values this way: (total number in treatment 1 * total number diseased) / total sample size, for the expected number of people who have the disease with treatment 1, and so on. Please comment on whether I got this right. I then worked out the chi-squared statistic. Next, I need to work out the 95% confidence interval for the difference in the rate of disease incidence (yes-disease) between participants of treatments 2 and 3. I am just completely at a loss here. I found this potentially useful formula: CI = sample mean +/- z*(s / square root of n), where s is the sample standard deviation, but I have no idea what the sample mean would be in a chi-squared test or how to find s or z.
|
How to work out confidence intervals for the following two scenarios? (please see description below)
|
CC BY-SA 4.0
| null |
2023-04-28T16:18:33.550
|
2023-04-28T16:18:33.550
| null | null |
386795
|
[
"regression",
"confidence-interval",
"chi-squared-test",
"chi-squared-distribution"
] |
614416
|
2
| null |
614399
|
0
| null |
>
For instance, I run a logistic regression in males to obtain the OR for death and age and then I run the logistic regression for females to obtain the OR, and then I compare the two ORs using test for interaction.
That is not the "test for interaction" that the [page you link](https://stats.stackexchange.com/q/331244/28500) describes. The test there is based on two nested models, not two separate models for males and females. One model includes predictors individually (e.g., `~ age + sex`) and the other including the interaction between them (often coded `~ age*sex` in R, which expands to `~ age + sex + age:sex`, with `age:sex` the interaction term). The two nested models are then compared by a likelihood-ratio test.
The "signficance" of the coefficients reported for a single model with interactions is typically evaluated instead by a t-test or Wald test. There's nothing wrong in principle with that, as in the limit of large sample sizes the results converge to those of the likelihood-ratio test.
There nevertheless can be some confusion about lower-level coefficients in such models, even with lower-level interaction coefficients when the model includes higher-level interactions, as their values depend on the coding of interacting predictors. The `Anova()` function in the R [car package](https://cran.r-project.org/package=car) can test all coefficients associated with a predictor in a joint t-test or Wald test. That overcomes those difficulties in interpreting the individual coefficients.
So the difference between the properly constructed "interaction test" based on two models and the properly constructed test of interaction coefficients in a single model is the difference between the likelihood-ratio test used in the first approach and the t-test or Wald test used in the second approach. [This UCLA web page](https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faqhow-are-the-likelihood-ratio-wald-and-lagrange-multiplier-score-tests-different-andor-similar/) explains how the tests typically reported for models fitted by maximum-likelihood methods differ.
| null |
CC BY-SA 4.0
| null |
2023-04-28T16:20:59.477
|
2023-04-28T16:20:59.477
| null | null |
28500
| null |
614417
|
1
| null | null |
0
|
23
|
In the theoretical framework for my model, there is concern about reverse causality between Y and my main X variable. Additionally, I believe there's an information lag effect where economic agents are not responding to a situation contemporaneously, they are using past information to inform their current decisions. Therefore, any predicted values for Y would be reliant on the observed values of X from the past period. Lagging this X variable would be dealing with the reverse causality concern and incorporating the information lag effect hypothesis discussed in some of the literature.
However, my Y and X have a non-linear relationship. To address this, I am using a quadratic fit for my main model and, as a supplementary note, producing regression spline results to better discuss coefficients on X.
In this case, to perform IV regression would I have to lag both the linear and quadratic variables?
Would it be better practice to only perform IV with my regression splines?
|
Lagging a quadratic independent variable
|
CC BY-SA 4.0
| null |
2023-04-28T16:22:10.327
|
2023-04-28T16:22:10.327
| null | null |
386581
|
[
"multivariate-analysis",
"panel-data",
"nonlinear-regression",
"instrumental-variables",
"lags"
] |
614418
|
1
| null | null |
0
|
14
|
If I have a training set of 10k samples (balanced -- two classes) and 20k test samples (imbalanced -- 5% versus 95%), could I compare their ROCs for overfitting/underfitting? I guess not, but I need more clarity. The whole "ROC of training should be higher than or equal to that of the test" only holds if we have more training data compared to test data, right?
-- I know that we prefer to have a bigger training set, so it would be better to move some test samples into training, but since the data is imbalanced and I want balanced training data, I had to set it up this way.
|
training vs test data sizes
|
CC BY-SA 4.0
| null |
2023-04-28T16:25:53.497
|
2023-04-28T16:25:53.497
| null | null |
335023
|
[
"logistic",
"predictive-models",
"train-test-split"
] |
614420
|
1
|
614444
| null |
0
|
31
|
I am reading "Confidence Intervals and Prediction Intervals for Feed-Forward Neural Networks" by Richard Dybowski. In this paper, an ensemble of neural networks are trained on bootstrapped data samples so that a confidence interval can be estimated. I will try to briefly summarize the method below:
Assume that our training data is modeled by the stochastic function
$$ y = \mu_y(\mathbf{x}) + \epsilon, \quad y\in\mathbb{R}, \mathbf{x}\in\mathbb{R}^m$$
where $\mu_y$ is the unknown function to be approximated.
We have access to a finite sample of $N$ training pairs $\mathbf{S} = \{(\mathbf{x}^{(1)}, y^{(1)}), \dots, (\mathbf{x}^{(N)}, y^{(N)})\}$ only.
We generate $B$ bootstrapped samples $\{ (\mathbf{x}^{(*b, 1)}, y^{(*b, 1)}), \dots, (\mathbf{x}^{(*b, N)}, y^{(*b, N)}) \}_{b=1}^B$, and train $B$ neural networks $\{\hat{\mu}_y(\mathbf{x}; \hat{\mathbf{w}}^{(*b)})\}_{b=1}^B$.
Then the bootstrap estimate of $\mu_y(\mathbf{x})$ is given by the mean provided by the ensemble of neural networks
$$ \hat{\mu}_{y, \text{boot}}(\mathbf{x}) = \frac{1}{B} \sum_{b=1}^B \hat{\mu_y}(\mathbf{x}; \hat{\mathbf{w}}^{(*b)} )$$
and the bootstrap estimate of the standard error of $\hat{\mu}_y(\mathbf{x}; \hat{\mathbf{w}})$ is given by
$$\hat{\mathrm{SE}}_\text{boot} (\hat{\mu}_y (\mathbf{x}; \cdot)) = \sqrt{ \frac{1}{B-1} \sum_{b=1}^B [ \hat{\mu}_y ( \mathbf{x}; \hat{\mathbf{w}}^{(*b)} ) - \hat{\mu}_{y, \text{boot}}(\mathbf{x})]^2 }$$
Assuming a normal distribution for $\hat{\mu}_y ( \mathbf{x}; \hat{\mathbf{w}} )$ over the space of all possible $\hat{\mathbf{w}}$, we have
$$\hat{\mu}_{y, \text{boot}}(\mathbf{x}) \pm t_{.025}\hat{\mathrm{SE}}_\text{boot}(\hat{\mu}_y(\mathbf{x}; \cdot))$$
as the 95% confidence interval for $\mu_y(\mathbf{x})$.
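To make the procedure concrete, here is a minimal R sketch that uses a cheap polynomial fit as a stand-in for each network, so the bootstrap loop runs quickly; the structure is the same if each fit is a neural network:
```
set.seed(42)
n <- 200; B <- 50
x <- runif(n, 0, 2 * pi)
y <- sin(x) + rnorm(n, sd = 0.3)
xg <- data.frame(x = seq(0, 2 * pi, length.out = 100))  # query points

preds <- replicate(B, {
  idx <- sample(n, replace = TRUE)  # bootstrap resample of the training pairs
  fit <- lm(y ~ poly(x, 5), data = data.frame(x = x, y = y)[idx, ])  # stand-in for one network
  predict(fit, newdata = xg)        # mu_hat(x; w^(*b)) on the grid
})

mu_boot <- rowMeans(preds)          # bootstrap estimate of mu_y(x)
se_boot <- apply(preds, 1, sd)      # bootstrap SE of mu_hat(x; .)
ci <- cbind(lower = mu_boot - qt(0.975, B - 1) * se_boot,
            upper = mu_boot + qt(0.975, B - 1) * se_boot)
```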
1. How would I compute the 95% confidence interval without assuming a normal distribution for $\hat{\mu}_y ( \mathbf{x}; \hat{\mathbf{w}} )$?
2. How would I generalise this method to a vector output $\mathbf{y} \in \mathbb{R}^m$?
|
How can I generate bootstrap confidence intervals for a multivariate regression network?
|
CC BY-SA 4.0
| null |
2023-04-28T16:55:55.090
|
2023-04-28T22:53:39.977
|
2023-04-28T20:29:50.887
|
261121
|
261121
|
[
"neural-networks",
"confidence-interval",
"bootstrap",
"prediction-interval",
"multivariate-regression"
] |
614421
|
1
| null | null |
1
|
37
|
Imagine I have a 3x3 matrix of the type:
```
[0.22 0.15 0.99
0.28 0.42 0.51
0.16 0.76 0.12]
```
The numbers in the matrix are the result of a complex model; please simply take them as given.
I would like to test the hypothesis that the matrix values are not "uniformly distributed" over the matrix. By uniformly "distributed" over the matrix I mean we would have a matrix with N entries where each entry is 1/N, for example
```
[0.11 0.11 0.11
0.11 0.11 0.11
0.11 0.11 0.11]
```
Of course, the first matrix is not "uniform". My question is: is there a statistical test to carry out to formally test the hypothesis that each matrix entry does not deviate significantly from 1/N? The ideal statistical test would give me, for example, a low p-value for an entry of, let's say, 0.99, loosely speaking indicating low probability that the null hypothesis is true as 0.99 is very far from 0.11 among all possible realizations in [0,1].
I apologize in advance if I am not too rigorous, but I cannot express this concept better than this. Any kind of hint is very appreciated.
|
test for deviation from uniform distribution
|
CC BY-SA 4.0
| null |
2023-04-28T16:59:42.663
|
2023-04-29T08:46:54.790
|
2023-04-29T08:46:54.790
|
301631
|
301631
|
[
"hypothesis-testing",
"statistical-significance",
"uniform-distribution"
] |
614422
|
2
| null |
49270
|
0
| null |
The algorithm implemented in mice.impute.rf is described in Appendix A.1 of Doove, L.L., van Buuren, S., Dusseldorp, E. (2014), Recursive partitioning for missing data imputation in the presence of interaction Effects. Computational Statistics & Data Analysis, 72, 92-104. Available here:
[https://stefvanbuuren.name/publications/2014%20Recursive%20partitioning%20-%20CSDA.pdf](https://stefvanbuuren.name/publications/2014%20Recursive%20partitioning%20-%20CSDA.pdf)
In short, for each tree in the ensemble, the algorithm determines the leaf from which a missing observation would be predicted; then the elements of all these resulting leaves are pooled together and the imputed value is randomly selected from them (the pooled elements make up a predictive distribution). There is no averaging across trees, in order to preserve the uncertainty of the imputed value.
| null |
CC BY-SA 4.0
| null |
2023-04-28T17:00:02.410
|
2023-04-28T19:19:33.930
|
2023-04-28T19:19:33.930
|
16263
|
16263
| null |
614423
|
2
| null |
604519
|
1
| null |
>
...the idea to describe NHST as something like a posterior predictive check, but with a "null hypothesis" pseudo-posterior that is a Dirac $\delta$ density about $H_0$. Does that make sense...?
Yes. I'm not sure whether to call NHST a prior or posterior predictive check, but it is fair to see it as a form of model check. That said, a Bayesian PPC is often used to check "Is my model large enough yet, or do I need to add more nuance?" By contrast, you could say classical NHST is typically used to check "Is my model small enough yet to match the capacity of my sample size / study design, or should I simplify it further because (especially without believing in an informative prior) I just don't have enough data to estimate some parameters with adequate precision?"
Ultimately, the usual scientific reason for running a NHST is to answer the question "Is my sample size large enough to rule out sampling variation as being a major concern?" We deliberately set up a too-simple straw-man model, under which the effect we hope to learn about (say, a difference in means between treatment and control groups) isn't a true effect in the population, but could show up as an apparent effect in finite samples: $\mu_1=\mu_2$, but $\bar{x}_1\neq\bar{x}_2$.
If our sample is small, this "PPC" might lead us to conclude: "Even though we don't believe this straw-man model, the data aren't inconsistent with it. Let's design our next experiment to collect more data, so that we can rule out sampling variation as a reason to disbelieve our results."
But if our sample is large enough, we should see that the too-simple model $\mu_1=\mu_2$ typically leads to datasets that don't look like our actual sample. Then we can say, "OK, sampling variation isn't a major concern here. Now we can focus on all the other concerns: Was there random assignment? Are the measurements valid for the construct we are trying to study? etc."
---
Perhaps it's worth framing this a 2nd way too:
From the Bayesian point of view, a point prior often makes no sense. If your prior puts all its weight on a single $\theta$ value, the posterior will be the same, so there's no point in collecting data at all. In this sense, classical NHST is not the same as using data to update your prior probabilities for $H_0$ and $H_A$ into posterior probabilities, because it starts with a point prior solely on $H_0$. But since Bayesian methods are largely meant for updating priors into posteriors, NHST seems like nonsense to many Bayesians.
However, if you're a Bayesian who is willing to run a PPC, you are willing to admit your prior might be wrong. Maybe your initial prior is your first attempt at pinning down your beliefs on this topic, and you run the PPC to see if your prior beliefs lead to an adequately realistic model that generates adequately realistic data. If they do, you'll keep using your prior. If they don't, it might convince you that your initial prior was inadequate, and you'll revise your prior (again, NOT the same thing as updating from a prior to a posterior).
In that sense, the purpose of NHST is similar to a PPC attempting to find convincing evidence that a prior of "no effect" is unreasonable. You might not actually hold such a prior yourself, but some readers or reviewers might. By reporting a NHST, you hope to tell them: "If we had started with a simple prior of 'no effect,' a PPC would have told us that our prior was inadequate" (if you reject $H_0$), or "...was not inadequate" (if you fail to reject $H_0$).
---
In either case, NHST is not meant as an answer to the Bayesian's usual question "Which values of $\theta$ should I believe in?" NHST is about the study design, not really about $\theta$ itself. The [Greenland & Poole article](https://journals.lww.com/epidem/Fulltext/2013/01000/Living_with_P_Values__Resurrecting_a_Bayesian.9.aspx) mentioned in the comments does a nice job of trying to frame p-values in more Bayesian ways, but I don't know how useful that is, because (outside of PPCs) Bayesian methods are simply tackling a very different question than NHST is.
| null |
CC BY-SA 4.0
| null |
2023-04-28T17:04:46.460
|
2023-04-28T17:54:00.237
|
2023-04-28T17:54:00.237
|
17414
|
17414
| null |
614424
|
2
| null |
613922
|
3
| null |
Let's generalize slightly by assuming we have $n$ kings and $N$ total cards. On the first round, we have:
$$\begin{eqnarray}
P(\text{player 1 wins}) &=& {n \over N} \\
P(\text{player 2 wins}) &=& {N-n \over N}{n \over N-1} &< {n \over N} \text{ if } n > 1\\
P(\text{player 3 wins}) &=& {N-1-n\over N-1}{N-n \over N}{n \over N-2} &< {n \over N} \text{ if } n > 1\\
P(\text{player 4 wins}) &=& {N-2-n\over N-2}{N-1-n\over N-1}{N-n \over N}{n \over N-3} &< {n \over N} \text{ if } n > 1\\
\end{eqnarray}$$
where the inequalities can be verified by substituting $n=1$ and cancelling terms.
Assuming we reach the second round, we have exactly the same situation, only with $N = N - 4$. Since, on every round, the probability of player 1 winning is greater than the probability of any other player winning (if $n > 1$), we can conclude that overall the probability of player 1 winning is greater than the probability of any other player winning, and the game is unfair.
Also note that we can analyze a two-player game between players 3 and 4 conditional upon neither player 1 nor player 2 having won in a round; clearly, this is the same as player 1 vs player 2, just with $N = N - 2$. We can extend this analysis to show that, for $n > 1$, the probability that a player wins is strictly greater the earlier in the round they draw.
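A quick Monte Carlo check of these expressions in R, assuming the setup the formulas describe (players 1-4 draw in a fixed order without replacement and the first to draw a king wins), with $n=4$ kings in $N=52$ cards:
```
set.seed(1)
n_kings <- 4; N <- 52; reps <- 1e5

winner <- replicate(reps, {
  deck <- sample(c(rep(1, n_kings), rep(0, N - n_kings)))  # shuffled deck, 1 = king
  first_king <- which(deck == 1)[1]                        # draw at which the first king appears
  (first_king - 1) %% 4 + 1                                # player (1-4) who made that draw
})

prop.table(table(winner))  # player 1's share should be the largest
```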
| null |
CC BY-SA 4.0
| null |
2023-04-28T17:39:23.927
|
2023-04-28T17:39:23.927
| null | null |
7555
| null |
614426
|
1
| null | null |
0
|
34
|
I have a data set that is separated by Stimulus Type, Sex, and Observed/Expected. I have many stimulus types (more than 100), and want to run a Fisher's Exact test on EACH to see if there is a difference in observed vs expected ratios between the sexes. In other words, do males respond more often to a given stimulus than females (more often than is expected)?
Example:
[](https://i.stack.imgur.com/5a7bC.png)
Is it possible to run a Fisher's test with this structure? Or should I look for a different test? I have seen lots of guidelines for running Fisher's in R, but not with multiple levels. Just looking for advice if there is a way to do this or not!
Thanks for your help!
|
How to run Fisher's Exact test (in R) with multiple levels?
|
CC BY-SA 4.0
| null |
2023-04-28T18:11:47.910
|
2023-05-01T15:36:15.640
|
2023-05-01T15:36:15.640
|
303664
|
303664
|
[
"multivariate-analysis",
"fishers-exact-test"
] |