Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
616958 | 1 | null | null | 0 | 45 | My data consist of atmospheric particle deposition collected once a month for 11 months at two sites. I am testing whether each site's data are normally distributed so I can determine which t-test to use to test for a difference between sites (I am thinking of the Student t-test). One site's data are normally distributed, which is pretty obvious just by looking at a histogram. My issue is the other site, the Hatchery site: I am not sure if its data are normally distributed. I provide a QQ plot and a histogram for the Hatchery site. It's a small sample size (n=11) and I have one data point substantially higher than the others, but it is hard for me to determine from the QQ plot whether the data are normally distributed. Should I transform this, and if so, what kind of transformation?
P.S. I provide one histogram that is log-transformed, which makes the data appear more normal...is a log transformation the way to go? When I used the Shapiro test on the log-transformed data, it gave a p-value of 0.124, i.e. it does not reject normality. When I used the Shapiro test on the untransformed Hatchery data, the p-value was 0.003...
[](https://i.stack.imgur.com/CrOna.png)[](https://i.stack.imgur.com/714EC.png)[](https://i.stack.imgur.com/avQa3.png)
| Does this need to be transformed? If, yes, how? | CC BY-SA 4.0 | null | 2023-05-25T23:26:47.323 | 2023-05-26T16:06:17.917 | 2023-05-26T16:06:17.917 | 388849 | 388849 | [
"data-transformation",
"histogram",
"qq-plot"
]
|
616959 | 1 | null | null | 0 | 16 | I am conducting a study that seeks to assess differences in fat distribution in transgender women, as compared to cisgender men and women. A prior study has previously published separate formulas derived from multifactorial linear regression of 3 predictor variables (age, BMI, and sex assigned at birth) for cisgender men (R^2=0.879) and cisgender women (R^2=0.926), but my study will be focusing on the fat distributions of transgender women, which has never been documented in the literature.
I was wondering (1) how to calculate the sample size needed to assess differences in actual vs. predicted values for trans women using these formulae, and (2) how many subjects I would need to fit my own multifactorial linear regression with the same 3 variables amongst transgender women.
Using G*Power and the $R^2$ values from the article (with $\alpha=0.05$ and $\beta=0.05$) for point #1, I got sample sizes of 7 and 8 for the two formulae, but this seems very low. Am I doing this wrong, and would this mean 15 subjects total?
| Calculating Sample Size from Prediction Equation (multifactorial linear regression) | CC BY-SA 4.0 | null | 2023-05-25T23:29:21.020 | 2023-05-25T23:29:21.020 | null | null | 388850 | [
"sample-size"
]
|
616961 | 2 | null | 616950 | 2 | null | The quantile function for a $\text{Gumbel}(0,1)$ is
$$-\ln(-\ln(p))$$
and the corresponding CDF is $F(y)=e^{-e^{-y}}$. So to sample $Y\mid Y>u$, first sample $X\sim\text{U}(e^{-e^{-u}},1)$ (i.e. uniform on $(F(u),1)$) and then use the transformation
$$Y=-\ln(-\ln(X))$$
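A minimal R sketch of this inverse-CDF approach (the truncation point `u` and the sample size are chosen arbitrarily):
```
set.seed(1)
u <- 2                                        # truncation point, arbitrary
n <- 10000
x <- runif(n, min = exp(-exp(-u)), max = 1)   # X ~ U(F(u), 1)
y <- -log(-log(x))                            # apply the Gumbel quantile function
min(y) > u                                    # TRUE: all draws exceed u
```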
| null | CC BY-SA 4.0 | null | 2023-05-26T00:31:20.993 | 2023-05-26T00:31:20.993 | null | null | 214015 | null |
616962 | 1 | null | null | 2 | 67 | I am doing a statistical test (the program used is SPSS). On the basis of distribution and sample size, I have to choose the correct variable analysis. I also have to justify every decision. I have two independent groups with sample sizes of $n=43$ and $n=12$. My first step is to decide whether the data in the groups are normally distributed.
As per instructions:
- When deciding if data is normally distributed, I should take into consideration the use of a histogram or QQ plot along with a Kolmogorov-Smirnov or Shapiro-Wilk test. I should also consider the mean/median difference, skewness, and kurtosis values.
- If $n>30$, I should consider data being normally distributed if the deviation is not greater than moderate.
Now regarding Group 1: the mean/median difference is essentially zero, and the kurtosis/standard error and skewness/standard error ratios point to normality, but the Kolmogorov-Smirnov and Shapiro-Wilk test values are less than $0.05$. The QQ plot shows values gathered along a line. The data in the histogram seem to have a close to symmetric distribution, but how do the extreme values at 0% and 100% affect my decision about whether the data are normally distributed? Are the data normally distributed?
Regarding Group 2, I am pretty much lost. The mean/median difference, kurtosis/standard error, skewness/standard error and the Shapiro-Wilk test do not seem to point to deviation, but the Kolmogorov-Smirnov test has a value of 0.019. In the histogram, there is also an extreme value at 100% that I don't know how to interpret. How does it affect normality? The small sample size of $12$ ($n<30$) doesn't allow one to assume a normal distribution (even if the deviation is not greater than moderate). Do these data have a normal distribution?
When considering the normality of a distribution, is there a "grading" or hierarchy among histograms, QQ plots, skewness/kurtosis, Kolmogorov-Smirnov and Shapiro-Wilk tests, etc.?
If one group has a normal distribution but the other does not, and considering the small sample in Group 2, should I continue with a non-parametric test? Also, how do I decide if the deviation is moderate?
Group 1 Descriptives:
- Mean=50.4
- Median=50
- Standard Deviation = 31
- Kolmogorov-Smirnov = 0.021
- Shapiro-Wilk = 0.012
- Kurtosis/standard error = -0.162
- Skewness/standard error = -0.024
Group 1 Histogram
[](https://i.stack.imgur.com/4SDrL.png)
Group 1 QQ plot
[](https://i.stack.imgur.com/wRF4y.png)
Group 2 Descriptives:
- Mean = 51.5
- Median = 50
- Standard Deviation = 34.4
- Kolmogorov-Smirnov = 0.019
- Shapiro-Wilk = 0.060
- Kurtosis/standard error = -0.057
- Skewness/standard error = 0.017
Group 2 Histogram
[](https://i.stack.imgur.com/714r8.png)
Group 2 QQ plot
[](https://i.stack.imgur.com/kP0xL.png)
Part 2 (In reference to posted answer)
Regarding the equality of variance assumption: I assumed equality of variance is examined by doing Levene's test for equality of variances. I proceeded to test the data and got the following results. The sig. value was 0.955. That's a pretty good value, right? I suppose the assumption of homogeneity of variance has been met?
[](https://i.stack.imgur.com/YI9k9.png)
Now regarding the sample sizes of my groups not being equal: it was some time ago and I can't find the direct quote, but basically the author said that the larger the difference between the sizes of the sample groups, the larger the Sig. of Levene's test should be in order to use the independent t-test. Is this correct? If so, is a sig. value of 0.955 enough?
You also noted the gaps between bars in the histogram. I was wondering the same thing. I went through all variable values and found that some values (that were very close) in the histogram for group 1 have been lumped together, although not for group 2. I asked a teacher about this but he said that histograms looked ok and I should use them as they are. Now I should note that the initial sample size for the whole variable was 1000 but I had to filter it for different parameters.
If the assumptions are met, I would like to stick with the independent t-test as a first choice because we haven't discussed Welch's test in this course. Even the course literature notes that the "t-test with corrected degrees of freedom" is not discussed. I'm translating directly, but I assume it's referring to Welch's test or something similar. As long as my line of reasoning is logical and I account for weaknesses when justifying my choice, I think I'm good. Feel free to let me know if my interpretations are wrong in any way.
| Distribution and variable analysis | CC BY-SA 4.0 | null | 2023-05-26T00:39:32.783 | 2023-05-27T12:15:04.877 | 2023-05-27T07:15:19.630 | 22047 | 388851 | [
"hypothesis-testing",
"distributions",
"nonparametric",
"normality-assumption",
"qq-plot"
]
|
616963 | 2 | null | 616962 | 2 | null | I asked in the comments, but based off the wording of your question, I assume this is for an independent samples t-test, where you compare the means of two independent groups and test their statistical significance.
First off, despite what some textbooks may say, there is no real golden rule when it comes to which sample size will get you a normal distribution. I would consider $n=30$ to be pretty low in many cases for achieving normality. I previously did an analysis with around $5,000$ observations and it still had a heavily right-skewed inverse Gaussian distribution. Sample size only minimizes the threat of imprecise tests and potentially makes a distribution more normal. Normality is generally approximate, so it is more up to you to perform detailed detective work to decide whether or not you should use one thing or another with inferential statistics.
With that point in mind, I personally prefer visualization over statistical tests of normality. First off, it is well known that Shapiro-Wilk and Kolmogorov-Smirnov can easily flag large samples even when the data are approximately normally distributed. I have recent experience with this where I had an almost straight QQ-plot line for thousands of observations, but the tails were slightly curved at the ends and the test was flagged for no reason other than slight fluctuation from normality. Plots will generally give you a more detailed look at what is actually going on.
To that point, your data doesn't seem that bad. Your QQ plots are pretty linear, so the theoretical quantiles (where your data is supposed to be distributed) and your empirical quantiles (where the data is actually located) line up the way they should. It is clear from the histogram that there are some gaps between your bars (I discuss this a bit more below), but this isn't incredibly damning either. The kernel density (shown by the black line) seems to still be approximately normal.
The only potential problems with your data are the following:
- The equality of variance assumption does not hold for your groups. They clearly have very different distributions which will affect a parametric t-test. In this case, a Student t-test is not helpful because the results will be biased based off the pooled variance.
- The sample sizes for your groups are not equal, so conducting a t-test on these groups should come with cautious interpretation given sample size can greatly affect the outcome of a test. Consider if you compare the heights of $1,000$ people in one group versus the heights of $10$ in another. While there may be a difference, your perspective on that can be fairly limited.
- While it isn't an issue per se, I'm curious why there are gaps between your bars in the histogram. While this isn't a nail in the coffin of your test, it is important to investigate. For example, if your data isn't independent (e.g. you measure the same person multiple times), it can sometimes exhibit this sort of clustered behavior in histograms.
In any case, with the assumption that this is for an independent samples t-test, you should almost always use a Welch t-test by default. The two common assumptions not met for the Student t-test, homogeneity of variance and normality, are generally not a problem for the Welch t-test. Given your data doesn't have any other apparently massive flaws, you can run this just fine. For more details, see the citation below, which involves simulations of Student and Welch t-tests under different distributional properties. It is fairly readable even for somebody without a math background.
#### Citation
Delacre, M., Lakens, D., & Leys, C. (2017). Why psychologists should by default use Welch’s t-test instead of Student’s t-test. International Review of Social Psychology, 30(1), 92. [https://doi.org/10.5334/irsp.82](https://doi.org/10.5334/irsp.82)
#### Edit
Since you have updated the question, I will update my thoughts. The TLDR is that you can probably go ahead and use your Student t-test with caveats, but it's generally safer to use Welch and there are some important points you made that warrant discussion. First among them...
>
You also noted the gaps between bars in the histogram. I was wondering the same thing. I went through all variable values and found that some values (that were very close) in the histogram for group 1 have been lumped together, although not for group 2. I asked a teacher about this but he said that histograms looked ok and I should use them as they are. Now I should note that the initial sample size for the whole variable was 1000 but I had to filter it for different parameters.
A couple of things. First, there is a lot of information here that was omitted from your previous question. Why was the sample filtered in this way? That seems to be a giant change in the sample size. Since this is for homework, you don't really have to justify what you did, but I'm curious why there is such a steep drop in participants. Surely this will change the distributional properties of your data, especially at very small sample sizes.
Second, you can of course use your data as-is regardless of the histogram bins. My question was more to get you to understand your data better. Why is it doing this? For example, it is common with discrete data with limited values (like Likert scale data) to stack up into each bin, which contributes to non-normality in Likert scale samples. It is important to know this about your own data. If somebody asks you these things in the future (like journal reviewers, etc.), you need to have a good explanation as to why.
Third, about your comment for Levene's test:
>
It was some time ago and I can't find direct quote but basically, the author said that the larger the difference between sizes of sample groups, the larger the Sig. of Levene's test should be in order to use Independent T-test. Is this correct?
I don't know if I buy that argument about Levene's test because 1) that rule doesn't make sense, since the value of $p$ isn't a measure of magnitude, and 2) it suffers from some of the same issues as the other significance tests I mentioned. As an extreme example, we can take one sample of $n = 10,000$ and another of $n = 10$ with the same mean and standard deviation and still end up with a significant Levene test, as shown below with simulated data in R.
```
#### Load Libraries ####
library(tidyverse)
library(rstatix)
#### Set Random Seed for Reproducibility ####
set.seed(123)
#### Simulate Groups (Same Mean/SD) ####
group.1 <- rnorm(n = 10000, mean = 50, sd = 1)
group.2 <- rnorm(n = 10, mean = 50, sd = 1)
#### Turn into Long Format ####
# Note: data.frame() recycles the shorter vector, so group.2 is repeated
# 1,000 times to reach length 10,000 before reshaping to long format
df <- data.frame(group.1, group.2) %>%
  as_tibble() %>%
  gather(key = "group") %>%
  mutate(group = factor(group))
#### Conduct Levene Test ####
df %>%
levene_test(value ~ group)
```
Which shows a significant $p$ despite being normally distributed with identical mean/sd:
```
# A tibble: 1 × 4
df1 df2 statistic p
<int> <int> <dbl> <dbl>
1 1 19998 108. 3.55e-25
```
However, looking at the group standard deviations again and your Levene's test, you are probably safer with the equality of variance assumption than I speculated, but the sample size difference still raises theoretical concerns. I refer back to my heights example, which should make clear why you should interpret with caution. Regardless of whether the t-statistic is "right", what we can deduce from such a test is limited because the comparison itself doesn't make much sense. Your case isn't so dramatic, but it still warrants discussion.
As a final note, you can probably get away with using your Student t-test, but just keep in mind that you are still going to have to explain its limitations given your data. The biggest issue is that your Group 1 is four times as large as Group 2. I don't know if you feel confident about differences between those two groups, but I wouldn't be. I would at minimum note it as a limitation in your homework.
| null | CC BY-SA 4.0 | null | 2023-05-26T01:22:03.517 | 2023-05-27T07:19:28.363 | 2023-05-27T07:19:28.363 | 22047 | 345611 | null |
616964 | 2 | null | 616957 | 1 | null | I think this is a lot simpler than you think it is :)
We have a random vector $\mathbf X=(X_1,X_2, \cdots, X_n)$, with expectation $E[\mathbf X] = (E[X_1], E[X_2], \cdots, E[X_n])$.
The expectation of a vector is just a vector of each element's expectation.
Then, the reverse concatenation $(\mathbf X, \mathbf X_\text{rev})=(X_1,\cdots, X_n, X_n, \cdots, X_1)$ has expectation:
$$
\begin{aligned}
E[(\mathbf X, \mathbf X_\text{rev})] &= (E[X_1], \cdots, E[X_n], E[X_n],\cdots, E[X_1])
\end{aligned}
$$
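A quick numerical check in R (the component means are chosen arbitrarily):
```
set.seed(1)
n_draws <- 1e5
X <- matrix(rnorm(3 * n_draws, mean = c(1, 2, 3)), ncol = 3, byrow = TRUE)  # rows are draws of (X1, X2, X3)
X_rev <- X[, 3:1]                      # reverse the order of the components
round(colMeans(cbind(X, X_rev)), 2)    # approximately (1, 2, 3, 3, 2, 1)
```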
| null | CC BY-SA 4.0 | null | 2023-05-26T01:34:53.883 | 2023-05-26T01:34:53.883 | null | null | 369002 | null |
616965 | 1 | null | null | 0 | 21 | For the code below:
```
Error in xy.coords(x, y, xlabel, ylabel, log): 'x' and 'y' lengths differ
Traceback:
5. stop("'x' and 'y' lengths differ")
```
I tried to rectify it by adding `plot(x, y[1:length(x)])` before the last line of code, but it didn't work. Would anyone know of another fix?
| Code issue with plot-x and y lengths differ error | CC BY-SA 4.0 | null | 2023-05-26T02:11:46.000 | 2023-05-26T04:22:31.797 | 2023-05-26T04:22:31.797 | 388855 | 388855 | [
"r",
"python"
]
|
616966 | 1 | null | null | 0 | 23 | I'm looking at the difference between bird communities across 5 different vegetation condition levels. Bird species richness data has been recorded across 40 sites, with uneven sample sizes for each of the 5 groups.
From this species richness data, I have calculated a metric which gives a score of the health of the bird community of each site on a scale of 0-1.
I'd like to run a generalized linear mixed model of the bird community metric by vegetation condition, with site as a random factor, but I'm unsure which family to use for the data. Do I treat it as binomial? When I run `glmer` from `lme4` in R:
```
comm_model <- glmer(cond_metric ~ State + (1 | Site_Number), data = data_summary, family = binomial())
```
I get the following:
```
Warning messages:
1: In vcov.merMod(object, use.hessian = use.hessian) :
variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
2: In vcov.merMod(object, correlation = correlation, sigm = sig) :
variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
```
Any advice on how to deal with this sort of data?
| GLMM with non-integer "count" data | CC BY-SA 4.0 | null | 2023-05-26T02:16:16.547 | 2023-05-26T02:21:41.957 | 2023-05-26T02:21:41.957 | 345611 | 388854 | [
"regression",
"distributions",
"mixed-model",
"binomial-distribution",
"glmm"
]
|
616967 | 1 | null | null | 1 | 31 | I have 10 features for 3 classes and all of them have a distribution similar to this (green, blue and orange is for class 0,1,2):
[](https://i.stack.imgur.com/FVsAd.png)
[](https://i.stack.imgur.com/cyBga.png)
[](https://i.stack.imgur.com/LKfFC.png)
I tried Gaussian Naive Bayes and LogisticRegression; however, it looks like these algorithms are not good enough for this data. Do you have any suggested algorithms?
| Which machine learning algorithm would be appropriate for this distribution of features and classes? | CC BY-SA 4.0 | null | 2023-05-26T03:16:41.490 | 2023-05-27T16:29:59.510 | 2023-05-26T06:14:54.473 | 388853 | 388853 | [
"machine-learning",
"classification",
"algorithms"
]
|
616968 | 1 | 616970 | null | 0 | 19 | A GMM (Gaussian Mixture Model) is a mixture of Gaussians, each having proportion $\pi_k$ with $$\sum_{k=1}^{K}\pi_k=1,$$ and this is easy to understand. But when the latent variable is introduced I don't understand it: since only one $z_k$ can equal $1$, how is this related to the soft assignment in the GMM model?
| How to understand the binary latent variable z in GMM model? | CC BY-SA 4.0 | null | 2023-05-26T04:38:22.043 | 2023-05-26T10:56:26.027 | 2023-05-26T10:56:26.027 | 388783 | 388783 | [
"expectation-maximization",
"latent-variable",
"generalized-moments"
]
|
616970 | 2 | null | 616968 | 0 | null | It may help if you can define the $z$ variable a bit more thoroughly in your post, since people can use this variable for different things.
However I am going to assume that you are referring to $z$ as a latent variable that randomly selects from a one-hot encoded vector which Gaussian to choose, for example when randomly sampling from the GMM.
For example, when randomly sampling from a GMM you need to pick "one out of $K$" Gaussians to pull the sample from. How do you automate that process? You define an extra latent variable $z$ to act as a middle man of sorts, which communicates which Gaussian to pick via a one-hot encoded vector of choices.
So for some data point $X_i$, we can ask what its distribution is assuming it belongs to class 2; that is, the latent variable lets us formalize questions such as $P(X_i \mid Z = 2)$, to which we can assign its Gaussian, $P(X_i \mid Z = 2) = \mathcal{N}(\mu_2,\sigma^2_2)$. We can then place a multinomial prior on the clusters, so that $P(Z=2)$ is the prior probability of belonging to class 2, and so on.
So it's a latent middle man that helps with the random choice of which Gaussian to pick in the mixture when sampling, when deriving the EM algorithm, etc.
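As a minimal illustration in R (a sketch with made-up mixture parameters), sampling from a GMM via the latent indicator looks like this:
```
set.seed(42)
pi_k    <- c(0.5, 0.3, 0.2)        # mixing proportions (made-up)
mu_k    <- c(-2, 0, 3)             # component means (made-up)
sigma_k <- c(0.5, 1.0, 0.8)        # component sds (made-up)

n <- 1000
z <- sample(1:3, size = n, replace = TRUE, prob = pi_k)  # latent "which Gaussian" choice
x <- rnorm(n, mean = mu_k[z], sd = sigma_k[z])           # draw from the chosen component

table(z) / n   # empirical proportions, close to pi_k
```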
See also this link:
[https://stats.stackexchange.com/a/380776/117574](https://stats.stackexchange.com/a/380776/117574)
| null | CC BY-SA 4.0 | null | 2023-05-26T05:10:56.373 | 2023-05-26T05:10:56.373 | null | null | 117574 | null |
616971 | 1 | null | null | 0 | 9 | This is my first time using an LSTM and optuna. I wanted to use optuna to find the best values of batch size, number of epochs and optimizer. But when I do, I get an RMSE of 8.34, whereas a naive prediction gives 0.69 and a model with a fixed batch size of 8, 4000 epochs and the SGD optimizer gives an RMSE of 0.28. I could just use that model, but I wanted to use optuna to get optimal (or near-optimal) values of the parameters rather than arbitrary values.
The script I use is:
```
def calculate_rmse(model, loader):
    predictions = []
    targets = []
    for X, y in loader:
        y_pred = model(X)
        predictions.append(y_pred)
        targets.append(y)
    predictions = torch.cat(predictions, dim=0)
    targets = torch.cat(targets, dim=0)
    rmse = torch.sqrt(loss_fn(predictions, targets))
    return rmse.item()

def objective(trial):
    optimizer_name = trial.suggest_categorical("optimizer", ["adam", "SGD",
                                                             "RMSprop", "Adadelta"])
    epochs = trial.suggest_int("epochs", 100, 4000, step=100, log=False)
    batchsize = trial.suggest_int("batchsize", 8, 40, step=16, log=False)
    model = LSTMmodel().double()
    train_loader, valid_loader = get_loader(train, batchsize)
    # Note: `optimizer` is not created from `optimizer_name` in this snippet;
    # it is assumed to exist outside the objective function.
    for epoch in range(epochs):
        model.train()
        for X_batch, y_batch in train_loader:
            y_pred = model(X_batch)
            loss = loss_fn(y_pred, y_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Validation
        if epoch % 100 != 0:
            continue
        model.eval()
        with torch.no_grad():
            valid_rmse = calculate_rmse(model, valid_loader)
        weights = list(model.named_parameters())
        # Handle pruning based on the intermediate value.
        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()
    trial.set_user_attr(key="best_model_weights", value=weights)
    return valid_rmse

def callback(study, trial):
    if study.best_trial.number == trial.number:
        study.set_user_attr(key="best_model_weights",
                            value=trial.user_attrs["best_model_weights"])

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100, timeout=None, callbacks=[callback])
pruned_trials = study.get_trials(deepcopy=False, states=[TrialState.PRUNED])
complete_trials = study.get_trials(deepcopy=False, states=[TrialState.COMPLETE])
print("Best trial:")
trial = study.best_trial
print(" Value: ", trial.value)
print(" Params: ")
for key, value in trial.params.items():
    print(" {}: {}".format(key, value))
```
The output is: `Best trial: Value: 8.34076415009934 Params: optimizer: SGD epochs: 200 batchsize: 24`. What am I doing wrong, and what can I do to improve the RMSE? Thanks
| LSTM with optuna, getting a high rmse | CC BY-SA 4.0 | null | 2023-05-26T06:26:19.873 | 2023-05-26T06:26:19.873 | null | null | 387960 | [
"python",
"lstm"
]
|
616972 | 2 | null | 616732 | 2 | null | The weight parameters $w_1, w_2$ have no effect on the minimisation; their effect is already captured by the single parameter $\Omega$:
$\qquad \hat{\theta} = \min_\theta -\mathcal L(\theta)$
$\qquad \hat{\theta} = \min_\theta \sum_{x \in X} -\log(w_1 \, P(x \, | \,\theta) + w_2 \, \Omega)$
using $w_2 = 1-w_1$
$\qquad \hat{\theta} = \min_\theta \sum_{x \in X} -\log(w_1 \, P(x \, | \,\theta) + (1-w_1) \, \Omega) $
rearranging terms
$\qquad \hat{\theta} = \min_\theta \sum_{x \in X} -\log( \, P(x \, | \,\theta) + \frac{1-w_1}{w_1} \, \Omega) -\log(w_1)$
We can drop the term $\log(w_1)$ which only shifts the likelihood with a constant and has no effect on the minimisation
$\qquad \hat{\theta} = \min_\theta \sum_{x \in X} -\log( \, P(x \, | \,\theta) + \frac{1-w_1}{w_1} \, \Omega)$
$\frac{1-w_1}{w_1}\Omega$ can be combined into a single constant
$\qquad \hat{\theta} = \min_\theta \sum_{x \in X} -\log( \, P(x \, | \,\theta) + \Omega^\prime)$
---
>
In addition, are there other ways to make the fitting robust to outliers in a setup like this?
With the current method you assume that an outlier is distributed homogeneously. If you know more properties about outliers, and if you know more about the true distribution (e.g. potential correlation between measurements), then you can incorporate this into the likelihood function.
In addition, you may have some cost function that you want to minimize; you can use this in a cross-validation step to optimize the model hyper-parameters.
| null | CC BY-SA 4.0 | null | 2023-05-26T07:39:48.333 | 2023-05-26T07:52:21.660 | 2023-05-26T07:52:21.660 | 164061 | 164061 | null |
616973 | 1 | null | null | 1 | 14 | I am reading a paper that contains a Gibbs sampler for a regression model with parameters $\beta$ and design matrix $X$. One of the steps in the Gibbs sampler requires simulating from a binary random variable $Z$, and the authors only say
>
Generate $Z$ based on
$$
\frac{P(Z=1 \mid \beta, X)}{P(Z=0 \mid \beta, X)} = \psi(\beta,X),
$$
where $\beta$ and $X$ are known in this step of the Gibbs sampler, and $\psi$ is a known function.
Do they mean that somehow I need to transform the odds $\frac{p}{1-p} = \frac{P(Z=1 \mid \beta, X)}{P(Z=0 \mid \beta, X)}$ to be able to simulate from $Z$?
That is, do I need to generate $Z$ with probability $p = \frac{\psi(\beta,X)}{1+\psi(\beta,X)}$?
| Gibbs sampler, how to generate samples based on odds? | CC BY-SA 4.0 | null | 2023-05-26T08:05:13.303 | 2023-05-26T08:07:10.573 | 2023-05-26T08:07:10.573 | 388868 | 388868 | [
"sampling",
"markov-chain-montecarlo",
"gibbs"
]
|
616974 | 1 | null | null | 1 | 9 | I have 2 time series - one is for scores of people in a survey and the other for the sales of a product for 10 distinct countries. The survey data is categorized into 3 categories for each country. So, we have 10 * 3, i.e. 30 pairs. The data for the former is from April 2018 till Mar 2023 whereas for the latter, we have data from 2013. The aim is to identify the scores of which survey metric can be used to forecast the sales of the product for each country.
Stationarity of both time series is an underlying condition for the Granger test. I'm using the Augmented Dickey-Fuller test to check the stationarity of each country-metric pair and of the sales data for each country. My concern is whether I should use the common time period, i.e. 2018 to 2023 (the survey data's range), for checking the stationarity of both series. In that case, I'll have to truncate the sales data, since it goes back to 2013.
I have tried applying the Dickey-Fuller test to the sales series both ways (limiting and not limiting the data), and the two give drastically different results.
| For checking stationarity for applying Granger Causality Test, should the range of both the time series be same? | CC BY-SA 4.0 | null | 2023-05-26T08:07:15.560 | 2023-05-26T08:07:15.560 | null | null | 332360 | [
"time-series",
"stationarity",
"augmented-dickey-fuller",
"granger-causality"
]
|
616975 | 1 | null | null | 0 | 9 |
#### An illustration
David Blei in his [lecture notes](http://www.cs.columbia.edu/%7Eblei/fogm/2015F/notes/mixtures-and-gibbs.pdf) considers a collapsed Gibbs sampler for a Gaussian mixture model. In this case $B = (\mu_1, \dotsc, \mu_K)$ is a latent vector of means and $A = (Z_1, \dotsc, Z_P)$ is the vector of latent indicators, assigning the $p$th point to cluster $Z_P\in \{1, \dotsc, K\}$.
Collapsing $B$ can result in faster convergence on $A$, so that many samples can be drawn. However, once the chain has converged, no information is known about $B$.
I imagine that in this case one could try to estimate $B$ by taking the MAP estimate for each $Z_p$ (that is, doing a hard assignment of points into clusters) and calculating the empirical mean within each cluster.
This approach however ignores:
- the uncertainty on $B$ even when all the assignments in $A$ were exact and
- the uncertainty in $A$.
Is there a more principled way of estimating the means in this case, together with some measure of uncertainty?
#### The general question
Let's now formalize the problem.
Consider a model with two latent variables $A$ and $B$, and an observed data variable $X$.
For this model we assume that:
- Sampling from $P(A, B\mid X)$ is hard (e.g., sampling from $P(B\mid A,\, X)$ could be hard).
- Sampling from $P(A\mid B,\, X)$ is easy.
- Sampling from $P(A\mid X)$ is easy.
Hence:
- Metropolis–Hastings to sample directly from $P(A, B\mid X)$ will not really work.
- A Gibbs sampler to alternate between $P(B\mid A,\, X)$ and $P(A\mid B,\, X)$ will not work either.
- It is easy to collect a lot of samples $a_1, \dotsc, a_N$ from the collapsed $P(A\mid X)$ and the chain quickly converges.
The question:
>
How can samples $a_1, \dotsc, a_N$ be used to estimate $B$?
| Estimating integrated-out variables | CC BY-SA 4.0 | null | 2023-05-26T08:20:15.650 | 2023-05-26T08:20:15.650 | null | null | 255508 | [
"estimation",
"markov-chain-montecarlo",
"gibbs"
]
|
616976 | 2 | null | 228540 | 3 | null | We have just published an article on this subject in The American Statistician [here](https://www.tandfonline.com/doi/full/10.1080/00031305.2023.2216252)
Similar to @markowitz, we define out-of-sample $R^2$ as a comparison of two out-of-sample models: the null model using only the mean outcome of the training data $\bar{y}_{train}$, and the more elaborate model using covariate information.
For the squared error loss of the null model (which we call the MST), we derive an analytical expression showing that
$$
MST = \operatorname{Var}(\bar{Y}_{train}) + \operatorname{Var}(Y) = \frac{n+1}{n}\operatorname{Var}(Y),
$$
meaning that the prediction error is a sum of the estimation error on $\bar{y}_{train}$ and the irreducible error. This is a useful expression in the absence of a test set. But if you have an independent test set, I would indeed prefer the expression $n^{-1}\sum_{i \in \text{test}}(y_i-\bar{y}_{train})^2$ as suggested above. In principle, both estimators have the same estimand, but the latter is more robust to differences between training and test sets. Finally, we show through simulation that the expression $n^{-1}\sum_{i \in \text{test}}(y_i-\bar{y}_{test})^2$ can be badly biased for estimating the true $R^2$.
The squared error loss of the elaborate model (the MSE) is then to be estimated through cross-validation or on your test set. Corresponding out-of-sample $R^2$ is then simply
$$\hat{R}^2 = 1-\frac{\widehat{MSE}}{\widehat{MST}}$$
We provide a standard error for this estimate, unlocking hypothesis testing and confidence intervals.
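A minimal R sketch of this out-of-sample $R^2$ on a simulated train/test split (the data-generating model and the split are purely illustrative):
```
set.seed(1)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
train <- sample(n, 100)                      # split into training and test indices
fit <- lm(y ~ x, subset = train)

y_test <- y[-train]
pred <- predict(fit, newdata = data.frame(x = x[-train]))

MSE <- mean((y_test - pred)^2)               # error of the covariate model
MST <- mean((y_test - mean(y[train]))^2)     # error of the training-mean benchmark
R2_oos <- 1 - MSE / MST
R2_oos
```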
| null | CC BY-SA 4.0 | null | 2023-05-26T08:41:07.083 | 2023-05-26T08:45:51.413 | 2023-05-26T08:45:51.413 | 60613 | 98942 | null |
616977 | 1 | null | null | 1 | 8 | I am stuck on understanding the best way to approach this problem from a purely statistical/mathematical way.
Say that I have this data in the table below. It shows the comparison of prices for one hotel between two years with a breakdown of the board type and room type.
|Hotel |Board |Room |2023 Price |2024 Price |
|-----|-----|----|----------|----------|
|1 |HB |A |1000 |1100 |
|1 |HB |B |1600 |1850 |
|1 |FB |A |1500 |1750 |
|1 |FB |B |1900 | |
|1 |FB |C |2250 |2250 |
|1 |ALL |A | |2000 |
|1 |ALL |B |2600 |2750 |
Within this data, as a human I can easily tell that most of the prices go up year on year, and the averages below confirm this:
2023 average - 1808
2024 average - 1950
However, when eyeballing the data I can also see there are some mix differences that will affect the overall mean. For example:
Board type of "ALL" averages:
2023 average - 2600
2024 average - 2375
Here we can see that, because of a lack of 2023 data for room type "A" under board type "ALL", there looks to be a step back in price. This may actually be artificially inflating the 2023 overall mean for this hotel and thus showing a smaller increase in price year on year.
What I want to try and understand - is there a technique out there that will be able to show and explain the drivers affecting the yearly overall means across all groups?
| Explainability of groups mix affect on overall mean | CC BY-SA 4.0 | null | 2023-05-26T09:12:48.007 | 2023-05-26T09:12:48.007 | null | null | 388872 | [
"hypothesis-testing",
"mathematical-statistics",
"descriptive-statistics"
]
|
616978 | 1 | null | null | 0 | 11 | I am studying about Tukey's depth and the characterization of distributions by Tukey's depth and I don't know the proofs of following results:
(a) The sets $Q_\alpha=\{x \in \mathbb{R}^d : D(x) \geq \alpha\}$ are closed convex sets, where $D(\cdot)$ is the Tukey depth.
(b) If $F\neq G$ are absolutely continuous probability distributions on $\mathbb{R}^d$, then the set of points $\{(D_F(x), D_G(x)) : x\in\mathbb{R}^d\}$ in $\mathbb{R}^2$ has non-zero Lebesgue measure.
(c) Let $$C_p=\bigcap_t\{R(t):\mathbb{P}(R(t))\geq p\},$$ where $R(t)=\{x\in \mathbb{R}^d:D(x)>t\}$. Then if $F$ is absolutely continuous and the corresponding density function is non-zero everywhere, then $C_p=R(t_p)$, where $t_p$ is such that $\mathbb{P}(\{x\in\mathbb{R}^d : D(x) \geq t_p\})=p$.
Can someone help me with these proofs?
| Results related to Tukey's depth | CC BY-SA 4.0 | null | 2023-05-26T09:19:33.917 | 2023-05-26T09:19:33.917 | null | null | 376295 | [
"distributions",
"tukey-depth"
]
|
616979 | 1 | 617006 | null | 5 | 246 | I've begun working with the Gumbel distribution and started with the example in the package documentation at [https://cran.r-project.org/web/packages/fitdistrplus/fitdistrplus.pdf](https://cran.r-project.org/web/packages/fitdistrplus/fitdistrplus.pdf). For the Gumbel distribution `fitdist()` requires start parameters (template is `fitdist(data, distr, method = c(...), start=NULL,...)`), where it's described as "A named list giving the initial values of parameters of the named distribution or a function of data computing initial values and returning a named list. This argument may be omitted (default) for some distributions for which reasonable starting values are computed..." Gumbel is not on the list of distributions where start parameters may be omitted. In the below code I use the start parameters from the reference manual example and it works, but that's random. If I use start parameters of `start = list(1,1)` for example the function doesn't work.
Is there a way to generate these start parameters automatically? When I find start parameters that work the output is indifferent for example whether I use (5,5) or (500,500), so a hack solution would be to randomly generate parameter values until the function works. But I'm hoping for a cleaner, non-hack solution.
By the way I realize Gumbel is not a good fit for `lung` dataset! I'm just using `lung` for sake of example.
Code:
```
library(evd)
library(fitdistrplus)
library(survival)
# Gumbel distribution
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 1]
fitGum <- fitdist(deathTime, "gumbel",start=list(a=10,b=10))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel
```
| Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gumbel distribution? | CC BY-SA 4.0 | null | 2023-05-26T09:51:35.120 | 2023-05-26T16:10:02.023 | null | null | 378347 | [
"r",
"survival",
"gumbel-distribution"
]
|
616980 | 1 | null | null | -1 | 11 | I have two random matrices one on the top of the other:
$ \begin{bmatrix}\boldsymbol{B_1} \\ \boldsymbol{B_2} \end{bmatrix}$.
and they are both of dimension $k \times N$. I have that:
$ vec\begin{bmatrix}\boldsymbol{B_1} \\ \boldsymbol{B_2} \end{bmatrix} \sim \mathcal{N} \left(vec(\boldsymbol{M}), \boldsymbol{\Sigma} \otimes \boldsymbol{S} \right)$.
where $\boldsymbol{\Sigma}$ is an $N \times N$ matrix and $\boldsymbol{S}$ is $2k \times 2k$. I want to find the distribution of $vec(\boldsymbol{B_2}) \mid vec(\boldsymbol{B_1})$; how can I proceed? It should be normal, but how do I get the mean and the variance?
| Find conditional distribution from multivariate normal with vec operator | CC BY-SA 4.0 | null | 2023-05-26T10:08:25.633 | 2023-05-26T10:08:25.633 | null | null | 269632 | [
"conditional-probability",
"joint-distribution",
"multivariate-normal-distribution",
"random-vector"
]
|
616981 | 2 | null | 616968 | 0 | null | With GMM do you mean "growth mixture model?" If so, then it looks like the proportion π_k might indicate the size of a particular latent class (class size parameter). Since the K latent classes in GMM are exhaustive and mutually exclusive, the π_k parameters for all classes add up to 1.0. This is what the equation states. (For each specific latent class, you should get a specific π_k parameter estimate that is < 1.)
| null | CC BY-SA 4.0 | null | 2023-05-26T10:15:52.697 | 2023-05-26T10:15:52.697 | null | null | 388334 | null |
616982 | 1 | 616983 | null | 1 | 29 | Binary cross entropy is written as follows:
\begin{equation}
\mathcal{L} = -y\log\left(\hat{y}\right)-(1-y)\log\left(1-\hat{y}\right)
\end{equation}
In every reference that I read, when using binary cross entropy, the labels are 0 and 1 and the output-layer activation is a sigmoid. I wonder if it is possible to use cross entropy with labels -1 and 1, with the output layer using a tanh activation?
| can we use binary cross entropy with labels -1 and 1? | CC BY-SA 4.0 | null | 2023-05-26T10:23:00.217 | 2023-05-26T10:38:16.807 | 2023-05-26T10:31:59.707 | 155836 | 388875 | [
"cross-entropy",
"activation-function"
]
|
616983 | 2 | null | 616982 | 0 | null | No, you can’t. What would $\log\left(\hat{y}\right)$ be when $\hat{y}$ is (close to) -1?
There are simple workarounds. You can rescale your outputs to $[0, 1]$, or you can use Brier score instead of cross entropy, but why would you?
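For instance, with labels $y \in \{-1, 1\}$ and network output $\hat{y} = \tanh(z)$, one such rescaling is
$$
\tilde{y} = \frac{\hat{y}+1}{2} = \frac{\tanh(z)+1}{2} = \sigma(2z), \qquad \tilde{t} = \frac{y+1}{2} \in \{0,1\},
$$
so the rescaled output is just a sigmoid applied to $2z$, and the cross entropy defined in the question can then be applied to $(\tilde{t}, \tilde{y})$.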
Using $\tanh$ activation functions in hidden layers is a natural thing to do, but in the output layer one advantage of sigmoid is that it has a natural probabilistic interpretation. As a consequence, the outputs are compatible with the cross entropy function you defined above.
| null | CC BY-SA 4.0 | null | 2023-05-26T10:30:31.493 | 2023-05-26T10:30:31.493 | null | null | 155836 | null |
616984 | 2 | null | 616883 | 1 | null | If you know $s(t)$ then you can express the unknown term $\gamma(t) = \lambda(t) + 4a$ as
$$\gamma(t) = 2s(t) + \frac{\text{d}s(t)}{\text{d}t} \approx 2s(t) + \frac{\Delta s(t)}{\Delta t} $$
The approximation works if there is little noise and if the sampling frequency is high (not too much change of the slope within a time period $\Delta t$).
With your example the true model is:
$$s(t) = 2a + (y_0 + 0.125 - 2a) \exp(-2t) + 0.125 \sin(2t) - 0.125 \cos(2t)$$
And an example with $a = 3$ and $y_0 = 3$ looks like
[](https://i.stack.imgur.com/xYmYz.png)
Then if we plot $2s(t) + \frac{\Delta s(t)}{\Delta t}$ we get something that resembles your original sine wave
[](https://i.stack.imgur.com/k8leT.png)
If you would know the term $a$ then you can subtract it to get $\lambda(t) = \gamma(t) - 4a$. Otherwise you can not know $\gamma(t)$ because you can always subtract or add a constant value to it and you do not know which is the case without any additional information.
---
How you want to estimate/model that final result is a matter that depends on your use case. Note the saying ["all models are wrong, but some are useful"](https://en.wikipedia.org/wiki/All_models_are_wrong). You can not sensibly just start modelling with an arbitrary tool without knowing what 'useful' is.
You need context. A question like 'how do I model this' requires an explanation of the goal, i.e. why one needs a model in the first place. Why do you need a model if you have the real data without any noise? What is the point of modelling it? With answers to such questions you can follow up by describing what properties the model should have.
---
Below is some R code to generate the above two figures. You can also make it work with noise, but then the parameters for the filter that is used to compute the derivative need to be changed.
Alternatively, if you know the functional form of $\lambda(t)$ or a family that it belongs to, then you can potentially solve the differential equation and describe $\lambda(t)$ in terms of $s(t)$ without the need for differentiation.
Also, instead of differentiation, one may use integration (which is less sensitive to noise), but the plotted result $\int_0^t \gamma(x) dx$ is not always easy to interpret. An example of using integration is fitting a function that is a sum of exponential terms, $\sum a_i \exp(-b_i t)$, to find starting values for other algorithms.
```
### generate data
dt = 5/1000
t = seq(0, 5, dt)
a = 3
y0 = 3
s = 2*a + (y0 + 0.125 - 2*a) * exp(-2*t) + 0.125 * sin(2*t) - 0.125 * cos(2*t)
### plot data
plot(t,s, type = "l",
main = expression(s(t) == 2*a + (y[0] + 0.125 - 2*a) * exp(-2*t) + 0.125 * sin(2*t) - 0.125 * cos(2*t)),
cex.main = 0.8)
### compute lambda
### savgol is a Savitzky Golay filter that we use to compute a derivative
gamma = 2*s + pracma::savgol(s, fl = 3, forder = 1, dorder = 1)/dt
### plotting
range = 2:(length(t)-1) ### trim first and last case where derivative is not computed
plot(t[range], gamma[range], type = "l", ylim = c(11,13),
main = expression(gamma(t) == 2*s(t) + over(Delta * s(t), Delta * t)),
cex.main = 0.8, ylab = "gamma", xlab = "t")
```
| null | CC BY-SA 4.0 | null | 2023-05-26T10:57:09.930 | 2023-05-26T11:03:12.227 | 2023-05-26T11:03:12.227 | 164061 | 164061 | null |
616985 | 1 | 616990 | null | 1 | 22 | [](https://i.stack.imgur.com/4lBz7.png)
Bishop's Pattern Recognition and Machine Learning book (page 440) discusses the M-step of the EM algorithm for the Gaussian Mixture Model. I am confused about the likelihood function in the M-step. The likelihood is usually $p(X|\theta)$, right? But when the latent variable is involved in the maximum likelihood function, it becomes a bit hard to understand.
In the last sentence of the first paragraph, it says: "we shall suppose that maximization of this complete-data log likelihood function is straightforward", and then in the second paragraph it writes: "Because we cannot use the complete-data log likelihood, we consider...". Why can we not use the complete-data log likelihood if it is straightforward to maximize? Why does taking the expected value under the posterior distribution of the latent variable make sense? What is the issue if we just maximize $p(X,Z|\theta)=p(Z)\,p(X|Z,\theta)=\prod_{n=1}^{N}\prod_{k=1}^K \pi_k^{z_{nk}}\,\mathcal{N}(x_n|\mu_k,\Sigma_k)^{z_{nk}}$?
| Why complete data log likelihood in M-step of EM algorithm | CC BY-SA 4.0 | null | 2023-05-26T11:08:51.397 | 2023-05-26T13:04:24.160 | 2023-05-26T13:04:24.160 | 388783 | 388783 | [
"maximum-likelihood",
"expectation-maximization",
"function"
]
|
616986 | 2 | null | 616787 | 2 | null | This is treated under [Configuration model](https://en.wikipedia.org/wiki/Configuration_model#Edge_probability). Note that, there can be self-edges (i.e. it's not a loop-free graph) and there can also be multiple edges between two nodes. And, the formula above is a result of a series of assumptions and is not exactly correct.
Let's find the expected value of the number of edges between vertices $i$ and $j$ and set up the mathematical model. We can think of vertices with half-edges (or stubs, as the Wikipedia article calls them) attached to them. Each vertex $i$ has $k_i$ half-edges attached. If a half-edge from vertex $i$ is connected to a half-edge from vertex $j$, we have an edge between $i$ and $j$. If a half-edge from vertex $i$ is connected to another half-edge from the same vertex, we have a self-loop at vertex $i$.
Let $X_{i}^l$ be the binary random variable associated with the $l$-th half-edge of vertex $i$: it is $1$ if that half-edge is connected to vertex $j$ and $0$ otherwise. So, the expected value of $X_{i}^l$ is the probability that half-edge $l$ of vertex $i$ is connected to vertex $j$. There are $2m-1$ other half-edges in total, of which $k_j$ are of interest. So, under random matching, this probability is simply
$$\mathbb E[X_i^l]=\frac{k_j}{2m-1}$$
The number of edges between the vertices $i$ and $j$ is just the sum of all $X_i^l$ for vertex $i$. So, the expected value of this is
$$\mathbb E[\text{# edges between i and j}] = \mathbb E\left[\sum_{l=1}^{k_i} X_i^l\right]=\sum_{l=1}^{k_i} E[X_i^l]=\sum_{l=1}^{k_i} \frac{k_j}{2m-1}=\frac{k_ik_j}{2m-1}$$
This is not exactly the probability of vertices $i$ and $j$ being connected. It is the expected number of edges between the two, and it is close to the probability we want if the graph (and the number of edges) is large. In that case it can also be approximated with $2m-1\approx 2m$ in the denominator. The reason is that, if the graph is large and the degrees of the vertices are small compared to the overall number of edges, the overcounting from adding the probabilities of the individual events (when this expected value is taken to be the probability, as in the original question) becomes small: for every half-edge there are so many other possibilities to connect to that the chance of two given half-edges of $i$ selecting the same half-edge of $j$ is small.
### A Counter Example
Moreover, returning to the original question, one should not always expect to have $k_ik_j\leq 2m$ because the denominator is just a sum of the degrees and the numerator is the multiplication of some of them. Simply, in a graph with only two vertices, we can easily have $k_1=4, k_2=4$ and therefore $2m=8$, but the "probability" greater than $1$.
The actual probability in this case is
$$1-\overbrace{3/7}^{\text{prob. the first stub of $i$ pairs with another stub of $i$}} \times \overbrace{1/5}^{\text{prob. the remaining two stubs of $i$ pair with each other}} = 32/35$$
But, the expected number of edges between the two vertices is $k_ik_j/(2m-1) = 16 / 7$, which is far greater.
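A quick sanity check of this counter example by simulation in R (a sketch; stubs are matched by shuffling them and pairing consecutive entries):
```
set.seed(1)
stubs <- c(rep(1, 4), rep(2, 4))    # two vertices, each of degree 4
sims <- replicate(1e5, {
  s <- sample(stubs)                # a uniformly random perfect matching of the 8 stubs
  pairs <- matrix(s, ncol = 2, byrow = TRUE)
  sum(pairs[, 1] != pairs[, 2])     # number of edges running between vertex 1 and vertex 2
})
mean(sims)       # close to 16/7 ~ 2.29 (expected number of edges)
mean(sims > 0)   # close to 32/35 ~ 0.91 (probability of at least one edge)
```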
### Reference
The treatment of this topic/subject is inside Section 13.2.1 of Newman's [book](https://math.bme.hu/%7Egabor/oktatas/SztoM/Newman_Networks.pdf), Networks: An Introduction.
| null | CC BY-SA 4.0 | null | 2023-05-26T11:21:47.137 | 2023-05-27T14:13:56.557 | 2023-05-27T14:13:56.557 | 204068 | 204068 | null |
616987 | 1 | 616989 | null | 2 | 123 | I'm reading this [paper](https://pubmed.ncbi.nlm.nih.gov/31321490/) about a genetic disease. In the paper, there's the following table
[](https://i.stack.imgur.com/eRPU2.png)
where $DD$ means developmental delay and $ID$ means intellectual disability.
From the table (last column) we know that
$$
P(ID \& DD) = 0.82 \\
P(ID) = 0.43 \\
P(DD) = 0.52 \\
$$
Now, imagine that we have already observed that a patient has $DD$ but we don't know if they have $ID$. What is the probability of this patient also having $ID$? That is, how can we compute $P(ID | DD)$?
I've tried using the definition of the conditional probability
$$
P(ID | DD) = \frac{P(ID, DD)}{P(DD)} = \frac{0.82}{0.52} = 1.58
$$
which doesn't make any sense.
What am I doing wrong?
| Conditional probability of having intellectual disability | CC BY-SA 4.0 | null | 2023-05-26T11:30:44.523 | 2023-05-26T11:54:13.050 | null | null | 350686 | [
"conditional-probability",
"medicine",
"disease"
]
|
616988 | 2 | null | 616726 | 3 | null | If you have a model whose behavior is an increasing function, then often the randomness does not take the form of some additive noise term $\epsilon_i(t)$; instead the errors tend to be correlated and enter through different types of functions.
There are several different ways to perform a statistical analysis. For example
- Logistic regression and probit regression: Here the increasing curve can be a probability for some Bernoulli distributed variable. Here the error is incorporated by using binomial regression.
- Survival curves: Here the curve is the effect of an accumulation of risk which is necessarily an increasing function. The curve can be fitted by maximizing the likelihood function. When comparing multiple groups and assuming a constant relative risk ratio, then one can use Cox regression (the survival curve, a distribution for waiting time, is turned into a Bernoulli distribution, by comparing groups)
- Lorenz Curves: These curves plot a cumulative distribution. They can be distribution of wealth, and the related gini index, but also ROC curves. Often these are analysed with simplistic measures, like the area under the curve, but you can also analyse them if you have some mechanistic model underlying the curves.
- Frequency analysis of extreme events: Here one often plots the cumulative distribution function of the events, and the curve is expected to follow some extreme value distribution. But you are not fitting the CDF as if you were doing least squares regression; ideally you fit the observed events with the PDF. (Although there are considerations like the tails being difficult to predict, and there are all kinds of ways to fit these distributions. Before there were fast and cheap computers, people used to plot the double logarithm to fit a Gumbel distribution, which becomes a straight line in such a plot.)
- Empirical distribution function: these are sometimes fitted by minimizing the Kolmogorov-Smirnov statistic or some other metric that describes the distance between the fitted distribution and the empirical distribution (I have no source here, but I believe that I have seen this done in some places).
- etcetera
So it is possibly better to start analyzing your curve based on some underlying principle.
- What sort of process are you looking at and can you reason why you get an increasing curve?
- What sort of statistical variation may occur in the sampling/experiment and how does this influence the observed curve?
With such questions you can figure out what sort of regression/fitting to use.
| null | CC BY-SA 4.0 | null | 2023-05-26T11:43:32.640 | 2023-05-26T11:43:32.640 | null | null | 164061 | null |
616989 | 2 | null | 616987 | 4 | null | The problem here is the notation in the paper (not your formula). ID/DD in this context means ID or DD (not ID and DD). In your formula, $P(ID,DD) = P(ID \cap DD)$, but in the paper ID/DD means $P(ID \cup DD)$. If there are 49 that have either (or both) and there are 26+31=57 in the individual categories, this means there was an overcount of 8...thus, the number that have both is 8 or 8/60 = 13.3%. (Note, the clue was that the intersection count can't be larger than either of the individual counts.)
If you replace your numerator with this value, you should get the desired conditional probability.
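In symbols, using the counts above (26 with ID, 31 with DD, 49 with either, out of 60), this is just inclusion-exclusion:
$$
P(ID \cap DD) = P(ID) + P(DD) - P(ID \cup DD) = \frac{26 + 31 - 49}{60} = \frac{8}{60} \approx 0.133,
$$
and hence $P(ID \mid DD) = \frac{8/60}{31/60} = \frac{8}{31} \approx 0.26$.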
| null | CC BY-SA 4.0 | null | 2023-05-26T11:54:13.050 | 2023-05-26T11:54:13.050 | null | null | 199063 | null |
616990 | 2 | null | 616985 | 1 | null | You might consider rewording your question a little bit, as I find the question itself confusing. Nevertheless, let me try to re-word what Bishop has said in plainer language.
Firstly, as regards "we shall suppose that maximisation of this complete-data log likelihood function is straightforward":
He means that if we had access to both X and Z, then it would be straightforward to find the $\theta$ which maximises this log-likelihood.
We don't have the values of Z, i.e. we don't know $Z_{1}=a, Z_{2}=b, \ldots$
But we do have a posterior over all the values of Z, which he calls $P(Z|X, \theta)$
So while we can't maximise $P(X,Z|\theta)$ wrt $\theta$, we can maximise
$\langle P(X,Z|\theta) \rangle_{z}$
wrt $\theta$
This expectation over Z integrates out all z-dependence, the expression will depend on just $(X,\theta)$ and you know X, it's your observable data, so now you have something which is a function of $\theta$ which you wish to maximise wrt $\theta$
Note that the expectation wrt z above means integrating wrt the posterior, explicitly this is given by
$\int P(X,Z|\theta) P(Z|X,\theta _{t-1})dZ $
The somewhat non-obvious point is that I've written $\theta$ and $\theta _{t-1}$. Whatever value $\theta$ took at timestep t-1 gets subbed into that integral, and then you maximise the integral wrt $\theta$ to calculate $\theta_{t}$
Bishop then goes on to prove that this somewhat non-obvious procedure (unlike gradient descent) guarantees your log-likelihood will increase at every step
(and while I've written the above in terms of maximising $\langle P(X,Z|\theta) \rangle_{z}$ wrt $\theta$ for notational simplicity, in practice you maximise the expected log-likelihood $\langle \log P(X,Z|\theta) \rangle_{z}$ wrt $\theta$)
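For reference, in the more standard notation this M-step quantity is the expected complete-data log likelihood
$$
Q(\theta, \theta_{t-1}) = \int \log P(X, Z \mid \theta)\, P(Z \mid X, \theta_{t-1})\, dZ, \qquad \theta_{t} = \arg\max_{\theta} Q(\theta, \theta_{t-1}).
$$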
| null | CC BY-SA 4.0 | null | 2023-05-26T12:01:16.207 | 2023-05-26T12:01:16.207 | null | null | 103003 | null |
616991 | 2 | null | 616052 | 0 | null | Since you don't seem to have a strong reason to consider the effect of the course as random, and with only 10 different courses, I would recommend against random effects, based on the GLMM FAQ from Ben Bolker, specifically because you probably can't estimate the variance very well:
[https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#should-i-treat-factor-xxx-as-fixed-or-random](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#should-i-treat-factor-xxx-as-fixed-or-random)
I would treat course as a fixed effect in a normal GLM or, if you feel that this creates too many variables, you can use the courses as strata in a conditional logit model:
[https://www.rdocumentation.org/packages/survival/versions/3.5-5/topics/clogit](https://www.rdocumentation.org/packages/survival/versions/3.5-5/topics/clogit)
Also, you should rewrite your course variable as one factor column with 10 levels, as the current form will probably create linear dependence with the intercept; the R functions will create appropriate dummies or contrasts on their own.
| null | CC BY-SA 4.0 | null | 2023-05-26T12:18:55.323 | 2023-05-26T12:18:55.323 | null | null | 341520 | null |
616992 | 1 | null | null | 0 | 6 | I have a large dataset and I want to check if there is any difference between the whole sample and the subsample.
For example, I'm checking the difference in the age of patients in the whole sample and the age of patients with cancer (subsample).
| Using t-test to check the difference between a sample and subsample? | CC BY-SA 4.0 | null | 2023-05-26T12:30:36.547 | 2023-05-26T12:30:36.547 | null | null | 388882 | [
"t-test",
"sample",
"subsampling"
]
|
616993 | 1 | 616994 | null | 0 | 40 | I'm reporting odds ratios for a logistic regression. I'm including p-values but it doesn't include R-squared as [elsewhere](https://www.youtube.com/watch?v=8nm0G-1uJzA), it says that "Odds Ratios and Log(Odds Ratios) are like R-Squared".
Is this correct? Should you add, e.g., McFadden's $R^2$ to odds ratio results?
| when reporting odds ratios should you include R-squared values? | CC BY-SA 4.0 | null | 2023-05-26T12:31:37.557 | 2023-05-26T15:41:57.410 | null | null | 103090 | [
"logistic",
"r-squared",
"odds-ratio"
]
|
616994 | 2 | null | 616993 | 2 | null | That video did not seem very helpful to me.
I guess it's technically correct that odds ratios and R2s both "explain the relationship between two things", but you could also say that about t values, OLS coefficients, chi-square tests, Kendall's tau, hazard ratios, risk ratios, and a zillion other things in stats that are very, very different in other important ways. It's like saying "Taylor Swift and sea snails both contain DNA" - technically true but not very helpful in explaining either of these two things.
Odds ratios are a transformation of the coefficients you get from a logistic regression model (which the video correctly notes are just the log of the odds ratios). So just like in linear regression, you get one odds ratio/coefficient for each independent variable, and each one tells you the relationship between that independent variable and the dependent variable, holding all other variables constant. And you get a p value for each one, which tells you if the coefficient is significantly different from zero (or if the odds ratio is significantly different from one, which is the same thing).
As a side note - lots of people (like me) think that odds ratios are really confusing because people often [misinterpret them as risk ratios.](https://stats.stackexchange.com/questions/213223/explaining-odds-ratio-and-relative-risk-to-the-statistically-challenged) It might be better to just report the coefficients (log odds ratios) because those are obviously uninterpretable, and then use some other approach (like average marginal effects) to produce a measure of effect size in terms of changes in probability.
Now for R2: in linear regression/OLS the R2 is something we calculate for the model as a whole - you get one R2 value for the whole model, which tells you the percentage of the variance in the dependent variable explained by all of the independent variables together. Now in a logit model we're not "explaining variance" at all, so the entire concept of an R2 is meaningless. But people have tried to come up with various "[pseudo-R2](https://en.wikipedia.org/wiki/Pseudo-R-squared)" values that could serve as similar diagnostic tools for the model as a whole. The McFadden R2 is one of these, but it doesn't really have much to do with the R2 from linear regression: it shows how the log likelihood of the observed data changes between a null model and the full model. Personally, I never report any pseudo-R2 from a logit model, but I know others do.
So in short - an odds ratio is a (potentially confusing) measure of association for an individual variable, and McFadden's R2 is just one of various "pseudo-R2"s people have come up with to serve as a measure of the explanatory power of the model as a whole, which are kinda sorta like the R2 from an OLS model. You would not calculate or report a pseudo-R2 value for every odds ratio, but you might report one for the model as a whole.
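For concreteness, here is a small R illustration (my own example using the built-in `mtcars` data, not from the video) of the two kinds of quantities: one odds ratio per predictor versus a single McFadden pseudo-R2 for the whole model:
```
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)

# One odds ratio (with a Wald 95% CI) per coefficient
exp(cbind(OR = coef(fit), confint.default(fit)))

# One McFadden pseudo-R2 for the model as a whole
null_fit <- glm(am ~ 1, data = mtcars, family = binomial)
1 - as.numeric(logLik(fit)) / as.numeric(logLik(null_fit))
```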
| null | CC BY-SA 4.0 | null | 2023-05-26T13:10:16.273 | 2023-05-26T13:10:16.273 | null | null | 291159 | null |
616995 | 1 | null | null | 0 | 16 | I have developed different kinds of RNNs (such as LSTM, GRU, etc.) to predict future values of thermocouple measurements. The residual errors do not appear to follow a normal distribution, so I wanted to explore their skewness and kurtosis values.
Can I somehow correlate the values of skewness and kurtosis (for example if skewness is positive or negative and if kurtosis is more or less than 3) with the performance of the model and the predictions?
Thank you!
| Meaning of Skewness and Kurtosis values of Residual Errors in Time Series Forecasting Problem using LSTM | CC BY-SA 4.0 | null | 2023-05-26T13:21:47.813 | 2023-05-26T13:21:47.813 | null | null | 385198 | [
"residuals",
"skewness",
"recurrent-neural-network",
"kurtosis"
]
|
616996 | 1 | null | null | 0 | 32 | My master's thesis involves modelling fMRI data. Each of $M$ participants has a total of $N$ voxels being measured. All these measurements represent an activation amplitude. Aside from this, I have data on participant characteristics (e.g. age, gender, etc.). My aim is to get posterior distributions across the assignment probabilities of the classes. In essence I want to cluster them using a mixture model. And I want to know how likely each cluster is per voxel. Either the voxel is negatively, positively, or 'null' activated, so there are three classes.
What I am trying to do is fit a three-component Gaussian mixture model. Let's denote this as follows using the allocation variable perspective, so I condition on some grouping $z_i$. For simplicity's sake I will just take the variance to be equal across components:
$$
y_i | z_i \sim \mathcal{N}(\mu_{z_i}, \sigma^2)\quad \text{with}\quad \pi(z_i = g) = \frac{1}{3}
$$
As I understand it, each $\mu_{z_i}$ can again be modelled using a linear combination of mixed effects. This is where I get stuck. As I understand random effects, they allow one to account for group-level variation. For instance, each of my $M$ participants can be assigned its own random intercept, allowing predictions to be more accurate. However, this appears to be at odds with the formulation of the mixture model: there are only three components. How can I then get $M$ different values given the restriction of three components? What am I overlooking? My understanding of the mixture model is probably wrong.
| Bayesian mixture model with Random Effects in Linear Predictor | CC BY-SA 4.0 | null | 2023-05-26T13:43:48.967 | 2023-05-26T17:09:02.873 | 2023-05-26T14:40:07.270 | 388887 | 388887 | [
"bayesian",
"mixed-model",
"multilevel-analysis",
"finite-mixture-model"
]
|
616997 | 1 | null | null | 0 | 55 | Assume $Y_{ij} \sim N(\mu_i,\sigma^2)$, $\mu_i \sim N(\eta,\tau^2)$ for $i=1,2$ $j=1,\cdots,n_i$ and prior $\pi(\eta,\tau^2,\sigma^2) \propto Ca^+(\tau^2,0,b_{\tau}) \times Ca^+(\sigma^2,0,b_{\sigma})$
where $Ca^+(x; 0, b)$ is the truncated Cauchy distribution with pdf $$f(x|b)=\frac{2}{\pi}\times \frac{1}{b[1+(x/b)^2]}I_{[x>0]}, b>0$$
[](https://i.stack.imgur.com/9QBaP.png)
I want to use JAGS with R to do a Bayesian analysis. Because the full conditionals for $\sigma$ and $\tau$ are not known densities, I want to use acceptance-rejection sampling with $\tau^* \sim IG(a,b)$, $C_{max}=Ca^+(0;0,b_{\tau})$, and acceptance probability $\frac{Ca^+(\tau^*;0,b_{\tau})}{C_{max}}$.
$IG(x;a,b)$ the inverted Gamma distribution with pdf $$f(x|a,b)=\frac{b^a e^{-b/x}x^{-(a+1)}}{\Gamma(a)}$$
How do I use Acceptance-Reject sampling when I don't have the complete form of the density function?
| Acceptance-Reject to generate a distribution proportionate to Inverse Gamma and truncate Cauchy distribution | CC BY-SA 4.0 | null | 2023-05-26T13:53:05.110 | 2023-05-27T10:01:34.233 | 2023-05-26T16:58:55.710 | 350153 | 350153 | [
"bayesian",
"simulation",
"accept-reject"
]
|
616998 | 2 | null | 120005 | -1 | null | Suppose $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $B=\begin{pmatrix}e&f\\g&h\end{pmatrix}$
Then,
$$A\otimes B=\begin{pmatrix}a&b\\c&d\end{pmatrix}\otimes\begin{pmatrix}e&f\\g&h\end{pmatrix}=\begin{pmatrix}a\begin{pmatrix}e&f\\g&h\end{pmatrix}&b\begin{pmatrix}e&f\\g&h\end{pmatrix}\\c\begin{pmatrix}e&f\\g&h\end{pmatrix}&d\begin{pmatrix}e&f\\g&h\end{pmatrix}\end{pmatrix}$$
And
$$A\oplus B=\begin{bmatrix}A&0\\0&B\end{bmatrix}$$
| null | CC BY-SA 4.0 | null | 2023-05-26T13:53:31.537 | 2023-05-26T13:53:31.537 | null | null | 388886 | null |
616999 | 2 | null | 616993 | 2 | null | No, you should not report $R^2$ for a logistic regression analysis. My thought is that this would be misleading because a "perfect fit" is anomalous in logistic regression and inference cannot be performed, plus a linear regression would still give you the "optimal" $R^2$ even when the distribution of the response is binary. While there are versions of $R^2$ statistics meant for analysis of categorical data, I disagree that these are somehow necessary or even useful for understanding the odds ratios themselves.
The video specifically says; "The odds ratio - and the log odds ratio - are like the R^2 [in that] they indicate a relationship between two things". Which... well, so does an OLS regression slope, a Pearson correlation, a Wilcoxon U-statistic, a covariance, a hazard ratio, a risk ratio, a risk difference, a harmonic mean difference, ... the list goes on. In fact, the comparison becomes more tenuous when you consider the problem of multivariate adjustment. In that case, the OR should be compared to a regression slope in a linear model because it summarizes a bivariate association in a multivariate projection whereas the $R^2$ is simply a bivariate association (note it relates the fitted values to the actual values irrespective of the individual components).
The point of fitting a logistic regression is precisely to obtain odds ratios. To test a hypothesis of association between an exposure and a response, you can formulate a null hypothesis that the odds ratio is equal to 1. The inference that one performs on the odds ratio is powerful and robust. Just like with linear regression, the best way to understand an odds ratio is simply to present it and its 95% confidence interval. One can go further by simply tabulating the data if the analysis is largely categorical, or by plotting the continuous exposure versus the discrete response and fitting a smoothed curve relating the mean response to the exposure. This "S shaped" curve is precisely what logistic regression estimates. The odds ratio is the slope of that curve, with a value of 1 indicating a completely flat probability response and a value of $\infty$ indicating a step function.
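As a small illustration of that last suggestion (my own simulated data, not the OP's), one can plot the binary response against the exposure, overlay a nonparametric smooth of the mean response, and compare it with the fitted logistic curve whose log-odds slope corresponds to the odds ratio:
```
set.seed(42)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(-0.5 + 1.2 * x))       # true log-odds slope 1.2

fit <- glm(y ~ x, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))  # odds ratio for x with 95% CI

plot(x, y, col = "grey", ylab = "Response / estimated probability")
lines(lowess(x, y), col = "blue", lwd = 2)        # smoothed mean response
xs <- seq(min(x), max(x), length.out = 200)
lines(xs, predict(fit, data.frame(x = xs), type = "response"),
      col = "red", lwd = 2)                       # fitted S-shaped logistic curve
```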
| null | CC BY-SA 4.0 | null | 2023-05-26T13:58:27.973 | 2023-05-26T15:41:57.410 | 2023-05-26T15:41:57.410 | 8013 | 8013 | null |
617000 | 1 | null | null | 0 | 10 | I have two very large independent groups (n=30,000 and 12,000). Each data point is an individual’s UK Deprivation Index decile (the Deprivation Index itself is a ranking from 1 to 32,844 of an area’s relative deprivation where 1 is most deprived). My hypothesis is that one group is more deprived than the other, i.e. has more people in the lower deciles.
The distributions look like this:
[](https://i.stack.imgur.com/MebjS.jpg)
What would be the appropriate test for my hypothesis? Mann-Whitney?
Thanks!
| What test to compare Deprivation Index between two groups | CC BY-SA 4.0 | null | 2023-05-26T13:58:33.793 | 2023-05-26T13:58:33.793 | null | null | 308263 | [
"hypothesis-testing",
"statistical-significance",
"wilcoxon-mann-whitney-test"
]
|
617001 | 1 | null | null | 0 | 15 | This might be a dumb question but I am struggling to figure out what predict.cv.glmnet actually do under the hood, let's say for example for a Lasso regression.
Does it actually train a new model with new coefficients based on the new data submitted (via the newx argument)? I thought it just takes the coefficients from the already trained and tuned model and generates predicted Y values from those for the new data, for example for a validation. Also, does this change depending on whether we submit new data or not?
The reason I am asking is because I am looking at the source code from an ensemble learner which internally uses predict.cv.glmnet in a cross validation and the author claims that this yields an unbiased prediction error. So it runs predict.cv.glmnet k many times, and in each iteration it calls the function with newx set to 1/k of the new data. Then it simply returns the mean squared error between the predictions and Y values of the new data. I am struggling to make sense of this, if predict.cv.glmnet just uses the same (hyper- ) parameters from the cv.glmnet object, what is the point of the iterated call? Why not run predict.cv.glmnet on all the new data together?
Kind regards,
Josephine
| What does predict.cv.glmnet actually do with new data? | CC BY-SA 4.0 | null | 2023-05-26T14:28:13.523 | 2023-05-26T14:33:39.303 | 2023-05-26T14:33:39.303 | 386232 | 386232 | [
"r",
"cross-validation",
"predictive-models",
"glmnet"
]
|
617002 | 1 | null | null | 2 | 22 | I have datasets from two [bivariate poisson](https://projecteuclid.org/journals/kodai-mathematical-seminar-reports/volume-25/issue-2/The-structure-of-bivariate-Poisson-distribution/10.2996/kmj/1138846776.pdf) distributions, $BVP_x(\lambda_1, \lambda_2, \lambda_{12})$, and $BVP_y(\lambda_3, \lambda_4, \lambda_{34})$ respectively.
Now we know the correlation coefficient for these two distributions can be defined as $\rho_x = \frac{\lambda_{12}}{\sqrt{(\lambda_1 + \lambda_{12})(\lambda_2 + \lambda_{12})}}$ and$\rho_y = \frac{\lambda_{34}}{\sqrt{(\lambda_3 + \lambda_{34})(\lambda_4 + \lambda_{34})}}$,
Is there a standard test for testing $H_0 : \rho_x = \rho_y$ vs $H_1 : \rho_x \neq \rho_y$? I understand this is doable via likelihood ratio tests, but I am not sure whether there is an already existing test statistic (with a known distribution) for these parameters.
I know that for the bivariate normal distribution, Fisher's transformation and a $z$ test would work. Would that also work for the bivariate Poisson?
| Testing correlation coefficients from two bivarate poisson | CC BY-SA 4.0 | null | 2023-05-26T14:33:05.460 | 2023-05-27T01:27:02.043 | null | null | 123199 | [
"hypothesis-testing",
"correlation",
"poisson-distribution",
"bivariate"
]
|
617003 | 1 | null | null | 1 | 15 | So, the policy gradient is evaluated as follows:
$$
L = \sum_{\text{minibatch}}\sum_{t} G^t \log \pi(a_t, s_t)
$$
And I was trying to implement this as as code, and I noticed that it really looks like a weighted sparse categorical crossentropy, which would be:
$$
L = \sum_{\text{minibatch}}\sum_{t} G^t (\log \pi(s_t))_{a_t}
$$
where in the unweighted version (say for classification), $G^t$ would be always 1
Am I wrong?
| Isn't policy gradient just a weighted sparse categorical crossentropy? | CC BY-SA 4.0 | null | 2023-05-26T14:46:26.263 | 2023-05-26T14:46:26.263 | null | null | 346940 | [
"neural-networks",
"loss-functions",
"reinforcement-learning"
]
|
617005 | 2 | null | 616979 | 4 | null | Putting EdM's answer into code, which seems to work well and is very concise:
```
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2]
scale_est <- (sd(deathTime)*sqrt(6))/pi
loc_est <- mean(deathTime) + 0.5772157*scale_est
fitGum <- fitdist(deathTime, "gumbel",start=list(a = loc_est, b = scale_est))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel
```
Alternative: in referencing [Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors?](https://stats.stackexchange.com/questions/160552/why-is-nls-giving-me-singular-gradient-matrix-at-initial-parameter-estimates) per whuber's comment, which says to paraphrase: "Automatically finding good starting values for a nonlinear model is an art. (It's relatively easy for one-off datasets when you can just plot the data and make some good guesses visually.) One approach is to linearize the model and use least squares estimates." I came up with the following which appears to work in this case.
```
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2]
# Define the linearized model
linearized_model <- function(time, a, b) {
log_time <- log(time)
log_time - a / b
}
# Define the objective function for least squares estimation
objective_function <- function(params) {
a <- params[1]
b <- params[2]
predicted_values <- linearized_model(deathTime, a, b)
residuals <- predicted_values - log(deathTime)
sum(residuals^2)
}
# Least squares estimation to obtain starting parameters
starting_params <- optim(c(1, 1), objective_function)$par
fitGum <- fitdist(deathTime, "gumbel",start=list(a = starting_params[1], b = starting_params[2]))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel
```
| null | CC BY-SA 4.0 | null | 2023-05-26T14:46:55.840 | 2023-05-26T16:10:02.023 | 2023-05-26T16:10:02.023 | 378347 | 378347 | null |
617006 | 2 | null | 616979 | 10 | null | The [NIST page on Gumbel distributions](https://www.itl.nist.gov/div898/handbook/eda/section3/eda366g.htm) shows the [method of moments](https://stats.stackexchange.com/tags/method-of-moments/info) estimators for the parameters of both the maximum and minimum extreme-value distributions. Those are easily calculated and should provide reliable initial estimates.
The parameterization of the density function for the minimum extreme value distribution on that page is:
$$f(x) = \frac{1}{\beta} \exp({\frac{x-\mu}{\beta}})\exp({-\exp({\frac{x-\mu}{\beta}}}))$$
with location parameter $\mu$ and scale parameter $\beta$.
In that parameterization, with sample mean $\bar X$ and sample standard deviation $s$, the method of moments estimators are:
$$\tilde{\beta}=\frac{s\sqrt6}{\pi}$$
and:
$$\tilde{\mu}=\bar X +0.5772 \tilde{\beta} $$
where 0.5772 is an approximation to [Euler's constant](https://en.wikipedia.org/wiki/Euler%27s_constant).
That's used by `survreg()` in R, which you can see by typing the following at the command prompt:
```
survival::survreg.distributions$extreme$init
```
| null | CC BY-SA 4.0 | null | 2023-05-26T14:49:07.597 | 2023-05-26T14:49:07.597 | null | null | 28500 | null |
617007 | 1 | null | null | 0 | 32 | Say we have $N$ independent Gaussian variables $X_i$, each drawn from a mean-zero Gaussian with variance $\sigma_i^2$, where in this case I assume I know the $\sigma_i^2$. Define the weighted average as
$$
\bar{X} = w \sum_i^N X_i / \sigma_i^2, \ \ w = \left(\sum_i^N \sigma^{-2}_i\right)^{-1}.
$$
Then define the weighted sample variance as
$$
S^2 = w \sum_i^N (X_i - \bar{X})^2 / \sigma_i^2
$$
In the case that all variances are equal, so that $\sigma_i^2 = \sigma^2$, this is of course just the standard unweighted sample variance.
Now, the expectation of $S^2$ is simply $w(N-1)$, where the $(N-1)$ comes about in the same way that we get it in the non-weighted case. We can then consider the quantity $\gamma = S^2 / \langle S^2 \rangle$ to be a "normalized" weighted variance with expectation unity.
I'd like to know the distribution of $\gamma$.
In the case that all variances are equal, $\gamma$ can be written
$$
\gamma = \frac{1}{N-1} \sum_i^N (Y_i - \bar{Y})^2,
$$
where $Y = X/\sigma$ is a standard normally distributed variable. According to [this MathWorld article](https://mathworld.wolfram.com/SampleVarianceDistribution.html), this ends up having the distribution
$$
f(\gamma) = \frac{\beta^\beta}{\Gamma(\beta)} \gamma^{\beta-1} e^{-\beta \gamma},
$$
with $\beta = (N-1)/2$. This is just a Gamma distribution.
But what about the case in which $\sigma_i^2$ can vary for different $i$? Having run several simulations, it seems that the distribution of $\gamma$ is precisely the same as in the unweighted case, and thus depends only on $N$. I have only two problems with this:
- I can't prove it.
- Intuitively, I would expect that if I had say $N=100$, but of those variables, only one had low variance, and the rest had "enormously" high variance, then the distribution should be approximately the same as having $N=1$. However, the distribution knows nothing about the variances involved, and only about the total number of observations. Most likely my intuition just needs to be sharpened here -- does anyone have a good explanation?
EDIT
I'm adding some code and a plot here to show the simulation I did:
So, here's the simulation code:
```
import numpy as np

def simulate(variance: np.ndarray, size=100000):
nd = len(variance)
w = 1/np.sum(1/variance)
Z = np.random.normal(size=(size, nd))
first_term = np.sum(Z**2, axis=1)
second_term = np.sum(Z / np.sqrt(variance), axis=1)**2 * w
return (first_term - second_term) / (nd - 1)
```
Then, I made a plot where I used three different variance vectors: (i) all ones, (ii) all ones except one at $10^{-5}$, and (iii) half ones, half $10^{-5}$.
Code for making the plot:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma

N = 100
plt.figure(figsize=(10, 6))
sim_gamma_ones = simulate(np.ones(N))
lopsided_var = np.ones(N)
lopsided_var[0] /= 1e5 # very small variance for just one observation
sim_gamma_single_good = simulate(lopsided_var)
half_var = np.ones(N)
half_var[:N//2] /= 1e5 # very small variance for half of the observations
sim_gamma_half_good = simulate(half_var)
plt.hist(sim_gamma_ones, bins=150, histtype='step', label='unweighted', density=True)
plt.hist(sim_gamma_single_good, bins=150, histtype='step', label='one reliable obs', density=True)
plt.hist(sim_gamma_half_good, bins=150, histtype='step', label='half reliable', density=True)
x = np.linspace(0, 5, 500)
plt.plot(x, gamma.pdf(x, a=(N-1)/2, scale=2/(N-1)), color='k', lw=1, label='Gamma distribution')
plt.title(f"N={N}")
plt.legend()
```
The plots:
[](https://i.stack.imgur.com/hmU7G.png)
[](https://i.stack.imgur.com/VDcZk.png)
[](https://i.stack.imgur.com/6eDUC.png)
| Distribution of Weighted Sample Variance | CC BY-SA 4.0 | null | 2023-05-26T14:51:18.420 | 2023-05-26T22:43:51.460 | 2023-05-26T22:43:51.460 | 81338 | 81338 | [
"random-variable",
"gamma-distribution",
"weighted-mean",
"weighted-data"
]
|
617008 | 1 | null | null | 0 | 11 | I am interested in multi-modal medical image registration. Given an image, I need to replace it by an image constituted by the entropies of the pixels/voxels.
I don't understand well how Wachinger and Navab did in this paper [https://campar.in.tum.de/pub/wachinger2010structural/wachinger2010structural.pdf](https://campar.in.tum.de/pub/wachinger2010structural/wachinger2010structural.pdf)
| entropy images or image of the entropies | CC BY-SA 4.0 | null | 2023-05-26T15:09:13.777 | 2023-05-26T15:09:13.777 | null | null | 388894 | [
"optimization"
]
|
617009 | 2 | null | 616899 | 4 | null | Great question.
I think the previous answers by @HarveyMotulski and @MichaelLew are solid: you set out to investigate a between-group difference, with positive but weak results, and especially given the replication crisis need to emphasize that while by your decision criterion this counts as a group difference, those results depend on a single observation.
Their answers concentrate on describing the experiment you set out to do, and the analysis you would have pre-registered. And I agree you have a duty to report that as planned, to help stave off file drawer bias and other scientific gremlins.
However, the key result to me is your surprise at the few successes. This may not be Fleming's petri dish, or Rutherford's reflected alpha particles, but it's a potential discovery. It seems you had a very important but unarticulated background assumption that only became apparent when it was violated.
Worth mentioning for its own sake and because that violation might invalidate the statistical test. I'm thinking here of Feynman's story about rats running mazes.
Gelman frequently emphasizes "check the fit" as a step that might be outside your formal inference framework: [one example here](https://statmodeling.stat.columbia.edu/2013/03/14/everyones-trading-bias-for-variance-at-some-point-its-just-done-at-different-places-in-the-analyses/). I think this is also consilient with Mayo's [severe testing](https://errorstatistics.com/mayo-publications/) approach -- but I'm no expert.
That may lead you to a new model or explanation -- it would be exploratory work on this dataset and require another experiment to test, which hopefully you have time and resources for, else a good description can inspire someone else to.
Best of luck!
| null | CC BY-SA 4.0 | null | 2023-05-26T15:38:09.330 | 2023-05-26T15:38:09.330 | null | null | 19951 | null |
617010 | 2 | null | 616752 | 3 | null | The following demonstration is not rigorous, but it's correct :-).
Because the Normal distribution is continuous, there is almost surely a unique maximum absolute value. Thus, the chance that $|Z_n|=1$ is $1/n.$ In all other cases $Z_n = X_n/X_i$ where $X_i$ attains the maximum absolute value. Furthermore, for extremely large $n,$ the standard deviation of the distribution of that maximum $M_n$ shrinks towards zero as $n$ grows. Consequently, $M_n$ is nearly constant and will necessarily be close to its median.
Because the CDF of the maximum is $\Phi(x)^n,$ its median $m_n$ is where $\Phi(m_n)^n = 1/2,$ so that
$$M_n \approx m_n = \Phi^{-1}(2^{-1/n}).$$
Thus,
>
$Z_n$ will (with probability $1-1/n$) be arbitrarily close to $X_n/m_n,$ which has a mean-zero Normal distribution with standard deviation $1/m_n.$
In a simulation of $5\,000$ samples, each of size $50\,000,$ I computed all the $Z_n.$ Here are histograms for four selected values of $n.$ Over them I have plotted the Normal approximation. The p-value is that obtained by removing the values of $Z_n$ equal to $1$ (we know they contribute a small "atom") and applying a Kolmogorov-Smirnov test to the rest.
[](https://i.stack.imgur.com/LKVCq.png)
Even with these $5\,000$ samples, the KS test -- which is notorious for its power to distinguish Normal from non-Normal data (some people on this site have claimed it will always reject Normality on any dataset this large) -- does not reject the foregoing claim when $n = 50\,000.$
More specifically, upon dividing $Z_n$ by the standard deviation $1/m_n,$ we find the distribution of $Z_n \Phi^{-1}(2^{-1/n})$ converges to the standard Normal distribution. An immediate consequence is that the distribution of $Z_n$ converges to zero at a rate given by $1 / \Phi^{-1}(2^{-1/n}).$ This is a slow rate, because a crude approximation gives $m_n = O(1/\sqrt{\log n}).$ For instance, $1/m_n=1/10$ for $n \approx e^{100} \approx 10^{43}$ and a standard deviation of $1/10$ is still appreciable. Thus, in any application it would usually be unwise to approximate $Z_n$ by $0:$ use this limiting Normal distribution instead as a better approximation.
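For anyone wanting to reproduce this numerically, here is a minimal R sketch of the check described above (using the definition of $Z_n$ given at the start of this answer, and a somewhat smaller simulation than the one reported):
```
set.seed(1)
n <- 50000; nsim <- 2000
z <- replicate(nsim, {
  x <- rnorm(n)
  x[n] / x[which.max(abs(x))]     # Z_n: X_n divided by the X_i of largest absolute value
})
m_n <- qnorm(2^(-1/n))            # approximate median of the maximum
ks.test(z[z != 1] * m_n, "pnorm") # rescaled Z_n (atom at 1 removed) vs. standard Normal
```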
| null | CC BY-SA 4.0 | null | 2023-05-26T15:45:03.713 | 2023-05-26T22:00:51.187 | 2023-05-26T22:00:51.187 | 919 | 919 | null |
617011 | 1 | null | null | 0 | 14 | I have data that contain demographic information and primary variables (A, B, X, Y, Z) extracted from a review of the past 20 years of literature. All of the reviewed publications are case studies, so in the data we collected, each row is a person selected from a different case study. Therefore, the data look like regular data that contain every piece of information for each person, like age, sex, A, B, X, Y, Z. But the people were not collected through a regular sampling procedure; they come from case studies identified in the literature review.
I want to compare the mean difference between A and B, and also the mean differences among X, Y, Z. But I guess we can't use a t-test or ANOVA in this kind of situation, right? Does anyone know what statistical methods I should use in this situation, and how to apply them? Thank you! I hope I have expressed myself clearly.
| What statistical method that I can use to compare mean of cases that selected from different studies? | CC BY-SA 4.0 | null | 2023-05-26T16:19:01.997 | 2023-05-26T16:19:01.997 | null | null | 352192 | [
"hypothesis-testing",
"mean"
]
|
617014 | 1 | 617047 | null | 1 | 32 | I started working with the Gumbel distribution and fit it to the `lung` dataset to try it out. I then compared it with the survival curve using the Weibull distribution, which provides the best fit per goodness-of-fit tests and also hews closely to the Kaplan-Meier plot as shown below.
When averaging the death rate in the `lung` data (status = 2 is death; all status 2's divided by a total of 228 elements in `lung` data) the death rate is 72.4%. This compares to a death rate for Gumbel fit of 72.7% (see `death_rate_Gumbel` in below code) and a death rate for Weibull fit of 63.2% (see `death_rate_Weibull` below). Shouldn't the Weibull death rate be close to the actual death rate for `lung` dataset? What am I doing wrong, or misinterpreting?
[](https://i.stack.imgur.com/yLiq2.png)
Code:
```
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1022, by = 1)
# Gumbel distribution
deathTime <- lung$time[lung$status == 2]
scale_est <- (sd(deathTime)*sqrt(6))/pi
loc_est <- mean(deathTime) + 0.5772157*scale_est
fitGum <- fitdistrplus::fitdist(deathTime, "gumbel",start=list(a = loc_est, b = scale_est))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
# Weibull distribution
survWeib <- function(time, survregCoefs) {exp(-(time / exp(survregCoefs[1]))^exp(-survregCoefs[2]))}
fitWeib <- survreg(Surv(time, status) ~ 1, data = lung, dist = "weibull")
# plot all
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel
lines(survWeib(time, fitWeib$icoef),type = "l",col = "blue",lwd = 3) # plot Weibull
lines(survfit(Surv(time, status) ~ 1, data = lung), col = "black", lwd = 1) # plot K-M
legend("topright",legend = c("Gumbel","Weibull","Kaplan-Meier"),col = c("red", "blue","black"),lwd = c(3,3,1),bty = "n")
# death rates
death_rate_Weibull <- 1-mean(survWeib(time, fitWeib$icoef))
death_rate_Gumbel <- 1-mean(survGum)
```
| Why does the best fitting Weibull distribution for this survival data deviate further from the actual data than a poorly fit distribution? | CC BY-SA 4.0 | null | 2023-05-26T16:51:22.170 | 2023-05-27T07:05:46.217 | 2023-05-26T18:27:10.147 | 378347 | 378347 | [
"r",
"survival",
"weibull-distribution",
"gumbel-distribution"
]
|
617015 | 1 | null | null | -1 | 12 | For my thesis I have to analyze a pretest-posttest intervention. Participants who received a certain treatment were measured prior to the treatment (pretest anxiety) and again after the treatment (posttest). There was also a control group, so the group variable has two levels (control, intervention). There is also another variable with 2 levels (high and low IQ). My hypothesis is that state anxiety will be reduced after the intervention. I used an ANCOVA with pretest anxiety as a covariate, but unfortunately there is a significant interaction between my group variable and the anxiety covariate, so the homogeneity-of-regression-slopes assumption is violated. I don't know which alternative I can use. I have 2 categorical variables, both with 2 levels, a pretest and a posttest, and the posttest is the dependent variable.
| What alternative to use when assumption homogenity ANCOVA is violated | CC BY-SA 4.0 | null | 2023-05-26T16:57:24.077 | 2023-05-26T16:57:24.077 | null | null | 388441 | [
"assumptions",
"ancova"
]
|
617016 | 2 | null | 616996 | 0 | null | Here is a potential viewpoint of the sort of model that you can have:
$$y_i|z_i,x_i \sim \mathcal{N}(\mu_{z_i,x_i},\sigma_\epsilon)$$
where $x_i$ is an index for the individual. With priors
$$\begin{array}{}z_i &\sim& \pi(z_i = g) = 1/3 \\
\mu_{z_i,x_i}|z_i &\sim& \mathcal{N}(\mu_{z_i},\sigma)
\end{array}$$
---
Below is a visualisation of a potential distribution with 5 individuals and 10 measurements per level per individual, when the component means are $(\mu_1, \mu_2, \mu_3) = (-5,0,5)$, $\sigma = 1$ and $\sigma_\epsilon = 0.3$.
[](https://i.stack.imgur.com/rXBjx.png)
| null | CC BY-SA 4.0 | null | 2023-05-26T17:03:36.570 | 2023-05-26T17:09:02.873 | 2023-05-26T17:09:02.873 | 164061 | 164061 | null |
617017 | 2 | null | 616622 | 4 | null | First, as a comment suggested, none of these functions uses random sampling. The data sets I examined all had the number of candidate parameters less than the number of observations. If that's the case, you can get full linear least-squares (LS) regression fits as a reference.
With `lars()` and the `enet()` function that's based on it, the functions can return the entire path of solutions from fully penalized (all coefficients 0) to that LS solution. That's explained in Chapter 3 of [Elements of Statistical Learning](https://web.stanford.edu/%7Ehastie/ElemStatLearn/). When you make predictions from such models with a penalty-factor choice `s`, you must specify what that penalty means.
You chose `s = 0.1` and `mode = "fraction"` for those models. According to the [lars manual](https://cran.r-project.org/web/packages/lars/lars.pdf), with that `mode`:
>
If mode="fraction", then s should be a number between 0 and 1, and it refers to the ratio of the L1 norm of the coefficient vector, relative to the norm at the full LS solution.
So in those cases you are (arbitrarily) choosing the set of coefficients along the path that has a sum of absolute values equal to one-tenth of the sum of coefficient absolute values at the LS solution. I think that you should get the same results from both types of models. You specified `normalize=FALSE` in the call to `enet()` but not in the call to `lars()`, which might be responsible (perhaps due to numerical precision issues) for the difference you found, even if the predictor variables were nominally standardized to start with.
The penalty factor in `glmnet()` is different. With its more general application, that function chooses by default 100 penalty parameter values. The maximum is the smallest penalty that sets all coefficients to 0, and the minimum is a small fraction of it; "the default depends on the sample size `nobs` relative to the number of variables `nvars`" according to the [manual](https://cran.r-project.org/web/packages/glmnet/glmnet.pdf).
Furthermore, the value of `s` means something completely different from what you specified for the other functions. Again, quoting from the manual:
>
Value(s) of the penalty parameter lambda at which predictions are required.
That's the actual penalty applied to the L1 norm of the coefficients, not a fraction of the L1 norm at the LS solution as you specified to the other functions. The value used by `predict.glmnet()` is equivalent to `mode = "penalty"` for [predict.enet()](https://cran.r-project.org/web/packages/elasticnet/elasticnet.pdf) and `mode = "lambda"` for [predict.lars()](https://cran.r-project.org/web/packages/lars/lars.pdf).
Finally, I respectfully suggest that this approach with a fixed choice of `s` (even when you eventually specify the same type of `s` to all models) is not consistent with your intent to use LASSO as a "benchmark" against which to compare another predictor-selection method. Choosing an arbitrary penalty factor is not how these functions are used in practice. Typically, one would use `cv.glmnet()` to find an optimal penalty factor, and return the coefficients that were retained in the model at that optimal penalty.* I think you will find it very hard to convince reviewers that your approach to feature selection with LASSO, with a single fixed value of `s`, constitutes valid "benchmarking."
---
*Unlike your code, that would require random sampling and a corresponding `set.seed()` for reproducibility.
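For reference, a minimal sketch of that usual workflow (with toy simulated data rather than your predictors) looks like this:
```
library(glmnet)
set.seed(1)                            # CV fold assignment is random
x <- matrix(rnorm(100 * 20), 100, 20)  # toy data: 100 observations, 20 candidate predictors
y <- x[, 1] - 2 * x[, 2] + rnorm(100)

cvfit <- cv.glmnet(x, y, alpha = 1)    # alpha = 1 is the LASSO penalty
coef(cvfit, s = "lambda.min")          # coefficients retained at the CV-optimal penalty
# coef(cvfit, s = "lambda.1se")        # or a sparser model within 1 SE of the optimum
```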
| null | CC BY-SA 4.0 | null | 2023-05-26T17:11:44.340 | 2023-05-26T17:11:44.340 | null | null | 28500 | null |
617019 | 1 | null | null | 1 | 15 | In A General Metric for Riemannian Manifold Hamiltonian Monte Carlo (Betancourt, 2013), the author writes:
>
The first [5] and still most common choice of the conditional density, $\pi(p|q)$, is a standard gaussian, [...]. This choice, however, ultimately limits the effectiveness of HMC when applied to intricate target distributions. Because $p^T M^{-1} p$ is a $\chi^2$ variate, in equilibrium $\Delta T \approx \frac{N}{2}$ and, with the Hamiltonian conserved along each trajectory, this implies that the variation in the potential is also limited to $\Delta V \approx \frac{N}{2}$.
I am struggling to understand why $p^T M^{-1} p$ being $\chi^2$ in equilibrium gives us that $\Delta T \approx \frac{N}{2}$. Further, I am also not entirely sure what the quantity $\Delta T$ represents in this context.
Where does this conclusion come from and what intuition is trying to be conveyed?
| What is the intuition for the limited variation in potential energy for HMC? | CC BY-SA 4.0 | null | 2023-05-26T17:29:39.260 | 2023-05-26T17:29:39.260 | null | null | 191335 | [
"markov-chain-montecarlo",
"hamiltonian-monte-carlo"
]
|
617022 | 1 | null | null | 3 | 81 | Let $X_i$ be a binary random variable where $P(X_i = 1) = p$.
I want to find the MLE estimator for $\theta = p(1-p)$.
The likelihood function should be
$$
L(\theta) = \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i} = p^{\sum_{i=1}^nx_i}(1-p)^{n- \sum_{i=1}^nx_i},
$$
How should I proceed?
It seems like I cannot just take the derivative with respect to $\theta = p(1-p)$.
Edit: You use the plug-in estimator, so you find $\hat{p}_{mle}$ first, then
$\hat{\theta} = \hat{p}(1-\hat{p})$.
Now suppose I want to find the asymptotic distribution of $\hat{\theta}$. How do I compute the information matrix?
| What is the asymptotic distribution for the MLE estimator of p(1-p) in a binary random variables | CC BY-SA 4.0 | null | 2023-05-26T18:09:05.700 | 2023-05-28T06:37:40.673 | 2023-05-26T20:31:16.173 | 388905 | 388905 | [
"self-study",
"maximum-likelihood"
]
|
617023 | 1 | null | null | 1 | 26 | I am looking for advice on the best practice to determine hyperparameters for my LSTM model. I have time series data that I have divided into train and test sets. I was planning to use an expanding walk forward cross validation scheme on my train set to determine the hyperparameters and then use the test to obtain a final evaluation score. However, I am confused as to how to use the early stopping with this setup. Should I:
- At each CV iteration use the test fold as a validation set to determine how many epochs to train for with early stopping? This potentially will skew the CV score based on how much the model overfits the test fold at each CV iteration.
- Just preset the number of epochs for each fold and not use early stopping? - due to the different number of data points in each fold, this may not make sense.
Also if I do use CV, how do I determine my early stopping rounds for the final model (without using the test set)?
Would I be better off just dividing my data into a train/val/test split and tuning both hyperparameters and early stopping on the validation set?
Thank you!
| How to handle hyperparameter tuning for LSTM with early stopping? | CC BY-SA 4.0 | null | 2023-05-26T18:55:01.603 | 2023-05-27T09:43:46.523 | null | null | 236994 | [
"machine-learning",
"time-series",
"neural-networks",
"cross-validation",
"lstm"
]
|
617024 | 2 | null | 616940 | 0 | null | You may find yourself interested in Benavoli et al. (JMLR 2017). In addition to making a case for the Bayesian methods the authors like, the paper goes through more classical methods from frequentist statistics that you may prefer.
I do not see the relevance of the model construction when it comes to testing. While the fact that you are using different data sources to construct the models might have enormous implications for your work, when it comes to testing, you just have two models that you want to compare, no different than if you wanted to compare a random forest to a neural network.
Benavoli et al. (JMLR 2017) measure their models using classification accuracy, but their techniques do not seem to depend on the measure of performance on which the models are compared. If you have reason to want to investigate the ROCAUC, do feel free to use that in the methods in the article.
If you need to do power calculations, simulation can be a powerful tool for these kinds of tests where analytical solutions are not straightforward or necessarily even possible.
REFERENCE
[Benavoli, Alessio, et al. "Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis." The Journal of Machine Learning Research 18.1 (2017): 2653-2688.](https://www.jmlr.org/papers/volume18/16-305/16-305.pdf)
| null | CC BY-SA 4.0 | null | 2023-05-26T19:09:25.460 | 2023-05-26T19:09:25.460 | null | null | 247274 | null |
617026 | 1 | null | null | 1 | 9 | Let’s say we want to check the effect of the interaction of education level and gender of the salesperson on number of units sold by them. We have thousands of sales people in the data, but they only come from 10 companies: 4 of them hire (almost) only men, 4 hire almost only women, and 2 companies hire both men and women (let's say the employees are independent of one another).
We want to control for the company, but that pretty much kills any variation in the gender. Is there a way to still rule out the effect of the company? If the gender is really important for the research, is there a good way to check whether the company doesn’t matter much? How would you go about it?
| Including an interaction variable which doesn't vary within a key control variable | CC BY-SA 4.0 | null | 2023-05-26T19:35:31.193 | 2023-05-26T19:35:31.193 | null | null | 388911 | [
"variance",
"controlling-for-a-variable"
]
|
617028 | 1 | null | null | 0 | 8 | Let's say I am working with a state $X$ split into three parts $U$, $V$, and $W$.
I can efficiently sample from $W|U,V$, $U|V$, and $V|U$. My initial intuition was to do a variable-at-a-time Metropolis Hastings algorithm to sample $X$.
However, I then realized that the candidate density defined by the following mixture:
$f(x^*|x) = f(w^*|u^*,v^*) \cdot (f(u^*|v^*) \cdot f(v^*|u) + f(v^*|u^*) \cdot f(u^*|v))/2$ produces an algorithm with acceptance ratio equal to one.
Is this part of some more general result? It looks like what I have is a Gibbs sampler, block sampling $U,V$ then $W|U,V$, but it only seems to work if I randomize the order in which I sample $U$ and $V$.
Is this indeed more efficient than using a variable-at-a-time sampler with acceptance ratios less than one?
| Is it possible to increase the Hastings ratio by combining and mixing elementary kernels? | CC BY-SA 4.0 | null | 2023-05-26T20:15:51.297 | 2023-05-26T20:15:51.297 | null | null | 387957 | [
"markov-chain-montecarlo",
"gibbs",
"mcmc-acceptance-rate"
]
|
617029 | 1 | null | null | 1 | 10 | I am building a model that predicts a target variable using daily weather data. I am trying to understand which weather lags influence my target variable the most. The issue is some weather lags are highly correlated - for example, if I use monthly weather aggregates, January and October for some bizarre reason have highly correlated values for one of my variables. I have similar results with weekly weather variables (although these tend to be correlated with the previous 2 lags). If I were just interested in prediction, I'd just use PCA.
Most of the feature importance methods and metrics I am aware of, such as permutation importance, feature importance in random forests and SHAP are affected by correlated variables. What would be the best way to evaluate the feature importance in my case? Is there a way to preprocess my data to make it more favorable to my goal?
Thanks!
| Feature extraction or importance for correlated time series lags | CC BY-SA 4.0 | null | 2023-05-26T20:22:46.677 | 2023-05-26T20:22:46.677 | null | null | 236994 | [
"machine-learning",
"time-series",
"random-forest",
"feature-selection",
"importance"
]
|
617030 | 2 | null | 617022 | 1 | null | There is no MLE of $\theta$.
>
$$L(\theta) = \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i}$$
This expression of the likelihood function is incorrect. The left-hand side expresses the likelihood as a function of $\theta$, but the right hand side is a function of $p$.
You can not fix this either because $\theta$ is a non-injective function of $p$ and can not be inverted. Given $\theta$ we have two possible values of $p$
$$p=\frac{1}{2}\pm\sqrt{\frac{1}{4}-\theta}$$
so you can not compute the probability of the data given $\theta$ unless you have a second parameter.
---
>
how do I compute the information matrix?
If you would have a valid MLE then you could start with the information matrix of some parameter and [apply a scaling according to the square of the derivative of the transformation](https://en.wikipedia.org/wiki/Fisher_information#Reparametrization).
However, note that some transformed variable can be biased and the Fisher Information alone is not an indication of asymptotic variance. See this example: [Why the variance of Maximum Likelihood Estimator(MLE) will be less than Cramer-Rao Lower Bound(CRLB)?](https://stats.stackexchange.com/questions/592676/why-the-variance-of-maximum-likelihood-estimatormle-will-be-less-than-cramer-r)
---
The special case of finding the variance of the distribution of the statistic $\hat\theta = \hat{p}(1-\hat{p}) = \hat{p}-\hat{p}^2$ can be done more directly by computing $$\begin{array}{}
Var(\hat\theta) &=& E[(\hat{p}-\hat{p}^2)^2] - E[\hat{p}-\hat{p}^2]^2 \\
&=& E[\hat{p}^4-2\hat{p}^3+\hat{p}^2] - E[\hat{p}-\hat{p}^2]^2\\
&=& E[\hat{p}^4]-2E[\hat{p}^3]+E[\hat{p}^2] - (E[\hat{p}]-E[\hat{p}^2])^2
\end{array}$$
which can be expressed in terms of the raw moments of $\hat{p}$ (a binomial distributed variable, scaled by $1/n$)
$$\begin{array}{}
E[\hat{p}] &=& p \\
E[\hat{p}^2] &=& p^2 + \frac{p(1-p)}{n} \\
E[\hat{p}^3] &=& p^3 + 3 \frac{p^2(1-p)}{n} \\
E[\hat{p}^4] &=& p^4 + 6 \frac{p^3(1-p)}{n} + 3 \frac{p^2(1-p)^2}{n^2} \\
\end{array}$$
and the variance [can be written as](https://www.wolframalpha.com/input?i=+expand+p%5E4+%2B+6+%5Cfrac%7Bp%5E3%281-p%29%7D%7Bn%7D++%2B+3+%5Cfrac%7Bp%5E2%281-p%29%5E2%7D%7Bn%5E2%7D+-2%28p%5E3+%2B+3+%5Cfrac%7Bp%5E2%281-p%29%7D%7Bn%7D%29+%2B++p%5E2+%2B+%5Cfrac%7Bp%281-p%29%7D%7Bn%7D+-+%5Cleft%28p%5E2+%2B+%5Cfrac%7Bp%281-p%29%7D%7Bn%7D+-+p+%5Cright%29%5E2)
$$Var(\hat\theta)
= \frac{2p^4-4p^3+2p^2}{n^2} + \frac{-4p^4+8p^3-5p^2+p}{n}$$
where the second term becomes dominant for large $n$ and is the same result as using the Fisher Information matrix
$$p(1-p)/n \cdot \left(\frac{\text{d}\theta}{\text{d}p}\right)^2 = \frac{p(1-p)(1-2p)^2}{n}$$
In the case of $p=0.5$ this would lead to zero variance (or an infinite value in the information matrix). In that case you can still use the Delta method with a second order derivative as demonstrated in this question: [Implicit hypothesis testing: mean greater than variance and Delta Method](https://stats.stackexchange.com/questions/343187/implicit-hypothesis-testing-mean-greater-than-variance-and-delta-method/441688#441688)
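As a quick sanity check of these formulas, here is a small Monte Carlo sketch in R (with an arbitrary choice of $p$ and $n$ of my own):
```
set.seed(1)
n <- 200; p <- 0.3
p_hat     <- rbinom(1e5, n, p) / n
theta_hat <- p_hat * (1 - p_hat)

var(theta_hat)                    # simulated variance of the estimator
p * (1 - p) * (1 - 2 * p)^2 / n   # leading-order (delta method) approximation
```
The two numbers agree up to the $1/n^2$ correction term given above.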
| null | CC BY-SA 4.0 | null | 2023-05-26T20:52:12.137 | 2023-05-28T06:37:40.673 | 2023-05-28T06:37:40.673 | 164061 | 164061 | null |
617031 | 2 | null | 506409 | 0 | null | A SMOTE-style technique is completely reasonable for balanced data. Yes, the "M" means "minority" and there is no minority class in a balanced problem. However, the idea of using the SMOTE-style synthesis for points in general could apply.
[Note that SMOTE seems not to be so great at synthesizing reasonable points](https://stats.stackexchange.com/a/588248/247274), which diminishes its potential utility.
I am with [the answer by Pananos](https://stats.stackexchange.com/a/506533/247274) expressing skepticism about synthesizing new points in order to expand the sample size. If you have a small sample size that truly needs to be expanded, then there is a real risk that the point synthesis will overfit to the coincidences in the small training data, and funky behavior might not get washed out by other, more mainsteam points. Once you reach a large sample size where this is not such a concern, then I question if you really need to generate new points.
| null | CC BY-SA 4.0 | null | 2023-05-26T21:19:08.980 | 2023-05-26T21:19:08.980 | null | null | 247274 | null |
617033 | 2 | null | 595519 | 1 | null | One common way of supporting the parallel trends assumption is the relative time (event-study) model. I'm assuming that in your model the treatments occur at different time periods (P1--P5), so first you need a variable (Rel_time) whose value is set to 0 at each unit's treatment time. For example, if a unit received treatment at period P3, then P3 has value 0, P2 has value -1, P4 has value +1, and so on.
The next step is to run a regression of the outcome on the interaction term `Treated_Unit` * `Rel_time`. Here, `Treated_Unit` is a binary variable that is 1 for the treatment group and 0 for the control group.
If the parallel trends assumption holds, then the negative Rel_time values should have insignificant coefficients. If there is an effect post treatment, the non-negative relative times should have significant coefficients in the desired direction. A sketch of this specification is given below.
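Here is a minimal R sketch of this specification with hypothetical simulated data (variable names are my own; controls are pooled into the omitted reference bin, which is equivalent to the `Treated_Unit` * `Rel_time` interaction):
```
set.seed(1)
dat <- expand.grid(unit = 1:40, period = 1:10)
dat$treated  <- as.integer(dat$unit <= 20)                    # units 1-20 treated at period 5
dat$rel_time <- ifelse(dat$treated == 1, dat$period - 5, -1)  # controls pooled into the reference bin
dat$rel_time <- relevel(factor(dat$rel_time), ref = "-1")
dat$y <- 0.2 * dat$period + 1.5 * (dat$treated == 1 & dat$period >= 5) + rnorm(nrow(dat))

fit <- lm(y ~ rel_time + factor(unit) + factor(period), data = dat)
coefs <- summary(fit)$coefficients
coefs[grep("rel_time", rownames(coefs)), ]  # negative rel_time: ~0; rel_time >= 0: ~1.5
```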
In the recent econometrics literature, there are some further complications in the DID estimates of Staggered treatment (which is this setting). You can look at Goodman-Bacon (2018) if interested.
| null | CC BY-SA 4.0 | null | 2023-05-26T21:52:49.437 | 2023-05-26T21:52:49.437 | null | null | 179369 | null |
617035 | 1 | null | null | 0 | 5 | I've recently started learning Python through an intro online class. The instructor just introduced bisection algorithms, and I'm trying to apply it to a game of hangman. However, my code gets caught in a never ending loop for certain 'secret words', but not for others. For example, it can guess the word "wife", but not "wifey". However, it can guess the word "y". I'm sure I'm missing something obvious, but can't figure it out. (As an aside, I do realize this code won't work for the letter 'z', but that is a separate problem.) I'm also still new to posting questions here, so let me know if I omit any useful info or break any community norms. Thanks in advance!
```
secret_word = 'wifey'
num_guesses = 0
alphabet = "abcdefghijklmnopqrstuvwxyz"
guessed_word = ''
low = 0
high = len(alphabet)+2
epsilon = 0
while len(guessed_word) != len(secret_word):
for i in range(len(secret_word)):
guess_index = 0
while alphabet[guess_index] != secret_word[i]:
if ord(alphabet[guess_index]) < ord(secret_word[i]):
low = guess_index
else: high = guess_index
num_guesses += 1
guess_index = int((high + low)/2)
else: guessed_word += alphabet[guess_index]
print('num_guesses =', num_guesses)
print(guessed_word)
```
| Learning Python - Why does my bisectional approach fail for certain secret words in hangman? | CC BY-SA 4.0 | null | 2023-05-26T23:52:16.903 | 2023-05-26T23:52:16.903 | null | null | 220007 | [
"python"
]
|
617036 | 1 | null | null | 1 | 12 | I am training a regression machine learning model to predict an airplane's maximum take-off weight based on some pre-project features. The airplane weights in the dataset I'm working with are registered in kilograms. The model is not expected to be highly precise; it is intended to provide a robust estimation of the weight, with an accuracy of approximately 0.1 tons. Given these requirements, would the model benefit from a target transformation that changes its scale and rounds off the target value (e.g., changing 102,341 kg to 102.3 tons)? Or should I perform this transformation after my model predicts the weight? What should this process be called?
| Changing scale and rounding off Target | CC BY-SA 4.0 | null | 2023-05-27T00:36:59.963 | 2023-05-27T05:42:20.787 | null | null | 354618 | [
"regression",
"machine-learning",
"data-transformation",
"data-preprocessing",
"scales"
]
|
617037 | 2 | null | 617002 | 1 | null | Yes, the Fisher z-transform would be a reasonable test (even if the underlying distributions are non-normal). The reason for this is that the distribution of the correlation coefficient is nearly normal with the transform. Furthermore, this is empirical in nature (as opposed to something derived from a distribution...though I'm certain there has been a proof using the central limit theorem by this point in time).
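A minimal sketch of that comparison in R (with hypothetical sample correlations and sample sizes; note that the $1/(n-3)$ standard error is derived under bivariate normality, so for the bivariate Poisson it is only an approximation justified by the near-normality argument above):
```
r_x <- 0.42; n_x <- 150   # sample correlation and size for the first dataset
r_y <- 0.28; n_y <- 180   # ... and for the second

z_x <- atanh(r_x); z_y <- atanh(r_y)     # Fisher z-transforms
se   <- sqrt(1/(n_x - 3) + 1/(n_y - 3))  # approximate SE of the difference
stat <- (z_x - z_y) / se
2 * pnorm(-abs(stat))                    # two-sided p-value for H0: rho_x = rho_y
```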
The reference you might wish to review is: [https://www.jstor.org/stable/20156574](https://www.jstor.org/stable/20156574).
Hope this helps.
| null | CC BY-SA 4.0 | null | 2023-05-27T01:27:02.043 | 2023-05-27T01:27:02.043 | null | null | 199063 | null |
617038 | 1 | null | null | 1 | 10 | I am running the following code:
```
Patient1 <- Patient1 %>%
group_by(PAXDAY, cumsum(PAXINTEN != 0)) %>%
mutate(Group_Label = "Active") %>%
mutate(Group_Label = replace(Group_Label, cumsum(PAXINTEN == 0) >= 20 & cumsum(PAXINTEN == 0) <= 180, "Inactive")) %>%
mutate(Group_Label = replace(Group_Label, cumsum(PAXINTEN == 0) > 20 & cumsum(PAXINTEN == 0) <= 300, "Non-wear")) %>%
mutate(Group_Label = replace(Group_Label, cumsum(PAXINTEN == 0) > 20 & cumsum(PAXINTEN == 0) <= 9000, "Sleep")) %>%
ungroup()
plots <- lapply(groups, function(group) {
data <- filter(Patient1, PAXDAY == group) # Filter data for each group
ggplot(data = data) +
aes(x = as.POSIXct(Time, format = "%H:%M"), y = PAXINTEN, color = Group_Label) +
geom_point() +
geom_line() +
labs(title = paste("Missing Data Chunks for Patient 1 -", group),
x = "Time of Day",
y = "Intensity of activity") +
scale_x_datetime(date_labels = "%H:%M", date_breaks = "4 hours") +
theme_minimal()
})
# Print the plots
for (i in seq_along(plots)) {
print(plots[[i]])
}
```
which gives me the following image:
[](https://i.stack.imgur.com/5Zvno.png)
But what I want is that if there are more than 300 consecutive zeros, the whole period should automatically be labelled as sleep. That is not happening in the image above: it is giving me all the different labels I have and only then the sleep label. How can I modify my code?
| I am trying to visualize the NHANES dataset using the PAXINTEN of the individual which is intensity recorded by accelerometer | CC BY-SA 4.0 | null | 2023-05-27T02:48:57.147 | 2023-05-27T06:40:39.683 | 2023-05-27T06:40:39.683 | 362671 | 388920 | [
"r",
"descriptive-statistics"
]
|
617039 | 1 | 617062 | null | 1 | 76 | I am reading this article [here](https://projecteuclid.org/journals/annals-of-applied-statistics/volume-11/issue-3/Dynamic-prediction-of-disease-progression-for-leukemia-patients-by-functional/10.1214/17-AOAS1050.full) and trying to regenerate their simulation study. Here is one of their scenarios, among others; if I can figure out this one, the rest follow. That is,
Simulation set-up
we assume the hazard function of subject $i$ is
\begin{equation}
h_i(t|Z_i(t)) = h_0(t) \exp(\alpha Z_i(t)),
\end{equation}
where $h_0(t) = \lambda t^{\lambda-1} \exp(\eta)$, a Weibull baseline hazard function with $\lambda = 2$, $\eta = -5$, and the association parameter $\alpha = 0.5$.
Consider the linear model.
$$
Z_i(t) = a + bt + b_{i1} + b_{i2}t,
$$
The linear longitudinal trajectory is described with $a = 1$, $b = -2$. Trajectories considered above use random effect terms $b_i = (b_{i1}, b_{i2}) \sim \mathcal{N}(0,D))$, with $D = \begin{pmatrix} 0.4 & 0.1 \\ 0.1 & 0.2 \end{pmatrix}$. For simplicity, we generate longitudinal data on irregular time points $t = 0$ and $t = j + \epsilon_{ij}$, $j = 1, 2, \ldots, 10$ and $\epsilon_{ij} \sim \mathcal{N}(0, .1^{2})$ independent across all $i$ and $j$. We simulated the censoring times from a uniform distribution in $(0, t_{\text{max}})$, with $t_{\text{max}}$ set to result in about 25% censoring.
Coding part
```
# Define the hazard function with time-varying parameters
hazard <- function(t, params) {
lambda <- 2
eta <- -5
alpha <- 0.5
a <- 1
b <- -2
# Compute the hazard value at time t using the time-varying parameters
param1 <- params[1]
param2 <- params[2]
hazard_value <- lambda * t^(lambda - 1) * exp(eta) * exp(alpha * (a + b * t + param1 + param2 * t))
return(hazard_value)
}
# Define the cumulative hazard function by integrating the hazard function
cumulative_hazard <- function(t, params) {
result <- stats::integrate(hazard, lower = 0, upper = t, params = params)
return(result$value)
}
# Define the survival function by exponentiating the negative cumulative hazard
survival <- function(t, params) {
cumulative_hazard_value <- cumulative_hazard(t, params)
survival_value <- exp(-cumulative_hazard_value)
return(survival_value)
}
# Define the inverse survival function to find the time value for a given survival probability
inverse_survival <- function(p, params) {
uniroot(function(t) survival(t, params) - p, interval = c(-100, 100))$root
}
# Test the inverse survival function
p <- runif(1,0,1)
params <- c(-0.3569612177, -0.212686737) # Example time-varying parameter values
inverse_time <- inverse_survival(p, params)
print(inverse_time)
-2.879081
```
My questions
- Eventually, my goal is to generate survival time (the follow-up time) and status for each subject. So, should I integrate \begin{equation}
h_i(t|Z_i(t)) = h_0(t) \exp(\alpha Z_i(t)),
\end{equation} from $0$ to $t$ to obtain the cumulative hazard $H(t)$, then obtain $S(t)=\exp(-H(t))$ and the inverse of $S(t)$, $t=S^{-1}(u)$? You would then generate $U\sim\mathrm{Uniform\left(0,1\right)}$ and substitute $U$ for $S\left(t\right)$ to simulate $t$, the follow-up time.
- Assuming what I have said in part (1) is correct, in the code above I am trying to do it numerically, but it is obviously not working since I get a negative time, and I am also not sure how I would use the longitudinal time points $t$ described in the simulation set-up. It would also be possible to do it analytically and work through the steps to obtain the survival time.
| Generate survival data with functional data using R | CC BY-SA 4.0 | null | 2023-05-27T02:58:55.637 | 2023-05-30T19:06:10.283 | 2023-05-30T19:06:10.283 | 127026 | 127026 | [
"r",
"survival",
"simulation",
"proportional-hazards",
"functional-data-analysis"
]
|
617040 | 1 | null | null | 0 | 32 | Suppose $X \sim \text{Beta}(\alpha, \beta)$, and $Y = \frac{cX}{1-X}$. Then $Y \sim \text{Beta'}(\alpha, \beta, 1, c)$ has a generalized Beta-prime or Conjugate Gamma distribution.
What is $\mathbb{E}\left[\frac{Y}{1+Y}\right]$? Note that if $c=1$, then $\frac{Y}{1+Y}=X$ and has mean $\frac{\alpha}{\alpha+\beta}$.
| If X is Beta distributed, and Y = cX/(1-X), then what is E[Y/(1+Y)]? | CC BY-SA 4.0 | null | 2023-05-27T04:22:42.450 | 2023-05-27T04:23:09.327 | 2023-05-27T04:23:09.327 | 383674 | 383674 | [
"distributions",
"beta-distribution"
]
|
617041 | 2 | null | 617036 | 0 | null | Why would you ever make your data less precise? By doing this you would be intentionally handicapping your model by preventing it from being more precise. I highly doubt that “accuracy of approximately 0.1 tons” means “not better than”. The quality of your training data sets the upper bound on accuracy. The model can be arbitrarily worse than this, because the end result depends on the quality of the data plus the approximation error of the model itself.
| null | CC BY-SA 4.0 | null | 2023-05-27T05:42:20.787 | 2023-05-27T05:42:20.787 | null | null | 35989 | null |
617042 | 1 | null | null | 0 | 29 | I am having trouble understanding the following steps in the Bayesian approach to predicting the probability of tossing a head H given some data D
\begin{aligned}
p(H \mid D) & =\int_0^1 p(H, w \mid D) d w \\
& =\int_0^1 p(H \mid w, D) p(w \mid D) d w
\end{aligned}
I understand the first step is the sum rule and the second is the chain rule for probability.
I am thinking I should be looking at the integrand as $P((H\cap w) \ \mid D)$ but am having trouble visualising this.
What would be the probability of heads?
[Update]
Here is the slide from my lecture hand out on Bayesian approach to prediction.
[](https://i.stack.imgur.com/oXk80.png)
There is also information on [Wikipedia](https://en.wikipedia.org/wiki/Chain_rule_(probability))
| Intuition for chain rule in Bayesian approach for prediction? | CC BY-SA 4.0 | null | 2023-05-27T05:48:19.563 | 2023-06-03T10:56:37.143 | 2023-06-03T10:56:37.143 | 284610 | 284610 | [
"probability",
"self-study",
"bayesian"
]
|
617043 | 2 | null | 208043 | 0 | null | In a similar case, I have used the following approach. I had some ideas about the characteristics of the synthetic data I wanted to generate.
I started searching for these characteristics in public data by designing some data filters. I applied these filters to several public data sets, unrelated to my problem but good enough to be potentially useful.
I ended up with a data set from public companies' financial statements that fitted my requirements. Although my problem had nothing to do with financial data, their numbers were adequate for developing further tests.
| null | CC BY-SA 4.0 | null | 2023-05-27T05:51:35.220 | 2023-05-27T05:51:35.220 | null | null | 382413 | null |
617044 | 2 | null | 617042 | 1 | null | Do you have a link to somewhere that claims this is Bayes rule?
As far as I can see, line (1) is marginalization and line (2) is the chain rule. These are very general probabilistic statements that exist independently of Bayes rule.
So part of your confusion might be that you are trying to look for something where there isn't anything to be found, i.e. there is nothing "inherently Bayesian" in what you have written, as it can exist independently of Bayes rule, as per conventional probability axioms.
What you have really done here is introduce a latent variable $w$ around which some other Bayesian-driven analysis can occur, as there are many latent-variable "Bayesian methods" (but what you have written is not Bayes rule, and it requires more construction to be used for Bayesian methods).
Alternatively you could argue in some sense that you have introduced a continuous mixture parameter via $w$. See [here](https://www.youtube.com/watch?v=Ii62xWVFWoE&ab_channel=LawrenceLeemis).
Also maybe you're thinking of this expression as per the model evidence for a continuous usage of Bayes rule:
$$p(H | D) = \frac{p(D | H) P (H)}{P(D)} = \frac{p(D | H) P (H)}{\int P(D|H)p(H)dH}$$
But this feels nonsensical, since we are working with a discrete system here, not a continuous one, with respect to coin tosses, so I believe it is not helpful for your intuition to approach the problem of coin tosses using integrals (i.e. continuous RV modeling).
| null | CC BY-SA 4.0 | null | 2023-05-27T06:01:46.050 | 2023-05-27T06:37:02.113 | 2023-05-27T06:37:02.113 | 362671 | 117574 | null |
617045 | 1 | null | null | 0 | 10 | Suppose I have an ARMA (p,q) (let it be ARMA (2,2)) fitted to my original returns series and have the residuals of said ARMA model extracted.
Next, it is my understanding that I need to fit a GARCH model to the residuals of the ARMA model.
Suppose I wanted to fit only a GARCH (1,1) to the residuals. In practice, for someone who's using the `garchFit` function in R, this yields both a given GARCH (1,1) variance model and yet another ARMA (0,0) model.
My questions therefore are simple:
- Does taking only the variance model specified from the GARCH (1,1) fit to the residuals and adding it to the first ARMA (2,2) model fitted to the returns series produce the actual time series?
- Suppose I want to check if there's any heteroscedasticity left after doing all of this work, and I'm fitting a joint ARMA (2,2) + GARCH (1,1): what should be specified in the data argument of the code, the residuals of the ARMA (2,2) or the actual returns series?
| GARCH fit to the residuals of AR/ARMA mean equation previously fitted | CC BY-SA 4.0 | null | 2023-05-27T06:22:35.070 | 2023-05-28T07:08:50.727 | 2023-05-28T07:08:50.727 | 53690 | 388586 | [
"r",
"arima",
"modeling",
"garch",
"volatility"
]
|
617046 | 1 | null | null | 2 | 30 | I have made the following model in DAGitty:
[](https://i.stack.imgur.com/B8ism.png)
Where $X_2$ is controlled for.
DAGitty says:
>
The total effect cannot be estimated due to adjustment for an intermediate or a descendant of an intermediate.
I asked [here](https://stats.stackexchange.com/questions/615195/can-controlling-for-a-variable-block-the-backdoor-path-opened-by-controlling-for) whether it would be possible to obtain the treatment effect after controlling for $x_2$.
But I guess my fundamental question is: if I control for both $x_2$ and $x_1$, why, according to the theory of DAGs, does that not identify the treatment effect?
I mean, controlling for $x_2$, which is a collider, opens a backdoor path $x_1 \leftrightarrow y$. But controlling for $x_1$ should close this backdoor path again.
Why, from a theoretical perspective, does that not happen?
I will reinforce my understanding of how that should work with another example.
Consider the following DAG:
[](https://i.stack.imgur.com/CDog3.png)
where $x$ is the treatment, and $y$ is the outcome.
In this DAG I can control for nothing.
Or I can control for $\{m,a\}$, or for $\{m,b\}$, or for $\{m,a,b\}$.
In fact, $m$ is a collider, and controlling for it induces a backdoor path $x \leftrightarrow a \leftrightarrow b \leftrightarrow y$.
But I can close the backdoor path so opened by also controlling for $a$, or for $b$, or for both $a$ and $b$ together.
Why does that not happen with the first DAG I posted?
If I control for $x_2$ (the collider), and so open the backdoor path $t \leftrightarrow x_1 \leftrightarrow y$, why can't I close the backdoor path so opened by controlling for $x_1$?
I would like an answer from a theoretical point of view.
| According to DAG theory, why controlling for this variable doesn't close the backdoor path opened by controlling for collider? | CC BY-SA 4.0 | null | 2023-05-27T06:22:54.210 | 2023-05-31T13:06:22.973 | 2023-05-27T16:00:23.810 | 154990 | 154990 | [
"causality",
"controlling-for-a-variable",
"dag",
"collider",
"backdoor-path"
]
|
617047 | 2 | null | 617014 | 1 | null | Putting the EdM comment into code, here are three options for estimating median survival/death rates:
- Use the standard parameterization for the Weibull median of $\lambda(\ln 2)^{1/\alpha}$ with scale parameter $\lambda$ and shape parameter $\alpha$ per the answer provided in Why am I not able to correctly calculate the median survival time for the Weibull distribution?
- Use quantiles
- Use the survival fraction at time $X$
Code:
```
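## Note: fitWeib, fitGum, and the survWeib() helper used below are assumed to be
## the objects defined in the question being answered; they are not created here.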
### METHOD 1: MEDIAN FORMULA ###
median_surv <- exp(fitWeib$icoef[1])*(log(2))^(1/exp(fitWeib$icoef[2]))
death_rate_Weib <- 1-median_surv/max(lung$time)
### METHOD 2: QUANTILES ###
# median survival times
median_surv_Weib <- qweibull(0.5, shape = exp(fitWeib$icoef[2]), scale = exp(fitWeib$icoef[1]))
# median death rates
death_rate_Weib <- 1 - median_surv_Weib/max(lung$time)
### METHOD 3: MIDPOINT SURVIVAL ###
# median survival percentage
surv_rate_Weib <- survWeib(max(lung$time)/2, fitWeib$icoef)
surv_rate_Gumb <- 1-evd::pgumbel(max(lung$time)/2, fitGum$estimate[1], fitGum$estimate[2])
# median death percentage
death_rate_Weib <- 1-surv_rate_Weib
death_rate_Gumb <- 1-surv_rate_Gumb
```
| null | CC BY-SA 4.0 | null | 2023-05-27T07:05:46.217 | 2023-05-27T07:05:46.217 | null | null | 378347 | null |
617049 | 1 | null | null | 0 | 11 | I am doing some modelling with the mgcv package, specifically the `gam` function
```
glm.mod1 <- gam(imp1 ~ s(pH1, bs = 're') + s(ZnAc, bs = 're') + s(filter.age1, bs = 're'),
family = quasi(link = 'inverse', variance = 'mu^2'), method = 'REML')
```
When I want to extract the variance components, the last component is called scale
```
gam.vcomp(glm.mod1)
Standard deviations and 0.95 confidence intervals:
std.dev lower upper
s(pH1) 0.0164750427 0.0035519282 0.076416813
s(ZnAc) 0.0281406417 0.0062903029 0.125891507
s(filter.age1) 0.0009649886 0.0002380628 0.003911585
scale 0.7354211808 0.7061853013 0.765867418
Rank: 4/4
```
What confuses me is why this scale parameter is different from the scale returned by
```
glm.mod1$scale
[1] 0.4863872
```
As far as I can tell the value returned by `glm.mod1$scale` is the dispersion parameter of the underlying distribution, but what is the scale in `gam.vcomp()` then?
| Why are there two different scale estimates in GAM model with mgcv | CC BY-SA 4.0 | null | 2023-05-27T07:38:52.830 | 2023-05-27T07:38:52.830 | null | null | 267140 | [
"mixed-model",
"generalized-additive-model",
"mgcv"
]
|
617050 | 2 | null | 616962 | 2 | null | I am going to make so many general and specific comments that they are better presented as an answer, intended to complement the very helpful and detailed answer from @Shawn Hemelstrand.
I am also broadening the discussion so that is directed not just at a particular assignment but also at more general questions of how to analyse similar data.
What are the data?
Although it can be argued that the correct analysis does not depend on what the data (supposedly) are measuring, that information can still be interesting and also helpful. In science, although not so often in an assignment, any researcher should use their judgement based on knowing how certain kinds of measurement usually behave. That is, the researcher should know in particular what larger samples would often be like, either from direct experience or by looking at literature on the variables concerned.
The data need more careful discussion
Any really thorough answer depends on seeing not just the descriptive statistics and graphs you posted -- which were helpful -- but also the raw data. The datasets here are so tiny that if they are available they should be posted in full to allow really thorough discussion.
To flag a point touched on but not developed fully, it seems that the data are bounded by 0 and 100%. I will come back to that.
I note from the quantile plots 11 distinct points for the group with sample size 43 and 4 distinct points for the group with sample size 12. What's more, most if not all of the values appear to be multiples of 5.
If that means the data were binned before SPSS drew the graphs, or that SPSS binned the data in drawing the graphs, that is not good practice and at least needs to be flagged. If the raw data come with such spikes of duplicated values, that at least needs to be flagged and discussed.
The graphs could be improved
I am happy to blame SPSS defaults as dopey if they are, but the choices of 0 30 60 90 120 as labelled points for the quantile plots and 0.00 20.00 40.00 60.00 80.00 and 100.00 for the histograms are not just inconsistent, but also poor choices for a major and a minor reason. The major reason is that, given the previous discussion, we should be seeing, say, 0(20)100 on the quantile plots. Extra labels or at least axis ticks at 10(20)90 would do no harm. The minor reason is that those .00 are not even cosmetic, but silly, given the apparent resolution of the data.
I would prefer a dotplot or even stem-and-leaf plot to a histogram for this kind of data. I know that histograms allow superimposed normals but they aren't to me easier to interpret than quantile normal plots.
Normality is frustrated if the data are bounded
It's an elementary point (meaning here fundamental as well as introductory) that normal distributions are unbounded. We often treat that lightly, as when a fit of a normal distribution to human heights can look good and we happily ignore the very tiny probability predicted for negative heights. But when both bounds, here 0 and 100%, occur in the sample we should be more cautious.
Be more circumspect with wording
Wording such as (1) "Is my distribution normal?" is always over-simplified. Unfortunately, a careful discussion obliges more long-winded wording such as (2) "Is my distribution close enough to normal to make it defensible to use procedures for which marginal normal distributions are an ideal?". If it is argued that people (should) know that (1) is shorthand for (2), well, good, but reading that kind of wording (1) always makes me feel uncomfortable.
Using multiple criteria is awkward
You have been given multiple criteria that you could use (different plots; different tests; skewness and kurtosis) but (it seems) little guidance on how to weigh them comparatively if they don't agree. Juggling several criteria seems common practice in some fields but nevertheless is rather muddled. Learners are given a raw deal if expected to understand not just individual guidelines but what to do if they conflict. As already brought out in discussion to some degree, these criteria find varying favour among statistically-minded people. A personal view -- but one nevertheless not unique to me as it is quite often aired on CV -- is that normal quantile plots are by far the best guide in this issue. Tests for normality are often unhelpful for differing reasons. While I will often look at skewness and kurtosis as descriptive statistics, they are a weak reed to decide on closeness to normality. The chicken-and-egg learning and decision problems are that you need to have experience with normal quantile plots before you can interpret them carefully; nevertheless even experienced people might still disagree about interpretation. The problem is a fetish of objectivity in the guise of trying to automate decisions and downplaying not just the need for, but the value of, judgement based on experience.
A distinct but related issue is assessing closeness to normality can be easier if there are alternatives to consider, as when the fit of a normal is compared with the fit of a gamma or a lognormal. Here the most obvious brand-name distribution as an alternative is some kind of beta distribution to match the boundedness, but I guess that is a little esoteric at the level of this assignment.
Why go back to the early 1970s?
Your assignment is your assignment, but the challenge given is in my view about 50 years behind the state of the art. If the main focus is the difference between means, I would want to see a bootstrapped confidence interval for that difference. Also possible in practice (possible in principle 50 years ago, but often not easy in software available) would be a permutation test. I mention without enthusiasm what has been mentioned already, a Wilcoxon-Mann-Whitney test (which goes back to the 1940s). Although it goes way beyond the assignment, using a generalized linear model would have allowed some experimentation to see how the results are robust depending on choice of link or family.
(Naturally I am having a little wicked fun here: being an old idea doesn't invalidate anything, or else we would stop using means, which have long roots. The point is: why use old ideas when we have long since had better ones?)
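To make the bootstrap suggestion concrete, here is a minimal sketch in R. The vectors `g1` and `g2` are placeholders standing in for the two samples (the raw data are not shown in the question), so the interval itself is purely illustrative:
```
set.seed(1)
g1 <- sample(seq(0, 100, by = 5), 43, replace = TRUE)   # placeholder for the n = 43 group
g2 <- sample(seq(0, 100, by = 5), 12, replace = TRUE)   # placeholder for the n = 12 group

B     <- 10000
diffs <- replicate(B, mean(sample(g1, replace = TRUE)) - mean(sample(g2, replace = TRUE)))
quantile(diffs, c(0.025, 0.975))   # percentile bootstrap CI for the difference in means
```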
| null | CC BY-SA 4.0 | null | 2023-05-27T08:14:52.257 | 2023-05-27T12:15:04.877 | 2023-05-27T12:15:04.877 | 22047 | 22047 | null |
617052 | 2 | null | 617023 | 1 | null | I think it's better if you do train/val/test for each cross-validation fold. The validation set is used only for early stopping and the test set is used for just performance evaluation for the chosen hyper parameter configuration.
So, you do your usual time series cross-validation, and while doing so, separate the train set into two i.e. (train/val again, e.g. 80/20) and use the val only for early-stopping.
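A minimal sketch of that splitting scheme in R, assuming a univariate series and an expanding window; the series, the forecast origins, and the 80/20 split are arbitrary choices for illustration:
```
y       <- as.numeric(AirPassengers)                       # example series
origins <- seq(floor(0.5 * length(y)), length(y) - 1, 12)  # forecast origins

for (o in origins) {
  train_idx <- 1:o
  cut       <- floor(0.8 * length(train_idx))
  fit_idx   <- train_idx[1:cut]          # fit the model on this part
  val_idx   <- train_idx[(cut + 1):o]    # monitor this part only for early stopping
  test_idx  <- o + 1                     # one-step-ahead test point
  ## fit on y[fit_idx], stop early based on y[val_idx],
  ## then evaluate the chosen model on y[test_idx]
}
```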
| null | CC BY-SA 4.0 | null | 2023-05-27T09:43:46.523 | 2023-05-27T09:43:46.523 | null | null | 204068 | null |
617053 | 2 | null | 616997 | 0 | null | When simulating $\zeta=\tau^2$ from
$$p(\zeta)\propto IG(\zeta;0,(\mu_1-\eta)^2/2+(\mu_2-\eta)^2/2)\times Ca^+(0,b_ \tau)$$
using a proposal $$q(\zeta)\propto IG(\zeta;a,b)$$
the acceptance ratio is
$$\dfrac{IG(\zeta;0,(\mu_1-\eta)^2/2+(\mu_2-\eta)^2/2)\times Ca^+(0,b_ \tau)}{M\times IG(\zeta;a,b)}$$
i.e.,
$$\dfrac{\zeta^{-1}\exp\left[-\left\{(\mu_1-\eta)^2/2+(\mu_2-\eta)^2/2\right\}/\zeta\right]}{M\,[1+\zeta^2/b^2_\tau]\,\zeta^{-1-a}\,\exp[-b/\zeta]}$$
where $M$ is the maximum of the ratio (if it exists). Assuming the tails of $IG(\zeta;a,b)$ are fatter than those of the numerator, the accept-reject method thus applies once $M$ is constructed.
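A rough numerical sketch of this accept-reject step in R, following the unnormalised ratio above; the sum-of-squares term `S`, the half-Cauchy scale `b_tau`, and the proposal parameters `(a, b)` are arbitrary illustrative values, and the bound `M` is found numerically rather than analytically:
```
S     <- 1.3   # (mu_1 - eta)^2/2 + (mu_2 - eta)^2/2, made up for illustration
b_tau <- 1     # scale of the half-Cauchy factor
a     <- 1     # proposal shape
b     <- S     # proposal scale (chosen so the ratio stays bounded)

target   <- function(z) z^(-1) * exp(-S / z) / (1 + z^2 / b_tau^2)  # unnormalised numerator
proposal <- function(z) z^(-1 - a) * exp(-b / z)                    # unnormalised IG(a, b)

M <- optimize(function(z) target(z) / proposal(z),
              interval = c(0.01, 100), maximum = TRUE)$objective    # numerical bound

draw_one <- function() {
  repeat {
    z <- 1 / rgamma(1, shape = a, rate = b)                         # draw from IG(a, b)
    if (runif(1) < target(z) / (M * proposal(z))) return(z)
  }
}
zeta_draws <- replicate(1e4, draw_one())
```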
| null | CC BY-SA 4.0 | null | 2023-05-27T10:01:34.233 | 2023-05-27T10:01:34.233 | null | null | 7224 | null |
617054 | 1 | 617064 | null | 2 | 130 | I did a classification of review text data using the ANN model. As we know, text data must be converted into numeric first before entering the classification stage. To change this I used the TF-IDF method. I am confused whether the data scale of TF-IDF results is a ratio or interval scale.
Here's the example of TF-IDF result:
[](https://i.stack.imgur.com/MEI0y.png)
doc 1 = It is going to rain today
doc 2 = Today I am not going outside
doc 3 = I am going to watch the season premiere
The TF-IDF value of each word can range from 0 to infinity.
| what is the scale of TF-IDF results? | CC BY-SA 4.0 | null | 2023-05-27T10:01:42.217 | 2023-05-27T14:11:13.013 | 2023-05-27T11:35:13.407 | 388875 | 388875 | [
"neural-networks",
"scales",
"tf-idf"
]
|
617055 | 2 | null | 16696 | 0 | null | The Politics of Large Numbers: A History of Statistical Reasoning By [Alain Desrosières](https://en.wikipedia.org/wiki/Alain_Desrosi%C3%A8res).
It covers a wide range of topics, in several countries (France, Germany, Great-Britain, United States), from the 17th/18th centuries to about 1970-1980: the history of institutions collecting and treating statistical data, how statistical methods evolved (probabilities, central limit theorem, regression, survey methodology...), the influence of political or social considerations (epidemics, poverty, unemployment, eugenics...), etc.
It was originally published in 1993 in French, and later translated into English. The most recent edition in French is from 2010, and apparently 2002 for the edition in English.
| null | CC BY-SA 4.0 | null | 2023-05-27T10:08:57.573 | 2023-05-27T10:08:57.573 | null | null | 164936 | null |
617057 | 2 | null | 329407 | 0 | null | While I am with Tim that there is more to the story than statistical significance, the source of your complex $z$-stat is that you have miscalculated $\hat p$ as $1.45$. As $\hat p$ is a proportion, $\hat p$ cannot exceed $1$. Your $n$ is the total number of classification attempts, which cannot be lower than the total number of correct classifications. Since your two models each attempt $648$ classifications, your $n=2\times 648=1296$. With such a value of $n$, you get $\hat p=0.725$, leading to a real-valued $z$-stat.
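For reference, a sketch of the pooled two-proportion $z$-test in R; the correct-classification counts below are placeholders, not the actual numbers from the two models:
```
x <- c(480, 460)    # hypothetical numbers of correct classifications for the two models
n <- c(648, 648)    # classification attempts per model

p_hat <- sum(x) / sum(n)                                  # pooled proportion
z     <- (x[1]/n[1] - x[2]/n[2]) /
         sqrt(p_hat * (1 - p_hat) * (1/n[1] + 1/n[2]))    # real-valued z-statistic
2 * pnorm(-abs(z))                                        # two-sided p-value

prop.test(x, n, correct = FALSE)                          # same test in chi-squared form
```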
ADDITIONAL INFO
If you are interested in the topic of comparing machine learning models, I recommend a read of Benavoli et al. (2017). While the authors make a case for Bayesian methods, the paper also goes through more classical techniques.
Finally, note the issues with classification accuracy as a measure of model performance.
[Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models)
[Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp)
REFERENCE
[Benavoli, Alessio, et al. "Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis." The Journal of Machine Learning Research 18.1 (2017): 2653-2688.](https://www.jmlr.org/papers/volume18/16-305/16-305.pdf)
| null | CC BY-SA 4.0 | null | 2023-05-27T10:59:54.037 | 2023-05-27T11:17:16.707 | 2023-05-27T11:17:16.707 | 247274 | 247274 | null |
617059 | 2 | null | 612390 | 2 | null | The performance of a model in terms of AUC relates to the effect size, not to how statistically significant your estimate of that effect size is.
Below is a simpler example: two populations in which some feature follows a normal distribution. We can determine that the distributions have significantly different means with a t-test if we have a sufficiently large sample, but a classification using that feature need not perform well, because the effect size is small.
[](https://i.stack.imgur.com/yy9Yk.png)
```
### simulate two populations
### with different means
set.seed(1)
x = rnorm(10^4,0.2,1)
y = rnorm(10^4,0,1)
### view of the two distributions
plot(density(x), main = "density of x and y")
lines(density(y),col=2)
legend(-4,0.4,c("mean = 0.2","mean = 0"), col = c(1,2), lty = 1)
### t-test will be significant
t.test(x,y)
### create an ROC plot
### a classification can be based on whether the value is above or below some cutoff
cutoff = seq(-4,4,0.01)
p1 = sapply(cutoff, FUN = function(t) {mean(x<t)})
p2 = sapply(cutoff, FUN = function(t) {mean(y<t)})
plot(p1,p2, type = "l", xlab = "P(X < cutoff)\n false positive", ylab = "P(Y < cutoff)\n true positive", main = "a not so great ROC curve")
```
>
The auc is showing up as 0.178
This indicates some bigger problem. I can't easily verify your code without the data.
One possibility is that you inverted the labels positive/negative (as I did in the first version of this answer); then the computed area will be one minus the true AUC and can fall below 0.5.
Another possibility is that the model is overfitting on the training data and performs badly, with AUC < 0.5, on test data (although the output from your GLM doesn't look like overfitting).
| null | CC BY-SA 4.0 | null | 2023-05-27T12:09:50.247 | 2023-05-27T12:23:15.483 | 2023-05-27T12:23:15.483 | 164061 | 164061 | null |
617060 | 1 | null | null | 0 | 16 | I am pretty new to mediation in multilevel models.
I want to run a 2-1-1 (and maybe a 2-2-1) mediation model in R.
The dataset consists of 110 participants with three assessments per day.
Covariates = sex, age, financial status, and beep (sex, age, and financial status are all level 2; beep is level 1).
Mediators = stress level (measured at each assessment), which is disentangled into cstress (within-person stress level) and bstress (between-person stress level).
Predictor = depression symptoms (measured at baseline = level 2)
Outcome = burnout (measured at each assessment = level 1)
Based on my understanding, using the lavaan package seems to be the most appropriate approach.
However, I am unsure about how to incorporate the covariates in the mediation model. Additionally, I would like to know how to include both between-person stress (level 2; 2-2-1 mediation) and within-person stress (level 1; 2-1-1 mediation) as mediators. If that is not recommended, I would like to use the stress level (stress) without disentangling it as the mediator (2-1-1 mediation).
Any assistance with the R code would be greatly appreciated.
| Mediation model with covariates and between and within-person mediators | CC BY-SA 4.0 | null | 2023-05-27T12:21:13.373 | 2023-05-27T12:21:13.373 | null | null | 386601 | [
"mixed-model",
"multilevel-analysis",
"structural-equation-modeling",
"mediation",
"lavaan"
]
|
617061 | 1 | null | null | 0 | 15 | I have a multi-output neural net, where the output predicts 3 values.
The loss function I am using is RMSE.
What exactly is being minimised here? Is it the sum of the RMSEs of the individual outputs?
How would I get the loss per variable in this instance?
| Loss function of a multi-output Neural Net | CC BY-SA 4.0 | null | 2023-05-27T13:29:28.393 | 2023-05-27T13:29:28.393 | null | null | 292642 | [
"neural-networks",
"rms"
]
|
617062 | 2 | null | 617039 | 1 | null | The coding-specific part of this question is off-topic on this site, but there is one principle of survival analysis that you should consider implementing.
Simulating event times typically starts as you suggest, sampling from a uniform distribution over (0,1) and then finding the time corresponding to that survival fraction. The way you have this structured makes sense if you want to sample multiple event times from a distribution. In this scenario, however, you only want to take a single event-time sample from each of the randomly generated hazard functions.
Take advantage of the following relationship between $S(t)$ and the cumulative hazard $H(t)$:
$$H(t)=- \log S(t)$$
After you have your sample of the survival fraction from the uniform distribution, start to integrate your (necessarily non-negative) hazard function $h(t)$ from $t=0$ until you reach the value of $H$ corresponding to that survival fraction. Record the upper limit of integration at which that occurs as the event time. If you instead get to some maximum observation time without reaching that value of $H$, record a right-censored observation at that maximum time.
As an example, sample a survival probability and find the corresponding cumulative hazard:
```
set.seed(2)
(cumHazTarget <- -log(runif(1)))
## [1] 1.688036
```
Define your hazard function, with random effects of 0 in this instance:
```
hazF <- function(t) 2*t*exp(-5)*exp(0.5*(1+2*t))
```
Now find the value of $t$ that gives an integrated (i.e., cumulative) hazard equal to the above target. [This page](https://stackoverflow.com/a/28328660/5044791) shows a simple way to find the upper limit of integration that gives a desired target for a definite integral. More-or-less copying that here (with a restriction of the lower limit of integration to 0) and applying it to this instance:
```
findprob <- function(f, interval, target) {
optimize(function(x) {
abs(integrate(f, 0, x)$value-target)
}, interval)$minimum
}
findprob(hazF, interval=c(0,10),cumHazTarget)
## [1] 3.429483
```
That gives the time at which the cumulative hazard equals that corresponding to your sampled survival probability. (I suspect that there is a more efficient way to do this that takes advantage of the non-negativity of the hazard function, but this illustrates the principle.)
Set the top limit of `interval` to a value somewhat above your maximum observation time and treat times greater than the maximum observation time as right censored. For example, if your maximum observation time is 3, in this instance the function returns a time value slightly above 3, which you could then right censor at 3:
```
findprob(hazF, interval=c(0,3.1),cumHazTarget)
## [1] 3.099922
```
| null | CC BY-SA 4.0 | null | 2023-05-27T13:56:33.103 | 2023-05-27T17:22:12.070 | 2023-05-27T17:22:12.070 | 28500 | 28500 | null |
617063 | 2 | null | 175023 | 0 | null | You can always apply both, but the question is whether it makes sense. The homogeneous $K$-function was defined for stationary point processes, while the "inhomogeneous" $K$-function was defined for a special class of inhomogeneous point processes, the so-called "second-order intensity reweighted stationary" (soirs) point processes, see [Baddeley, Møller and Waagepetersen (2000)](https://onlinelibrary.wiley.com/doi/10.1111/1467-9574.00144).
When the conditions (stationarity or soirs) are not fulfilled, the theoretical value of the (inhomogeneous) $K$-function depends on the placement of the observation window, and thus the interpretation of the estimation result is somewhat dubious. A statistical test for the underlying assumptions can be found in the paper [Hahn and Jensen (2015)](https://onlinelibrary.wiley.com/doi/abs/10.1111/sjos.12185), together with alternative definitions of inhomogeneous $K$-functions that are adapted to other particular classes of inhomogeneous point processes: transformation-inhomogeneous and locally scaled point processes.
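For what it is worth, here is a small sketch with the spatstat package, simulating an inhomogeneous Poisson pattern; it only shows how the two estimators are obtained, not that both are meaningful for a given data set:
```
library(spatstat)
set.seed(1)
X       <- rpoispp(function(x, y) 200 * x)   # intensity increasing in x on the unit square
K_hom   <- Kest(X)                           # estimator assuming stationarity
lam     <- density(X)                        # kernel estimate of the intensity
K_inhom <- Kinhom(X, lambda = lam)           # inhomogeneous (soirs) estimator
plot(K_hom); plot(K_inhom)
```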
| null | CC BY-SA 4.0 | null | 2023-05-27T14:04:50.143 | 2023-05-28T21:12:11.293 | 2023-05-28T21:12:11.293 | 237561 | 237561 | null |
617064 | 2 | null | 617054 | 5 | null | According to the definitions made in [this](https://www.questionpro.com/blog/ratio-scale-vs-interval-scale) article, the most important difference between an interval-scale variable and a ratio-scale variable is the notion of absolute zero point:
>
Zero-point in an interval scale is arbitrary. For example, the
temperature can be below 0 degrees Celsius and into negative
temperatures.
>
The ratio scale has an absolute zero or character of origin. Height
and weight cannot be zero or below zero.
TF-IDF has a notion of absolute zero: when a term does not appear in a document, its term frequency, and hence its TF-IDF weight, is $0$. It is, therefore, a ratio-scale variable.
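As a small illustration (using one common weighting, $tf \times \log(N/df)$; exact formulas differ between libraries), the base-R sketch below computes TF-IDF for the three example documents and shows the natural zero for a term that never occurs in a document:
```
docs   <- c("it is going to rain today",
            "today i am not going outside",
            "i am going to watch the season premiere")
tokens <- strsplit(docs, " ")
vocab  <- sort(unique(unlist(tokens)))

tf    <- t(sapply(tokens, function(d) table(factor(d, levels = vocab)) / length(d)))
idf   <- log(length(docs) / colSums(tf > 0))   # document-frequency based weight
tfidf <- sweep(tf, 2, idf, `*`)

tfidf[, "rain"]   # exactly 0 for the two documents in which "rain" never appears
```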
| null | CC BY-SA 4.0 | null | 2023-05-27T14:11:13.013 | 2023-05-27T14:11:13.013 | null | null | 204068 | null |
617065 | 1 | null | null | 0 | 14 | I am trying to use the `t_family(link="log")` family from the glmmTMB package. The package allows fixing the shape parameter using `start = list(psi = log(fixed_df)), map = list(psi = factor(NA))`. In practice, when I fix the shape parameter versus when I do not, I get very different results. My question is: what is the effect of fitting a model with versus without fixing the shape parameter, and which approach is correct? The AIC and deviance suggest that the model with no fixed shape fits better. The following is the code and the results:
```
glmmTMB(cypk~H*FL, family=t_family(link="log"), data = data, start = list(psi = log(12)), map = list(psi = factor(NA)))
```
[](https://i.stack.imgur.com/rWj1u.png)
[](https://i.stack.imgur.com/6uPP1.png)
```
glmmTMB(cypk~H*FL, family=t_family(link="log"), data= data)
```
[](https://i.stack.imgur.com/tNiVn.png)
[](https://i.stack.imgur.com/Z61JF.png)
| Use of t_family distribution in glmmTMB | CC BY-SA 4.0 | null | 2023-05-27T14:27:51.623 | 2023-05-27T14:48:28.873 | 2023-05-27T14:48:28.873 | 362671 | 388946 | [
"r",
"distributions",
"glmmtmb"
]
|
617066 | 1 | null | null | 2 | 20 | I am wondering what should happen when computing the leave-one-out cross-validation error when there are multiple 1-nearest-neighbour candidates that are equidistant from the left-out point.
As an example, will the leave-one-out cross-validation performed on the points below have an error of:
- Approach 1. 2/8 since the points at (2,6) and (6,2) will go by majority vote
- Approach 2. 2/6 since the points at (2,6) and (6,2) will not be classified and therefore will not be considered?
It seems to me that the answer is ambiguous and both could be valid, although approach 1 is likely the most commonly chosen.
May I know whether using approach 2, stating that the two points are left unclassified, is incorrect?
[](https://i.stack.imgur.com/yAoVK.png)
| Multiple equidistant neighbours in 1-nearest-neighbour - how to break ties? | CC BY-SA 4.0 | null | 2023-05-27T14:31:22.650 | 2023-05-27T15:01:44.767 | null | null | 384139 | [
"machine-learning",
"cross-validation",
"k-nearest-neighbour"
]
|
617067 | 1 | null | null | 0 | 9 | As part of a mini meta analysis where I want to test 2 moderators, I have used the following code:
```
output <- rma(yi, vi, mods = ~ factor(discipline) : factor(gender), data=interact_emm, method = "FE")
```
Both gender and discipline have two levels. When I run the analysis, the model output only shows 4 lines:
Model Results:
```
intrcpt 0.1334 0.0854 1.5614 6 0.1694 -0.0756
factor(discipline)hass:factor(gender)men 0.0873 0.1542 0.5659 6 0.5920 -0.2901
factor(discipline)stem:factor(gender)men 0.1837 0.1194 1.5381 6 0.1749 -0.1086
factor(discipline)hass:factor(gender)women 0.2918 0.1505 1.9387 6 0.1006 -0.0765
```
Hence, I'm missing one of the interactions - Does anyone know how to resolve this?
| Metafor interactions (not showing complete output) | CC BY-SA 4.0 | null | 2023-05-27T14:44:02.683 | 2023-05-27T14:49:10.423 | 2023-05-27T14:49:10.423 | 362671 | 386531 | [
"interaction",
"metafor"
]
|
617068 | 2 | null | 617066 | 0 | null | There is no single correct tie-breaking rule for the kNN algorithm. You can do majority voting, increase or decrease $k$ in case of a tie, do random sampling, etc. The choice of leaving the points unclassified will depend on your way of handling ties. It is not incorrect, it's probably just unusual.
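For example, one possible tie-breaking rule for 1-NN (a sketch, not taken from any particular implementation) takes a majority vote among all equidistant nearest neighbours and breaks any remaining tie at random:
```
predict_1nn <- function(train_x, train_y, new_x) {
  d    <- sqrt(rowSums((train_x - matrix(new_x, nrow(train_x), ncol(train_x), byrow = TRUE))^2))
  tied <- which(abs(d - min(d)) < 1e-12)   # all equidistant nearest neighbours
  tab  <- table(train_y[tied])             # vote among them
  winners <- names(tab)[tab == max(tab)]
  sample(winners, 1)                       # random choice if the vote itself is tied
}

## predict_1nn(rbind(c(2, 2), c(6, 6)), c("A", "B"), c(4, 4))  # both classes equidistant
```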
| null | CC BY-SA 4.0 | null | 2023-05-27T15:01:44.767 | 2023-05-27T15:01:44.767 | null | null | 204068 | null |
617069 | 2 | null | 448977 | 0 | null | David G. Jenkins and Pedro F. Quintana-Ascencio (2020) suggested:
>
We conclude that a minimum $N = 8$ is informative given very little variance, but minimum $N ≥ 25$ is required for more variance.
>
Insufficient $N$ and $R^2$-based model selection apparently contribute to confusion and low reproducibility in various disciplines. To avoid those problems, we recommend that research based on regressions or meta-regressions use $N ≥ 25$.
[https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0229345](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0229345)
| null | CC BY-SA 4.0 | null | 2023-05-27T15:18:42.797 | 2023-05-27T15:18:42.797 | null | null | 80704 | null |
617070 | 2 | null | 616811 | 1 | null | The causal estimand is $E[Y|do(a)]$. When the backdoor criterion is satisfied, we have that $E[Y|do(a)] = E[E[Y|A=a, X]]$, where $X$ is a sufficient adjustment set. The propensity score theorem says that $E[E[Y|A=a, X]] = E[E[Y|A=a, p(X)]]$, where $p(X) = P(A|X)$ is the propensity score. This last step is what allows us to estimate the causal estimand using propensity score methods.
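A quick simulation sketch (the data-generating process below is made up purely for illustration) shows the two identification formulas agreeing on $E[Y|do(1)]$:
```
set.seed(1)
n <- 1e5
X <- rnorm(n)                       # single confounder
A <- rbinom(n, 1, plogis(0.8 * X))  # treatment depends on X
Y <- 2 * A + X + rnorm(n)           # true E[Y | do(1)] = 2

## standardization: E[ E[Y | A = 1, X] ]
fit <- lm(Y ~ A + X)
EY1_std <- mean(predict(fit, data.frame(A = 1, X = X)))

## weighting by the estimated propensity score p(X) = P(A = 1 | X)
ps      <- fitted(glm(A ~ X, family = binomial))
EY1_ipw <- mean(A * Y / ps)

c(standardization = EY1_std, ipw = EY1_ipw)   # both close to 2
```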
| null | CC BY-SA 4.0 | null | 2023-05-27T15:32:40.100 | 2023-05-27T15:32:40.100 | null | null | 116195 | null |
617071 | 1 | null | null | 0 | 26 | When studying the book Introductory Econometrics, we see that Wooldridge lists 5 assumptions in order to get the BLUE (Best Linear Unbiased Estimator) property in linear regression: linearity in parameters, random sampling, zero conditional mean (of the error term), no perfect multicollinearity, and homoscedasticity.
However, even in relevant scientific articles, we often see linear regression results that do not consider some of these assumptions. I understand that the assumptions do not hurt the predictions much, but if we are discussing inference, shouldn't they be taken into consideration? Why do some authors just neglect the error distribution and its expected value, and homoscedasticity, for example?
| How important are the Gauss-Markov assumptions for linear regression? | CC BY-SA 4.0 | null | 2023-05-27T15:57:39.607 | 2023-05-27T15:57:39.607 | null | null | 371238 | [
"regression",
"econometrics",
"gauss-markov-theorem"
]
|
617072 | 2 | null | 300461 | 1 | null | This is the same derivation as the usual one showing that linear regression $R^2$ is the proportion of variance explained by the regression. For that reason, I can see why the derivation may be omitted, if the assumption is that readers have seen it earlier in their statistics studies. This approach has the disadvantage of confusing readers like the OP who perhaps do not know this derivation.
Once the proportion of variance explained is known, that can be compared to the total variance to give the explained variance. The difference between the total and explained variance would be the unexplained variance.
Unfortunately, the proportion of variance explained outside of OLS linear regression is not straightforward. [Here](https://stats.stackexchange.com/a/551916/247274), I derive the "proportion of variance explained" interpretation in such a situation and explain why that is a special case. Thus, I am not sold on the idea of "explained variance" in self-organizing maps, though [there seems to be a convergence toward the OLS behavior](https://stats.stackexchange.com/a/475785/247274) (though this is for the true loss optimum, not where the parameters are when you stop training at what might be a local but not global optimum).
| null | CC BY-SA 4.0 | null | 2023-05-27T16:04:24.463 | 2023-05-27T16:04:24.463 | null | null | 247274 | null |
617074 | 2 | null | 616967 | 0 | null | You are seeing the difference between statistical significance and practical significance.
I am with you that the distributions are different. They sure seem to have different means. However, the distributions are not all that different, so it should be hard to consistently classify the group. You might be able to do such classification occasionally and achieve an okay accuracy, but a high accuracy based on these two features seems impossible.
If you have this kind of overlap on the rest of your features, then you should not expect high accuracy values if you just use those features. While the distributions are statistically significantly different, the practical significance is not great enough to result in high classification accuracy.
Where you might be able to make some progress is by considering multivariate distributions through interaction terms between the original features. For instance, consider the example below.
```
library(ggplot2)
library(MASS)
library(pROC)
set.seed(2023)
N <- 1000
X0 <- MASS::mvrnorm(
N,
c(0, 0),
matrix(c(
1, 0.9,
0.9, 1
), 2, 2)
)
X1 <- MASS::mvrnorm(
N,
c(0, 0),
matrix(c(
1, -0.9,
-0.9, 1
), 2, 2)
)
X <- rbind(X0, X1)
y <- c(
rep(0, N), rep(1, N)
)
d <- data.frame(
x1 = X[, 1],
x2 = X[, 2],
y = as.factor(y)
)
ggplot(d, aes(x = x1, y = x2, col = y)) +
geom_point()
L1 <- glm(y ~ X[, 1] + X[, 2], family = binomial)
L2 <- glm(y ~ X[, 1] + X[, 2] + X[, 1]:X[, 2], family = binomial)
r1 <- pROC::roc(y, predict(L1))$auc
r2 <- pROC::roc(y, predict(L2))$auc
print(r1) # I get 0.5248, so barely above chance
print(r2) # I get 0.9577, which is quite strong performance
```
[](https://i.stack.imgur.com/I3B1m.png)
In this example, the marginal distributions of the features are identical for the two categories (`y`), so predictions based on just the raw features cannot be accurate. However, looking at the plot, the joint distribution of the features are clearly different for the two categories. Considering the interaction leads to much stronger performance. A number of machine learning models consider these kinds of interactions internally without you having to explicitly program them, among them being random forests and neural networks.
A useful visualization for you might be something like a t-SNE plot. While t-SNE can fail to separate categories when they truly are separable and can introduce apparent separability when the categories are not separable, such a plot is suggestive of whether even considering interactions and joint distributions of the features will allow accurate classifications by any model. It might be that, even if the (marginal) distributions of the features are different, the categories simply are not that separable on the given features, and strong performance using these features is impossible.
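For instance, a hedged sketch with the Rtsne package, reusing the simulated `X` and `y` from the code above (with only two features this is purely a demonstration of the workflow, not a dimension reduction you would actually need):
```
library(Rtsne)
set.seed(2023)
emb <- Rtsne(X, perplexity = 30, check_duplicates = FALSE)$Y
plot(emb, col = ifelse(y == 1, "red", "black"), pch = 16,
     xlab = "t-SNE 1", ylab = "t-SNE 2",
     main = "t-SNE embedding of the two simulated classes")
```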
Note the [issues](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models) with accuracy as a performance metric, especially in a situation with imbalanced classes like you appear to have.
| null | CC BY-SA 4.0 | null | 2023-05-27T16:23:05.467 | 2023-05-27T16:29:59.510 | 2023-05-27T16:29:59.510 | 247274 | 247274 | null |