Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
614187
|
2
| null |
614044
|
1
| null |
Confidence intervals grow out of a school of thought called frequentism, in which we assume the data collection could hypothetically be repeated an unlimited number of times.
So all the students of your university are treated as just one sample from an unlimited number of hypothetical universities that might have admitted students from the same background as your specific university. Does that make sense? Not really, but kind of:
Whenever the university you studied stands in as an example of universities in general, you may consider reporting things that would not be sensible for the one actual institution you are investigating.
It can be misleading, though, if someone takes that as a measure for other universities that take in different students.
| null |
CC BY-SA 4.0
| null |
2023-04-26T11:46:08.853
|
2023-04-26T11:46:08.853
| null | null |
117812
| null |
614188
|
1
| null | null |
0
|
10
|
I have a dataset in which 900 people responded to a single-item binary outcome variable. However, each of the individuals responded to only one of three different binary measurements.
This would result in a dataset like the one below:
```
set.seed(1234)
mydata <- data.frame(id = 1:900,
                     measure = sample(rep(1:3, each = 300)),
                     response = sample(0:1, size = 900, replace = TRUE))
```
Now, the editor reviewing the paper wants me to provide evidence for the measurement invariance of these three measures. However, I'm confused and unsure whether it is even possible to test measurement invariance for this setup because these are a.) single-item measures and b.) each person responds to only one of the measurements (not all three). So I don't see a way to test for MI.
I could compare the three measures in a logistic regression and see whether responses differ across the three measures, but that wouldn't tell me much about whether they measure the same construct.
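For illustration, the logistic-regression comparison mentioned above might look like the sketch below (reusing the simulated `mydata` from the code block above; as noted, this only checks whether response rates differ across the measures, not invariance):
```
# Do response rates differ across the three measures?
fit <- glm(response ~ factor(measure), family = binomial, data = mydata)
summary(fit)

# Likelihood-ratio test of the measure factor as a whole
drop1(fit, test = "LRT")
```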
|
Measurement Invariance For Single-Item Binary Measures (Between Participants)
|
CC BY-SA 4.0
| null |
2023-04-26T12:05:18.870
|
2023-04-26T12:05:18.870
| null | null |
337306
|
[
"scale-invariance"
] |
614189
|
1
| null | null |
0
|
60
|
We have data $X_1, \dots, X_n$ which are i.i.d. copies of $X$, where we denote $\mathbb{E}[X] = \mu$ and $X$ has finite variance.
We define the truncated sample mean:
$\begin{align}
\hat{\mu}^{\tau} := \frac{1}{n} \sum_{i =1}^n \psi_{\tau}(X_i)
\end{align}$
where the truncation operator is defined as:
$\begin{align}
\psi_{\tau}(x) = (|x| \wedge \tau) \; \text{sign}(x), \quad x \in \mathbb{R}, \quad \tau > 0
\end{align}$
The bias for this truncated estimator is then defined as:
Bias $:= \mathbb{E}(\hat{\mu}^{\tau}) - \mu$
In a [previous question](https://stats.stackexchange.com/questions/611794/proving-upper-bound-for-bias-of-truncated-sample-mean) it is shown that we can upper-bound the bias:
$\begin{align}
|\text{Bias}| = |\mathbb{E}[(X - \text{sign}(X)\tau) \mathbb{I}_{\{|X| > \tau\}}]| \leq \frac{\mathbb{E}[X^2]}{\tau}
\end{align}$
---
I was now wondering if it can be shown that:
$\begin{align}
0 < |\text{Bias}|
\end{align}$
That is, is the truncated mean estimate biased?
|
Is the truncated mean a biased estimator?
|
CC BY-SA 4.0
| null |
2023-04-26T12:11:57.137
|
2023-04-26T13:20:27.307
| null | null |
283493
|
[
"probability",
"mathematical-statistics",
"robust",
"probability-inequalities",
"bias-variance-tradeoff"
] |
614190
|
1
| null | null |
0
|
11
|
I'm completely new to this, so I apologise in advance if my question isn't even phrased correctly.
I want to analyse changes in the similarity of two interacting entities over time. Each entity is represented as a 100-D feature vector, which is updated in turn after every interaction. My hypothesis is that the similarity will increase over time as the entities interact. Are there any parametric tests that could handle this, given that the samples are non-independent? Would it make sense to simply measure the Euclidean distance between the feature vectors at every time step and then somehow analyse how it changes over time?
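As a rough sketch of that last idea (with made-up vectors and a toy update rule, since the real data is not shown here):
```
set.seed(1)
n_steps <- 50
a <- rnorm(100); b <- rnorm(100)       # two 100-D feature vectors
dist_t <- numeric(n_steps)

for (t in seq_len(n_steps)) {
  a <- a + 0.05 * (b - a) + rnorm(100, sd = 0.01)   # toy "interaction" updates
  b <- b + 0.05 * (a - b) + rnorm(100, sd = 0.01)
  dist_t[t] <- sqrt(sum((a - b)^2))                 # Euclidean distance at step t
}

summary(lm(dist_t ~ seq_len(n_steps)))   # naive linear trend in distance over time
```
Note that consecutive distances are strongly autocorrelated, so the naive regression p-values should not be taken at face value; that is essentially the non-independence issue above.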
Thank you in advance!
|
Similarity measure over time
|
CC BY-SA 4.0
| null |
2023-04-26T12:15:51.617
|
2023-04-26T12:17:42.610
|
2023-04-26T12:17:42.610
|
386612
|
386612
|
[
"time-series",
"non-independent",
"similarities"
] |
614191
|
1
|
614208
| null |
1
|
20
|
In R, we have many packages to estimate the GARCH model. They use different optimization algorithms for estimation, such as quasi-Newton or SQP. Unfortunately, the estimated parameters are totally different from each other. Now, if I have real data, I don't know the real volatility. How can I determine which algorithm estimates GARCH(1,1) best? By the objective function value? A Ljung-Box test? Kurtosis and skewness? I have read some papers about this, but they mainly compare results among different GARCH-type models.
|
How can I determine which algorithm estimates GARCH(1,1) best?
|
CC BY-SA 4.0
| null |
2023-04-26T12:25:03.033
|
2023-04-26T15:25:52.687
|
2023-04-26T15:22:25.670
|
53690
|
386613
|
[
"estimation",
"optimization",
"model-selection",
"garch"
] |
614192
|
2
| null |
614044
|
2
| null |
While it makes sense to acknowledge that even population data might have errors or uncertainty for various reasons, you can't quantify that error using the specific statistical procedure we refer to as "calculating confidence intervals." Confidence intervals are designed to do one thing and one thing only: to quantify the amount of uncertainty associated with using a random sample to generalize about the population from which it is drawn (that is, the amount of uncertainty due purely to sampling error). The reason this is actually possible to do (the reason we can quantify this uncertainty precisely) is due (in part) to a particular mathematical result called the central limit theorem (CLT), which is specifically about using random samples to generalize to a population. The math used to generate confidence intervals stems directly from the CLT and related assumptions.
The different kinds of errors you are talking about are not related to the issue of using a sample to generalize about a population, and thus the CLT, and associated procedures like confidence intervals (or things like p-values), can't help you quantify how big of a problem they are. In particular, these errors are likely to be non-random - they are likely to affect certain kinds of observations more than others, leading not only to error but also to bias, and confidence intervals don't help you address issues of bias.
Now, if you try to calculate confidence intervals on a population, using the simple version of the formula
$CI=\bar{x} \pm z \frac{s}{\sqrt{n}}$
and you put in the population $N$ for $n$ and the population standard deviation for $s$, then you will get confidence intervals. But these will be meaningless, because one of the assumptions of this formula is that the population you are drawing the sample from is infinitely large (or rather, that your sample is an infinitesimally small proportion of the population). In practice, this is usually a reasonable assumption (the proportion of all Americans included in any one election poll is basically zero). But if your sample is a non-trivial proportion of the population, then to get the correct confidence intervals you need to adjust that formula with a "[finite population correction](https://stats.stackexchange.com/questions/5158/explanation-of-finite-population-correction-factor)", which makes the confidence interval smaller, since there is less opportunity for un-sampled population members to contribute to error. In your case, where the sample size equals the population size, the corrected version of the CI formula will correctly tell you that the size of the confidence interval is zero - since there is no possibility of the result being wrong due to sampling error, which is the only thing confidence intervals are concerned with.
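For concreteness, here is a small sketch of that adjustment with made-up numbers (the usual correction factor is $\sqrt{(N-n)/(N-1)}$):
```
# Toy numbers: a sample of n = 200 drawn from a population of N = 1000
xbar <- 50; s <- 10; n <- 200; N <- 1000; z <- 1.96

fpc <- sqrt((N - n) / (N - 1))            # finite population correction
xbar + c(-1, 1) * z * s / sqrt(n)         # naive CI (assumes an effectively infinite population)
xbar + c(-1, 1) * z * s / sqrt(n) * fpc   # corrected CI; shrinks to a point as n approaches N
```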
In short: there are lots of reasons why data (whether from a sample or a population) might be biased or wrong. But the procedure of confidence intervals is only designed to quantify one of those sources of error - sampling error.
| null |
CC BY-SA 4.0
| null |
2023-04-26T12:43:42.130
|
2023-04-26T12:43:42.130
| null | null |
291159
| null |
614193
|
1
|
614783
| null |
1
|
33
|
Assume that we have objects (e.g. particles) whose properties (e.g. diameter) follow a log-normal distribution that can be described by a geometric mean $\mu_g$ and a geometric standard deviation $\sigma_g$.
Now consider that we want to measure $\mu_g$ and $\sigma_g$ by drawing $n$ samples from the distribution (e.g. measuring the diameters of $n$ individual particles). Consider further that the utilized measurement technique is imperfect, i.e. we don't receive the true value $A_i$ from a measurement, but instead we receive $A_i'=A_i+\Delta A_i$.
If we assume that the errors are random, then a reasonable model for $A_i'$ is that it is normally distributed around $A_i$ with a certain standard deviation $\sigma_{measurement}$.
We then calculate the measured properties of the initial log-normal distribution, based on the imperfect measurements, so we receive
$\mu_g'=\mu_g+\Delta \mu_g= \sqrt[n]{(A_1+\Delta A_1) (A_2+\Delta A_2) \cdots (A_n+\Delta A_n)}$
and
$\sigma_g'=\sigma_g+\Delta \sigma_g=\exp{\sqrt{ \sum_{i=1}^n \left(\ln { {A_i+\Delta A_i} \over \mu_g' } \right)^2 \over n }}$.
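For concreteness, a minimal simulation of this measurement process might look like the sketch below (the parameter values are arbitrary assumptions, and the two estimators follow the formulas above):
```
set.seed(42)
n <- 1e5
mu_g <- 10; sigma_g <- 1.5; sigma_meas <- 0.5   # arbitrary example values

A      <- rlnorm(n, meanlog = log(mu_g), sdlog = log(sigma_g))   # true values A_i
A_meas <- A + rnorm(n, sd = sigma_meas)                          # measured values A_i'
A_meas <- pmax(A_meas, 1e-9)      # guard: the logs below require positive values

mu_g_meas    <- exp(mean(log(A_meas)))                           # measured geometric mean
sigma_g_meas <- exp(sqrt(mean(log(A_meas / mu_g_meas)^2)))       # measured geometric SD

c(true = mu_g, measured = mu_g_meas)
c(true = sigma_g, measured = sigma_g_meas)
```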
Questions:
- Can we make general statements about $\Delta \mu_g$ and $\Delta \sigma_g$? E.g. $\Delta \mu_g<0$ or $\Delta \sigma_g>0$, or something along these lines?
- How does the sample size $n$ play into this? Often, we would assume that errors will approach 0 for $n\rightarrow\infty$. Is this also the case in this scenario?
Note: I simulated the process, and obviously I'm willing to share the results, but I'd like to have an unbiased discussion first.
Thanks a lot!
|
How do sample size and measurement accuracy affect the measurement of the parameters of a log-normal distribution?
|
CC BY-SA 4.0
| null |
2023-04-26T12:48:49.847
|
2023-05-03T15:48:10.840
| null | null |
251040
|
[
"sampling",
"lognormal-distribution",
"geometric-mean"
] |
614194
|
1
| null | null |
2
|
21
|
Suppose that we perform a Johansen test on three I(1) variables that gives us these results through the maximum-eigenvalue statistic:
[](https://i.stack.imgur.com/m4SrK.png)
As you can see, we accept the null hypothesis in the first step (highest eigenvalue), so in theory we should stop and assume that there is no cointegration between the variables.
However, for the lowest eigenvalue, we could reject the null hypothesis at the 10% and 5% significance levels, so my question is: why don't we consider the last eigenvalue test as proof that there is a cointegrating relationship, and why do we stop at the first step?
Note: my question is grounded in the fact that I believe the Johansen test is actually testing the rank of the matrix through the rank-nullity theorem, which states that the number of variables in a matrix (columns) minus the nullity of the matrix (the dimension of the eigenspace associated with the eigenvalue 0) is equal to the rank. So, if I were right, in this case, rejecting the null hypothesis that the last eigenvalue is equal to 0 would mean that the rank of the matrix is at least 1, since the nullity couldn't be as high as the number of variables. I would be grateful if anyone could give me an explanation in case my supposition is wrong.
|
Johansen test accepts first null hypothesis but would reject last one
|
CC BY-SA 4.0
| null |
2023-04-26T12:49:02.260
|
2023-04-26T14:58:52.383
|
2023-04-26T14:58:52.383
|
53690
|
272203
|
[
"time-series",
"matrix",
"linear-algebra",
"cointegration",
"eigenvalues"
] |
614195
|
2
| null |
614189
|
2
| null |
To elaborate on my comment, consider first that $X \sim U(-2, 2)$ and $\tau = 1$. In this case $E[X] = 0$ and $E[\psi_\tau(X)] = 0$, hence the bias is $0$.
On the other hand, if $X \sim U(-1, 2)$ and $\tau = 1$, then $E[X] = \frac{1}{2}$ but $E[\psi_\tau(X)] = \frac{1}{3}$, which gives a bias of $-\frac{1}{6}$.
Therefore, if you are faced with a non-trivial statistical inference problem, I tend to believe $\hat{\mu}^\tau$ is biased (as unbiasedness requires $E_\mu(\hat{\mu}^\tau) = \mu$ for every distribution in the parametric family). To see this more clearly, I suggest tabulating the biases for the family of normal distributions $\{N(\mu, 1): \mu > 0\}$. Of course, the question may have a more definitive answer if you can restrict the underlying distribution family to a specific one.
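A quick Monte Carlo check of the second example (just a sketch):
```
set.seed(123)
psi <- function(x, tau) pmin(abs(x), tau) * sign(x)   # truncation operator

x <- runif(1e7, min = -1, max = 2)     # X ~ U(-1, 2), so E[X] = 1/2
mean(psi(x, tau = 1))                  # approximately 1/3
mean(psi(x, tau = 1)) - mean(x)        # approximately -1/6: biased in this example
```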
| null |
CC BY-SA 4.0
| null |
2023-04-26T13:20:27.307
|
2023-04-26T13:20:27.307
| null | null |
20519
| null |
614197
|
1
| null | null |
0
|
37
|
I have a dataset that I have split into training and test sets (~2400 / ~600).
After training an XGBoost regression model on the training set, I would like to statistically test whether the model is indeed better on the test set than a naïve model that always outputs the mean of the training-set outcome.
The residuals (of both sets of predictions), the residual differences between the two sets of predictions, and the squared-residual differences are all non-normal (tested with Shapiro-Wilk), not even approximately.
So I think I should use a non-parametric test, maybe something like the Mann–Whitney U test, but if I understand it correctly, that particular test does not solve my problem.
Unless I am wrong, this problem is almost equivalent to proving that a sort of R² of the model is non-zero.
```
from sklearn.metrics import mean_squared_error

y_pred = trained.predict(X_test)
baseground = [y_train.mean()] * len(y_pred)
mse = mean_squared_error(y_test, y_pred)
baseground_mse = mean_squared_error(y_test, baseground)
# Model mse: 0.3606197869571293
# Baseground mse: 0.42712517870196076
```
```
residuals = [y_p - y_t for y_p, y_t in zip(y_pred, y_test)]
squarred_residuals = [(y_p - y_t)**2 for y_p, y_t in zip(y_pred, y_test)]
residuals_naive = [(y_b - y_t) for y_b, y_t in zip(baseground, y_test)]
squarred_residuals_naive = [(y_b - y_t)**2 for y_b, y_t in zip(baseground, y_test)]
residuals_differences = [(r - rn) for r, rn in zip(residuals, residuals_naive)]
squarred_residuals_differences = [sr - srn for sr, srn in zip(squarred_residuals, squarred_residuals_naive)]
# Shapiro-Wilk results (scipy.stats.shapiro):
# residuals: ShapiroResult(statistic=0.9887793660163879, pvalue=4.4769913074560463e-05)
# residuals_naive: ShapiroResult(statistic=0.8029045462608337, pvalue=5.698079444623409e-28)
# residuals_differences: ShapiroResult(statistic=0.9659961462020874, pvalue=1.8076425772894922e-11)
# squarred_residuals_differences: ShapiroResult(statistic=0.8816245794296265, pvalue=2.258187421269406e-22)
```
```
from sklearn.metrics import r2_score

r2 = r2_score(y_test, y_pred)
# R² = 0.1557046857947879
```
[This Crossvalidated thread may be related](https://stats.stackexchange.com/questions/9573/t-test-for-non-normal-when-n50)
TL;DR
How can I statistically test, for two paired non-normal samples d1 and d2, whether mean(d1) > mean(d2)?
|
Statistically testing the MSE difference between models' predictions without normality of squared residuals
|
CC BY-SA 4.0
| null |
2023-04-26T13:36:30.900
|
2023-04-26T15:50:31.107
|
2023-04-26T15:50:31.107
|
53690
|
386617
|
[
"predictive-models",
"model-evaluation",
"normality-assumption",
"model-comparison",
"diebold-mariano-test"
] |
614198
|
1
|
614407
| null |
1
|
61
|
I am trying to create R code for generating multiple simulation paths for forecasting survival probabilities. In the code posted at the bottom, I take the `survival` package's `lung` dataset and create a new dataframe `lung1`, representing the `lung` data "as if" the study max period were 500 instead of the 1022 it actually is in the `lung` data. I use a parametric Weibull distribution per goodness-of-fit tests I ran separately. I'm trying to forecast, via multiple simulation paths, survival curves for periods 501-1000, ideally using the Weibull parameters fitted to the data for periods 1-500 as a guide for random number generation. This exercise is a forecasting "what-if", as if I only had data for a 500-period lung study. I then compare the forecasts with the actual lung data for periods 501-1000.
The shape and scale parameters I extracted from the `lung1` data are 1.804891 and 306.320693, respectively.
I'm having difficulty generating reasonable, monotonically decreasing simulation paths for forecast periods 501-1000. In looking at the code posted at the bottom, what should I be doing instead?
The below images help illustrate:
- First image is a K-M plot showing survival probabilities for the entire lung dataset.
- Second image plots lung1 (500 assumed study periods) with period 501-1000 forecasts extending in grey lines. Obviously something is not quite right!
- Third image is only there to show simulations I've done in the past using time-series models such as ETS, which sort of gets at what I'm trying to do here with survival analysis. This isn't my best example: I've generated nicer monotonically decreasing, concave forecast curves using log transformations and ETS. I am now trying to understand survival analysis better, so no more ETS for now.
[](https://i.stack.imgur.com/scLQB.png)
Code:
```
library(fitdistrplus)
library(dplyr)
library(survival)
library(MASS)
# Modify lung dataset as if study had only lasted 500 periods
lung1 <- lung %>%
mutate(time1 = ifelse(time >= 500, 500, time)) %>%
mutate(status1 = ifelse(status == 2 & time >= 500, 1, status))
fit1 <- survfit(Surv(time1, status1) ~ 1, data = lung1)
# Get survival probability values at each time point
surv_prob <- summary(fit1, times = seq(0, 500, by = 1))$surv
# Create a data frame with time and survival probability values
lungValues <- data.frame(Time = seq(0, 500, by = 1), Survival_Probability = surv_prob)
# Plot the survival curve using the new data frame
plot(lungValues$Time, lungValues$Survival_Probability, xlab = "Time", ylab = "Survival Probability",
main = "Survival Plot", type = "l", col = "blue", xlim = c(0, 1000), ylim = c(0, 1))
# Generate correlation matrix for Weibull parameters
cor_matrix <- matrix(c(1.0, 0.5, 0.5, 1.0), nrow = 2, ncol = 2)
# Generate simulation paths for forecasting
num_simulations <- 10
forecast_period <- seq(501, 1000, by = 1)
start_prob <- 0.293692
shape <- 1.5
scale <- 100
for (i in 1:num_simulations) {
# Generate random variables for the Weibull distribution
random_vars <- mvrnorm(length(forecast_period), c(0, 0), Sigma = cor_matrix)
shape_values <- exp(random_vars[,1])
scale_values <- exp(random_vars[,2]) * scale
# Calculate the survival probabilities for the forecast period
surv_prob <- numeric(length(forecast_period))
surv_prob[1] <- start_prob
for (j in 2:length(forecast_period)) {
# Calculate the survival probability using the Weibull distribution
surv_prob[j] <- pweibull(forecast_period[j] - 500, shape = shape_values[j], scale = scale_values[j], lower.tail = FALSE)
# Ensure the survival probability follows a monotonically decreasing, concave path
if (surv_prob[j] > surv_prob[j-1]) {
surv_prob[j] <- surv_prob[j-1] - runif(1, 0, 0.0005)
}
}
# Combine the survival probabilities with the forecast period and create a data frame
df <- data.frame(Time = forecast_period, Survival_Probability = surv_prob)
# Add the simulation path to the plot
lines(df$Time, df$Survival_Probability, type = "l", col = "grey")
}
```
|
How to generate multiple forecast simulation paths for survival analysis?
|
CC BY-SA 4.0
| null |
2023-04-26T13:39:56.447
|
2023-04-28T14:50:53.360
| null | null |
378347
|
[
"r",
"forecasting",
"survival",
"simulation",
"kaplan-meier"
] |
614199
|
2
| null |
605924
|
0
| null |
@jbowman said that "You can in fact run a Poisson regression on non-integer data, at least in R; you'll still get the "right" coefficient estimates etc.". I have tried this out and saw problems in the "etc" part.
My dataset is [mortality rates from Sweden](https://doi.org/10.1371/journal.pone.0233384); I have extracted the observations for females born in 1905. The resulting data frame contains 4 columns: `age` (in years; this is the predictor), `ndead` (the number of people dying at that age), `N` (the total number of people in that age cohort) and `haz` $= n_{dead}/N$, which is the mortality rate.
First, let's perform an OLS with $\ln(\mathrm{haz})$ as the response variable to establish a "baseline", so to speak:
```
Call:
lm(formula = log(haz) ~ age, data = md)
Residuals:
Min 1Q Median 3Q Max
-0.088305 -0.034501 0.000204 0.034399 0.075291
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.091e+01 6.900e-02 -158.0 <2e-16 ***
age 1.012e-01 9.136e-04 110.8 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.04549 on 29 degrees of freedom
Multiple R-squared: 0.9976, Adjusted R-squared: 0.9976
F-statistic: 1.228e+04 on 1 and 29 DF, p-value: < 2.2e-16
```
If I run a Poisson GLM in R using `log(N)` as the offset I get a very similar result:
```
Call:
glm(formula = ndead ~ age + offset(log(N)), family = poisson,
data = md)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.7945 -0.7639 0.1338 1.1044 1.8624
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.808320 0.052330 -206.5 <2e-16 ***
age 0.099560 0.000661 150.6 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 24739.939 on 30 degrees of freedom
Residual deviance: 47.526 on 29 degrees of freedom
AIC: 324.69
```
Now let's perform the Poisson GLM fit using `haz` as the response variable. The parameter estimates are sort of similar to the fit above. However, look at the estimates of the standard errors of the coefficients: they are much higher than in the "offset" case, the residual deviance is unrealistically low and the AIC goes to infinity:
```
Call:
glm(formula = haz ~ age, family = poisson, data = md)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.0174605 -0.0048843 0.0008723 0.0071586 0.0212824
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -11.0679 8.9329 -1.239 0.215
age 0.1033 0.1084 0.953 0.340
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 1.1368277 on 30 degrees of freedom
Residual deviance: 0.0024028 on 29 degrees of freedom
AIC: Inf
```
From this I conclude that the Poisson GLM implementation in R (I used Version 4.2.1) cannot really deal with non-integer response variables. This is not surprising, because the Poisson distribution is discrete and "knows nothing" about ratios.
However... here is a twist to the tale: the Quasi-Poisson apparently can deal with mortality rates! Witness this result:
```
Call:
glm(formula = haz ~ age, family = quasipoisson, data = md)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.0174605 -0.0048843 0.0008723 0.0071586 0.0212824
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -11.067922 0.081368 -136.0 <2e-16 ***
age 0.103309 0.000987 104.7 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for quasipoisson family taken to be 8.296971e-05)
Null deviance: 1.1368277 on 30 degrees of freedom
Residual deviance: 0.0024028 on 29 degrees of freedom
AIC: NA
```
Note that the parameter estimates and the deviance values are the same (within numerical error) as what I got from the `haz ~ age` formula with the Poisson GLM, but the error estimates are much more reasonable.
Since I don't know how the quasi-Poisson magic is implemented in R, I have no idea whether this was just a lucky coincidence or whether it is supposed to work like this all the time. The safe bet would be to restrict ourselves to fitting Poisson GLMs to count data only, because that's what the Poisson distribution is for (the number of independent events in a generic interval).
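For what it's worth, my understanding (an assumption to be verified, not a definitive account of the implementation) is that the quasi-Poisson fit keeps the Poisson point estimates and rescales the standard errors by the square root of an estimated dispersion, roughly as in this sketch on the same fits:
```
fit_pois  <- glm(haz ~ age, family = poisson,      data = md)
fit_quasi <- glm(haz ~ age, family = quasipoisson, data = md)

# Dispersion estimate: Pearson chi-square divided by residual degrees of freedom
phi <- sum(residuals(fit_pois, type = "pearson")^2) / df.residual(fit_pois)

# Quasi-Poisson standard errors should be close to the Poisson ones times sqrt(phi)
sqrt(phi) * summary(fit_pois)$coefficients[, "Std. Error"]
summary(fit_quasi)$coefficients[, "Std. Error"]
```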
| null |
CC BY-SA 4.0
| null |
2023-04-26T13:58:24.820
|
2023-04-26T14:06:45.197
|
2023-04-26T14:06:45.197
|
43120
|
43120
| null |
614200
|
1
| null | null |
1
|
19
|
I have a collection of data that I expect to be linear but that contains an unknown amount of noise. Initially I wanted to use the least-squares regression line to determine whether the slope, intercept, and variance of the data were acceptable. But a data set with fewer data points, or with data points highly localized in two places, would give similar results. Is there a linear approximation algorithm that takes into account the beginning and end of the data as well as the density of the data? Maybe something that results in a point and a vector instead of a continuous linear equation?
Some graphs that have the same linear equation from the LSRL algorithm, which I'd like to be able to differentiate (figures omitted):
- "Good Well Spaced Data"
- "Tightly Packed Data"
- "Short Length"
|
Least Squares Regression with Length and Density of input data
|
CC BY-SA 4.0
| null |
2023-04-25T20:18:38.527
|
2023-05-19T22:38:23.453
|
2023-05-19T22:38:23.453
|
11887
|
386636
|
[
"regression",
"least-squares",
"linear-algebra"
] |
614201
|
1
| null | null |
0
|
23
|
I'm trying to analyse whether the balance of framing of the issue in question has changed significantly since 2012. Specifically, my research hypotheses are that the proportion of Frames 1 and 2 has decreased since 2012, and that the proportions of Frames 3, 4, 5, 6 and 7 have increased since 2012, as per the results in the second table; I am interested in the trend over time as opposed to the change between two specific years. Given that the data are originally categorical, I am unsure whether a simple linear regression is sufficient for analysing the change in proportions, or whether I need to conduct a chi-squared test or another statistical test to answer this validly, so I'd really appreciate any help!
[](https://i.stack.imgur.com/ZoudB.png)
[](https://i.stack.imgur.com/t85lU.png)
|
Linear regression or other statistical hypothesis test needed for this data?
|
CC BY-SA 4.0
| null |
2023-04-26T14:20:04.400
|
2023-04-26T14:20:04.400
| null | null |
386624
|
[
"regression",
"hypothesis-testing",
"statistical-significance"
] |
614202
|
1
| null | null |
0
|
19
|
I have run a pooled logit regression (using R) with CEO turnover as the dependent variable. I have several binary and continuous independent variables, and an interaction term between a binary and a continuous variable.
I am confused about interpreting the results, since the interaction term does not have a simple marginal effect, and it is said that interpreting raw logit coefficients does not make much sense.
What do you recommend for interpreting the results, especially the interaction term? I have read several sources, but it seems tricky and complex to follow.
Thanks in advance
|
Interpretation of results - logit model with interaction term
|
CC BY-SA 4.0
| null |
2023-04-26T14:09:15.950
|
2023-05-18T15:28:05.387
|
2023-05-18T15:28:05.387
|
11887
|
386623
|
[
"r",
"logistic",
"interaction",
"interpretation"
] |
614203
|
1
| null | null |
0
|
23
|
I am very new to performing difference-in-differences analysis. Before starting the analysis, I would like to check whether the data violate the most basic assumption, the 'parallel trends' assumption. To my understanding, we should examine whether the trends are significantly different between the case and control groups in the pre-intervention period. However, since I only have one study time point before the intervention, do I still need to test for 'parallel trends'? Apparently, no 'trend' exists in my data, since I only have one pre-intervention time point. Any feedback is appreciated.
|
Difference in Difference, parallel trend assumption examination
|
CC BY-SA 4.0
| null |
2023-04-26T15:03:33.603
|
2023-04-26T15:03:33.603
| null | null |
386627
|
[
"difference-in-difference"
] |
614205
|
1
| null | null |
0
|
21
|
I have a time series $X_t \sim N(0, 1)$. There is a single outlier at index 347, at 8.5 standard deviations from the mean. If I now compute a rolling-window standard deviation of $X_t$ with window size 100, we of course see a large spike at index 347.
Once this observation has fallen out of the 100-period window, the rolling standard deviation falls back down to normal levels at index 447 (347+100=447). This is all as expected.
My question is, what is this phenomenon formally called? I want to research it more to be able to understand it and mitigate it. Specifically, once the outlier observation falls out of the rolling window, I do not want the standard deviation estimate to jump back down to normal levels.
I know some ways to mitigate this may be to time-weight or exponentially weight the observations prior to estimating the standard deviation. But I am wondering what this phenomenon is called, if it even has a name, and whether there are potentially other ways to mitigate it.
EDIT: Just for clarity, I am using an equal-weighted rolling window, and at each point in time $t$ I can only use observations up to and including $t$, i.e. at time $t$ all observations $x_l$ used to estimate the standard deviation must satisfy $l\leq t$. This is to avoid lookahead bias. So any method that uses "future" information is not allowed.
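One mitigation along the exponential-decay lines mentioned above is an EWMA-style variance estimate, in which the outlier's influence fades smoothly instead of dropping out abruptly; a rough sketch (the decay factor 0.97 is just an example value):
```
set.seed(1)
x <- rnorm(1000); x[347] <- 8.5            # series with a single large outlier

lambda <- 0.97                             # example decay factor
ewma_var <- numeric(length(x))
ewma_var[1] <- x[1]^2                      # crude causal initialization
for (t in 2:length(x)) {
  # uses only information up to and including time t (no lookahead)
  ewma_var[t] <- lambda * ewma_var[t - 1] + (1 - lambda) * x[t]^2
}
plot(sqrt(ewma_var), type = "l")           # decays smoothly after the spike
```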
[](https://i.stack.imgur.com/wnh6s.png)
[](https://i.stack.imgur.com/FEiOR.png)
[](https://i.stack.imgur.com/TD81e.png)
|
What is it called when an outlier falls out of a rolling window statistical calculation?
|
CC BY-SA 4.0
| null |
2023-04-26T15:05:58.067
|
2023-04-26T16:37:57.393
|
2023-04-26T16:37:57.393
|
178320
|
178320
|
[
"time-series",
"standard-deviation",
"outliers",
"moving-window"
] |
614206
|
2
| null |
614137
|
1
| null |
Your time series does not show any intra-week seasonality (see the season plot on the right below). Exponential Smoothing (as per `ets()` in the `forecast` package for R) chooses a model with no seasonality, additive errors and an additive trend with strong damping, so the forecasts are almost flat. Since there is very little time series structure in your data, the prediction intervals are very wide (and actually go below zero, which probably does not make a lot of sense for your data).
Your series does show a number of prolonged periods of low sales. It may be worthwhile figuring out what happened during these times. I believe this has a better chance of improving your forecasts than tweaking the models: [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352).
In any case, you have far too little data for a ML method like LSTM to make much of a difference compared to standard time series methods. I recommend the resources here to learn more about forecasting: [Resources/books for project on forecasting models](https://stats.stackexchange.com/q/559908/1352)
[](https://i.stack.imgur.com/tmoqD.png)
R code:
```
sales <- c(324L, 281L, 691L, 281L, 410L, 346L, 86L, 43L, 389L, 43L, 194L,
22L, 0L, 130L, 65L, 173L, 86L, 281L, 0L, 0L, 65L, 43L, 86L, 65L,
0L, 0L, 130L, 65L, 130L, 173L, 151L, 43L, 0L, 43L, 173L, 108L,
173L, 130L, 65L, 0L, 0L, 0L, 86L, 86L, 0L, 22L, 22L, 22L, 22L,
22L, 43L, 43L, 22L, 65L, 130L, 130L, -86L, 22L, 173L, 43L, 86L,
43L, 86L, 22L, 65L, 0L, 43L, 43L, 0L, 194L, 0L, 0L, 108L, 108L,
130L, 86L, 86L, 22L, 43L, 86L, 22L, 0L, 0L, 43L, 86L, 22L, 86L,
22L, 0L, 108L, 0L, 65L, 108L, 22L, 86L, 43L, 65L, 43L, 0L, 108L,
86L, 0L, 22L, 65L, 151L, 0L, 43L, 86L, 151L, 43L, 43L, 22L, 108L,
0L, 108L, 0L, 43L, 65L, 43L, 108L, 86L, 0L, 151L, 22L, 0L, 108L,
65L, 65L, 22L, 0L, 43L, 22L, 22L, 65L, 43L, 130L, 151L, 108L,
0L, 130L, 151L, 130L, 65L, 0L, 130L, 43L, 0L, 0L, 22L, 43L, 0L,
65L, 108L, 22L, 65L, 0L, 130L, 86L, 86L, 281L, 216L, 22L, 173L,
108L, 173L, 302L, 410L, 86L, 65L, 65L, 22L, 86L, 22L, 194L, 86L,
130L, 86L, 216L, 108L, 173L, 432L, 238L, 151L, 194L, 194L, 324L,
22L, 367L, 324L, 238L, 367L, 410L, 216L, 497L, 259L, 108L, 281L,
281L, 216L, 108L, 259L, 216L, 130L, 65L, 173L, 86L, 65L, 43L,
43L, 86L, 130L, 194L, 108L, 194L, 238L, 108L, 22L, 43L, 65L,
173L, 86L, 151L, 151L, 130L, 22L, 151L, 86L, 281L, 86L, 259L,
65L, 86L, 173L, 65L, 259L, 173L, 108L, 238L, 130L, 151L, 259L,
259L, 151L, 389L, 65L, 259L, 173L, 238L, 108L, 43L, 65L, 173L,
65L, 65L, 216L, 151L, 302L, 86L, 259L, 130L, 86L, 151L, 65L,
238L, 43L, 86L, 130L, 65L, 130L, 259L, 22L, 432L, 173L, 216L,
108L, 130L, 410L, 324L, 86L, 475L, 130L, 410L, 86L, 216L, 151L,
173L, 85L, 302L, 173L, 259L, 281L, 281L, 86L, 173L, 216L, 302L,
22L, 108L, 43L, 108L, 389L, 245L, 43L, 43L, 216L, 22L, 151L,
302L, 259L, 194L, 346L, 43L, 302L, 194L, 151L, 194L, 173L, 43L,
43L, 281L, 367L, 43L, 367L, 43L, 0L, 65L, 86L, 151L, 108L, 281L,
130L, 22L, 86L, 389L, 324L, 216L, 346L, 43L, 216L, 43L, 130L,
497L, 65L, 22L, 43L, 238L, 324L, 0L, 65L, 43L, 86L, 216L, 238L,
43L, 108L, 194L, 108L, 605L, 302L, 130L, 281L, 86L, 86L, 302L,
173L, 216L, 194L, 86L, 259L, 151L, 389L, 194L, 454L, 65L, 130L,
518L, 216L, 43L, 302L, 22L, 151L, 216L, 65L, 302L, 389L, 281L,
410L, 259L, 238L, 151L, 216L, 389L, 194L, 216L, 130L, 194L, 108L,
65L, 475L, 194L, 216L, 43L, 86L, 86L, 130L, 410L, 324L, 173L,
65L, 86L, 151L, 324L, 475L, 281L, 454L, 130L, 259L, 389L, 216L,
86L, 216L, 194L, 130L, 454L, 194L, 108L, 108L, 43L, 86L, 65L,
238L, 65L, 108L, 86L, 65L, 22L, 108L, 216L, 65L, 65L, 346L, 281L,
86L, 302L, 151L, 324L, 194L, 151L, 151L, 108L, 43L, 0L, 0L, 65L,
324L, 281L, 151L, 0L, 0L, 389L, 302L, 432L, 540L, 108L, 346L,
0L, 194L, 238L, 259L, 130L, 238L, 389L, 238L, 518L, 86L, 281L,
0L, 0L, 108L, 43L, 0L, 0L, 0L, 65L, 0L, 0L, 0L, 0L, 0L, 0L, 108L,
194L, 0L, 216L, 238L, 22L, 43L, 259L, 216L, 22L, 410L, 238L,
86L, 151L, 281L, 259L, 151L, 173L, 432L, 43L, 238L, 65L, 389L,
43L, 173L, 324L, 130L, 367L, 194L, 86L, 65L, 65L, 302L, 367L,
367L, 130L, 216L, 454L, 65L, 0L, 432L, 108L, 0L, 216L, 194L,
108L, 281L, 259L, 108L, 259L, 173L, 238L, 367L, 324L, 259L, 324L,
346L, 346L, 194L, 259L, 151L, 151L, 238L, 65L, 346L, 130L, 562L,
43L, 108L, 173L, 43L, 151L, 65L, 151L, 86L, 43L, 194L, 86L, 22L,
0L, 0L, 65L, 43L, 151L, 43L, 108L, 86L, 0L, 130L, 65L, 65L, 108L,
65L, 22L, 0L, 86L, 0L, 130L, 43L, 108L, 86L, 43L, 108L, 238L,
259L, 324L, 497L, 324L, 346L, 389L, 259L, 324L, 151L, 302L, 389L,
216L, 281L, 216L, 324L, 216L, 194L, 259L, 43L, 108L, 302L, 151L,
22L, 389L, 454L, 194L, 324L, 734L, 0L, 108L, 238L, 22L, 0L, 432L,
108L, 43L, 0L, 22L, 43L, 346L, 43L, 65L, 130L, 86L, 302L, 238L,
475L, 216L, 389L, 281L, 497L, 302L, 65L, 475L, 432L, 259L, 173L,
108L, 216L, 43L, 65L, 173L, 238L, 108L, 86L, 173L, 43L, 173L,
518L, 151L, 281L, 238L, 216L, 151L, 238L, 86L, 173L, 108L, 43L,
108L, 65L, 0L, 259L, 216L, 216L, 22L, 216L, 281L, 259L, 238L,
130L, 86L, 238L, 216L, 173L, 216L, 346L, 648L, 216L, 43L, 302L,
302L, 108L, 259L, 216L, 216L, 194L, 173L, 86L, 194L, 130L, 259L,
194L, 86L, 389L, 65L, 410L)
sales_ts <- ts(sales,frequency=5)
library(forecast)
par(mfrow=c(1,2),las=1)
model <- ets(sales_ts)
plot(forecast(model,h=20))
seasonplot(ts(sales,frequency=5),pch=19)
```
| null |
CC BY-SA 4.0
| null |
2023-04-26T15:13:18.763
|
2023-04-26T15:13:18.763
| null | null |
1352
| null |
614208
|
2
| null |
614191
|
0
| null |
If you are estimating the parameters of a GARCH model with maximum likelihood, then you are after the set of parameter values that maximize the likelihood in the (training) sample. If several optimization routines yield different sets of parameter values, pick the one that maximizes the likelihood. The other ones have surely failed to reach the optimum in sample, while the one maximizing the likelihood at least may have reached it. (It could also just be another local optimum, but at least it beats the other ones under consideration.)
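For illustration, a sketch of such a comparison with the `rugarch` package (here `returns` stands in for your return series, and the two solvers are just examples):
```
library(rugarch)

spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(0, 0)))

# Fit the same specification with two different optimizers
fit1 <- ugarchfit(spec, data = returns, solver = "solnp")
fit2 <- ugarchfit(spec, data = returns, solver = "nlminb")

# Prefer the fit with the higher in-sample log-likelihood
c(solnp = likelihood(fit1), nlminb = likelihood(fit2))
```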
| null |
CC BY-SA 4.0
| null |
2023-04-26T15:25:52.687
|
2023-04-26T15:25:52.687
| null | null |
53690
| null |
614210
|
1
| null | null |
0
|
39
|
I performed a study where I took 9 measurements from each participant, based on 2 factors (time and location) with 3 levels each. These created the following combinations:
T1L1, T1L2, T1L3, T2L1, T2L2, T2L3, T3L1, T3L2, T3L3.
My data are not normally distributed. I ran the Friedman test and Wilcoxon signed-rank tests for the above combinations. All of the comparisons are statistically significant.
Now, I am interested in performing the following pairwise comparisons: T1-T2, T2-T3, T1-T3, L1-L2, L1-L3, L2-L3. Any suggestions? One idea I can think of is just taking means and performing the Wilcoxon signed-rank test. For example, for T1-T2, I take Mean(T1L1, T1L2, T1L3) and Mean(T2L1, T2L2, T2L3) and then perform the Wilcoxon signed-rank test. Is it okay to do this? Are there any better alternatives?
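For concreteness, the averaging idea above would look something like the sketch below (`d` is a hypothetical data frame holding the nine columns, one row per participant; whether this is appropriate is exactly what I am asking):
```
# d: hypothetical data frame with columns T1L1, ..., T3L3 (one row per participant)
T1 <- rowMeans(d[, c("T1L1", "T1L2", "T1L3")])
T2 <- rowMeans(d[, c("T2L1", "T2L2", "T2L3")])

wilcox.test(T1, T2, paired = TRUE)   # T1 vs T2, collapsing over location
```
With six such pairwise comparisons, some multiplicity adjustment (e.g. via `p.adjust`) would presumably also be needed.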
|
How to run the Wilcoxon signed-rank test for pairwise comparisons of two factors with three levels each?
|
CC BY-SA 4.0
| null |
2023-04-26T15:45:41.630
|
2023-05-03T12:59:40.017
| null | null |
297355
|
[
"nonparametric",
"wilcoxon-signed-rank",
"friedman-test"
] |
614211
|
2
| null |
614197
|
1
| null |
This sounds like a case for the [Diebold-Mariano test](https://stats.stackexchange.com/tags/diebold-mariano-test/info). The test does not assume the prediction errors or losses to be normal. It simply compares the means of the two sets of losses using a $t$-test, possibly using HAC standard errors. In R, the test is implemented in the [dm.test](https://www.rdocumentation.org/packages/forecast/versions/8.21/topics/dm.test) function in the `forecast` package.
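A minimal sketch of the call (assuming `e1` and `e2` are the two models' prediction errors on the same test observations, in the same order):
```
library(forecast)

# e1, e2: hypothetical error vectors (actual minus predicted) for the two models
dm.test(e1, e2, h = 1, power = 2)   # power = 2 compares squared-error losses

# A one-sided alternative can be specified if justified a priori;
# see ?dm.test for how the direction of the alternative is defined.
```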
| null |
CC BY-SA 4.0
| null |
2023-04-26T15:48:08.683
|
2023-04-26T15:48:08.683
| null | null |
53690
| null |
614212
|
1
|
614253
| null |
2
|
110
|
I am reading the Performer paper [https://arxiv.org/abs/2009.14794](https://arxiv.org/abs/2009.14794). To understand their ReLU kernel used to approximate softmax attention, I need to evaluate $\mathbb{E}[ReLU(x^T w) \cdot ReLU(y^T w)]$ where $w \sim \mathcal{N}(0, I)$. I was able to reduce the problem to the more general one of computing $\mathbb{E}[|X \cdot Y|]$ where $X,Y$ are correlated jointly Gaussian. However, now I am stuck. Is there a way to compute this expression, or to bound it from above reasonably tightly, without numerical simulation? Cauchy-Schwarz seems too loose for the upper bound.
|
Expectation of the absolute value of the product of correlated jointly Gaussian variables?
|
CC BY-SA 4.0
| null |
2023-04-26T15:56:54.740
|
2023-04-28T00:13:53.317
|
2023-04-27T02:29:16.417
|
20519
|
386633
|
[
"probability",
"normal-distribution",
"expected-value",
"transformers",
"absolute-value"
] |
614213
|
2
| null |
591433
|
0
| null |
If you want to rank the models (and then test whether the best model beats the other models), you will have to define a scalar performance metric (a scalar-valued loss function). With that in place, and since you have more than two models to compare, I refer you to the tag description of [diebold-mariano-test](/questions/tagged/diebold-mariano-test):
>
In comparing multiple forecasts, we need to address the multiple comparisons problem. In such a case, the standard approach is the "multiple comparisons to the best" (MCB) test originally proposed by Koning et al. (2005). <...> It is rank-based, so it works with any accuracy measure. <...> A related alternative would be the Friedman-Nemenyi test (Demsar, 2006). Both the MCB and the Nemenyi test are implemented in the TStools package for R.
| null |
CC BY-SA 4.0
| null |
2023-04-26T16:03:48.680
|
2023-04-26T16:03:48.680
| null | null |
53690
| null |
614214
|
2
| null |
482702
|
0
| null |
From the comments:
>
However, there are 0 predicted with high probability of being 1.
This is to be expected. If you have an event that occurs with probability $0.95$, then roughly one out of twenty attempts should result in the event not occurring. If you have a large amount of data, that could translate into a large number of these seemingly bad predictions.
You can check this by assessing your model calibration. You use `R`, and the `rms` package has two functions of interest.
`rms::calibrate` is the technically correct function to use. However, it only accepts models made with the `rms` fitting functions, so you cannot apply it to your models. `rms::val.prob`, however, just takes true values and predicted probabilities as inputs, which you will have from your model outputs. This function plots the predicted probabilities against the (estimated) true event probabilities. While this is technically incorrect and `rms::calibrate` should be preferred, it seems that using `val.prob` will do little damage in most cases.
In the simulation below, I show that events with low probability can happen once in a while. The proportion of event-$0$ instances with predicted probability $P(y=1) > 0.8$ is about $2\%$. Across many thousands of observations, this could be a fair number of values that you observe exhibiting this behavior. However, the probability validation shows that the predicted probabilities align with the true event occurrence probabilities.
```
library(rms)
set.seed(2023)
N <- 100000
p <- rbeta(N, 1, 1)
y <- rbinom(N, 1, p)
rms::val.prob(p, y)
length(p[y == 0 & p > 0.8])/length(p)*100 # I get 1.994%
```
[](https://i.stack.imgur.com/HWcG8.png)
| null |
CC BY-SA 4.0
| null |
2023-04-26T16:34:28.390
|
2023-04-26T16:34:28.390
| null | null |
247274
| null |
614215
|
1
| null | null |
1
|
14
|
I am using the manyglm function from the mvabund package. I have a site X species/family dataset of counts. One organism in the community is vastly more abundant than the others (by multiple orders of magnitude), so a standard sampling practice is to use one effort to get a count for that organism; the rest of the community is sampled under a separate, larger effort. Is it possible to apply different offsets to different species? Is it okay to assume a single community at each site when we used different efforts for different species?
Thanks in advance for any advice.
Further clarification if needed: These are zooplankton samples. We go to a site, collect site variables, collect a volume of water. That water is filtered for the abundant organisms - rotifers. A second portion of water, much more, is filtered for the less abundant organisms - crustaceans. Ultimately, we get individuals/Liter for each family or species but the multiplier is different. These data tend to best fit a negative-binomial distribution.
|
Is it reasonable to apply different offsets to different species in community analysis?
|
CC BY-SA 4.0
| null |
2023-04-26T16:40:20.480
|
2023-04-26T16:40:20.480
| null | null |
370084
|
[
"multivariate-analysis",
"negative-binomial-distribution",
"offset"
] |
614216
|
2
| null |
405727
|
2
| null |
Your analysis might be okay for a randomized trial with one observation period. What is problematic with observational data includes:
- You do not model the "treatment" (scholarship) assignment process. If some scholarships are given to, say, underprivileged students who may be more likely to drop-out in the first place, while another scholarship is given to e.g. athletes or legacy admissions or highly performing students that are more likely to stay in the course in the first place, then you end up with confounding. You can't tell the difference between whether you are comparing the students as they were (i.e. the same would have happened even if they had not got the scholarship), the causal effect of the scholarship, or (most likely) a mixture of the two (but likely with a strong effect of the pre-existing student characteristics).
To account for this, you'd need to model the assignment process, which likely requires quite an in-depth level of information (e.g. details on socio-economic background, financial situation, education of parents/other key family members, past academic track-record etc.). Without those things, it is futile to hope to account for the possible sources of confounding and you're likely just wasting your time.
If you have those things, you can try various causal inference methods (e.g. adjustment, doubly-robust methods, propensity score methods etc.). One good book on the topic would be this one, a pre-print is freely accessible at the link location on the author's webpage.
- This only gets worse if you treat time periods the way you do. E.g. if a student has a scholarship for a period, then loses their scholarship, then has a period without it and then drops out, causally attributing this to not having a scholarship is really problematic. The reason is that perhaps whatever explains them dropping out is what made them lose the scholarship (e.g. dropping test performance), or it is a direct effect of the loss of the scholarship (or some mixture of these things).
- I'm not sure whether you have students without any scholarships at any time in your data, but without those it's also a lot harder to say things about the causal effects of scholarships (you can at best compare types of scholarships, and even that gets a lot harder). It may be cleaner to compare those having a scholarship at a fixed point in time (even if they may lose it in the future) going forward with time-to-event methods.
| null |
CC BY-SA 4.0
| null |
2023-04-26T16:48:28.650
|
2023-04-26T16:48:28.650
| null | null |
86652
| null |
614217
|
1
| null | null |
0
|
9
|
I have run a pooled logit regression (using R) with CEO turnover as the dependent variable. I have several binary and continuous independent variables, and an interaction term between a binary and a continuous variable.
I am confused about interpreting the results, since the interaction term does not have a simple marginal effect, and it is said that interpreting raw logit coefficients does not make much sense.
What do you recommend for interpreting the results, especially the interaction term?
|
Interpretation of results - logit model with interaction term
|
CC BY-SA 4.0
| null |
2023-04-26T16:50:01.273
|
2023-04-26T16:52:35.647
|
2023-04-26T16:52:35.647
|
247274
|
386623
|
[
"regression",
"logistic",
"interaction",
"regression-coefficients"
] |
614218
|
2
| null |
614136
|
1
| null |
One possible approach that I've been investigating is the use of the second derivative (double differencing) with averaging and/or summing the values (more on that in a bit).
Here's an example in Python:
```
import numpy as np
d = np.array([0,7.5,9,9,9,8,8,8,8,8,8,8]) #flattens curve
avg = np.average(np.diff(np.diff(d))) # = -0.75
sum = np.sum(np.diff(np.diff(d))) # = -7.5
```
The average provides a better sense of the "trend" of the second derivative, IMHO. The summation seems better at "locating" the approximate local maximum/minimum "point".
Here's how to interpret the values:
```
avg = 0 -> flat trend
avg < 0 -> decreasing trend
avg > 0 -> increasing trend
```
We could make this a little more tolerant by treating anything between $\pm 1$ as flat, if necessary. For all the different time series patterns (more than about 100 in my data set) this seems to provide a decent solution that addresses the need at the right level of simplicity.
For now, this is what I will be going with unless someone provides a better answer. I may be inclined to include both the "overall average" of the entire time-series independent variable and the average of the second derivative. I will surely do a correlation test to ascertain whether I should only keep one of them, but I have a feeling that both could be valuable (though I'm not sure if this is commonly practiced).
| null |
CC BY-SA 4.0
| null |
2023-04-26T16:51:32.910
|
2023-04-26T16:57:21.167
|
2023-04-26T16:57:21.167
|
4426
|
4426
| null |
614219
|
2
| null |
353565
|
0
| null |
- The HAC standard errors (e.g. Newey-West) are used to account for autocorrelation in the forecast errors/losses. They are present by design for multiple-step forecasts generated from consecutive rolling windows, as the forecast errors stem from overlapping time intervals. The forecast::dm.test function has this functionality, too.
- A one-sided test can be used if a priori there is no way that the expected value of the forecast loss differential is positive (negative). Usually, this is not realistic, as fancy forecasts do not always beat simple ones. (Also, if we look at the other side, who could say for sure that a simple forecast will definitely beat a fancy one?)
| null |
CC BY-SA 4.0
| null |
2023-04-26T16:52:19.403
|
2023-04-26T16:52:19.403
| null | null |
53690
| null |
614220
|
2
| null |
614157
|
0
| null |
An initial problem is that your overall model has questionable "statistical significance." Even the generally most reliable overall test, the likelihood-ratio test, only has p = 0.05. We'll put that aside for now, however, to address some other parts of your interpretation that hold for Cox models with interaction terms.
First, when you have an interaction in any regression model, the "main effect" coefficients are only for a highly restricted situation. What you have for the "main effects" of the various levels of the categorical predictors are their associations with outcome (relative to their own reference level) when the interacting predictor is also at its reference level.
Second, the interaction coefficients in any regression model are for additional associations with outcome beyond what you would predict based on the "main effect" coefficients. That seems to be the way that you are interpreting them. I'm a bit worried about the very high values of those coefficients and the extremely wide confidence intervals, however.
Third, it's risky to use terminology like "0.59 times as likely" when dealing with hazard ratios. The hazard at a specific time is the instantaneous risk of an event given that you have already survived that long. That's not the same as the overall "likeliness" of an event; in a standard survival model all individuals are equally likely (and assumed certain) to have an event eventually. The word "hazard" has a well-defined meaning in survival analysis. Stick to it lest you make statements that might not be true.
Fourth, it's not clear where you are getting information on time-to-event from this summary. Under the proportional-hazards assumption, someone with a higher hazard than another would be expected to have a shorter predicted time-to-event, although a test on mean or median survival times might not find that difference to be "statistically significant."
Fifth, this type of summary of any regression model isn't usually adequate to evaluate what's going on with a model containing interaction terms. The coefficient estimate(s) and the apparent "significance" of the "main effect" term for a predictor depend on how that predictor and its interacting predictors are coded.
You need an overall estimate of the significance of each categorical predictor that includes all its levels and its interactions. That can be provided by the `Anova()` function in the R [car package](https://cran.r-project.org/package=car) or the `anova()` function in the [rms package](https://cran.r-project.org/package=rms) when applied to objects generated by `rms` (as with its `cph()` function for Cox models).*
The `rms` package allows for predictions like those you ask for at the end of your question if you use its `cph()` function for a Cox model. The [emmeans package](https://cran.r-project.org/package=emmeans) can provide predictions based on a wide variety of models generated by R packages.
---
*Be very careful in your choice of `anova()` function, as the default in the R `stats` package can produce misleading results with unbalanced data. Make sure that you know which `anova()` or `Anova()` function you are invoking on your model result.
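For example, a sketch of such an overall test on a hypothetical fit (`car::Anova` also has a method for `coxph` objects; `rms::cph` plus `anova` is the `rms` route):
```
library(survival)
library(car)

# Hypothetical model: categorical predictor, continuous covariate, and their interaction
fit <- coxph(Surv(time, status) ~ group * x, data = mydata)

# Tests each term as a whole (all levels of group, x, and the interaction),
# rather than individual coefficients
Anova(fit)
```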
| null |
CC BY-SA 4.0
| null |
2023-04-26T17:03:26.887
|
2023-04-26T17:03:26.887
| null | null |
28500
| null |
614221
|
1
| null | null |
1
|
27
|
I have four different studies, each with the same two continuous predictor variables, $x$ and $z$, and a continuous outcome variable, $y$.
In these studies, I use hierarchical linear regression to analyze the data, and my primary interest is the two-way interaction of $x\times z$ on $y$.
I would like to do an internal meta-analysis of these four studies. Is there a way to get a meta-analytic estimate of the interaction effect ($x\times z$ on $y$)?
|
Meta-analysis of two-way interaction effect
|
CC BY-SA 4.0
| null |
2023-04-26T16:58:12.647
|
2023-05-17T14:41:13.537
|
2023-05-17T14:41:13.537
|
44269
|
386641
|
[
"regression",
"interaction",
"meta-analysis"
] |
614222
|
2
| null |
363052
|
3
| null |
I wanted to comment but I don't have enough reputation, so I hope this will be allowed to stay as an "answer". There's an error in Ben's derivation: when switching variables from $x$ to $y=x/\beta$, the limit of integration $T$ needs to be changed to $T/\beta$ (so the scale is taken out). Thus the result (also using Sextus Empiricus's simplification) is
$$\mathbb{E}[X\mid X<T]=\beta\cdot\frac{\gamma(\alpha+1,T/\beta)}{\gamma(\alpha,T/\beta)}=\beta\left(\alpha-\frac{(T/\beta)^\alpha e^{-T/\beta}}{\gamma(\alpha,T/\beta)}\right).$$
That is, truncation at $T$ reduces the mean by $\displaystyle\beta\frac{(T/\beta)^\alpha e^{-T/\beta}}{\gamma(\alpha,T/\beta)}$.
Ben, your answer is accepted, so if you see this, could you edit your answer?
EDIT AFTER A FEW DAYS: I just realized that we can also write the truncated mean as
$$\mathbb E[X\mid X<T]=\alpha\beta\left[1-\beta\cdot\frac{\text{pdf}(T;\alpha+1,\beta)}{\text{cdf}(T;\alpha,\beta)}\right].$$
This form may be easier for computation using pdf/cdf functions (I am thinking R).
It's also intuitive: if we take $T$ to $\infty$, the second term in brackets vanishes, and we recover the Gamma mean $\alpha\beta$.
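For instance, a small R sketch of that computation, checked against simulation (the shape/scale values are arbitrary):
```
# E[X | X < Tcut] for X ~ Gamma(shape = alpha, scale = beta), using the pdf/cdf form above
trunc_gamma_mean <- function(Tcut, alpha, beta) {
  alpha * beta * (1 - beta * dgamma(Tcut, shape = alpha + 1, scale = beta) /
                             pgamma(Tcut, shape = alpha, scale = beta))
}

set.seed(1)
alpha <- 2; beta <- 3; Tcut <- 4
x <- rgamma(1e6, shape = alpha, scale = beta)
c(formula = trunc_gamma_mean(Tcut, alpha, beta), simulated = mean(x[x < Tcut]))
```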
| null |
CC BY-SA 4.0
| null |
2023-04-26T17:09:19.573
|
2023-04-30T14:57:26.210
|
2023-04-30T14:57:26.210
|
203700
|
203700
| null |
614223
|
1
| null | null |
0
|
5
|
I am doing sarcasm detection using code-mixed data. The sentences are in English and Hindi, written in the Latin alphabet. Is there a way to apply Facebook MUSE to generate word embeddings from this data?
|
How can I produce word embeddings from code-mixed data using Facebook MUSE?
|
CC BY-SA 4.0
| null |
2023-04-26T17:17:39.447
|
2023-04-26T17:17:39.447
| null | null |
222213
|
[
"natural-language",
"word-embeddings",
"fasttext"
] |
614224
|
2
| null |
614153
|
0
| null |
A better approach for a curvilinear association between outcome and a continuous predictor is a regression [spline](https://stats.stackexchange.com/tags/splines/info) or a different type of [generalized additive model](https://stats.stackexchange.com/tags/generalized-additive-model/info). Chapter 2 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/genreg.html) discusses regression splines.
Whether you keep to the quadratic fit or use a spline, the fundamental regression model will be the same whether or not you center the continuous predictors. Centering can sometimes prevent numerical fitting difficulties, and the coefficient estimates that you see might differ depending on centering. But predictions from the model will be the same regardless, and statistical tests that evaluate together all the coefficients associated with a predictor will also be the same.
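As a small illustration of the spline alternative (a sketch with simulated data, using a natural spline from the base `splines` package; `rms::rcs()` inside `ols()`/`lrm()` would be the approach from the book chapter mentioned above):
```
library(splines)

# Simulated data with a curvilinear relationship
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x / 2) + rnorm(200, sd = 0.3)

fit_quad   <- lm(y ~ poly(x, 2))       # quadratic fit
fit_spline <- lm(y ~ ns(x, df = 4))    # natural cubic spline with 4 df

AIC(fit_quad, fit_spline)              # compare fits; the spline captures the non-monotone shape here
```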
| null |
CC BY-SA 4.0
| null |
2023-04-26T17:18:22.770
|
2023-04-26T17:18:22.770
| null | null |
28500
| null |
614225
|
1
| null | null |
0
|
14
|
Say I have several regions' melon produce data. I wish to measure the variability of melon weight in each area, and summarize these as an overall variability metric accounting for the different sizes of each area. In equation, it will be similar to [weighted arithmetic mean](https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Mathematical_definition):
$$
\frac{\sum_i Variability(i) Area_i}{\sum_i Area_i}
$$
My question is: is it more suitable to use the standard deviation or the variance to measure variability in this equation, or does it not matter? I feel like the standard deviation may have some bias issues, but I am not able to write them out explicitly.
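For concreteness, the two candidate aggregations with made-up numbers (which of them is preferable is exactly the question):
```
# Hypothetical per-region summaries
area  <- c(10, 25, 5)          # region sizes (weights)
sd_w  <- c(0.8, 1.2, 0.5)      # standard deviation of melon weight in each region
var_w <- sd_w^2

weighted.mean(var_w, area)         # weighted average of variances
weighted.mean(sd_w, area)          # weighted average of standard deviations
sqrt(weighted.mean(var_w, area))   # note: not the same as the previous line
```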
|
Aggregating variance or standard deviation in the same way as a weighted mean
|
CC BY-SA 4.0
| null |
2023-04-26T17:22:15.337
|
2023-04-27T03:44:48.360
|
2023-04-27T03:44:48.360
|
369002
|
153648
|
[
"variance",
"descriptive-statistics",
"weighted-mean"
] |
614226
|
2
| null |
290725
|
0
| null |
>
I don't know how to interpret this statistic.
The test statistic is a summary characteristic of your dataset. It can be treated as a realization from a random variable with a known distribution under a point-valued null hypothesis $H_0$ or a known family of distributions under a set-valued $H_0$. You can evaluate how atypical this realization looks from the perspective of the null distribution(s). Taking the distributions under the alternative hypothesis $H_1$ into consideration, this yields a $p$-value that you can compare to a predetermined significance level $\alpha$ to help you decide between keeping (i.e. not rejecting) $H_0$ or rejecting it in favour of $H_1$.
Alternatively, you can refer to the critical values that again are determined by the distribution(s) under $H_0$ in light of $H_1$. Compare your test statistic to them and see if the statistic is in a critical region (suggesting to reject $H_0$ in favour of $H_1$) or not (suggesting not to reject $H_0$).
| null |
CC BY-SA 4.0
| null |
2023-04-26T17:24:08.503
|
2023-04-26T17:24:08.503
| null | null |
53690
| null |
614227
|
1
| null | null |
1
|
23
|
Suppose I want to get training data for a model that deals with sentiment analysis for text that indicates an affirmative (yes) or negative (no) response, such as
`"Alright"` $\to$ "YES"
`"no sorry I can't"` $\to$ "NO".
And suppose my way of getting training data is by querying from an SQL database. Suppose I expect there to be a few "difficult" examples that would serve as edge cases for the model, such as something like `"Kinda"`, but suppose that the sampling rate for responses of this nature is very low; since my training data is queried, there may be few or no examples of this nature.
My questions are:
- Should I upsample this kind of edge case so that my model has been exposed to it to handle this response better? Wouldn't this add bias? I suppose upsampling for a specific response would literally be just repeating it over and over again.
- Do I even need to worry about the model not being exposed to this and other "confusing" responses in the training data?
|
Manually adding edge-cases to a text classification model
|
CC BY-SA 4.0
| null |
2023-04-26T17:26:17.350
|
2023-04-28T12:10:10.047
|
2023-04-26T17:28:06.937
|
386642
|
386642
|
[
"machine-learning",
"classification",
"bias",
"data-augmentation"
] |
614228
|
2
| null |
596782
|
0
| null |
You cannot obtain a meaningful estimate of the standard deviation for a particular day if you only have one observation for that day. You do not observe any variation at all, so there is no way to tell what the standard deviation is. A GARCH model employs some restrictive assumptions (essentially, that is what the model is made of) to be able to estimate conditional standard deviations for each day. Ideally, you would like to compare that to standard deviations obtained from a model that does not use restrictive assumptions. But since you only have one observation a day, that is impossible.
You can still evaluate the statistical adequacy of your GARCH model by looking at the standardized residuals. They should have zero autocorrelation at all nonzero lags (Ljung-Box test may be used for that, though [it becomes problematic](https://stats.stackexchange.com/questions/148004) if the conditional mean model is ARMA). The same applies to squared standardized residuals (Li-Mak test can be used for that; ARCH-LM is not suitable, however). Also, the probability integral transform (PIT) of the standardized residuals should be Uniform[0,1] (Kolmogorov-Smirnov test can be used for that). A statistically adequate GARCH model must pass all three tests. You could come up with additional tests, but these three are the standard ones.
For R code, see the [vignette](https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf) and the [reference manual](https://cran.r-project.org/web/packages/rugarch/rugarch.pdf) of the `rugarch` package. [Here](http://www.unstarched.net/r-examples/rugarch/a-short-introduction-to-the-rugarch-package/) are some examples by the author of the package.
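As a rough base-R sketch of these checks, assuming `z` holds the standardized residuals extracted from your fitted model and that the fitted conditional distribution is standard normal (so the PIT is simply `pnorm(z)`); note that the plain Ljung-Box test on the squared residuals below is only a crude stand-in for the Li-Mak test:
```
z <- rnorm(500)  # placeholder; in practice, use the standardized residuals of your GARCH fit

Box.test(z,   lag = 10, type = "Ljung-Box")   # autocorrelation of standardized residuals
Box.test(z^2, lag = 10, type = "Ljung-Box")   # rough stand-in for the Li-Mak test
ks.test(pnorm(z), "punif")                    # PIT should be Uniform[0,1]
```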
| null |
CC BY-SA 4.0
| null |
2023-04-26T18:00:35.210
|
2023-04-26T18:00:35.210
| null | null |
53690
| null |
614229
|
1
| null | null |
4
|
39
|
I have two equations of the form:
$y = X\beta + Z\gamma + r\alpha +\epsilon_1$ (1)
$r = Z\delta +\epsilon_2$ (2)
The basic intuition here is $y$ is total cost data, $X$ is a matrix of variables relating to technologies, and $Z$ is a matrix of geographical attributes that can impact total cost. $r$ is an indicator that is determined in part by $Z$.
Right now, I'm using 2SLS to run this model. The intuition behind the approach seems fine to me. $r$ is only determined by geographical attributes in the regression so it should be okay to run $(2)$, determine $\hat r$, and plug that into $(1)$. However, I don't think I've encountered a regression like this before and am unsure if there would be a problem in using $Z$ as both an instrument in the first stage and an explanatory variable in the second stage of a 2SLS regression.
|
Can a variable be an instrument in the first stage and an exogenous covariate in the second stage of an 2SLS regression?
|
CC BY-SA 4.0
| null |
2023-04-26T18:08:12.097
|
2023-04-26T20:03:40.027
|
2023-04-26T20:03:40.027
|
386643
|
386643
|
[
"regression",
"instrumental-variables",
"2sls",
"simultaneous-equation"
] |
614230
|
2
| null |
614040
|
0
| null |
There's no reason you would not be able to apply a neural network (in Keras or any other software) using binary inputs.
Indeed, the XOR problem is a canonical example of how a neural network is useful, even when the inputs are binary. The XOR task is to have a classifier separate 4 binary datapoints into 2 categories:
|$X_1$ |$X_2$ |$Y$ |
|-----|-----|---|
|1 |1 |1 |
|1 |-1 |-1 |
|-1 |1 |-1 |
|-1 |-1 |1 |
Drawing a single line in $X_1, X_2$ space won't separate the 2 classes of $Y$. However, using interactions among $X_1, X_2$ will. This is achievable with a neural network with a hidden layer.
In other words, interactions among input features, even binary input features, can be essential. Neural networks are one way to achieve that.
You mention logistic regression in your question. XOR can also be solved using logistic regression, if you apply [feature-engineering](/questions/tagged/feature-engineering) and construct interactions manually. For the XOR problem, it is clear that $X_1 X_2 = Y$; logistic regression on that engineered feature will succeed. (Though it will exhibit perfect [separation](/questions/tagged/separation) phenomena, unless you take steps to prevent it.)
And obviously a decision tree-based model would have no difficulties with this toy problem. A single tree with two splits would fit this data perfectly.
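To make the logistic-regression point concrete, here is a small R sketch on the four XOR points above (with $Y=-1$ recoded as 0 for `glm`); the warnings about fitted probabilities of 0 or 1 for the interaction model are exactly the separation phenomenon mentioned above:
```
xor_df <- data.frame(x1 = c(1, 1, -1, -1),
                     x2 = c(1, -1, 1, -1),
                     y  = c(1, 0, 0, 1))

fit_main <- glm(y ~ x1 + x2, family = binomial, data = xor_df)  # cannot separate the classes
fit_int  <- glm(y ~ x1 * x2, family = binomial, data = xor_df)  # separates (with warnings)

round(fitted(fit_main), 2)  # all stuck at 0.5
round(fitted(fit_int), 2)   # essentially 0 or 1
```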
| null |
CC BY-SA 4.0
| null |
2023-04-26T18:11:03.917
|
2023-04-26T18:27:13.803
|
2023-04-26T18:27:13.803
|
22311
|
22311
| null |
614231
|
1
| null | null |
3
|
455
|
Let's say I am classifying whether a user will like an Airbnb listing or not (binary classification model).
I have 2 features:
- Number of house rules the Airbnb host has. Since this is a count, this is a discrete (categorical) variable. We will store this as an int because it is ordered (10 house rules is greater than 9 house rules).
- Price of the Airbnb listing, rounded to the nearest whole dollar. This is clearly a continuous variable. It is also stored as an int.
If I'm not missing anything, do I need to specify anything myself to the machine learning model (let's say logistic regression since XGBoost classifier probably auto handles this) to distinguish between the discrete and continuous int? Or is there nothing to worry about here and it will automatically be handled for me?
|
How does a machine learning model distinguish between ordered discrete int and continuous int?
|
CC BY-SA 4.0
| null |
2023-04-26T18:18:12.057
|
2023-04-26T19:01:10.213
| null | null |
361781
|
[
"logistic",
"classification",
"categorical-data",
"continuous-data",
"discrete-data"
] |
614232
|
2
| null |
614231
|
10
| null |
Discrete $\ne$ Categorical
The number of house rules is a numerical variable. You might find value in having individual indicator variables for each house rule, each of which would be categorical, but the count of house rules is every bit as numerical as the price.
Your software does not need to distinguish between these and should run fine.
Also, I contest that your price variable is continuous when it is rounded to the nearest full dollar. Both variables seem to be counts (number of house rules or number of dollars paid).
| null |
CC BY-SA 4.0
| null |
2023-04-26T18:22:04.533
|
2023-04-26T19:01:10.213
|
2023-04-26T19:01:10.213
|
247274
|
247274
| null |
614234
|
2
| null |
614229
|
3
| null |
Once you model $r$ using $(2)$ and substitute that into $(1)$ to obtain $(1')$, you will get perfect multicollinearity in $(1')$. You will not be able to estimate each of $\gamma$ and $\alpha$, only the linear combination $\gamma+\hat\delta\alpha$.
$$
y=X\beta+Z\gamma+Z\hat\delta\alpha+u \tag{1'} \\
=X\beta+Z(\gamma+\hat\delta\alpha)+u
$$
(I assume you meant $r=Z\color{red}{\delta}+\epsilon_2$ in $(2)$, as otherwise there is a restriction for a pair of coefficients to be equal across $(1)$ and $(2)$.)
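To see the rank deficiency concretely, here is a small simulation sketch with made-up data and coefficients: the fitted $\hat r$ from the first stage is an exact linear function of $Z$, so `lm` returns `NA` for one of the collinear coefficients in the second stage:
```
set.seed(1)
n <- 200
X <- rnorm(n); Z <- rnorm(n)
r <- 0.8 * Z + rnorm(n)                         # equation (2), with arbitrary delta
y <- 1 + 2 * X + 1.5 * Z + 0.5 * r + rnorm(n)   # equation (1), with arbitrary coefficients

r_hat <- fitted(lm(r ~ Z))      # first stage
coef(lm(y ~ X + Z + r_hat))     # second stage: one coefficient comes back NA (aliased)
```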
| null |
CC BY-SA 4.0
| null |
2023-04-26T18:32:39.553
|
2023-04-26T18:32:39.553
| null | null |
53690
| null |
614235
|
1
| null | null |
3
|
37
|
I am trying to read the arXiv paper [Distributed Adaptive Sampling for Kernel Matrix Approximation, Calandriello et al. 2017](https://arxiv.org/pdf/1803.10172.pdf). I found a code implementation that computes the ridge leverage scores by sequential sampling following this paper, $$\widetilde{\tau}_{t,i}=\frac{1-\epsilon}{\gamma}\left( k_{i,i}-\boldsymbol{k}_{t,i}^\top \overline{\boldsymbol{S}}(\overline{\boldsymbol{S}}\boldsymbol{K}_t\overline{\boldsymbol{S}}+\gamma\boldsymbol{I}_t)^{-1}\overline{\boldsymbol{S}}^\top\boldsymbol{k}_{t,i} \right)\tag1$$
Here they define a column dictionary as a collection $I_t=\{(i,w_i)\}_{i=1}^t$, where the first term denotes the index of the column and $w_i$ its weight, which is set to zero for all columns that are not retained, and $\boldsymbol{S}_t$ is the weighted selection matrix. For details, see Section 3, Sequential RLS Sampling, where they redefine the dictionary:
>
a collection $I=\{i,\widetilde{p}_i,q_i\}$, where $i$ is the index of the column (consider as point $\boldsymbol{x}_i$), $\widetilde{p}_i$ tracks the probability used to sample it, and $q_i$ is the number of copies (multiplicity) of $i$.
Question: I didn't understand what they mean by the parameter $q_i$?
The code implementation to compute $\widetilde{\tau}_{t,i}$,
```
def compute_tau(centers_dict: CentersDictionary,
X: np.ndarray,
similarity_func: callable,
lam_new: float,
force_cpu=False):
"""Given a previosuly computed (eps, lambda)-accurate dictionary, it computes estimates
of all RLS using the estimator from Calandriello et al. 2017"""
xp = __load_gpu_module(force_cpu)
diag_norm = np.asarray(similarity_func.diag(X))
# (m x n) kernel matrix between samples in dictionary and dataset X
K_DU = xp.asarray(similarity_func(centers_dict.X, X))
# the estimator proposed in Calandriello et al. 2017 is
# diag(XX' - XX'S(SX'XS + lam*I)^(-1)SXX')/lam
# here for efficiency we collect an S inside the inverse and compute
# diag(XX' - XX'(X'X + lam*S^(-2))^(-1)XX')/lam
# note that in the second term we take care of dropping the rows/columns of X associated
# with 0 entries in S
U_DD, S_DD, _ = np.linalg.svd(xp.asnumpy(similarity_func(centers_dict.X, centers_dict.X)
+ lam_new * np.diag(centers_dict.probs)))
U_DD, S_root_inv_DD = __stable_invert_root(U_DD, S_DD)
E = xp.asarray(S_root_inv_DD * U_DD.T)
# compute (X'X + lam*S^(-2))^(-1/2)XX'
X_precond = E.dot(K_DU)
# the diagonal entries of XX'(X'X + lam*S^(-2))^(-1)XX' are just the squared
# ell-2 norm of the columns of (X'X + lam*S^(-2))^(-1/2)XX'
tau = (diag_norm - xp.asnumpy(xp.square(X_precond, out=X_precond).sum(axis=0))) / lam_new
return tau
```
I failed to understand the link between the actual paper formula $(1)$ (which is mention inside [compute_tau](https://github.com/LCSL/bless/blob/1d385e761ecf39c96f1eb4f470816a4cd732291f/bless.py#L126) as `diag(XX' - XX'S(SX'XS + lam*I)^(-1)SXX')/lam`) and the code implementation of `compute_tau`.
Question: "what is `X_precond` here and what svd decomposition doing here?"
I created a [GitHub issue](https://github.com/LCSL/bless/issues/1) and also emailed the author, but had no luck. It would be greatly appreciated if anyone could assist me in resolving this matter.
|
Understanding the ridge leverage scores sampling from an arXiv paper
|
CC BY-SA 4.0
| null |
2023-04-26T18:47:56.093
|
2023-04-26T18:47:56.093
| null | null |
229442
|
[
"python",
"sampling",
"kernel-trick",
"approximation",
"leverage"
] |
614236
|
1
| null | null |
0
|
10
|
This is the formalization for a continuous regression model as I understand it:
Assume that your outcome $Y$ is normally distributed. Assume that your predictors, $X_1, X_2, \cdots X_n$ are normally distributed and uncorrelated.
We then model
$$
Y \sim a_1 X_1 + a_2 X_2 + \cdots a_n X_n + \epsilon
$$
with $\epsilon \sim N(0, \sigma_\epsilon)$. Since the sum of normal random variables is normal, $Y$ will be normal.
However, most regressions also include categorical variables. These are encoded as zero one vectors. In the simple case of a binary categorical that takes on one of two values, I can model it as a Bernoulli random variable, $B \sim \text{Ber}(p)$.
But when I write
$$
Y \sim aX + bB + \epsilon
$$
then $Y$ is no longer normally distributed. In fact according to [this question](https://stats.stackexchange.com/questions/144168/how-to-compute-the-pdf-of-a-sum-of-bernoulli-and-normal-variables-analytically), it appears that the $Y$ is a mixture of Gaussians.
I am confused because I thought that one of the assumptions of regression is that all of your observations (independent and dependent) are normally distributed. Yet, the above formulation suggests that $Y$ does not need to be normal.
I have two questions. First, does this result mean that a regression with categorical variables is a different way of fitting a mixture of gaussians as suggested in [this question](https://stats.stackexchange.com/questions/237486/can-a-gaussian-mixture-model-be-specified-using-a-regression-equation)?
Second, how could I include non-binary categorical variables in the above formalization? What distribution would I use?
|
What assumptions are made about categorical variables in regression models?
|
CC BY-SA 4.0
| null |
2023-04-26T19:38:59.567
|
2023-04-26T19:38:59.567
| null | null |
386647
|
[
"regression",
"multiple-regression",
"random-variable",
"gaussian-mixture-distribution",
"mixture-distribution"
] |
614237
|
1
| null | null |
1
|
28
|
I have been trying to run a zero-inflated poisson model as follows:
```
Model2_2_4 <- zeroinfl(Conflict.intensity ~ Distance.from.fence + Distance.from.water + Distance.from.roads + Elevation, data = data2_2)
```
Here, `Conflict.intensity` is a continuous response variable.
[](https://i.stack.imgur.com/H5DNi.png)
Can someone help me understand why I am getting NaN values and possible ways of troubleshooting, thank you :)
|
Zero-inflated poisson model producing NaN results
|
CC BY-SA 4.0
| null |
2023-04-26T19:53:50.770
|
2023-04-26T22:33:06.803
|
2023-04-26T22:33:06.803
|
284660
|
386652
|
[
"poisson-distribution",
"zero-inflation"
] |
614238
|
1
| null | null |
1
|
13
|
For a university course I'm doing research on the effect of veratridine in earthworm axons. We have tested 6 axons (fewer than expected due to time limitations) in three conditions: control (no drug added), veratridine (10 microM), and washout. Within these conditions we stimulate the axons with five different voltage stimulus intervals, and we measure the action potential success rate. So our dependent variable is the action potential success rate, and our independent variables are condition and stimulus frequency. We chose to use a two-way repeated measures ANOVA for statistical analysis.
There's one problem. One of the axons was not functional anymore by the time we tested it in the third condition (washout) so there are no measurements of this axon in this condition. Thus, the washout condition has a smaller sample size than the other conditions (25 (5 worms x 5 frequencies) compared to 30 (6x5)). Is this a problem for the two-way repeated measures ANOVA? Or can we just run the test? (in RStudio).
|
Unequal sample sizes in two-way repeated measures ANOVA?
|
CC BY-SA 4.0
| null |
2023-04-26T19:59:29.483
|
2023-04-26T19:59:29.483
| null | null |
386654
|
[
"r",
"mathematical-statistics",
"anova",
"repeated-measures",
"sample-size"
] |
614239
|
1
|
614241
| null |
3
|
66
|
I was reading the [helpfile](https://rdrr.io/cran/MASS/man/dropterm.html) for the `dropterm` function in the R MASS package, which is one of the main building blocks of the `stepAIC` function (at least for backwards stepwise regression.) The aim of this function is to identify a variable to remove from a model by sequentially considering models that drop one variable.
In the helpfile, it states:
>
Try fitting all models that differ from the current model by dropping a single term, maintaining marginality.
What does maintaining marginality mean in this context? Is it a special treatment of interactions specified in the model formula, e.g. by dropping interactions first before dropping the interacting variables?
Related:
[How does step function selects best linear Models which includes polynomial effects and interaction effects in R?](https://stats.stackexchange.com/questions/135779/how-does-step-function-selects-best-linear-models-which-includes-polynomial-effe/135783#135783)
|
What does maintaining marginality mean in stepwise regression?
|
CC BY-SA 4.0
| null |
2023-04-26T20:12:15.733
|
2023-05-01T12:57:12.647
|
2023-04-26T20:34:04.947
|
22199
|
22199
|
[
"r",
"model-selection",
"stepwise-regression"
] |
614240
|
1
| null | null |
1
|
31
|
I'm a student trying to fit a count-data GLMM with adjustment for zero inflation using the glmmTMB function.
Model family is nbinom1 family (better AICc compared to nbinom2 and poisson).
My data contains about 20 % zeros and I consider the zeros true zeros due to low abundance of individuals.
The ratio of predicted to observed zeros stays at 0.25, no matter whether I add a ziformula or which ziformula I use. I add formulas to the model by specifying `ziformula = ~a`, with a being the different fixed effects of the model. AICc is better (lower) for the model without a ziformula.
How should I deal with the situation? Can I go on without adding a ziformula?
Thank you very much!
|
Zero inflation formulas do not improve zero inflated model (GLMM)
|
CC BY-SA 4.0
| null |
2023-04-26T20:17:56.223
|
2023-04-26T21:32:12.387
|
2023-04-26T20:35:31.490
|
386651
|
386651
|
[
"mixed-model",
"glmm",
"count-data",
"zero-inflation",
"glmmtmb"
] |
614241
|
2
| null |
614239
|
6
| null |
It means that, if you have an interaction, all the lower terms in that interaction are also included in the model. If your model includes "foo:bar", then your model will also include "foo" and "bar" as independent effects.
As mentioned below in comments, this means that any interaction would be dropped before any of its components is dropped.
| null |
CC BY-SA 4.0
| null |
2023-04-26T20:25:34.083
|
2023-05-01T12:57:12.647
|
2023-05-01T12:57:12.647
|
28141
|
28141
| null |
614242
|
1
| null | null |
1
|
49
|
I know that Sa divided by Sq (arithmetic mean / standard deviation) for a standard normal distribution is equal to $2/(2\pi)^{0.5}$, or 0.7978, but I need the mathematical derivation. Can anyone help me with the proof?
Thank you.
|
relationship between arithmetic mean and standard deviation in a normal distribution
|
CC BY-SA 4.0
|
0
|
2023-04-26T20:29:37.347
|
2023-04-27T12:00:51.900
|
2023-04-27T12:00:51.900
|
386655
|
386655
|
[
"normal-distribution",
"variance",
"mean",
"standard-deviation",
"arithmetic"
] |
614243
|
1
| null | null |
0
|
13
|
I have a treatment dummy $t_i$, and 2 groups ($g1$, $g2$) in the treatment and 2 groups ($g3$, $g4$) in the control.
The significance of $t_i$ changes when I use different group dummies, for example, the following two regressions give different results on $t_i$.
$$
DV_i=\alpha+\beta_1t_i+\beta_2g1_i+\beta_3g3_i
$$
$$
DV_i=\alpha+\beta_1t_i+\beta_2g2_i+\beta_3g3_i
$$
My question is: what are the possible reasons that this could happen, and how should I deal with it?
Edit:
Could it be because, in the first regression, $t_i$ in effect is $g2_i$, and in the second regression $t_i$ is in effect $g1_i$, so the different significance of $t_i$ in the two regressions simply reflects the difference between $g2$ and $g4$ in regression 1 and the difference between $g1$ and $g4$ in regression 2?
|
Different fixed effect dummies changes significance of treatment effect
|
CC BY-SA 4.0
| null |
2023-04-26T20:39:24.050
|
2023-04-26T21:39:09.277
|
2023-04-26T21:39:09.277
|
370155
|
370155
|
[
"regression",
"fixed-effects-model",
"treatment-effect"
] |
614244
|
1
| null | null |
7
|
441
|
There are various ["universal approximation theorems"](https://en.wikipedia.org/wiki/Universal_approximation_theorem) for neural networks, perhaps the most famous of which is the 1989 variant by George Cybenko. Setting aside technical conditions, the universal approximation theorems say that any "decent" function can be approximated as close as is desired by a sufficiently large neural network.$^{\dagger}$
Similar results exist for other classes of functions. For instance, the [Stone-Weierstrass theorem](https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem) says that decent functions can be approximated by polynomials as well as is desired (again, setting aside the technical details of what constitutes a decent function). [Carleson's theorem](https://en.wikipedia.org/wiki/Carleson%27s_theorem) has this same flavor for approximation by Fourier series.
Does XGBoost also have a sense in which it can be a universal approximator?
$^{\dagger}$What constitutes a "decent" function is left vague as a technical detail of the specific assumptions of the various theorems.
|
XGBoost: universal approximator?
|
CC BY-SA 4.0
| null |
2023-04-26T20:59:36.073
|
2023-04-27T01:15:24.403
| null | null |
247274
|
[
"machine-learning",
"mathematical-statistics",
"boosting",
"supervised-learning",
"approximation"
] |
614245
|
2
| null |
614240
|
0
| null |
You are not assessing the presence of a "zero-inflation effect" properly. Calling zeroes true vs fake sort of makes sense when you want to interpret the conditional, non-zero-inflated part of a model - suppose for instance you're modeling monthly skiing-related accidents, where from May through October there's no snow at all on which to ski. But the point is: you can't say, just because there's 10% or 20% or even 50% zeroes, that a model is zero inflated.
For instance, a Poisson($\lambda=1$) random variable will have $\exp(-1) \approx 37\%$ zeroes. You can check if the mean $\hat{\lambda}$ from a Poisson or NB model is at or below the value required to produce a certain number of zeroes. For instance, with 20% zeroes, you'd expect an average $\lambda$ of about 1.6 in a Poisson model.
An analytic solution may not be possible with negative binomial so illustrating a numeric solution:
```
> uniroot(function(x) dpois(0, x) - 0.2, interval = c(0.1, 2))
$root
[1] 1.609438
```
I'm curious how you evaluated the "expected" number of zeroes with your model for the non-zero-inflated part. If the ratio is 25%, then the zero inflation only accounts for 5% of the zeroes in your sample, and the other 15% would be explained by very-low-intensity count processes that are likely to produce 0: the actual intensities would be non-zero (perhaps as low as 0.1), but not zero.
Anyway, a ratio of zeroes is a way of assessing calibration, but usually models evaluated internally show calibration. So, no matter what you use (zero inf or non) I would expect the model to show good calibration if you assess it correctly. To the broader question of: how do I detect zero-inflation? The suggestion to use an IC seems reasonable, and so you should just compare the fits with AIC.
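Regarding the model-implied number of zeroes: one way to compute it is sketched below for a plain Poisson GLM on made-up data; for a negative binomial fit the zero probability would come from `dnbinom` with the appropriate parameterization and estimated dispersion, or you could simply simulate from the fitted glmmTMB model instead.
```
set.seed(1)
x <- rnorm(300)
y <- rpois(300, exp(-0.5 + 0.4 * x))   # hypothetical counts, no zero inflation at all

fit <- glm(y ~ x, family = poisson)

# observed vs. model-implied proportion of zeroes
c(observed = mean(y == 0),
  expected = mean(dpois(0, fitted(fit))))
```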
| null |
CC BY-SA 4.0
| null |
2023-04-26T21:32:12.387
|
2023-04-26T21:32:12.387
| null | null |
8013
| null |
614246
|
2
| null |
577873
|
1
| null |
You can do an ANOVA by permutation (non-parametric) with the `aovp` function from the `lmPerm` package. I suggest you use `perm = "Exact"` to have a more robust test.
For a follow-up post hoc test you can also use `pairwise.perm.t.test` from the `RVaideMemoire` package (it allows you to apply a correction, and to define the number of permutations you wish).
This is what I have used so far and it worked quite well...
Cheers
| null |
CC BY-SA 4.0
| null |
2023-04-26T21:39:33.253
|
2023-04-26T21:39:33.253
| null | null |
386659
| null |
614247
|
1
| null | null |
0
|
5
|
Problem statement:
I am working on a project for a major game producer. The company is planning to sell the SKUs of one of their game titles (a title has multiple games) to the people who are purchasing their consoles. Let's say there are 3 games in that title - G1, G2, & G3. I have a set of accounts (players) who have engaged on their (gaming company's) consoles in the past. I want to identify the accounts that are highly likely to purchase this title's SKUs with their console purchase. Simply put, identify the specific accounts that are most likely to buy the title's SKUs along with their console purchase.
What do I have:
The spend on the games (of this title) by each account and the # distinct games they have played out of these three.
[](https://i.stack.imgur.com/H5qzu.png)
My Approach:
First, I want to take a normalized spend value for each account. That is, for account "A", the normalized value would be 40/2 = 20. Similarly, for others. Then, I want to take the average by taking the sum of the "GAMES PLAYED" column and dividing it by the sum of the "SPEND ON THE GAME" column. I'll then see if the normalized value for each account (from the above example, 20 for account "A") is greater than that average value in which case I'll assign "Y" corresponding to "A" to mark him a potential purchaser.
The Doubt:
This could be one way of solving this problem. Another way could be taking the average of the normalized spend value and then checking whether the normalized spend value for each account is greater than this avg. normalized spend value and mark those accounts "Y" or "N" accordingly.
Which method is more sound, and is there a problem with either of them? Also, what would taking the average of an average mean here (the average of the normalized spend values, which are essentially averages themselves)?
To add to it, there could be people who have neither purchased the games of this title nor spent on it and also people that have purchased the games but have not spent.
|
Which mean value to consider for checking over indexing accounts
|
CC BY-SA 4.0
| null |
2023-04-26T21:47:38.647
|
2023-04-26T21:47:38.647
| null | null |
332360
|
[
"mean",
"normalization"
] |
614248
|
1
| null | null |
2
|
58
|
I am trying to get hazard ratio predictions from a coxph model at different levels of a given predictor. I know this is possible in SAS using the HAZARDRATIO function, but I do not know how to do this in R. I am using a large biomedical dataset, and in order to properly assess the relationship between my predictor of interest (inflammation) and outcome (mortality), I need to adjust for categorical values such as education level, ethnicity, sex, location (there are a limited number of assessment centers), and smoking status.
I have fit my hazard model as follows:
`coxph(Surv(Follow_up, Mortality) ~ Leukocyte_Count + Platelet_Count + Age + Sex + Smoking_Status + Education + Ethnicity + Location + Inflammation_Score, dataframe, na.action=na.exclude)`
Essentially, I want to generate risk predictions for a few given "individuals" with different inflammation scores. To do this, I am using the predict.coxph function in the following way:
`predict(inflammation_model, newdata = mean_DF, se.fit = TRUE, type = "risk")`
Where "mean_DF" is:
```
mean_DF <- with(dataframe,
data.frame(Leukocyte_Count = rep(mean(Leukocyte_Count),5),
Platelet_Count = rep(mean(Platelet_Count),5),
Age = rep(mean(Age),5),
Sex = c("Male","Male","Male","Male","Male"),
Location = c("Boston","Boston","Boston","Boston","Boston"),
Smoking_Status = c("Never","Never","Never","Never","Never"),
Education = c("High School","High School","High School","High School","High School"),
Ethnicity = c("White","White","White","White","White"),
                Inflammation_Score = c(-10,-5,0,5,10)))
```
I am trying to ascertain the impact of only different inflammation scores on the outcome, so I have set all continuous covariates to the mean for each "individual" and all the factors to the same level. However, in the [documentation for this function](https://stat.ethz.ch/R-manual/R-devel/library/survival/html/predict.coxph.html) it states that the reference value for the predict function is the mean within each strata. Since I cannot (or do not know how to) calculate the mean for categorical variables, I am stuck using just the same "level" for each. I have three questions, although the third may be a different topic.
- How does predict.coxph calculate the "mean" for categorical variables to serve as the reference when predicting a hazard ratio?
- How can I set my categorical variables to this "mean" value? This is mostly for data visualization purposes - I do not want to make a misleading figure by selecting, e.g. "High School" as the level for education and having that be different from the mean value and therefore impact the predicted risk relative to the "mean"
- How can I plot a predicted survival curve based on the output of predict.coxph?
|
How do you select "means" for categorical values when using predict.coxph?
|
CC BY-SA 4.0
| null |
2023-04-26T22:11:24.163
|
2023-04-27T10:07:25.537
| null | null |
371493
|
[
"r",
"survival",
"cox-model"
] |
614249
|
1
| null | null |
1
|
36
|
I am wondering about the following: For a symmetric matrix $A \in \mathbb{R}^{n \times n}$ and vector $x \in [-1,1]^n$, if $X$ is a random vector in $\mathbb{R}^n$ such that w.h.p. $X_i \not\in [-1,1] \; (\forall i)$ does it then hold with high probability that $x^\top A^2 x = \|A x\|_2^2 \leq \|A X\|_2^2 = X^\top A^2 X$?
Intuitively, I am thinking that this should be true because w.h.p. $X$ has (pointwise) more "extreme" values than $x$.
|
Is there a high probability bound of quadratic forms?
|
CC BY-SA 4.0
| null |
2023-04-26T22:14:31.010
|
2023-04-26T22:14:31.010
| null | null |
382809
|
[
"probability",
"linear-algebra",
"probability-inequalities",
"quadratic-form",
"norm"
] |
614250
|
1
| null | null |
1
|
12
|
I have a series of independent measurements of a percentage of binary events.
(e.g. 60%, 65%, 55%, 59%, 60%).
How can I test whether the observed distribution is different from a theoretical value of 50%?
|
Test that observed percentages differ from theoretical distribution (e.g. 50%)
|
CC BY-SA 4.0
| null |
2023-04-27T00:18:11.987
|
2023-04-27T00:18:11.987
| null | null |
386663
|
[
"percentage"
] |
614251
|
2
| null |
614244
|
9
| null |
Yes, it makes sense to characterise GBMs as "universal function approximators" and in particular "greedy" ones, as put forward in Friedman's (2001) (uber-classic) [Greedy function approximation: A gradient boosting machine](https://projecteuclid.org/journals/annals-of-statistics/volume-29/issue-5/Greedy-function-approximation-A-gradient-boostingmachine/10.1214/aos/1013203451.full). The greediness here stems from how we gradually (stage-wise) increase the ensemble's capacity by adding units from the same family of known universal approximators (here trees). For that matter, let's remember that Boolean functions (i.e. trees) can be represented as real polynomials; a succinct (and surprisingly readable) intro on that can be found in Nisan & Szegedy (1992) [On the degree of boolean functions as real polynomials](https://link.springer.com/article/10.1007/BF01263419). I have found Section [12.5 Universal approximation](https://kenndanielso.github.io/mlrefined/blog_posts/12_Nonlinear_intro/12_5_Universal_approximation.html) from the online blog version of the book [Machine Learning Refined](https://www.cambridge.org/highereducation/books/machine-learning-refined/0A64B2370C2F7CE3ACF535835E9D7955#overview) by Watt et al. to be a nice overview of how universal approximations come into play in ML; that section includes a small sub-section on tree-based universal approximators in particular too.
| null |
CC BY-SA 4.0
| null |
2023-04-27T01:15:24.403
|
2023-04-27T01:15:24.403
| null | null |
11852
| null |
614252
|
1
|
614292
| null |
1
|
37
|
I'm reading the paper "How to estimate the effect of treatment duration on survival outcomes using observational data"([https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6889975/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6889975/)), and it states that:
>
"Quantifying the effect of treatment duration on survival outcomes is not straightforward because only people who survive for a long time can receive a treatment for a long time. Suppose we want to estimate the effect of statins on the mortality of patients with cancer using a healthcare database.1 A direct comparison of long term users, short term users, and non-users would be biased because long term users have, by definition, survived for a long time. Several methods can be used to tackle this bias, but some do not enable estimation of absolute risks or appropriate adjustment for time varying confounders".
I'm curious about what some of these methods are. Thank you!
|
How to estimate the effect of treatment duration on survival outcomes
|
CC BY-SA 4.0
| null |
2023-04-27T01:24:48.963
|
2023-04-27T13:52:08.153
| null | null |
363964
|
[
"survival",
"treatment-effect",
"observational-study"
] |
614253
|
2
| null |
614212
|
3
| null |
#### Probably not the best solution, but a direction that may be useful.
Suppose
\begin{align}
\begin{bmatrix}
X \\ Y
\end{bmatrix} \sim N_2\left(\begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix},
\begin{bmatrix}
\sigma_1^2 & \rho\sigma_1\sigma_2 \\
\rho\sigma_1\sigma_2 & \sigma_2^2
\end{bmatrix}\right).
\end{align}
It is well known that the conditional distribution of $X$ given $Y = y$ is univariate normal:
\begin{align}
X | Y = y \sim N(\mu_1 + \rho\sigma_1\sigma_2^{-1}(y - \mu_2), (1 - \rho^2)\sigma_1^2). \tag{1}
\end{align}
This suggests that, to bound $E[|XY|] = E[E[|X||Y||Y]] = E[|Y|E[|X||Y]]$, we can evaluate $E[|X||Y]$ first. In [this answer](https://stats.stackexchange.com/questions/292465/how-to-find-a-closed-form-expression-for-e-maxx-y/600661#600661) (or use the property of [folded normal distribution](https://en.wikipedia.org/wiki/Folded_normal_distribution)), it is shown that if $Z \sim N(\mu, \sigma^2)$, then
\begin{align}
E[|Z|] = \sigma\sqrt{\frac{2}{\pi}}e^{-\frac{\mu^2}{2\sigma^2}} + \mu(1 - 2\Phi(-\mu\sigma^{-1})), \tag{2}
\end{align}
where $\Phi$ is CDF of $N(0, 1)$. $(1)$ and $(2)$ together then imply
\begin{align}
& E[|X||Y] = \sqrt{1 - \rho^2}\sigma_1\sqrt{\frac{2}{\pi}}e^{-\frac{[\mu_1 + \rho\sigma_1\sigma_2^{-1}(Y - \mu_2)]^2}{2(1 - \rho^2)\sigma_1^2}} \\
+& (\mu_1 + \rho\sigma_1\sigma_2^{-1}(Y - \mu_2))\left(1 - 2\Phi\left(-\frac{\mu_1 + \rho\sigma_1\sigma_2^{-1}(Y - \mu_2)}{\sqrt{1 - \rho^2}\sigma_1}\right)\right). \tag{3}
\end{align}
A trivial upper bound for $(3)$ is
\begin{align}
\sqrt{1 - \rho^2}\sigma_1\sqrt{\frac{2}{\pi}} + |\mu_1| + |\rho|\sigma_1\sigma_2^{-1}(|Y| + |\mu_2|),
\end{align}
which gives an upper bound for $E[|XY|]$ as
\begin{align}
& CE[|Y|] + |\rho|\sigma_1\sigma_2^{-1}E[Y^2] \\
=& C\left[\sigma_2\sqrt{\frac{2}{\pi}}e^{-\frac{\mu_2^2}{2\sigma_2^2}} + \mu_2(1 - 2\Phi(-\mu_2\sigma_2^{-1}))\right] + |\rho|\sigma_1\sigma_2^{-1}(\sigma_2^2 + \mu_2^2), \tag{4}
\end{align}
where $C = \sqrt{1 - \rho^2}\sigma_1\sqrt{\frac{2}{\pi}} + |\mu_1| + |\rho|\sigma_1\sigma_2^{-1}|\mu_2|$.
---
#### Updates
Added R code of comparing bound $(4)$ and the Cauchy-Schwarz bound $\sqrt{(\sigma_1^2 + \mu_1^2)(\sigma_2^2 + \mu_2^2)}$. As expected, bound $(4)$ is sharper when $X$ and $Y$ are weakly correlated while Cauchy-Schwarz bound is sharper when $X$ and $Y$ are highly correlated. Different setups of $\mu_i$ and $\sigma_i$ also result in different winners. Those who are interested can play with it.
```
bounds <- function(mu1, mu2, sigma1, sigma2, rho) {
C1 <- sqrt(1 - rho^2) * sigma1 * sqrt(2/pi) + abs(mu1) + abs(rho * mu2) * sigma1/sigma2
C2 <- sigma2 * sqrt(2/pi) * exp(-mu2^2/(2 * sigma2^2)) +
mu2 * (1 - 2 * pnorm(-mu2/sigma2))
C3 <- abs(rho) * sigma1 * (sigma2^2 + mu2^2)/sigma2
UB1 <- C1 * C2 + C3
UB2 <- sqrt((sigma1^2 + mu1^2) * (sigma2^2 + mu2^2))
c(UB1, UB2)
}
para_mat <- cbind(rep(0, 10), rep(0, 10), rep(1, 10), rep(1, 10), seq(0.1, 0.9, len = 10))
colnames(para_mat) <- c("mu1", "mu2", "sigma1", "sigma2", "rho")
My_bound <- apply(para_mat, 1, function(t) bounds(t[1], t[2], t[3], t[4], t[5])[1])
CS_bound <- apply(para_mat, 1, function(t) bounds(t[1], t[2], t[3], t[4], t[5])[2])
cbind(para_mat, My_bound, CS_bound)
## Result:
mu1 mu2 sigma1 sigma2 rho My_bound CS_bound
[1,] 0 0 1 1 0.1000000 0.7334287 1
[2,] 0 0 1 1 0.1888889 0.8140485 1
[3,] 0 0 1 1 0.2777778 0.8893436 1
[4,] 0 0 1 1 0.3666667 0.9589474 1
[5,] 0 0 1 1 0.4555556 1.0222792 1
[6,] 0 0 1 1 0.5444444 1.0784391 1
[7,] 0 0 1 1 0.6333333 1.1260001 1
[8,] 0 0 1 1 0.7222222 1.1625473 1
[9,] 0 0 1 1 0.8111111 1.1834650 1
[10,] 0 0 1 1 0.9000000 1.1774961 1
```
| null |
CC BY-SA 4.0
| null |
2023-04-27T02:25:18.123
|
2023-04-28T00:13:53.317
|
2023-04-28T00:13:53.317
|
20519
|
20519
| null |
614255
|
2
| null |
614248
|
2
| null |
- The reference level is given by the `$means` component of the fitted model
- I don't think you can: a categorical variable only has the categories that it has.
- You can't. That's what survfit.coxph is for. The predicted curve is absolute, not relative, so you don't need to worry about means; just set the covariates to the values you want (a minimal sketch follows below).
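For the third point, a minimal sketch, reusing the `inflammation_model` and `mean_DF` objects defined in the question (assuming `mean_DF` has been completed into a valid data frame):
```
library(survival)

# one predicted survival curve per row of mean_DF (i.e. per inflammation score)
sf <- survfit(inflammation_model, newdata = mean_DF)
plot(sf, col = 1:5, xlab = "Follow-up time", ylab = "Predicted survival probability")
legend("bottomleft", legend = mean_DF$Inflammation_Score, col = 1:5, lty = 1,
       title = "Inflammation score")
```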
| null |
CC BY-SA 4.0
| null |
2023-04-27T03:31:16.233
|
2023-04-27T03:31:16.233
| null | null |
249135
| null |
614260
|
2
| null |
613251
|
2
| null |
We can rewrite the defining equation as
$$X = -C + e^{\mu + \sigma z}$$
So if $Q_1, Q_2, Q_3$ are the quartiles of $X$, with $z=-0.675$, $z=0$, $z=+0.675$, then
$$(Q_1+C)(Q_3+C)=(Q_2+C)^2$$
$$C = \frac{Q_2^2-Q_1 Q_3}{Q_1-2Q_2+Q_3}$$
The denominator here is a quantile-based measure of skewness, which should be positive for any dataset reasonably modeled by a shifted lognormal.
One approach using this is:
- Let $\hat{C}$ be the result of applying the above formula to the quartiles of the population.
- Let $\hat{\mu}$ be the corresponding MLE estimate for $\mu$, i.e. the mean of $\ln(X+\hat{C})$.
- Let $\hat{\sigma}$ be the corresponding MLE estimate for $\sigma$, i.e. the standard deviation of $\ln(X+\hat{C})$.
Another approach using this is:
- Let $\hat{C}$ be as above.
- Let $\hat{\mu}=\ln(Q_2+\hat{C})$.
- Let $\hat{\sigma}=\dfrac{1}{1.35}\ln\dfrac{Q_3+\hat{C}}{Q_1+\hat{C}}$
where 1.35 is the interquartile range of the normal distribution.
The first approach is partially justified by MLE; the second approach matches the quartiles exactly. Both are simple and give reasonable starting points for further refinement.
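A small R sketch of the second (quartile-matching) approach, checked on simulated data (the true parameter values below are arbitrary):
```
set.seed(1)
C_true <- 5; mu_true <- 1; sigma_true <- 0.6
x <- -C_true + exp(mu_true + sigma_true * rnorm(1e5))   # shifted lognormal sample

q <- quantile(x, c(0.25, 0.5, 0.75), names = FALSE)
C_hat     <- (q[2]^2 - q[1] * q[3]) / (q[1] - 2 * q[2] + q[3])
mu_hat    <- log(q[2] + C_hat)
sigma_hat <- log((q[3] + C_hat) / (q[1] + C_hat)) / 1.35   # 1.35 = IQR of N(0, 1)

c(C = C_hat, mu = mu_hat, sigma = sigma_hat)   # close to 5, 1, 0.6
```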
| null |
CC BY-SA 4.0
| null |
2023-04-27T04:20:48.980
|
2023-05-02T20:23:23.783
|
2023-05-02T20:23:23.783
|
225256
|
225256
| null |
614261
|
1
| null | null |
0
|
18
|
I have a dataset consisting of 3 variables: a time series related to hourly energy load (expressed in thousands of MWh), a dummy variable related to business hours (e.g. the dummy is 1 if the current hour is between 9 and 18, 0 otherwise) and one for weekend days (e.g. the dummy is 1 if it's Saturday or Sunday, 0 otherwise).
I want to perform a forecasting task on the energy load time series using the other two dummy variables as covariates. The forecasting will be performed with neural nets.
If I normalize the time series variable to a 0-1 scale, will the effect of the dummies be different in the prediction? What is usually the correct approach in these cases?
Note: the time series is stationary
|
On effects of dummy variables on scaled data
|
CC BY-SA 4.0
| null |
2023-04-27T05:00:12.493
|
2023-04-27T05:00:12.493
| null | null |
346686
|
[
"time-series",
"neural-networks",
"forecasting",
"normalization"
] |
614264
|
2
| null |
613947
|
1
| null |
You seem to have a few misunderstandings about ROC curves.
>
I am using ROC curves for multi-label classification.
ROC curves are tools to assess the discrimination ability of binary classifiers. Some extensions exist for different types of problems such as multi-class or multi-label classification, but they are not ROC curves strictly speaking.
>
an ROC curve is parameterized by a discrimination threshold
A ROC curve is parameterized over all possible discrimination thresholds between $-\infty$ and $+\infty$.
>
With a discrimination threshold of 0.9, we assign that observation correctly and no observation incorrectly.
With a threshold of 0.9, we indeed (correctly) assign observation 1 to the positive predicted class.
All other observations are assigned to the negative predicted class. Because observations 2-5 are < 0.9, we assign them to the negative predicted class. As a result, observations 2 and 4, which should be positive, are misclassified as negatives, and decrease the True Positive Rate (sensitivity) and the AUC.
Because the ROC curve is designed for binary classification problems, there is no such thing as "unassigned". If things are not positive, they are negative. If this assumption is not appropriate for your problem, then you don't have a binary classification problem, and ROC curves may be the wrong tool for you.
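To illustrate how the threshold drives the true and false positive rates, here is a tiny R sketch with made-up scores and labels that mirror the situation described above (observation 1 positive with score 0.9, observations 2 and 4 positive with lower scores):
```
scores <- c(0.9, 0.8, 0.3, 0.6, 0.1)   # hypothetical predicted scores
labels <- c(1,   1,   0,   1,   0)     # hypothetical true classes (1 = positive)

roc_point <- function(thr) {
  pred <- as.integer(scores >= thr)
  c(threshold = thr,
    TPR = sum(pred == 1 & labels == 1) / sum(labels == 1),
    FPR = sum(pred == 1 & labels == 0) / sum(labels == 0))
}
t(sapply(c(-Inf, 0.2, 0.5, 0.7, 0.9, Inf), roc_point))
```
At a threshold of 0.9, only observation 1 is predicted positive, so the TPR drops to 1/3 while the FPR stays at 0; sweeping the threshold traces out the whole curve.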
>
The True Positive Rate is 1 and the False Positive Rate is 0, which is the ideal point at the top left in an ROC curve. We never see that point in an ROC curve
This is wrong: this point is reached as soon as you have a perfect classifier. This might be hard to achieve in your field or for your problem, but it definitely exists.
>
How exactly does an ROC curve use the discrimination threshold?
I refer you to [this CV question: What does AUC stand for and what is it?](https://stats.stackexchange.com/q/132777/36682), which should answer this part of your question.
| null |
CC BY-SA 4.0
| null |
2023-04-27T06:30:23.927
|
2023-04-27T06:30:23.927
| null | null |
36682
| null |
614265
|
1
| null | null |
6
|
583
|
I have a dataset of 2 conditions. Each condition has 15 measurements. I tested them using a paired t-test to find out whether the difference is statistically significant. Should I use a p-value correction method for such a single test with a small sample size? If I should, which methods are suitable for this case (except Bonferroni)?
|
P-value adjustment for a single test with low sample size
|
CC BY-SA 4.0
| null |
2023-04-27T06:52:06.970
|
2023-05-23T11:06:36.197
|
2023-04-28T14:09:37.887
|
53690
|
369427
|
[
"t-test",
"p-value",
"multiple-comparisons",
"small-sample"
] |
614266
|
1
| null | null |
0
|
32
|
I am working on a PV energy production forecasting problem. With various ML models (ANN, RNN, LSTM) I am trying to predict the energy for the following day, based on the historical data.
The aggregated dataset consists of multiple individual datasets from PV systems with different peak values. My idea is to normalize each PV system by `production_value / max_production_value` before aggregating the datasets.
So far so good, however, when forecasting the production (always one day in the future) a question arises. I want to see which day was forecasted most accurately, and for that I am using the RMSE. The problem is, even if my data is normalized, the production is in summer higher than in winter. (Summer production after normalizing ~1, and in winter ~0.4).
Now when I compare the RMSE of a summer day with the RMSE of a winter day, winter has a lower RMSE, even though a visual inspection shows that the summer days are forecast more accurately.
Now I am looking for a way to quantify the error in a way that is comparable in this case. I was thinking about normalizing each day again before calculating the RMSE. However, I have the feeling there should exist a more straight forward way.
|
Comparing RMSE values across different datasets
|
CC BY-SA 4.0
| null |
2023-04-27T07:12:27.617
|
2023-04-27T10:48:09.363
|
2023-04-27T10:24:13.230
|
247274
|
386676
|
[
"machine-learning",
"normalization",
"model-evaluation",
"rms"
] |
614267
|
2
| null |
614265
|
10
| null |
No, for a single test you have a single p-value, thus no correction method is needed. Indeed, the [multiple comparisons problem](https://en.wikipedia.org/wiki/Multiple_comparisons_problem) arises when you perform many statistical tests or when you build many confidence intervals on the same data. Also, the small-sample-size issue is irrelevant to the multiplicity issue.
If you are worried about the validity of the p-value in light of the small sample size, then you may try a test via Bootstrap or permutation.
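For instance, a paired-design permutation test can be built by randomly flipping the signs of the within-pair differences; here is a minimal sketch with made-up data (replace `d` with your observed paired differences):
```
set.seed(123)
d <- rnorm(15, mean = 0.3)   # hypothetical paired differences (condition A minus B)
obs <- mean(d)

# permutation distribution under H0: sign of each difference is random
perm <- replicate(1e4, mean(d * sample(c(-1, 1), length(d), replace = TRUE)))
p_value <- (sum(abs(perm) >= abs(obs)) + 1) / (length(perm) + 1)   # two-sided
p_value
```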
| null |
CC BY-SA 4.0
| null |
2023-04-27T07:17:27.697
|
2023-04-27T07:35:47.277
|
2023-04-27T07:35:47.277
|
56940
|
56940
| null |
614268
|
1
| null | null |
1
|
46
|
I am working with decision trees and have a couple of questions:
- Should I always run a random forest first, or can I just fit the decision tree and skip the random forest part?
- Should I always have a training data set and a test set? Or can I run the model just on the original data?
- What method is suitable if you don't have any factors in the data set and all variables are numeric?
- Root node error: here we have 101653/118 = 861.46, which seems like a lot? Is the model overfitted? Possible solutions?
Added the code and giving background on my data:
- ob.prcnt is the % of eaten insects in a specific place
- I also have 11 variables, each giving the percentage of a specific habitat type at the sampling place.
I would like to see how these habitat amounts affect the % of insects eaten.
A good guide on decision trees and how to fit them in R would be appreciated!
```
> fit.tree14<- rpart(ob.prcnt ~ ., data=P14_Q1)
> #fit.tree14
> #summary(fit.tree14)
> rpart.plot(fit.tree14, extra=1, type=2, digits=3,
+ clip.right.labs=TRUE, under=TRUE,
+ branch=1, tweak = 1.1, gap=6, space=1) #, main = "2015", cex.main = 1.5)
>
> #rpart.rules(fit.tree)
> printcp(fit.tree14)
Regression tree:
rpart(formula = ob.prcnt ~ ., data = P14_Q1)
Variables actually used in tree construction:
[1] Agricultural.land.excluding.permanent.grassland Semi.natural.habitat
[3] Woody.linear
Root node error: 101653/118 = 861.46
n= 118
CP nsplit rel error xerror xstd
1 0.096492 0 1.00000 1.0270 0.12096
2 0.015656 2 0.80702 0.9749 0.12522
3 0.011298 4 0.77570 1.0441 0.13470
4 0.010000 5 0.76440 1.0522 0.13746
```
|
Making a decision tree with numeric data
|
CC BY-SA 4.0
| null |
2023-04-27T07:25:59.637
|
2023-04-27T12:43:45.753
|
2023-04-27T12:43:45.753
|
178468
|
370528
|
[
"random-forest",
"cart",
"rpart",
"decision"
] |
614270
|
1
| null | null |
4
|
195
|
I have data for hundreds of individuals over several (often tens) of years. My dependent variable is a binary event that may or may not have happened for each individual in some given year. The event can only happen once per individual. Independent variables include individual characteristics that do not vary over time (e.g., gender) and measures for each individual that vary over time (e.g., output of that person in year t).
Minimum viable example of my data for two individuals, with far fewer observations per individual and considerably fewer independent variables than I truly have:
|Event |Year |Individual |Gender |Output |
|-----|----|----------|------|------|
|0 |2020 |Person X |Male |10 |
|0 |2021 |Person X |Male |15 |
|1 |2022 |Person X |Male |20 |
|0 |2023 |Person X |Male |15 |
|... |... |... |... |... |
|0 |2020 |Person Y |Female |12 |
|1 |2021 |Person Y |Female |21 |
|0 |2022 |Person Y |Female |23 |
|0 |2023 |Person Y |Female |28 |
My aim is to test whether each independent variable explains the occurrence of the event. I would also like to retain the possibility of assessing whether fixed characteristics like gender could explain the event even when the time-varying measures are similar between individuals.
At this point, I am not sure which model to use. I was considering logistic regression, but the observations are probably not independent, as there are several measurements per individual. Other options that have come up thus far are survival models and mixed-effects models, but I am not familiar with them, so I cannot properly judge whether these models would suit my problem better.
What model(s) should I consider given my data and objectives? I am happy to provide any further details if necessary.
Thanks in advance!
|
What kind of model is best suited for this type of data and research question?
|
CC BY-SA 4.0
| null |
2023-04-27T08:18:59.093
|
2023-04-27T15:30:08.087
|
2023-04-27T09:00:51.120
|
343056
|
343056
|
[
"regression",
"hypothesis-testing",
"logistic",
"mixed-model",
"survival"
] |
614271
|
1
|
614349
| null |
1
|
47
|
Say I have a (generally high-dimensional) random variable $X$ with known, continuous CDF $F(X)$.
Is there a good algorithm for drawing values of $X$ that doesn't require that I calculate the joint density?
I'm specifically interested in the GEV distribution used in the latent variable formulation of the nested logit:
$$F\left(\vec\nu\right) = \exp\left(- \sum_{n \in N} \left(\sum_{k \in n} \exp\left(\frac{-\nu_k}{\lambda_n}\right)\right)^{\lambda_n}\right).$$
That distribution should have analytic PDFs, but they're pretty gross (and are going to get even grosser with multiple layers of nests).
[Edit: this question was labelled a duplicate of [Inverse of cumulative density function for Multivariate Normal Distribution](https://stats.stackexchange.com/questions/530488/inverse-of-cumulative-density-function-for-multivariate-normal-distribution). It's not obvious why -- is the assumption that the inverse-transform algorithm is the only algorithm for drawing from a CDF? That doesn't seem right.]
|
Drawing numbers using the CDF
|
CC BY-SA 4.0
| null |
2023-04-27T08:59:18.510
|
2023-04-27T23:24:16.067
|
2023-04-27T18:55:39.163
|
161943
|
161943
|
[
"random-generation",
"multinomial-logit"
] |
614272
|
2
| null |
3931
|
0
| null |
[Here's a very good overview and full proof](https://web.archive.org/web/20230128002946/http://math.oxford.emory.edu/site/math117/besselCorrection/)
>
In the more general case, note that the sample mean is not the same as the population mean. One's sample observations are naturally going to be closer on average to the sample mean than the population mean, resulting in the average $(x−\bar{x})^2$ value underestimating the average $(x−μ)^2$ value. Thus, $s^2_{biased}$ generally underestimates $σ^2$ with the difference between the two more pronounced when the sample size is small.
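A quick R simulation illustrating the quoted point (the sample size and variance below are arbitrary):
```
set.seed(1)
n <- 5; sigma2 <- 4
samples <- replicate(1e5, rnorm(n, sd = sqrt(sigma2)))   # one sample per column

s2_biased   <- apply(samples, 2, function(x) mean((x - mean(x))^2))  # divide by n
s2_unbiased <- apply(samples, 2, var)                                # divide by n - 1

c(biased = mean(s2_biased), unbiased = mean(s2_unbiased), truth = sigma2)
# the biased version averages about (n - 1)/n * sigma^2 = 3.2 here
```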
| null |
CC BY-SA 4.0
| null |
2023-04-27T08:59:40.013
|
2023-04-27T08:59:40.013
| null | null |
356970
| null |
614273
|
1
| null | null |
1
|
42
|
In my experiment, there are two fixed effects, Temperature and Condition, with 4 and 3 levels respectively. There are a total of 12 different containers, 3 per temperature, and individuals of all 3 Conditions were reared together in each container. Since a container was shared by all Condition treatments but only experiences one Temperature, I am modelling it using a linear mixed-effects model as -
`Response variable ~ Temperature * Condition + (1|Temperature:Container) `
I want to follow this up with an anova. Is using the current random effects structure the correct approach?
|
What would the random effect structure be for this experimental design?
|
CC BY-SA 4.0
| null |
2023-04-27T09:56:48.873
|
2023-04-27T09:56:48.873
| null | null |
386688
|
[
"r",
"mixed-model",
"anova",
"lme4-nlme"
] |
614274
|
1
| null | null |
0
|
11
|
I'm creating a research proposal poster in which I'm looking at whether a reading scheme increases intrinsic motivation to read.
I've chosen a sample of participants, with half taking part in the scheme and half being the control group who don't take part. All participants will complete a reading questionnaire at two points in time to see if motivation scores change.
Would this be a 2x2 mixed design, and if so, can I run an independent t-test to investigate this?
I’m inexperienced with statistics so any help would be appreciated!
|
What test of difference would I need to run?
|
CC BY-SA 4.0
| null |
2023-04-27T10:03:35.043
|
2023-04-27T10:03:35.043
| null | null |
386689
|
[
"t-test"
] |
614275
|
2
| null |
614248
|
1
| null |
Essentially, what you are looking for is a plot of the marginal effect of the inflammation score on the survival probability, adjusted for further variables. Using the mean of variables does not really do the trick here, even if you only had continuous variables. I developed an R-package specifically for this purpose called `contsurvplot`.
It uses a specified model (e.g. your cox model) to perform g-computation to estimate the marginal survival probabilities for the population and plots it using different methods. For example, you could create the plot you are looking for using code like this:
```
library(contsurvplot)
library(riskRegression)
library(survival)
library(ggplot2)
# using data from the survival package
data(nafld, package="survival")
# fit cox model with bmi, age and male
model <- coxph(Surv(futime, status) ~ age + male + bmi, data=nafld1, x=TRUE)
# plot effect of bmi on survival using defaults
plot_surv_lines(time="futime",
status="status",
variable="bmi",
data=nafld1,
model=model,
horizon=c(15, 30, 40))
```
[](https://i.stack.imgur.com/XCMOw.png)
This example shows the marginal survival probability over time for different values of the body-mass-index. Instead of picking some arbitrary values you could use a survival area plot:
```
plot_surv_area(time="futime",
status="status",
variable="bmi",
data=nafld1,
model=model)
```
[](https://i.stack.imgur.com/lJJVH.png)
The methodology and further plots are explained in detail in the associated publication: [https://arxiv.org/pdf/2208.04644.pdf](https://arxiv.org/pdf/2208.04644.pdf) (soon to be published in "Epidemiology").
| null |
CC BY-SA 4.0
| null |
2023-04-27T10:07:25.537
|
2023-04-27T10:07:25.537
| null | null |
305737
| null |
614276
|
2
| null |
614268
|
2
| null |
Random forest is one way (things like gradient boosted decision trees - e.g. XGBoost/LightGBM - are other ways, which tend to often have slightly better prediction performance) of building trees that in combination ("as an ensemble") can be much more complex than a single tree without overfitting. By "without overfitting" I don't mean that they cannot overfit, but rather that it turns out to be easier to regularize the process of building an ensemble of trees (by tuning various hyperparameters of the process) so that they don't overfit (as assessed by e.g. cross-validation). Whether you want these more complex models is mostly a function of what your goal is. Often, you will need the more complex ones if you primarily want to optimize performance and interpretation is not the main goal (if interpretation is very important, various interpretation tools exist, but there is also research into reasonably performant ways of building much simpler models such as [CORELS](https://corels.eecs.harvard.edu/corels/)).
I.e. whether you always want to do random forest vs. just a decision tree really depends on what you want to achieve and the context.
Regarding training and test (and validation set), you usually want (at least) three things:
- to train your model based on data: for this you need a training set
- to choose hyperparameters of your model: you usually need to do this on data that's not your training set (otherwise you tend to overfit), one solution is a separate "validation set" (or cross-validation to be more efficient) - once you've chosen your hyperparameters, you can re-train on the combined training + validation set with the chosen hyperparameters
- to know how well your model is working: if you use the data you trained on or on which you choose the hyperparameters (aka the validation set), then you are potentially misleading yourself (and others, often very, very badly - I've seen flawed evaluations on the training data that suggested near perfect performance when the model was just producing useless garbage), so if you want to know how well your model works you need new data to evaluate it on
For the last two points, the considerations outlined [here](https://www.fast.ai/posts/2017-11-13-validation-sets.html) apply. I.e. you don't just need any data or to thoughtlessly randomly split your data, you really need to think about this.
If you don't have factors in the data, this does not change too much except that it makes your life easier. Purely numeric data are easier to deal with for many algorithms and a lot of effort (e.g. target encoding, creating embeddings etc.) goes into how to represent categorical features for models that don't deal with them so well.
| null |
CC BY-SA 4.0
| null |
2023-04-27T10:17:56.970
|
2023-04-27T10:17:56.970
| null | null |
86652
| null |
614277
|
2
| null |
614266
|
0
| null |
The first thing that comes to my mind is the root-mean-square percentage error (RMSPE). Definition [here](https://stats.stackexchange.com/questions/413249/what-is-the-correct-definition-of-the-root-mean-square-percentage-error-rmspe).
Basically it tells us how far, in percentage terms, a prediction is from the ground truth, and is thus adjusted for fluctuations in the ground truth.
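For reference, a minimal sketch of computing it in R (vector names are just illustrative):
```
# Root-mean-square percentage error, expressed in percent
rmspe <- function(actual, predicted) {
  100 * sqrt(mean(((actual - predicted) / actual)^2))
}

actual    <- c(100, 150, 200)
predicted <- c(110, 140, 190)
rmspe(actual, predicted)  # about 7.5
```
Note that this measure breaks down if any of the actual values are zero.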
| null |
CC BY-SA 4.0
| null |
2023-04-27T10:48:09.363
|
2023-04-27T10:48:09.363
| null | null |
294554
| null |
614278
|
1
| null | null |
2
|
44
|
I am a beginner in linear mixed models on RStudio and would like some advice on what I would like to do with my data. I work in the field of cognitive neuroscience and my research focuses on understanding face processing in adults. We measure face processing using eye-tracking measures (here pupil dilation) and a paradigm using social (images and videos of real faces and avatars) and non-social (objects) stimuli.
I would like to explore whether physiological engagement, indexed by pupil diameter variations, is driven by motion. To do that, we quantified the amount of motion in each video with a coefficient. My data set consists of 7 variables (participant, movements, actors/actresses, categories, movement coefficient, and pupil dilation) with 1320 observations.
Categories, movements, and actors/actresses describe my stimuli:
- categories are broken down into 3 components: object (non-social stimulus), avatars, and real faces (social stimulus)
- movements are broken down into 3 components: static, micro, and macro movement
- actors/actresses: there are 4 videos (2 actors+2actresses) per category per movement
I was thinking that an LMM would be a good analysis to answer my question, as I can consider fixed effects and random effects, but I am a bit lost about how to write the model... I tried a random effect for participant, to allow the intercept of each participant's pupil dilation to deviate from the population intercept. In addition, I was thinking of adding another random effect for the motion coefficient, so that the pupil-dilation intercept for a given motion coefficient can deviate from the overall motion-coefficient sample.
So I tried multiple models like this, to see which one best fit to explain my data (package lme4, lmer function) :
- modela <- lmer(pupil_dilation ~ categories*movements*actors + (1 | participants) + (1 | motion_coef), data = data, REML = FALSE)
- modelb <- lmer(pupil_dilation ~ categories+movements+actors + categories:movements:actors + (1 | participants) + (1 | motion_coef), data = data, REML = FALSE)
- modelc <- lmer(pupil_dilation ~ categories+movements+actors + (1 | participants) + (1 | motion_coef), data = data, REML = FALSE)
- modeld <- lmer(pupil_dilation ~ categories + movements + (1 | participants) + (1 | motion_coef), data = data, REML = FALSE)
- modele <- lmer(pupil_dilation ~ categories + (1 | participants) + (1 | motion_coef), data = data, REML = FALSE)
For your information: categories, movements, actors, motion_coef, and participants were converted as factors.
I don't know whether these models make sense, notably with regard to my random effects.
So few questions come to my mind:
- Is LMM a good way to answer my question?
- Do I have to normalize my data before starting my LMM?
- Do the models above seem consistent with my research question?
I hope I was clear about my description. Also, I am sorry if I didn't explain well about the LMM but as I am new I tried my best!
Thank you all in advance for your precious help!
Camille
|
RStudio: Help on Linear Mixed Models
|
CC BY-SA 4.0
| null |
2023-04-27T10:48:26.160
|
2023-04-27T11:55:08.677
|
2023-04-27T11:55:08.677
|
164061
|
386690
|
[
"r",
"mixed-model",
"lme4-nlme",
"linear-model",
"linear"
] |
614279
|
1
|
614282
| null |
0
|
50
|
I've used time-series forecasting models (ETS, ARIMA, etc.), but because of the depth of my data I'm exploring the survival and related packages in R to forecast future-period survival probabilities. For example, if we have 24 months of survival data, I'm interested in forecasting months 25-36 using the statistical parameters derived from analyzing months 1-24, using either Kaplan-Meier or a parametric distribution like Weibull. I'd like to do this in the form of simulation, with multiple simulation paths, as I've done with traditional time-series methods. I'm exploring alternatives to the traditional time-series methods because they seem overly simplistic in light of the depth of data I have (similar to what you see in the lung and kidney datasets that are part of the survival package). Please, does anyone know if this type of probabilistic future-period forecasting is possible in survival analysis, and know of any simple examples on-line that I can start with?
|
Use survival analysis to predict survival probabilities for future periods not contained in the survival dataset, like a time-series forecast?
|
CC BY-SA 4.0
| null |
2023-04-27T10:50:15.820
|
2023-04-27T11:20:37.003
| null | null |
378347
|
[
"r",
"time-series",
"forecasting",
"survival"
] |
614280
|
1
| null | null |
1
|
11
|
I have inherited a dataset with a truly chaotic trial design. 32/155 varieties are duplicated at least once, most having 2-4 reps, but one is repeated x5, another x8, and another x19. I was told that the 'blocks' are just the four columns. Here's a link to the trial design:
[https://nbicloud-my.sharepoint.com/:x:/g/personal/mjones_nbi_ac_uk/ESIUBnup2dBHkqsHob7EvrgBh2saqh1Q4ls8lqSed_v1zA?e=Oyt8nW](https://nbicloud-my.sharepoint.com/:x:/g/personal/mjones_nbi_ac_uk/ESIUBnup2dBHkqsHob7EvrgBh2saqh1Q4ls8lqSed_v1zA?e=Oyt8nW)
I've colour coded varieties that are repeated 4+ times and italicised those repeated 2-3 times to give a better sense of the design.
I have a load of phenotypic data for each plot and I'd like to apply some kind of correction for field spatial heterogeneity. I've looked into using SpATS and I think it's close to what I need, but I don't think any of the five specifiable field designs quite fit my data. (Incomplete block design, resolvable incomplete block design, randomized complete block design, row column design, resolvable row column design.)
Can anyone help me wrangle some kind of BLUEs/BLUPs/breeding values out of this mess? (Or point me to a better forum to ask this question!)
Many thanks,
Max
|
How to apply spatial correction to field data based on many randomly dispersed check varieties with varying levels of replication?
|
CC BY-SA 4.0
| null |
2023-04-27T10:57:44.893
|
2023-04-27T10:57:44.893
| null | null |
386682
|
[
"mixed-model",
"linear-model",
"spatial"
] |
614281
|
2
| null |
614279
|
0
| null |
For discrete-time survival problems, a simple logistic regression framework with calendar month as an input is straightforward to use.
Your data set should be in so-called [person-period format](https://stats.oarc.ucla.edu/r/faq/how-can-i-convert-from-person-level-to-person-period/): for each month that a person survived until the previous month, you create a row indicating whether they survived the current month.
You can then interact time with any of your other independent variables, use a spline for your time variable, etc. - i.e. use any of the standard input transformations you would use with linear/logistic regression.
Then, to predict into the future, you predict the probability of surviving each month given survival up to that point (from the logistic regression output, just changing the input time).
Your simulation is then just a Markov chain using these probability outputs.
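A minimal sketch of this in R (the person-period data frame `pp` and its columns `month`, `event`, `x` are made-up names; note that the spline simply extrapolates linearly beyond the fitted months, which you may or may not want):
```
library(splines)

# Discrete-time hazard model: P(event in month t | survived through month t-1)
fit <- glm(event ~ ns(month, df = 3) + x, family = binomial(), data = pp)

# Predicted monthly hazards for future months 25-36 for one covariate profile
future <- data.frame(month = 25:36, x = 1)
hazard <- predict(fit, newdata = future, type = "response")

# Probability of surviving through each future month (given alive at month 24)
surv <- cumprod(1 - hazard)

# One simulated path of the implied Markov chain
set.seed(1)
alive <- TRUE
path  <- numeric(length(hazard))
for (t in seq_along(hazard)) {
  if (alive) alive <- rbinom(1, 1, 1 - hazard[t]) == 1
  path[t] <- as.numeric(alive)
}
```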
| null |
CC BY-SA 4.0
| null |
2023-04-27T11:07:03.053
|
2023-04-27T11:07:03.053
| null | null |
27556
| null |
614282
|
2
| null |
614279
|
1
| null |
One obvious solution, as you already alluded to, is to use a parametric model (such as Weibull regression). Non-parametric/semi-parametric models like Kaplan-Meier or Cox do not, in their standard implementations, extrapolate beyond the largest observed event time. There are other implementations that more or less do the same thing but can extrapolate, e.g. an exponential model with piecewise constant hazard rates can be very similar to Cox regression but will extrapolate if you just assume the hazard rate of the last interval going forward (similarly, e.g. the `brms` R package implements a kind of Cox model using M-splines for the baseline hazard function, which I think should permit extrapolation).
E.g. if you use R, one obvious option is penalized Weibull regression:
`fit <- survreg(Surv(time, has_event) ~ 1 + ridge( predictor1, predictor2, ... , theta=1, scale=T), dist = "weibull", data=mydata)`, where the ridge-penalty `theta` could be picked using cross-validation. After fitting, the traditional shape parameter is given by `1/fit$scale` and the per-record scale parameter by the exponential of `predict(object=fit, newdata=newdata, type="lp")`, so that you get an estimated survival curve by plotting the corresponding Weibull survival function (one minus the CDF) with these parameters, but that ignores uncertainty about the parameters. This gets a lot easier to deal with in the Bayesian version of Weibull regression, which is nicely supported in the `brms` R package (if, instead of individual patient time predictions, you want to predict the parameters of the distribution, there are some nuances: see [here](https://discourse.mc-stan.org/t/brms-how-to-keep-it-consistent-for-mu-and-shape-in-posterior-linpred-weibull-time-to-event/30725/2)).
Alternatively, a more machine learning oriented approach would be to e.g. use [XGBoost with an accelerated failure time](https://xgboost.readthedocs.io/en/stable/tutorials/aft_survival_analysis.html) loss, but calibration tends to be poor. However, there's [alternatives that try to fix this](https://loft-br.github.io/xgboost-survival-embeddings/how_xgbse_works.html).
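To make the extrapolation step concrete, here is a rough sketch (variable and data names are purely illustrative, and parameter uncertainty is ignored) of turning a fitted `survreg` Weibull model into survival probabilities at future times:
```
library(survival)

# Fit a (non-penalized, for simplicity) Weibull AFT model
fit <- survreg(Surv(time, has_event) ~ predictor1 + predictor2,
               dist = "weibull", data = mydata)

shape   <- 1 / fit$scale                                # Weibull shape parameter
lp      <- predict(fit, newdata = newdata, type = "lp")
scale_i <- exp(lp)                                      # per-record Weibull scale

# Survival probabilities S(t) = exp(-(t / scale)^shape) at future times 25-36
future_t <- 25:36
surv_mat <- outer(scale_i, future_t, function(s, t) exp(-(t / s)^shape))
```
Each row of `surv_mat` is then one record's extrapolated survival curve, and simulated event times could be drawn from the same fitted parameters (e.g. with `rweibull(n, shape, scale_i)`) to build simulation paths.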
| null |
CC BY-SA 4.0
| null |
2023-04-27T11:20:37.003
|
2023-04-27T11:20:37.003
| null | null |
86652
| null |
614284
|
1
| null | null |
0
|
33
|


More context: these are the residuals from a mixed model where the response variable is a % that varies between 60% and 95%.
Have tried the following transformations:
- Log
- Sqrt (and variations)
- 1/x (and variations)
- arcsin (after transforming % to values between 0 and 1)
- Yeo-Johnson
|
What transformation could I still try to approach normality?
|
CC BY-SA 4.0
| null |
2023-04-27T10:26:29.437
|
2023-05-17T14:35:20.413
|
2023-05-17T14:35:20.413
|
11887
| null |
[
"r",
"data-transformation"
] |
614286
|
1
| null | null |
1
|
23
|
Say I have the following treatment:
\begin{array}{cccccccccc}
\hline
unit & year & treatment & d^{-3} & d^{-2} & d^{-1} & d^{0} & d^{+1} & d^{+2} & d^{+3} \\
\hline
1 & 2000 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 2001 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 2002 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 2003 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 2004 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 2005 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 2006 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
1 & 2007 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 or 1? \\
1 & 2008 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 2009 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 2010 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 2011 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
2 & 2000 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 2001 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
2 & 2002 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
2 & 2003 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
2 & 2004 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
2 & 2005 & 1 & 0 or 1? & 0 & 0 & 0 & 0 & 1 & 0 \\
2 & 2006 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 or 1? \\
2 & 2007 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
2 & 2008 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
2 & 2009 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
2 & 2010 & 0 & 0 & 0 & 0 & 0 & 0 & 0 or 1? & 0 \\
2 & 2011 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 or 1? \\
\hline
3 & 2000 & 1 & 0 & 0 & 0 & 0 & . & . & . \\
3 & 2001 & 1 & 0 & 0 & 0 & 0 & . & . & . \\
3 & 2002 & 1 & 0 & 0 & 0 & 0 & . & . & . \\
3 & 2003 & 1 & 0 & 0 & 0 & 0 & . & . & . \\
3 & 2004 & 1 & 0 & 0 & 0 & 0 & . & . & . \\
3 & 2005 & 1 & 0 or 1? & 0 & 0 & 0 & . & . & . \\
3 & 2006 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
3 & 2007 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
3 & 2008 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
3 & 2009 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
3 & 2010 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
3 & 2011 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\hline
\end{array}
The treatment effect is self-explanatory. Just think of it as some kind of regulation. For unit 1, it turns on in year 2004, but is repealed from 2007 onwards. For unit 2, it comes into effect in 2003, is repealed from 2006 to 2007, comes back in 2008, but is repealed in 2010.
For unit 3, the same applies but the regulation is already in place at the start of the sample period. Another key detail is that we don't know when the regulation came into effect, i.e., it could be 1999 or it could be 1979, we do not know any information outside the sample period.
Estimating the generalized diff-in-diff here is also pretty straightforward, there have been many discussions on this, one particular example similar to mine is [here](https://stats.stackexchange.com/questions/504144/difference-in-differences-dynamic-treatment-group-timing). However, my question is how can I estimate a dynamic (i.e., event study) version of this non-staggered diff in diff when the treatment can turn on and off? That is, what should the dummy variables $d^{-3}$ to $d^{+3}$ be? I have filled in the matrix as best as I can, but there are a few question marks as follows:
- For unit 1 year 2007: Should $d^{+3}$ be 0 or 1 here? The regulation is already repealed in 2007, meaning technically $d^{+3}$ should be 0, right? If the regulation were still in place until 2008, then $d^{+3}$ should be 1, am I correct?
- For unit 2 year 2005. Should $d^{-3}$ be 0 or 1 here? I am leaning towards it being 0 since this is still two years after 2003, meaning it is still the "post" period relative to the 2003 introduction of the regulation. How about the other question marks for unit 2?
- For unit 3, since we don't know when the regulation came into effect then $d^{+1}$, $d^{+2}$, and $d^{+3}$ should all be missing from 2000 to 2005 right? If we had information outside the sample period, for example, the regulation came into effect in 1999, then we could fill in $d^{+1}$, $d^{+2}$, and $d^{+3}$. But as it stands, we can't. Am I correct?
|
Dynamic effects of non-staggered difference in difference
|
CC BY-SA 4.0
| null |
2023-04-27T11:47:39.883
|
2023-04-27T11:47:39.883
| null | null |
56031
|
[
"econometrics",
"difference-in-difference",
"generalized-did"
] |
614288
|
1
|
614295
| null |
1
|
38
|
I have this interaction plot and I'm having some trouble interpreting it. I can see that level 3 of factor A cuts through both 1 and 2, which means that level 3 of A has significant interaction with B. However, levels 1 and 2 of A are almost parallel. I thought this would suggest that the interaction $(ab)_{ij}$ when $i=1,2$ is not significant. However, the t-test I get for each coefficient of $(ab)_{ij}, i,j=1,2,3$ in the fitted model has a p-value <$10^{-3}$, suggesting that each one of those coefficients is (very) significant and so is each one of these interactions. Did I misunderstand something?
The data can be shown in the following table. I've also added a picture of the coefficients and the p-value of the t-test.
```
| Y | A | B |
|------|---|---|
| 580 | 1 | 1 |
| 568 | 1 | 1 |
| 570 | 1 | 1 |
| 550 | 2 | 1 |
| 530 | 2 | 1 |
| 579 | 2 | 1 |
| 546 | 3 | 1 |
| 575 | 3 | 1 |
| 599 | 3 | 1 |
| 1090 | 1 | 2 |
| 1087 | 1 | 2 |
| 1085 | 1 | 2 |
| 1070 | 2 | 2 |
| 1035 | 2 | 2 |
| 1000 | 2 | 2 |
| 1045 | 3 | 2 |
| 1053 | 3 | 2 |
| 1066 | 3 | 2 |
| 1392 | 1 | 3 |
| 1380 | 1 | 3 |
| 1386 | 1 | 3 |
| 1328 | 2 | 3 |
| 1312 | 2 | 3 |
| 1299 | 2 | 3 |
| 867 | 3 | 3 |
| 904 | 3 | 3 |
| 889 | 3 | 3 |
```
[](https://i.stack.imgur.com/scDMM.png)
[](https://i.stack.imgur.com/iFo8t.png)
|
Interpreting Interaction plots and significance
|
CC BY-SA 4.0
| null |
2023-04-27T12:06:05.950
|
2023-04-27T14:26:48.180
|
2023-04-27T13:37:39.447
|
359476
|
359476
|
[
"anova",
"categorical-data",
"interaction"
] |
614289
|
1
|
614290
| null |
1
|
79
|
The probability that A wins when he/she has $\alpha$ points and player B having $\beta$ points is equal to
$$ p_\text{win}(\alpha,\beta)=\binom{\alpha+\beta+1}{\alpha-1}p^\alpha(1-p)^\beta $$
where $p$ is the probability that A wins a point.
Next I implement this into the probability mass function of the binomial distribution
$$\operatorname{Binomial}(n\mid N, p_\text{win}) $$
As a result, I can infer from $n$ and $N$ the probability that A wins a point, $p$. However, during sampling there is a significant number of zeros and ones for $p$, resulting in the following error:
>
Probability parameter is 1/inf/nan, but must be in the interval [0, 1]
To overcome this I would like to transform $p$ to its unconstrained equivalent, $x.$ I use logit parameterization for this
$$\operatorname{BinomialLogit}(n\mid N, x) = \operatorname{Binomial}(n\mid N, \operatorname{logit}^{-1}(x))$$
I am unsure how to rewrite $p_\text{win}$ in terms of $x$ in the formulation above. Please advise.
|
My beta-binomial model has probabilities at exactly 0 and 1. How can I fix it?
|
CC BY-SA 4.0
| null |
2023-04-27T12:29:34.850
|
2023-05-17T23:17:51.927
|
2023-04-27T13:18:59.603
|
22311
|
233132
|
[
"logistic",
"binomial-distribution"
] |
614290
|
2
| null |
614289
|
2
| null |
For some combinations of $\alpha, \beta$, the beta distribution concentrates its mass near 0 or 1. One option is to impose a prior on $p$. For example, you could consider
$$
p \sim \text{Beta}(a + \alpha, b + \beta)
$$
where $a > 0$ and $b >0$, which will bound $p$ away from the extremes. Choosing $a=b$ might make sense, if you believe the teams are evenly matched prior to playing the game. The larger $a$ and $b$ are, the lower the variance of $p$. The same is true for $a + \alpha$ and $b + \beta$.
Alternatively, you could implement your idea & re-parameterize the model: draw $x$ from some distribution, and then transforming it to the $[0,1]$ interval. For instance, $x \sim \text{Normal}(\mu, \sigma^2)$ is symmetric about 0 for $\mu = 0$ and therefore $p=\text{logit}^{-1}(x)$ is symmetric about 0.5 on the probability scale.
The re-parameterization solution is not a panacea, though, because you may not want a symmetric distribution (if the teams are not evenly matched). However, setting $\mu$ very far from 0 will risk $p$ getting so close to 0 or 1 that you have the same problem that you do now.
---
From the perspective of coding, you could write a series of `if/then` statements to detect `p` close to 0 or 1. In the case of `p=0`, you know a binomial distribution will have 0 successes; for `p=1` a binomial distribution will have $N$ successes.
Or set `p = median([p, eps, 1 - eps])` where `eps` is a small value, such as `1e-6`. There's nothing statistically principled about this; it's just a kludge.
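For illustration, a sketch of both ideas in R (the probability values are made up; in Stan the analogous clipping or logit reparameterization would live in the model code):
```
# Re-parameterization: draw x on the real line, map it into (0, 1)
x <- rnorm(5, mean = 0, sd = 1.5)
p_from_x <- plogis(x)          # inverse-logit

# Kludge: clamp probabilities away from exactly 0 and 1
clamp <- function(p, eps = 1e-6) pmin(pmax(p, eps), 1 - eps)
clamp(c(0, 0.3, 1))            # 1e-06 0.300000 0.999999
```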
| null |
CC BY-SA 4.0
| null |
2023-04-27T13:16:16.150
|
2023-05-17T23:17:51.927
|
2023-05-17T23:17:51.927
|
22311
|
22311
| null |
614291
|
1
| null | null |
0
|
10
|
So basically I want to test differences between two systems. I am collecting the time needed to execute queries; every query was executed 50 times. I have defined some queries such as q1, q2, ..., qn, and each query has a specific meaning: for example, queries q1, q2, q3 are aggregation queries and are thus part of an "aggregation queries" group. I want to compare the two systems per group of queries, not separately for every query. The question is how (or whether) I can merge the results of multiple queries into one for performing a hypothesis test (Mann-Whitney U). I assume that every set of records per query is a separate population and should not be merged, right?
|
is it possible to merge multiple populations for hypothesis testing
|
CC BY-SA 4.0
| null |
2023-04-27T13:47:18.827
|
2023-04-27T13:47:18.827
| null | null |
386699
|
[
"hypothesis-testing",
"assumptions",
"population"
] |
614292
|
2
| null |
614252
|
1
| null |
In the [paper you cite](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6889975/), Hernán explains some methods in detail. The point is that not all such methods are unbiased unless they are combined appropriately.
The 3-step method recommended in the paper, for an observational study with a set of defined intended treatment durations, is:
- Make copies of all individuals such that each individual has a copy with one of the possible treatment durations. That gets around the problem of not knowing the initial treatment strategies for the individuals.
- For each row, if the received treatment is known to have deviated from the treatment duration indicated on the row, right-censor the case at the time of the deviation. That step ensures that no data row provides information beyond a potentially observable combination of time, treatment, and outcome in the data.
- To remove the "selection bias" introduced in the prior step, use a type of inverse probability weighting on the cases: "Informally, uncensored individuals receive a weight equal to the inverse of their probability of being uncensored."
A valid alternative noted in the paper is the "g-formula" method of Robins. See [this page](https://stats.stackexchange.com/q/612029/28500) for an outline.
This is one example of a general causal modeling approach that asks what would have happened if, hypothetically, each individual might have been cloned to receive each of the treatments in parallel. See "[Causal Inference: What If?](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)" by Hernán and Robins for a useful introduction to such approaches.
| null |
CC BY-SA 4.0
| null |
2023-04-27T13:52:08.153
|
2023-04-27T13:52:08.153
| null | null |
28500
| null |
614293
|
2
| null |
117732
|
0
| null |
As @vtshen mentions, there must be a standardization of deviance values in cv.glmnet. After tracing the function code provided by @shadowtalker, I have arrived at the line that corroborates the hypothesis:
```
cvraw=switch(type,
response =-2*((y==2)*log(predmat)+(y==1)*log(1-predmat)),
class=y!=predmat
)
cvm=apply(cvraw,2,mean)
```
Instead of calculating the sum of residual deviances as `deviance()` function does, in cv.glmnet (cvlognet) they consider the mean of them. So, yes, there is a normalization of the deviance relative to the number of observations.
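A small numerical illustration of the difference (made-up outcomes and fitted probabilities; this is just the arithmetic, not the internal glmnet code):
```
set.seed(1)
y <- rbinom(20, 1, 0.5)        # observed 0/1 outcomes
p <- runif(20, 0.05, 0.95)     # fitted probabilities from some model

dev_contrib <- -2 * (y * log(p) + (1 - y) * log(1 - p))

sum(dev_contrib)    # total residual deviance, as deviance() reports
mean(dev_contrib)   # per-observation deviance, as cv.glmnet averages
```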
| null |
CC BY-SA 4.0
| null |
2023-04-27T13:52:29.310
|
2023-04-27T13:52:29.310
| null | null |
66470
| null |
614294
|
1
| null | null |
4
|
141
|
Let's say we start off by generating 2 random uniform variables between 1 and 100
```
import numpy as np
import statsmodels.api as sm

rv1 = np.random.uniform(1, 100, 1000)
rv2 = np.random.uniform(1, 100, 1000)
rv2 = sm.add_constant(rv2)
sm.OLS(rv1, rv2).fit().summary()
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.000
Model: OLS Adj. R-squared: -0.001
Method: Least Squares F-statistic: 0.06250
Date: Thu, 27 Apr 2023 Prob (F-statistic): 0.803
Time: 13:42:31 Log-Likelihood: -4764.7
No. Observations: 1000 AIC: 9533.
Df Residuals: 998 BIC: 9543.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 49.2696 1.840 26.774 0.000 45.658 52.881
x1 0.0079 0.032 0.250 0.803 -0.054 0.070
==============================================================================
Omnibus: 630.187 Durbin-Watson: 2.023
Prob(Omnibus): 0.000 Jarque-Bera (JB): 58.502
Skew: 0.054 Prob(JB): 1.98e-13
Kurtosis: 1.820 Cond. No. 119.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
```
Unsurprisingly, the $R^2$ is 0, the constant is very close to the mean of our range, and the x1 term has a high p-value.
Now if I do:
```
import scipy.stats

scipy.stats.ttest_ind(rv2[:, 1], rv1)
Ttest_indResult(statistic=0.803742011784952, pvalue=0.4216415967859215)
```
If I'm not mistaken, the high p-value of the T-test says we fail to reject the null that the two random variables have the same mean, or for lay business people, they're the same.
I start with random data because the results with the random data make sense to me and I can easily reconcile them.
Now let's take some real data. The data happens to be price forecasts for a single commodity but from two different forecasters. The forecast is the hourly price for delivery during that hour for a year.
```
OLS Regression Results
==============================================================================
Dep. Variable: v1 R-squared: 0.098
Model: OLS Adj. R-squared: 0.098
Method: Least Squares F-statistic: 947.6
Date: Thu, 27 Apr 2023 Prob (F-statistic): 1.15e-197
Time: 13:54:54 Log-Likelihood: -46785.
No. Observations: 8760 AIC: 9.357e+04
Df Residuals: 8758 BIC: 9.359e+04
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept -38.6710 2.712 -14.260 0.000 -43.987 -33.355
v2 2.5122 0.082 30.783 0.000 2.352 2.672
==============================================================================
Omnibus: 15615.843 Durbin-Watson: 0.551
Prob(Omnibus): 0.000 Jarque-Bera (JB): 13358851.473
Skew: 13.107 Prob(JB): 0.00
Kurtosis: 192.506 Cond. No. 167.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
```
The T-test between v1 and v2 are
```
Ttest_indResult(statistic=18.47596253739749, pvalue=1.6810632200828115e-75)
```
The p-value of the t-test says that we reject the null that the two variables have the same mean.
The $R^2$ says that 9.8% of the variation in v1 can be explained by this model.
What I'm trying to do is describe the (colloquial) significance, if any, of the t-test saying they're different while there's at least a weak linear relationship between the two. It's not hard to conceive that two variables can have different means but still be linearly related. Is there no good reason to use the t-test in this way?
If it helps, here's a scatter of `v1` and `v2` with the above trendline.
[](https://i.stack.imgur.com/WXcWY.png)
|
Is there sense to be made for two-tailed T-test and OLS of two variables?
|
CC BY-SA 4.0
| null |
2023-04-27T14:21:21.417
|
2023-04-27T15:11:11.220
|
2023-04-27T15:11:11.220
|
247274
|
24521
|
[
"regression",
"t-test",
"least-squares",
"r-squared"
] |
614295
|
2
| null |
614288
|
1
| null |
Because the residual standard error for your data set is rather small compared to the magnitude of your data, even with a relatively small sample size of $k=3$ replicates per group, the model will flag small effects as statistically significant.
That said, it is important to note that an interaction is flagged as significant when the variable (factor) A statistically significantly interacts with the variable B. To clarify, we do not say that any one level of the variable A interacts with B. If there is indeed an interaction, that means that some subset of the groups deviates from the additive model (where there is the same change in all levels of A or all levels of B...and this change can be determined just by looking at the marginal trends for each of the two variables).
With that in mind, once we have a statistically significant interaction, we of course want to investigate to see if we can determine which of the experimental cells is deviating from the marginal trends. As the OP notes, the most visually striking variation from the "parallel" trend in the transitions from one category to the next in the means plots is with (3,3). Consequently, this is most likely why the interaction was flagged as statistically significant.
However, this extreme deviation from the additive model makes it a bit more challenging to just look at the graph to determine which---if any---of the other experimental cells are deviating from the marginal trends.
As there does not appear to be a difference in the "slope" of the lines for levels 1 and 2 (across either of the categorical variables), the next step is to look at a post hoc 2x3 ANOVA to see if there is an interaction present among those subsets of the levels of these categorical variables. And in this case, it appears $p=0.084$, so...your assessment of there not actually being anything special about cell (1,2) is probably justified.
I hope this analysis/interpretation proves useful.
| null |
CC BY-SA 4.0
| null |
2023-04-27T14:26:48.180
|
2023-04-27T14:26:48.180
| null | null |
199063
| null |
614296
|
2
| null |
614270
|
4
| null |
This seems to be a classic case of a survival model with time-varying covariates where you are interested in the time between some reference time and the time of the event. If you only have annual data, that's probably best analyzed with a discrete-time survival model.
A discrete-time survival model can be implemented as a binomial regression model on a data set like yours with one row per time period per individual. You need to have a column for the time elapsed since the reference time. For example, if these are employees of a corporation, that might be the time since hire. Also, if the event can happen at most once per individual, then an individual doesn't provide data about event risk at times after the event. So no data rows should be included for an individual after the event time.
You then perform a binomial regression with the elapsed time included as a predictor, perhaps flexibly modeled with a regression spline. [This page](https://stats.stackexchange.com/a/99293/28500) outlines the principles based on logistic regression, and provides some references. [This web page](http://gseacademic.harvard.edu/%7Ewilletjo/dsta.htm) has links to some classic presentations by Singer and Willett. A complementary log-log link (instead of a logit link) is more aligned with the Cox type of proportional hazard survival analysis, as described on [this page](https://stats.stackexchange.com/q/429266/28500) and its links.
One warning: make sure that any predictors in your model are in fact predictors of the event. With this type of data, it's possible for a covariate's value during the time period of the event to be a result of the event's occurrence during the time period rather than a cause of the event. Think carefully about whether your data are at risk of that type of misinterpretation.
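A minimal sketch of such a model in R, assuming a hypothetical person-period data frame `pp` with one row per individual per year, an `event` indicator, an `elapsed` time column, and a covariate `x`:
```
library(splines)

# Discrete-time hazard model; the cloglog link aligns with a proportional-hazards view
fit <- glm(event ~ ns(elapsed, df = 3) + x,
           family = binomial(link = "cloglog"),
           data = pp)

# Per-period hazard for a new individual over elapsed times 1..5
newdat <- data.frame(elapsed = 1:5, x = 0.5)
hazard <- predict(fit, newdata = newdat, type = "response")

# Implied probability of remaining event-free through each period
surv <- cumprod(1 - hazard)
```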
| null |
CC BY-SA 4.0
| null |
2023-04-27T14:46:57.547
|
2023-04-27T15:30:08.087
|
2023-04-27T15:30:08.087
|
28500
|
28500
| null |
614297
|
2
| null |
614294
|
3
| null |
The use of a 2 independent samples t-test is not justified in this context, because the data is matched-pairs data. So, from that perspective, the answer to the OP's question is: No, there is not a good reason to use the t-test in this way (as it is not the correct test for the data).
However, if you ask about running a dependent-samples t-test and the correlation, then there is indeed a good rationale for this. I'll give the example I use in my survey methodology course. You have two raters, and you want to know that they rate the subjects in a similar fashion...and this means both (1) on average and (2) in relation to each other.
Let's think thru the 4 possible scenarios:
- If the correlation is n.s. ($p > \alpha$) and the t-test is stat.sig. ($p < \alpha$), this is the worst case scenario. The raters are rating the subjects differently (the means are not the same) and there is no relationship between when one rater rates the subjects high/low compared to the other rater. (bad situation)
- If the correlation is stat.sig and the t-test is stat.sig., this is probably a workable case for assessment, as you could "curve" one of the raters to essentially match the other rater. (workable situation)
- If the correlation is n.s. and the t-test is n.s., then this means the averages of the ratings are essentially the same...but it may be happening in a random or haphazard way, as there is no relationship between the two raters' scores. (probably not good for an external validity argument for the rating of the construct being assessed)
- If the correlation is stat.sig. and the t-test is n.s., then this means the averages are about the same, and the raters are relatively consistent in how they are rating each subject. (best scenario)
While I am not specifically seeing if---or how---this example may relate to the OP's scenario, it does demonstrate that there are times when you want to consider both the t-ratio and the correlation.
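A minimal sketch of running both checks in R (the two rating vectors are made up for illustration):
```
set.seed(1)
rater1 <- round(runif(30, 1, 10))
rater2 <- pmin(pmax(rater1 + rnorm(30, mean = 0.5, sd = 1), 1), 10)

cor.test(rater1, rater2)               # are the raters consistent with each other?
t.test(rater1, rater2, paired = TRUE)  # do they differ on average?
```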
Hope this helps, and happy to clarify more if desired.
| null |
CC BY-SA 4.0
| null |
2023-04-27T15:03:41.723
|
2023-04-27T15:03:41.723
| null | null |
199063
| null |
614298
|
2
| null |
614294
|
4
| null |
There are four possibilities.
- Same mean, linearly related
- Same mean, no linear relationship
- Different mean, linearly related
- Different mean, no linear relationship
Simulations of each of these are below.
```
import numpy as np
# Define sample size
#
N = 100_000
# Same mean, linearly related
#
x = np.random.uniform(-1, 1, N) # Mean of 0
y = x + np.random.normal(0, 0.1, N) # Means add, so mean of 0
# Same mean, not linearly related
#
x = np.random.uniform(-1, 1, N) # Mean of 0
y = np.random.uniform(-1, 1, N) # Mean of 0, totally independent of x
# Different mean, linearly related
#
x = np.random.uniform(-1, 1, N) # Mean of 0
y = 1 + x + np.random.normal(0, 0.1, N) # Mean of 1, linearly related to x
# Different mean, not linearly related
#
x = np.random.uniform(-1, 1, N) # Mean of 0
y = np.random.uniform(0, 2, N) # Mean of 1, totally independent of x
```
Thus, whether or not two variables have equal means is totally unrelated to their correlation or regression $R^2$.
| null |
CC BY-SA 4.0
| null |
2023-04-27T15:09:32.820
|
2023-04-27T15:09:32.820
| null | null |
247274
| null |
614299
|
1
| null | null |
0
|
16
|
Would anyone be able to help prove this: $\dfrac{\mathbb E[|X-\mu|]}{\sqrt{\mathbb E[(X-\mu)^2]}}=\sqrt{\frac2\pi}$ for a normal distribution? Is it related to the mean of a half-normal distribution?
|
proof of a relationship related to half-normal distribution
|
CC BY-SA 4.0
|
0
|
2023-04-27T15:10:33.563
|
2023-04-27T15:10:33.563
| null | null |
386655
|
[
"normal-distribution",
"variance",
"mean",
"standard-deviation",
"arithmetic"
] |
614300
|
2
| null |
531375
|
1
| null |
You seem to be getting it. PCA and UMAP give functions that can be applied to new data. In that sense, you can learn a PCA or UMAP transformation for several folds and then apply that transformation to new data.
Unfortunately, t-SNE does not work this way; t-SNE works on all data at once and does not produce a function to apply to new data. Thus, t-SNE is not a viable option for cross-validation where you apply a learned function to new data.
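For example, with PCA in R (using `iris` purely as a stand-in data set and an arbitrary split):
```
X <- as.matrix(iris[, 1:4])

train <- X[1:100, ]    # pretend this is the training fold
test  <- X[101:150, ]  # and this the held-out fold

pca <- prcomp(train, center = TRUE, scale. = TRUE)  # learned on the training fold only
train_scores <- pca$x
test_scores  <- predict(pca, newdata = test)        # same transformation applied to new data
```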
| null |
CC BY-SA 4.0
| null |
2023-04-27T15:25:27.747
|
2023-04-27T15:25:27.747
| null | null |
247274
| null |
614301
|
1
| null | null |
0
|
13
|
I have a dataset consisting of subjects with a number of measurements associated with each of them. Each of a subject's measurements has a time series associated with it, and all subjects have the same set of measurements.
I have been told that the nominal value of the measurements can be grouped into categories (I am expecting two to four groups per measurement) based on another discrete ordinal variable $a \in A$, where $A = \{a \in \mathbb{Z} : a_{\min} \le a \le a_{\max}\}$.
I would like to create categories of similar groups for each measurement (i.e. the categories can be different for different measurements) based on the following constraints:
- Within a group of subjects, the means for a given measurement of the associated time series should be statistically similar between the subjects within the group.
- Between groups the means of the time series for a given measurement should be statistically different.
- Categories should retain the ordinal relationship that was there before i.e. category 1 might contain values 1,2 and 3, while category 2 might contain 4 and 5 etc., however, category 1 with values 1 and 3, and category 2 with values 2, 4 and 5 should not be allowed.
- The minimum number of groups based on the 3 constraints given above.
Does anyone know if there is a specific method to do this? I wondered about using clustering; however, I don't know a method of doing this while maintaining the ordinal nature of the data.
Any help would be greatly appreciated!
|
Merging an ordinal variable into categories based on the mean value of unknown groups
|
CC BY-SA 4.0
| null |
2023-04-27T15:46:29.500
|
2023-04-27T15:46:29.500
| null | null |
322125
|
[
"hypothesis-testing",
"statistical-significance",
"clustering",
"categorical-data",
"ordinal-data"
] |
614302
|
2
| null |
614265
|
7
| null |
With a small sample size, there are legitimate concerns.
- What kind of power do you have to reject a false null hypothesis?
- If your data lack normality, do you have enough data for the t-test to be robust to the deviation from the assumed normality?
The latter feeds into the former, as deviations from normality tend to affect t-test power rather than t-test size. That is, such deviations from the assumed normality make it more difficult to reject false null hypotheses, rather than making it easier to reject true null hypotheses.
However, having a small sample size does not make it so your test statistic lacks the claimed distribution. The small sample size is accounted for by using a low number of degrees of freedom. What could be concerning is that, when the sample size is low, the true distribution of the t-stat might differ to a meaningful extent from the claimed distribution, meaning that your p-values are not really telling you what they are supposed to tell you, since they are calculated from incorrect distributions.
Since you only have one test, no p-value adjustment is needed to account for multiple tests.
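If you want to gauge these concerns for your own setting, a quick simulation can estimate both the empirical size and the empirical power, e.g. for a two-sample t-test under a candidate skewed data-generating distribution (everything below is made up for illustration):
```
set.seed(1)
n <- 10       # per-group sample size
B <- 10000    # number of simulated experiments

# Empirical size: both groups drawn from the same skewed distribution
p_null <- replicate(B, t.test(rlnorm(n), rlnorm(n))$p.value)
mean(p_null < 0.05)

# Empirical power: shift one group's distribution
p_alt <- replicate(B, t.test(rlnorm(n), rlnorm(n, meanlog = 1))$p.value)
mean(p_alt < 0.05)
```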
| null |
CC BY-SA 4.0
| null |
2023-04-27T16:06:49.370
|
2023-04-27T16:06:49.370
| null | null |
247274
| null |