Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
613556
|
2
| null |
613555
|
1
| null |
See [this](https://stats.stackexchange.com/a/371145/214015) CV answer.
For `N=300`, `n=10`, and `x=1`, a `BetaBin(N,0.5,0.5)` prior would result in a Beta-Binomial posterior distribution on `M-x` (the number of bad widgets in the remaining 290):
`BetaBin(300 - 10, 1 + 0.5, 10 - 1 + 0.5)`
As a function:
```
pBBpost <- function(N, n, x, a = 0.5, b = 0.5) {
  # Posterior CDF of the bad-widget count among the N - n untested widgets
  N_n <- N - n
  M_x <- 0:N_n
  log_pmf <- lgamma(N_n + 1) + lgamma(a + b + n) - lgamma(N_n + a + b + n) -
    lgamma(a + x) - lgamma(b + n - x) + lgamma(M_x + a + x) +
    lgamma(N - M_x + b - x) - lgamma(M_x + 1) - lgamma(N_n - M_x + 1)
  cumsum(exp(log_pmf))
}
N <- 300L
n <- 10L
x <- 1L
plot(0:(N - n), pBBpost(N, n, x), xlab = "Bad widgets remaining", ylab = "CDF")
```
[](https://i.stack.imgur.com/wmvo4.png)
| null |
CC BY-SA 4.0
| null |
2023-04-20T10:44:27.877
|
2023-04-20T12:29:43.843
| null | null |
214015
| null |
613557
|
1
| null | null |
0
|
24
|
I have been using logistic regression to measure the association between two variables. Let us say, we want to test the association between A and B. A is the outcome and B is the exposure. The association between A and B gives me an OR = 5, and it is significant when looking at the p-value.
```
A ~ B
```
Then I want to adjust for C. The association between A and B is still significant, but the OR for that association changed to OR = 4.7.
```
A ~ B + C
```
Later I adjusted for D, and the OR for A and B's association is now changed to OR = 5.3, and it is still significant.
```
A ~ B + C + D
```
My question is: what do these small changes mean, the OR decreasing and increasing after adjusting? How should this be interpreted? Or does it perhaps mean nothing?
|
Odds ratio changes after adjusting, what does that mean?
|
CC BY-SA 4.0
| null |
2023-04-20T12:40:19.137
|
2023-04-20T12:40:19.137
| null | null |
382622
|
[
"logistic",
"odds-ratio"
] |
613558
|
1
| null | null |
2
|
26
|
I have a piece of code that I'm adding a feature onto, and I need to test performance before and after the feature addition. So I used a test instance with a large amount of data and triggered the endpoint with the feature flag on and with it off. I now have a data set with the elapsed times as follows (the same endpoint was called 100 times in parallel for each flag value):
|Flag off |Flag on |
|--------|-------|
|32412 |24568 |
|28550 |22996 |
|26574 |25395 |
|35558 |21346 |
|...... |..... |
What is the best way to show a comparison between the elapsed times? The purpose is to show there was no regression with the flag on.
If there is a better way to conduct the experiment, what would it be?
|
What's the best way to visualize load testing results
|
CC BY-SA 4.0
| null |
2023-04-20T12:54:24.880
|
2023-05-07T09:29:34.117
| null | null |
160975
|
[
"data-visualization",
"dataset",
"experiment-design"
] |
613560
|
1
| null | null |
0
|
23
|
I am working on multiple regression in order to build a marketing mix model. However, I have some concerns about the procedure. First, the idea of transforming the data in order to incorporate the carryover effect: the intuition behind the carryover effect is clear, but from what I have understood, the way to implement it is not. I explain this now. Consider the model
$$
y_t = \beta_{0} + \sum_{i=1}^{n}\beta_ix_{i,t} + \epsilon_{t}
$$
where $y_t$ is the sales volume of a company and the $x_{i}$'s some explanatory variables that correspond to channel media, price, distribution, etc..
The idea is to consider a variable $x_{i,t}$ that has been deployed at time $t$ with a carryover effect, that is, the variable has a kind of persistence for customers after time $t$ (one can think of a TV campaign, for example), and to include this persistence by applying a well-chosen transformation such that the effect of the variable $x_{i}$ also appears in the following periods ($t+1$, $t+2$, ...). My concern is the following: by doing that, don't we lose the equality in our model, since we have added values for later periods that are not present in our initial data, while $y_t$ has not changed?
So I would like to know where my reasoning goes wrong, in order to fully understand this concept.
Thank you very much!
|
Question about marketing mix modeling (MMM)
|
CC BY-SA 4.0
| null |
2023-04-20T13:09:15.327
|
2023-05-08T00:45:37.407
| null | null |
375362
|
[
"regression",
"mixed-model",
"multiple-regression",
"modeling",
"marketing"
] |
613564
|
2
| null |
613447
|
0
| null |
Limiting the stepwise search duration with pmdarima's StepwiseContext solved the memory error.
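In case it helps future readers, here is a minimal sketch of that approach (Python; the series `y` and the `max_dur`/`max_steps` limits are placeholders, not values from the original problem):
```
import pmdarima as pm
from pmdarima.arima import StepwiseContext

# Cap the stepwise search so auto_arima stops after roughly 60 seconds
# or 15 candidate models, whichever comes first.
with StepwiseContext(max_dur=60, max_steps=15):
    model = pm.auto_arima(y, stepwise=True, seasonal=True, m=12,
                          suppress_warnings=True)

print(model.summary())
```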
| null |
CC BY-SA 4.0
| null |
2023-04-20T13:51:01.980
|
2023-04-20T13:51:01.980
| null | null |
385599
| null |
613565
|
1
| null | null |
0
|
21
|
I am currently trying to estimate stock levels in a Monte Carlo simulation. I have integer sample data for products A, B, and C, each with a sample size of 100. When trying to fit a type of distribution, what should the data type of the sample data be: continuous or discrete?
Edit:
I am trying to model the demand pattern of 75 SKUs and estimate what the inventory levels at the production line should be. The SKUs are split into five categories: A, B, C, D, E. For categories C, D, and E the distribution types are exponential for C (beta = 3) and Poisson for D and E (lambda = 1 for D and lambda = 0.5 for E). For A and B, I have 5 separate data sets of sample demand data for each SKU. I need to decide what distribution they follow, and for that I need to decide whether the sample datasets are continuous or discrete before choosing the fit. The sample data sets are historic demands of each product over a period of 100 days. I hope this gives more clarity.
|
Continuous or Discrete Sample Data?
|
CC BY-SA 4.0
| null |
2023-04-20T13:55:10.553
|
2023-04-20T14:50:17.260
|
2023-04-20T14:50:17.260
|
386154
|
386154
|
[
"probability",
"distributions"
] |
613566
|
1
|
613590
| null |
0
|
26
|
This is from an exercise found [here](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf) on page 60. My question is: what is meant by $\sum\limits_{i_1=1}^D\sum\limits_{i_2=1}^{i_1}...\sum\limits_{i_M=1}^{i_{M-1}}w_{i_1i_2...i_M}x_{i_1}x_{i_2}...x_{i_M}$? For instance, for $M=3$ with $D=4$ we get $$\sum\limits_{i_1=1}^4\sum\limits_{i_2=1}^{i_1}\sum\limits_{i_3=1}^{i_2}w_{i_1i_2i_3}x_{i_1}x_{i_2}x_{i_3},$$ but what are the first summations over? The first one for example can't be summed over just $i_3=1$ and $i_3=i_2$ because that wouldn't make sense, just as summing over $i_3=1,i_3=2,i_3=3,i_3=i_2$ wouldn't make sense either (in both these cases we get redundancies and the wrong number of terms, since the original redundant formula (1.133) gives $D^M=64$ terms while the new formula given above is supposed to give only 10). Eg $i_3$ summed over $1$ and $i_2$ would give $$
\sum\limits_{i_1=1}^4\sum\limits_{i_2=1}^{i_1}\sum\limits_{i_3=1}^{i_2}w_{i_1i_2i_3}x_{i_1}x_{i_2}x_{i_3}\\
\sum\limits_{i_1=1}^4\sum\limits_{i_2=1}^{i_1}w_{i_1i_21}x_{i_1}x_{i_2}x_1+w_{i_1i_2i_2}x_{i_1}x_{i_2}x_{i_2}\\
\sum\limits_{i_1=1}^4w_{i_111}x_{i_1}x_1x_1+w_{i_111}x_{i_1}x_1x_1+w_{i_1i_11}x_{i_1}x_{i_1}x_1+w_{i_1i_1i_1}x_{i_1}x_{i_1}x_{i_1}\\
$$
which is completely wrong; similarly when summing over $1,2,3,i_{M-1}$.
|
How to rewrite a multivariate polynomial term without redundancies?
|
CC BY-SA 4.0
| null |
2023-04-20T13:55:44.103
|
2023-04-20T16:34:42.347
|
2023-04-20T14:23:48.020
|
919
|
386158
|
[
"notation",
"sum"
] |
613568
|
1
| null | null |
1
|
25
|
I have successfully trained a ResNet50V2 model. I used transfer learning from ImageNet weights, and launched inference on the unseen data. I obtain a high accuracy (>80%), but when I print the probability vector for each label, I obtain strangely high values. The sum of the elements is one, sure, but it is as if the CNN is almost certain whenever it makes a prediction. For the predicted label I recover a probability of 1, and this trend is quite common across all 200 predictions.
In the last dense layer I use the softmax activation function. Is there an error?
|
Strange high values in the probability vector in Image Classification
|
CC BY-SA 4.0
| null |
2023-04-20T14:06:57.797
|
2023-04-20T14:06:57.797
| null | null |
379875
|
[
"classification",
"softmax",
"residual-networks"
] |
613569
|
1
|
613587
| null |
3
|
114
|
I am beginning a project that will employ regression/covariate adjustment to estimate the average effect of treatment on the treated (ATT) and I realize that I have two questions concerning how one estimates the ATT in such a setting and how one interprets the regression output when using GLMs.
First, I am slightly confused on the specification of a desired treatment effect under a regression adjustment framework. For example, in alternative strategies, such as matching or weighting, syntax for executing these methods typically supports an explicit argument where one specifies the desired treatment effect: `effect = ate`, `qoi = att`, something along these lines. However, in a standard regression formula in R `y ~ x1 + x2 + ..., data = data` I do not know how to effectively specify the argument for the treatment effect that I want to estimate. By default, does regression adjustment estimate the ATE? If so, how does one modify this?
Second, after reading papers by [Mood 2010](https://academic.oup.com/esr/article-abstract/26/1/67/540767), [Hanmer and Kalkan 2012](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-5907.2012.00602.x), and [Norton and Dowd 2018](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5867187/), it is apparent to me that, when modeling non-continuous outcomes, regression coefficients, odds ratios, IRRs, hazard ratios, etc. may prove problematic in interpretation. One solution is to estimate average marginal effects (AMEs). This leads me to my second question. Suppose that I estimate the ATT for a treatment on a count outcome. I then estimate the AME. Can I effectively interpret this AME as the ATT, or are they fundamentally different quantities of interest?
|
Estimating and Interpreting the ATT with Regression Adjustment and Marginal Effects
|
CC BY-SA 4.0
| null |
2023-04-20T14:16:06.063
|
2023-04-20T20:50:01.773
| null | null |
360805
|
[
"regression",
"causality",
"treatment-effect",
"marginal-effect"
] |
613570
|
2
| null |
144433
|
2
| null |
This is a really, really confusing use of the term "standard error". I teach Introductory Statistics at a college, and this is one of the most confusing details in R for students (along with R using the standard deviation and not the variance in its various `pnorm`, `qnorm`, etc. commands).
A [standard error](https://en.wikipedia.org/wiki/Standard_error), from a statistical sense, is defined as "a standard deviation of an estimator/statistic". It is a resampling concept: the standard error of the slope estimate, for example. If you were to resample your data, the estimate of the slope will vary, and this type of standard deviation we call a standard error.
But the standard deviation of the residuals is not a resampling concept – it is directly observable in the data. So what R reports as "residual standard error" really is an "estimated standard deviation of the residuals". It is like the difference between [$s$](https://en.wikipedia.org/wiki/Standard_deviation#Corrected_sample_standard_deviation) (an estimated standard deviation) and [$\sigma$](https://en.wikipedia.org/wiki/Standard_deviation#Definition_of_population_values) (a true/theoretical standard deviation), not the [difference between $\sigma$ (a true/theoretical standard deviation) and $\sigma/\sqrt{n}$ (a true/theoretical standard error)](https://en.wikipedia.org/wiki/Standard_error#Standard_error_of_mean_versus_standard_deviation).
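To make the distinction concrete, the number R reports for a linear model fit with $n$ observations and $p$ estimated coefficients is
$$
\hat\sigma \;=\; \sqrt{\frac{\sum_{i=1}^{n}\hat\varepsilon_i^{\,2}}{n-p}},
$$
i.e. an estimate of the standard deviation of the error term computed directly from the residuals $\hat\varepsilon_i$; it is not the standard deviation of any estimator.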
| null |
CC BY-SA 4.0
| null |
2023-04-20T14:40:03.440
|
2023-04-20T16:16:14.903
|
2023-04-20T16:16:14.903
|
44269
|
386164
| null |
613571
|
2
| null |
144433
|
2
| null |
As mentioned in a comment by [NRH](https://stats.stackexchange.com/users/4376/nrh) on one of the [other answers](https://stats.stackexchange.com/a/144444/247274), the documentation for `stats::sigma` says:
>
The misnomer “Residual standard error” has been part of too many R (and S) outputs to be easily changed there.
This tells me that the developers know this terminology to be bogus. However, since it has crept into the software, changing to correct terminology is difficult and not worth the trouble of doing so when experienced statisticians know what is meant.
| null |
CC BY-SA 4.0
| null |
2023-04-20T14:51:28.893
|
2023-04-20T19:27:52.027
|
2023-04-20T19:27:52.027
|
247274
|
247274
| null |
613572
|
1
| null | null |
0
|
16
|
I have developed the following KNN code and tested it against several datasets with >90% accuracy (the wdbc dataset from UCI for one, granted it was a categorical result). However, when using the same code with the Pima Indians diabetes data set I get very poor results. I am trying to impute the BMI from the adjacent columns. I have removed the 0 values of the BMI and separated the data set into 70% train/30% test, then set up the BMI as the labels, etc. My accuracy is 1% and I cannot understand why. I have presumably missed some step in cleaning the data correctly, but I cannot find where. Any help is appreciated.
Here is the code and a data sample is below that:
```
#new script to predict Pima Indians diabetes
setwd("C:/Users/jbrow/Desktop/kaggle/pima")
#load tidyverse
library(tidyverse)
#read in the csv file to a data frame
pima.full <- read.csv(file = "diabetes.csv", header = TRUE)
#we will use pima.work as the working data frame
#scale the data - necessary for the Euclidean distance to work properly
#create a function to normalize data from 0 - 1
#remove all the BMI equal to 0 and select the columns with correlation to BMI
work.df <- pima.full %>% filter(BMI != 0) %>% select(2,3,4,5,6)
#create normalization functions
data_norm <- function(x){((x - min(x))/(max(x)-min(x)))}
data_denorm <- function(x){x*(max(x)-min(x))+min(x)}
pima.norm <- as.data.frame(lapply(work.df[,-5],data_norm))
#split the data set into training and testing
set.seed(123)
#number of training rows of the total data
sizedf <- floor(0.7 * nrow(pima.norm))
#get the row numbers
train_ind <- sample(seq_len(nrow(pima.norm)),size = sizedf)
#training data from four relevant normalized columns
data_train <- pima.norm[train_ind,]
#testing data from same normalized columns
data_test <- pima.norm[-train_ind,]
#training labels - column 6 - BMI
train_labels <- work.df[train_ind,5]
#testing labels
test_labels <- work.df[-train_ind,5]
#model run
#install.packages('class')
library(class)
knn.BMI <- knn(train = data_train,
test = data_test,
cl=train_labels,
k=round(sqrt(nrow(data_train))))
# plotting results
con_mat = as.matrix(table(Actual = test_labels, Predicted =
knn.BMI))
knn_acc = sum(diag(con_mat))/length(train_labels)
```
I have tried the same code logic against other datasets with good results, so I believe the issue lies with preparing the data. I'm kind of at a loss now.
Apologies that I cannot get the tables to print legibly, but the data sets are from the Pima Indians set on Kaggle.
columns from work.df: Glucose BloodPressure SkinThickness Insulin BMI
columns (normalized) from pima.norm: Glucose BloodPressure SkinThickness Insulin
|
My KNN result not as expected for all numerical data
|
CC BY-SA 4.0
| null |
2023-04-20T14:54:49.853
|
2023-04-20T14:54:49.853
| null | null |
386166
|
[
"r",
"data-imputation",
"k-nearest-neighbour"
] |
613573
|
2
| null |
613336
|
0
| null |
As pointed out by @Sal Mangiafico, some helpful documentation on numpy.percentile is provided [here](https://numpy.org/doc/stable/reference/generated/numpy.percentile.html), which in turn draws on the useful paper:
>
R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,”
The American Statistician, 50(4), pp. 361-365, 1996
As explained in the paper, various approaches to the problem are possible. To illustrate a few in the context of this example:
- The inverted CDF method produces an estimate of $1$, which matches up with how I would have naively approached the problem.
- The linear method (the default) views our 10 draws as dividing the space into $9$ intervals, each with width $1/9 = 0.\dot{1}$. Since we want the 5th percentile (0.05), we want the first interval, which ranges from $1$ to $2$. How far along the interval should we be? Using linear interpolation, it's $1 + 0.05/0.\dot{1} = 1.45$.
- Hyndman and Fan (1996) recommend the median unbiased method when the sample distribution function is unknown. In this example, this method also yields an estimate of 1.
I believe that all methods yield identical estimates as the sample size goes to infinity.
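To see the three methods side by side in code (a small sketch; NumPy ≥ 1.22 is assumed for the `method` argument, and `x` is just ten illustrative draws whose two smallest values are 1 and 2):
```
import numpy as np

# ten illustrative draws; only the two smallest values (1 and 2) matter for the 5th percentile
x = np.arange(1, 11)

for method in ["inverted_cdf", "linear", "median_unbiased"]:
    print(method, np.percentile(x, 5, method=method))
# inverted_cdf and median_unbiased both give 1, while linear gives 1.45
```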
| null |
CC BY-SA 4.0
| null |
2023-04-20T14:55:51.140
|
2023-04-20T14:55:51.140
| null | null |
290040
| null |
613574
|
1
| null | null |
0
|
25
|
How can I calculate $R^2$ for multilevel longitudinal analysis without Level-1 variance in `R`? One predictor is a factor (Treatment: A, B, C).
|
Calculate $R^2$ longitudinal multilevel analysis
|
CC BY-SA 4.0
| null |
2023-04-20T15:01:27.540
|
2023-04-20T16:14:18.250
|
2023-04-20T16:14:18.250
|
44269
|
386167
|
[
"regression",
"panel-data",
"multilevel-analysis",
"r-squared"
] |
613575
|
1
| null | null |
3
|
90
|
I am trying to plot data with a large number of points. The goal is to see the basic distribution - location, dispersion, shape - of the observations.
With a simple scatterplot, even with low `alpha`, the result is too visually dense:
[](https://i.stack.imgur.com/KcDUB.png)
Following [this answer](https://stackoverflow.com/questions/7714677/scatterplot-with-too-many-points) and suggestions [here](https://r-graphics.org/recipe-scatter-overplot), I think switching to `hexbin` will work well.
However, I also have a version of the same plot with points coloured by groups, e.g.
[](https://i.stack.imgur.com/0jW9a.png)
The goal here is to highlight how the distributions of the data differ by group (e.g. less dispersion, shifted weight of distribution, etc.), and potentially by other conditions.
In this case, hexbins or rectangular bins won't solve the problem. What could I do instead?
(In the example given there are two groups; sometimes I need more than two, so a more generalisable answer would be helpful too)
|
Visualising scatterplot with too many points and two or more groups
|
CC BY-SA 4.0
| null |
2023-04-20T15:05:16.750
|
2023-04-24T20:50:47.510
|
2023-04-20T15:24:01.587
|
361155
|
361155
|
[
"r",
"data-visualization",
"scatterplot",
"ggplot2"
] |
613576
|
1
| null | null |
0
|
17
|
Consider a model such that the log-likelihood function of an $n$-dimensional parameter $\theta$ is given by (or can be approximated by)
$$\tag{1}
L(\theta)=L(\widehat \theta) -\frac12 (\theta - \widehat \theta)^{\top}I(\widehat \theta)(\theta - \widehat \theta),
$$
where $I(\theta)$ is the observed Fisher information.
The marginal likelihood $L_i (\theta_i)$ for the $i$-th parameter reads, in this case
$$\tag{2}
L_i(\theta_i)=L(\widehat\theta)-\frac{(\theta_i-\widehat\theta_i)^2}{2\left(I(\widehat \theta)^{-1}\right)_{ii}}.
$$
The profile likelihood for the $i$-th component $\theta_i$ of $\theta$ reads
$$
\mathrm{PL}(\theta_i)=\max_{\theta_{-i}}L(\theta_i,\theta_{-i}),
$$
where $\theta_{-i}$ denotes the vector with all components of $\theta$ except the $i$-th.
In case the likelihood can be written as (1), does the PL coincide with (2), or is it at least close to it?
|
Marginal likelihood and profile likelihood: Gaussian case
|
CC BY-SA 4.0
| null |
2023-04-20T15:10:07.350
|
2023-04-20T15:10:07.350
| null | null |
376700
|
[
"confidence-interval",
"maximum-likelihood",
"marginal-distribution",
"profile-likelihood"
] |
613577
|
2
| null |
184495
|
1
| null |
Comparing to such a simple model, one whose accuracy comes entirely from the class ratio, is exactly the point: it gives you a baseline. You have heavy imbalance in your problem, so you can achieve high accuracy by predicting the majority category every time. You should check if you can do better than that. Comparing to this kind of baseline is [analogous to $R^2$ in regression](https://stats.stackexchange.com/a/605819/247274) and makes a lot of intuitive sense. Your job is to make a classifier that is accurate (it probably isn't, but that's a [separate issue](https://stats.stackexchange.com/a/312787/247274)). For your model to be worthwhile, you must be able to outperform some jerk who guesses the majority category every time.
You probably do not have to use any kind of artificial balancing in your pipeline, least of all downsampling that discards precious data. You can get a model that achieves $99.5\%$ accuracy and put that in context. If your baseline model predicts the majority category every time and can achieve accuracy of $99.9\%$, then you actually have an enormous increase in the error rate: you have increased the error rate by $400\%!$ Below, I show this in software that implements what I sometimes call $R^2_{\text{accuracy}}$, for reasons I explain [here](https://stats.stackexchange.com/a/605451/247274).
```
r2acc <- function(e1, e0){
# e1: error rate of your model
# e0: error rate of the baseline model
return(
1 - (e1/e0)
)
}
r2acc(0.005, 0.001) # 0.005 corresponds to 99.5% accuracy, 0.001 to 99.9% accuracy
```
$$
R^2_{\text{accuracy}} = 1 - \dfrac{
\text{Error rate of the model under consideration}
}{
\text{Error rate of a model that naïvely predicts the majority class every time}
}
$$
While there are still issues with using hard classifications instead of using the predicted probabilities, at least this $R^2_{\text{accuracy}}$ statistic picks up that a model with an impressive-looking $99.5\%$ accuracy is quite poor when the imbalance means that $99.9\%$ of the cases belong to one category so that a model could achieve $99.9\%$ accuracy by being a jerk and guessing that majority category every time.
| null |
CC BY-SA 4.0
| null |
2023-04-20T15:14:18.457
|
2023-04-20T15:14:18.457
| null | null |
247274
| null |
613578
|
1
| null | null |
1
|
36
|
I am looking to compare the efficacy of five different types of traps (b, s, v, w, y) for catching insects. Traps were set at four different locations (hh, mc, mf, sf), each subdivided into four separate plots (1, 2, 3, 4). Each plot was sampled with all five traps for the same amount of time.
I am thinking of going with parametric tests [i.e., ANOVA (catch ~ trap, catch ~ location, etc.)] for the analysis, but would perhaps need to transform the data first (assumptions for ANOVA not met). What would be your recommendation for transforming such data for use in a parametric test (considering the zeros are real zeros and I would like to keep them)?
Or would going with a non-parametric test (`kruskal.test`) perhaps be a better option?
Example data set:
```
location <- c("hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "hh", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mc", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "mf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf", "sf")
plot <- c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4)
trap <- c("b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y", "b", "s", "v", "w", "y")
catch <- c(4, 13, 19, 1, 4, 2, 19, 20, 2, 1, 3, 13, 20, 4, 3, 2, 5, 26, 4, 1, 7, 23, 46, 3, 10, 1, 12, 11, 1, 2, 2, 6, 6, 1, 0, 5, 14, 32, 4, 9, 1, 1, 12, 0, 0, 0, 8, 13, 1, 2, 0, 14, 12, 1, 1, 3, 7, 20, 3, 6, 1, 9, 7, 1, 6, 2, 1, 14, 5, 3, 0, 6, 15, 0, 1, 0, 0, 9, 2, 0)
df <- data.frame(location, plot, trap, catch)
df$location <- as.character(df$location)
df$trap <- as.character(df$trap)
print(df)
#fit anova
oneway_total_trap <- aov(catch ~ trap, data = df)
summary(oneway_total_trap)
#test anova residuals
par(mfrow=c(2,2))
plot(oneway_total_trap)
par(mfrow=c(1,1))
```
Thank you for the comments. I understand that I should run the normality/homoscedasticity checks after I run the model (ANOVA in this case); correct me if I am wrong. Looking at the plots, it seems as though the residuals of the ANOVA are not homoscedastic. How would I go forward now to properly test the residuals for normality (Shapiro-Wilk?)? Also, does this mean that I should indeed transform my data prior to running the model? I am an absolute beginner and I appreciate any suggestions on how to move forward from here.
|
Transforming data for parametric testing
|
CC BY-SA 4.0
| null |
2023-04-20T15:20:36.280
|
2023-04-25T13:18:27.113
|
2023-04-25T13:18:27.113
|
386095
|
386095
|
[
"normal-distribution",
"anova",
"data-transformation"
] |
613579
|
1
| null | null |
0
|
7
|
I have an object detector and now I have to decide which confidence threshold to use for each class. How can I determine what is the best confidence threshold for each class? Once decided, how can I evaluate this model at the given confidence threshold?
Everybody seems to just be using variations of AP (or mAP), which, however, evaluates the model across different confidence thresholds.
|
How can I evaluate the performance of a object detector at a fixed confidence threshold?
|
CC BY-SA 4.0
| null |
2023-04-20T15:21:20.100
|
2023-04-20T15:21:20.100
| null | null |
186016
|
[
"neural-networks",
"computer-vision",
"metric",
"object-detection"
] |
613580
|
2
| null |
291633
|
0
| null |
If you know the prior, then the mean of the posterior distribution of $p$ is an unbiased estimator. In your example with a uniform prior, this means that you add 1 tails and 1 heads to the estimates.
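Spelling that out: with $x$ heads observed in $m$ flips and a uniform $\text{Beta}(1,1)$ prior, the posterior is
$$
p \mid x \;\sim\; \text{Beta}(x+1,\; m-x+1),
\qquad
\mathbb E[p \mid x] \;=\; \frac{x+1}{m+2},
$$
which is exactly the "add one heads and one tails" adjustment.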
That is, if we use the following line in your code
```
err = sum((heads[top]+1)/(m+2)) - sum(probs[top])
```
, then the histogram will center more around zero
[](https://i.stack.imgur.com/LdSOy.png)
(here I have adjusted the breaks and I am using `set.seed(1)`)
---
In this example the unbiased result depends on the prior information, which is known for this example. In practice, the prior might be uncertain, but we could get a reasonable estimate by using the data from all the coins together (for example by fitting a beta-binomial distribution to the data from the coins).
| null |
CC BY-SA 4.0
| null |
2023-04-20T15:21:38.447
|
2023-04-20T15:21:38.447
| null | null |
164061
| null |
613581
|
1
| null | null |
0
|
17
|
According to A.F. Zuur et al. (2009) in 'Mixed Effects Models and Extensions in Ecology with R', p. 239ff and this post: [When to use an offset in a Poisson regression?](https://stats.stackexchange.com/questions/11182/when-to-use-an-offset-in-a-poisson-regression), `offset()` can be used in Ecology when samples vary in volume (or area, depth range etc.) and are thus a (better) alternative to using the density as a response. The coefficient in front of it is then forced to be 1.
Can I also use `offset()` to weight counted insect stages based on their damaging effect (in my case leaf consumption)?
Since counts with a higher value in the offset are normally weighted lower (the same count in a higher volume means a lower density), I thought of using the reciprocal of the feeding rate in the offset (the same count with a higher feeding rate is somehow similar to a higher density). So I have the counts of 5 stages of an insect (Imago, L1, L2, L3, L4) on the same plant. The problem I see is that these stages are correlated. I tried to account for the correlation by using plant as a random effect.
I simulated some data inspired by data I observed in a field experiment and ran a GLM:
```
library(simstudy) #package to model correlated Poisson distributions
l_t <- c(0.1, 1.5, 1.0, 1.5, 3.5) #lambdas for Imagines, and larval stages L1-L4 for treated plants
l_ut <- c(0.2, 3.0, 2.0, 0.5, 0.2) # for untreated plants (more L1 & L2 but less L3 & L4)
r <- matrix(c(1.00, 0.07, 0.18,0.12, 0.00,
0.07, 1.00, 0.12,0.06,-0.08,
0.18, 0.12, 1.00,0.43,-0.04,
0.12, 0.06, 0.43,1.00, 0.16,
0.00,-0.08,-0.04,0.16, 1.00),
ncol = 5) # correlation matrix for Imagines and larvae
set.seed(654321)
dx_t <- genCorGen(50, nvars = 5, params1 = l_t,
dist = "poisson", rho = r,
corstr = "cs", wide = TRUE) # count data of treated plants
set.seed(654321)
dx_ut <- genCorGen(50, nvars = 5, params1 = l_ut,
dist = "poisson", rho = r,
corstr = "cs", wide = TRUE) # count data of untreated plants
x_t <- dx_t[,-1] # delete ID column
x_ut <- dx_ut[,-1]
library(tidyverse)
x <- bind_rows(x_t,x_ut)
names(x) <- c("I","L1","L2","L3","L4")
plant <-seq(1,100) # plant IDs
treatment <- c(rep("treated",50),rep("untreated",50)) # two different treatments
df <- data.frame(x,plant,treatment)
head(df)
df2 <- df %>%
gather("stage","count",1:5) # each insect stage becomes a row
df_c <- data.frame (stage = c("I", "L1", "L2", "L3", "L4"),
consumption = c(1/20.8, 1/1, 1/2.1, 1/8.4, 1/17.1)) # Consumption rates of the individual stages
df3 <- left_join(df2, df_c, by = "stage") # Assignment of consumption to the stages
head(df3)
library(glmmTMB)
offset_model <- glmmTMB(count ~ treatment + (1 | plant) + offset(log(consumption)),
family = "poisson",
data = df3) # model with plant as random factor and consumption in offset
summary(offset_model)
library(DHARMa)
simulateResiduals(offset_model, plot = T)
```
Unfortunately, the DHARMa residual plots look terrible. So I have the feeling that this might not be the way to go.
Reading this post ([In a Poisson model, what is the difference between using time as a covariate or an offset?](https://stats.stackexchange.com/questions/175349/in-a-poisson-model-what-is-the-difference-between-using-time-as-a-covariate-or)) and this one ([Can Weights and Offset lead to similar results in poisson regression?](https://stats.stackexchange.com/questions/297859/can-weights-and-offset-lead-to-similar-results-in-poisson-regression)) makes me think that using consumption as a covariate or specifying it as weights is not the right way to go either.
Anyone got a better idea?
|
(Ab)Use of offset() in Poisson GLM to weight the damaging effect of counted insect stages
|
CC BY-SA 4.0
| null |
2023-04-20T15:22:43.577
|
2023-04-20T15:22:43.577
| null | null |
383278
|
[
"correlation",
"generalized-linear-model",
"poisson-distribution",
"offset",
"glmmtmb"
] |
613582
|
1
| null | null |
0
|
45
|
Trying to understand GNNs better, I copied the code from this [blog post](https://medium.com/@pytorch_geometric/link-prediction-on-heterogeneous-graphs-with-pyg-6d5c29677c70) from PyG documentation. I copied and pasted the code without modifications, which works as described in the post. It is a link prediction on the MovieLens dataset.
After the training loop with a reasonable accuracy score, I want to use the trained Model to predict new links in an unseen dataset.
To do it, I re-run the previous code with the extra following line:
```
torch.save(model.state_dict(), "my_path.pth")
```
My second step then is to create a new script to use the trained model to predict new links:
The script is something like copying the first 40 lines of the previous script (to load movies and users again) and trying to predict the links:
```
checkpoint = torch.load("my_path.pth", map_location=torch.device('cpu'))
model = Model(hidden_channels=64)
model = model.to(device)
model.load_state_dict(checkpoint)
batch_size = 32 # Set the batch size for the DataLoader
shuffle = False # Do not shuffle the data during iteration
num_workers = 0 # Set the number of worker threads for loading the data (0 for no parallelization)
new_dataloader = LinkNeighborLoader(
data,
# Sample 30 neighbors for each node for 2 iterations
num_neighbors=[20, 10],
# Use a batch size of 128 for sampling training nodes
batch_size=batch_size,
edge_label_index=(("user", "rates", "movie"), (data["user", "rates", "movie"].edge_index)),
)
model.eval()
preds, probs, pred_labels = [], [], []
for _, batch in enumerate(new_dataloader):
batch.to(device)
pred = model(batch)
preds.append(pred)
prob = torch.sigmoid(pred)
probs.append(prob)
pred_labels.append((prob > 0.9).long())
print(len(pred_labels))
print(len(probs))
```
Instead of creating a new synthetic dataset, I tried using the data I used for training to check if the predictions would work properly. I purposely deleted some information about users and movies to have a different dataset size. However, it raises the error `size mismatch for movie_emb.weight: copying a param with shape torch.Size([9742, 64]) from checkpoint, the shape in the current model is torch.Size([9216, 64]).`
It makes sense because I used the number of nodes of my graph in the model definition (e.g., line `self.user_emb = torch.nn.Embedding(data["user"].num_nodes, hidden_channels)`), but now I'm wondering if it's mandatory always to use the same number of nodes for trained GNNs.
- Is this assumption correct?
- If so, is the standard pipeline for Link predictions always performing training with the knowledge links and then predicting the new ones?
|
What is the correct pipeline to work with Link prediction using GNNs
|
CC BY-SA 4.0
| null |
2023-04-20T15:36:51.213
|
2023-04-23T09:23:48.387
| null | null |
103715
|
[
"python",
"graph-neural-network"
] |
613583
|
2
| null |
563901
|
0
| null |
>
Does there exist any non-degenerate probability distribution function $F$ such that if $X_1,X_2,\dots \overset{\text{iid}}{\sim} F$, then there do not exist any sequences $(a_n) \subset \mathbb R_{>0}$, $(b_n) \subset \mathbb R$ such that
$$
\frac{\max\{ X_1, \dots, X_n\} - b_n}{a_n}
$$
converges in distribution to a non-degenerate distribution?
Any discrete distribution whose maximum value in the domain has non-zero probability is an example of a distribution $F$ that is not degenerate but for which $\max\{ X_1, \dots, X_n\}$ converges to the maximum value of the domain and hence becomes degenerate. Therefore we cannot find $a_n$ and $b_n$ such that there is convergence to a non-degenerate distribution.
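To spell out that first example: if $x^{*}$ is the largest point of the support and $q = P(X_1 = x^{*}) > 0$, then
$$
P\big(\max\{X_1,\dots,X_n\} < x^{*}\big) = (1-q)^n \;\longrightarrow\; 0,
$$
so the sample maximum equals $x^{*}$ with probability tending to one, which is the degenerate limit described above.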
Another example, for continuous distributions, that does not converge is the class of distributions with super-heavy tails as described here: [How do we call a more extreme case of fat tails than a power law?](https://stats.stackexchange.com/questions/469236/) (they are distributions for which any order statistic of a sample will have an infinite expected value).
(Edit note: I got to the idea of super-heavy tails by thinking of a necessary condition on the tail behaviour. In my edits you can see a line of thought about it, but it is incorrect and I have to refine it, so I deleted it.)
| null |
CC BY-SA 4.0
| null |
2023-04-20T15:37:57.217
|
2023-04-20T19:07:32.927
|
2023-04-20T19:07:32.927
|
164061
|
164061
| null |
613584
|
1
| null | null |
1
|
11
|
I have a question regarding the significance testing of an adjusted odds ratio vs. the significance testing of the adjusted difference between two proportions using a z-score. This is following the running of a logistic regression. Anecdotally the p-value obtained from testing the odds ratio is smaller, but I don't know if this is a strict rule or changes at some threshold of N. Is there a proof comparing the two, or a quick summary of why they are strictly different/not necessarily strictly different? Thanks in advance.
|
Does testing the statistical significance of an odds ratio always produce a smaller p-value than a z-score obtained from the test of two proportions?
|
CC BY-SA 4.0
| null |
2023-04-20T15:41:10.863
|
2023-04-20T15:41:10.863
| null | null |
386170
|
[
"statistical-significance",
"p-value",
"proportion",
"z-score",
"odds"
] |
613585
|
1
| null | null |
0
|
15
|
I'm currently reproducing a method on my data set. In the literature, training and test accuracies are generally high, mostly between 90% and 99%. I get a training accuracy of 100% and a test accuracy of 100%, too. I used SVM and k-NN with the same results. However, I believe that something is not right ;)
The type of data is illustrated in the picture below. It's a time series consisting of basically rectangular non-stochastic signals that vary in amplitude depending on the class.
[](https://i.stack.imgur.com/txY5M.png)
In the literature and the method I want to reproduce, features are not extracted from this signal but the whole time series of a sample of a class is used as an input for training and classification. Due to the nature of the data and how it is acquired, the test set is usually a subset of the training data.
Well, one reason the model's accuracy is so high is probably that I used a subset of the training data for testing. However, when I use the same data for a multiclass problem, the accuracy drops significantly. On the other hand, I also wonder if it could be because the input is a complete time series. What possible explanations are there for the fact that the classification rate in the binary task is significantly higher than in the multiclass problem, even though I am already overfitting with this method?
|
100% training and test accuracy in binary classification task
|
CC BY-SA 4.0
| null |
2023-04-20T15:43:52.163
|
2023-04-20T15:43:52.163
| null | null |
253634
|
[
"time-series",
"overfitting"
] |
613586
|
2
| null |
120329
|
0
| null |
This answer applies to scikit-learn in Python.
Both logit from statsmodels and LogisticRegression from scikit-learn can be used to fit logistic regression models. However, there are some differences between the two methods.
Logit from statsmodels provides more detailed statistical output, including p-values, confidence intervals, and goodness-of-fit measures such as the deviance and the likelihood ratio test. It also allows for more advanced modeling options, such as specifying offset terms, incorporating robust standard errors, and modeling hierarchical data structures.
LogisticRegression from scikit-learn, on the other hand, provides a more user-friendly interface and is better suited for large-scale machine learning applications. It allows for easy cross-validation, regularization, and feature selection, and is generally faster and more scalable than logit from statsmodels.
In this case, either logit or LogisticRegression could be used to fit the logistic regression model with the two indicator variables. The choice between the two methods may depend on the specific needs of the analysis, such as the desired level of statistical inference or the computational resources available.
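As a rough side-by-side sketch of the two interfaces (toy data; the variable names here are made up rather than taken from the original question):
```
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# toy data with two indicator variables
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 2)).astype(float)
logits = 0.5 + X[:, 0] - X[:, 1]
y = (rng.random(200) < 1 / (1 + np.exp(-logits))).astype(int)

# statsmodels: detailed inferential output (coefficients, p-values, CIs, ...)
sm_fit = sm.Logit(y, sm.add_constant(X)).fit()
print(sm_fit.summary())

# scikit-learn: L2-regularized by default; a large C approximates an unpenalized fit
sk_fit = LogisticRegression(C=1e6).fit(X, y)
print(sk_fit.intercept_, sk_fit.coef_)
```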
| null |
CC BY-SA 4.0
| null |
2023-04-20T15:51:54.187
|
2023-04-20T15:51:54.187
| null | null |
386171
| null |
613587
|
2
| null |
613569
|
5
| null |
The method to estimate representative treatment effects using regression is called g-computation and works with any outcome type as long as the effect measure can be specified as a contrast between means (e.g., a mean difference, a ratio between marginal probabilities, a ratio between marginal odds, etc.). Here's how this works:
- Fit a model for the outcome. Ideally this is a flexible model that includes interactions between the treatment and covariates.
- Generate predicted values from this model setting all units' treatment value to "treated"
- Generate predicted values from this model setting all units' treatment value to "control"
- Compute the mean of the predictions under treatment (2) and the mean of the predictions under control (3)
- Compute a contrast between these two means.
This method of g-computation estimates the ATE. To estimate the ATT, steps 2 and 3 should be done using only the treated units. The control units are still used to fit the model in 1, but only the treated units are used to compute the predicted values.
To get standard errors, you can use bootstrapping or the delta method (the latter of which is exactly accurate when the outcome model is linear and the contrast is the difference in means but only an approximation otherwise).
In R, this is really easy using the `marginaleffects` package:
```
#Fit the outcome model
fit <- glm(Y ~ A * (X1 + X2 + X3), data = data)
#Generate predictions and contrast them
avg_comparisons(fit, variables = "A",
newdata = subset(data, A == 1))
```
This works for any GLM, e.g., logistic regression, Poisson regression etc. To compute contrasts that aren't the difference in means/risk difference, just supply an argument to `comparison` and `transform` (e.g., to get the risk ratio/relative risk, you would set `comparison = "lnratioavg", transform = "exp"`).
This quantity is related to an AME, though that term is a bit ambiguous because of the multiple meanings of the word "marginal". The word "marginal" in AME means the instantaneous rate of change when the predictor is changed by a tiny amount. For a binary predictor, we are not changing it by a tiny amount; we are going from 0 to 1 (or whatever values you have). So AME is not an accurate way to describe this contrast, though I often use it because it is very closely related in computation and concept to a true AME. Rather, this is a "contrast between the average adjusted predictions". Kind of a mouthful.
| null |
CC BY-SA 4.0
| null |
2023-04-20T15:59:24.823
|
2023-04-20T20:50:01.773
|
2023-04-20T20:50:01.773
|
116195
|
116195
| null |
613588
|
1
| null | null |
0
|
10
|
If you can explain these, it will be highly appreciated! Thanks in advance
1) Here is the code:
S3 method for class 'formula' randomForest(formula, data=NULL, ..., subset, na.action=na.fail) ## Default S3 method: randomForest(x, y=NULL, xtest=NULL, ytest=NULL, ntree=500, mtry=if (!is.null(y) && !is.factor(y)) max(floor(ncol(x)/3), 1) else floor(sqrt(ncol(x))), replace=TRUE, classwt=NULL, cutoff, strata, sampsize = if (replace) nrow(x) else ceiling(.632*nrow(x)),...
What I understand is: if we build a random forest, it will have a bunch of trees built on bootstrap samples. If we sample with replacement, we will have a sample size of nrow(x), which is the number of all the observations, but some of them are duplicates. Long story short, if we have 600 observations in total, we will have 400 observations that are unique and 200 that are duplicates. Then each decision tree is trained on a randomly selected set of observations (i.e. 400 observations) from the whole training set (i.e. sample size of 600), with replacement. This process is known as bootstrapping. Those that are not selected for training are called out-of-bag.
Please correct me if I am wrong.
- Also, in the `else` case we will have `0.632*nrow(x)`. Do we still have out-of-bag observations? And is each decision tree trained on a randomly selected number of observations from the whole training set?
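To make the fractions above concrete, here is a quick simulation (Python rather than R, purely for illustration) of how many distinct rows a with-replacement bootstrap sample of size `n` contains, which is where the `0.632` in `sampsize` comes from:
```
import numpy as np

rng = np.random.default_rng(0)
n = 600
unique_frac = np.mean([np.unique(rng.integers(0, n, size=n)).size / n
                       for _ in range(2000)])
print(unique_frac)      # ~0.632: distinct observations in a bootstrap sample of size n
print(1 - unique_frac)  # ~0.368: the out-of-bag fraction for that tree
```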
|
random forest and bootstrap out of bag
|
CC BY-SA 4.0
| null |
2023-04-20T16:11:28.823
|
2023-04-20T16:11:28.823
| null | null |
382257
|
[
"random-forest",
"bootstrap"
] |
613589
|
2
| null |
589585
|
0
| null |
You forgot a logarithm in your expression and you should use
$$KL(p(x;\mu,\sigma^{2}),p(x)) = \int p(x;\mu,\sigma^{2}) \log \left(\frac{p(x;\mu,\sigma^{2})}{p(x)}\right) dx$$
---
A problem with the improper prior $p(x) \propto 1$ is that the constant approaches zero and can not be eliminated. So the divergence will be infinite.
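One way to see this (my construction, treating the improper prior as a limit of proper uniform priors): take $p_L(x) = \frac{1}{2L}$ on $[-L, L]$ with $L$ large enough that the normal distribution puts essentially all of its mass inside $[-L, L]$. Then
$$
KL\big(p(x;\mu,\sigma^2),\,p_L\big) \;\approx\; \int p(x;\mu,\sigma^2)\log p(x;\mu,\sigma^2)\,dx + \log(2L)
\;=\; -\tfrac12\log\!\big(2\pi e\sigma^2\big) + \log(2L)
\;\xrightarrow{\;L\to\infty\;}\; \infty,
$$
so the divergence grows without bound as the prior becomes flatter.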
---
You can see the divergence as the expectation of the log likelihood ratio. This ratio will be infinite because the improper prior assigns zero probability to events that have non-zero probability under the normal distribution.
| null |
CC BY-SA 4.0
| null |
2023-04-20T16:31:51.460
|
2023-04-20T16:38:36.800
|
2023-04-20T16:38:36.800
|
164061
|
164061
| null |
613590
|
2
| null |
613566
|
0
| null |
This is another way of writing
$$
\sum_{\substack{i_1, i_2, \ldots, i_M = 1 \\ i_1 \geq i_2 \geq \cdots \geq i_M}}^{D} w_{i_1 i_2 \ldots i_M}\, x_{i_1} x_{i_2} \cdots x_{i_M}
$$
Try listing the possible values of $(i_1,i_2,i_3)$ in the case where $M=3$ and $D=4$. You've got $(1,1,1)$, $(2,1,1)$, $(2,2,1)$, $\ldots$
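If it helps, here is a quick way to list (and count) those index tuples in code (Python, just as an illustration; for general $D$ and $M$ the count is $\binom{D+M-1}{M}$):
```
from itertools import combinations_with_replacement

D, M = 4, 3
# combinations_with_replacement yields non-decreasing tuples; reversing each one
# gives the non-increasing convention i1 >= i2 >= i3 used above.
terms = [tuple(reversed(c)) for c in combinations_with_replacement(range(1, D + 1), M)]
print(len(terms))  # 20 = C(4 + 3 - 1, 3) distinct terms for D = 4, M = 3
print(terms[:5])   # [(1, 1, 1), (2, 1, 1), (3, 1, 1), (4, 1, 1), (2, 2, 1)]
```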
| null |
CC BY-SA 4.0
| null |
2023-04-20T16:34:42.347
|
2023-04-20T16:34:42.347
| null | null |
238285
| null |
613591
|
1
|
613858
| null |
2
|
43
|
I am performing analysis on data of ~8,000 patients who have COVID. This is my scenario. I have information of each patient being classified as either severe or non-severe in how COVID-19 affected them. I also have a set of all the mutations that each patient had.
Thus, I generated frequencies as such: Frequency = P(Having Mutation X and a Severe Clinical Response) divided by P(Mutation X). I did this for all mutations that have at least 30 patients to satisfy proper conditions for the statistical test.
From my original dataset of 8,000 patients, I know that about 1,000 had a severe clinical response. Thus, my null hypothesis is that for any given mutation the frequency, as calculated above, should be equal to 1/8 if it has no correlation to a severe response to COVID. What is the best way to perform a statistical test that can compare the severity frequency of each mutation to the population proportion (1/8) and also account for sample size (i.e. if one mutation has a frequency of 0.9, but only 30 patients have it, it's likely not as significant as a mutation which 5,000 patients have even if the frequency is lower, at 0.5).
I am thinking of doing a chi-squared test for homogeneity but am not sure about how to go about it because I don't think it compares to the proportion of the entire dataset and I'm not sure if it accounts for sample size either. I've done feature selection so not looking to do that again but I'm trying to do statistical tests on my original data set. Thanks.
|
How do I use a chi squared test for homogeneity/other statistical method for this situation?
|
CC BY-SA 4.0
| null |
2023-04-20T16:41:06.947
|
2023-04-23T20:29:44.403
| null | null |
386173
|
[
"hypothesis-testing",
"chi-squared-test",
"bioinformatics"
] |
613593
|
1
| null | null |
2
|
23
|
I have panel data with an ordinal outcome that was assessed by two independent reviewers - i.e. each is unaware of the others' assessment. The time points for assessment are not necessarily concurrent, and the lengths of the series are not necessarily equal. Specifically, there are 4 possible levels where the 4th level, when observed, is the terminus of the series. We can assume the duration of scanning is frequent enough that interval censoring does not apply, that is that the state of the series can be carried forward until the next state is observed.
I would like to know what methods may be used to assess the concordance of these time series. Previously there was the concordance correlation coefficient of Lin that considers the orthogonal distance of observations from a concordance line for a continuous response. For these ordinal responses, I know that the "distance" between ordinal categories is too great to be considered continuous, but the specific value represents a range of possible continuous responses.
|
Concordance of an ordinal time series
|
CC BY-SA 4.0
| null |
2023-04-20T17:12:49.663
|
2023-04-20T17:12:49.663
| null | null |
8013
|
[
"time-series",
"concordance"
] |
613594
|
1
| null | null |
3
|
35
|
I am taking a class and we are about to talk about Kalman filters and smoothing and I am trying to do some reading ahead.
On the Wikipedia page for ['Smoothing (stochastic processes)'](https://en.wikipedia.org/wiki/Smoothing_problem_(stochastic_processes)) it says 'the smoothing problem (not to be confused with smoothing in statistics, image processing and other contexts) is the problem of estimating an unknown probability density function recursively over time using incremental incoming measurements.' On the page for [Smoothing](https://en.wikipedia.org/wiki/Smoothing) in the statistical context it says, 'In statistics and image processing, to smooth a data set is to create an approximating function that attempts to capture important patterns in the data, while leaving out noise or other fine-scale structures/rapid phenomena.'
I am a bit confused about the difference between these two topics. Especially because on both pages Kalman filters are listed as algorithms that are used for smoothing and filtering. Is smoothing in statistics just a special case of the more general smoothing problem in stochastic processes?
Any clarification or reading advice is greatly appreciated!
|
The difference between smoothing problems in stochastic processes and statistics
|
CC BY-SA 4.0
| null |
2023-04-20T17:24:14.683
|
2023-04-20T22:06:09.457
| null | null |
386174
|
[
"mathematical-statistics",
"stochastic-processes",
"smoothing"
] |
613595
|
2
| null |
92266
|
1
| null |
An obvious baseline to which your model can be compared is a model that uses no features, that is, a model that makes predictions based on the categories alone.
Think this way: if I have a stack of $900$ dog photos mixed with $100$ cat photos, which animal would you predict is in a given photo if I did not let you see the photo? The costs of incorrect decisions would influence your decision, sure, but it is clear that it is much more likely that there is a dog in the photo than a cat. Consequently, predicting the majority category every time would be a viable strategy. In this case, you would score an impressive-looking $90\%$ accuracy without doing any kind of fancy modeling.
[This is totally analogous to $R^2$ in regression problems.](https://stats.stackexchange.com/a/605451/247274) Further, it has intuitive sense. If you could score $90\%$ accuracy by making the same prediction every time, a model with $80\%$ accuracy, on its own, sounds like it has a solid score. However, using that model has doubled the error rate from $10\%$ to $20\%$. Thinking about good models, if you can score $90\%$ with the naïve model that predicts the "dog" category every time, and then you make a model that scores $99\%$ accuracy, that sounds like only a $9\%$ improvement, perhaps not very impressive. Thinking in terms of error rates, however, you have cut the error rate from $10\%$ to $1\%$. That is, your model makes mistakes a tenth as frequently as it did before. The $R^2_{\text{accuracy}}$ I give in the above link would be $0.9$ for such a model, and this can be interpreted without the issues with classification accuracy that arise in problems when one category has more instances than another.
To your problem, you have three categories. Without even looking at the features, you could get a certain error rate by predicting the most common category every time. Your model better be able to do better than that and achieve a lower error rate.
I have a related answer [here](https://stats.stackexchange.com/q/595957/247274), and [another answer to that question](https://stats.stackexchange.com/a/596002/247274) offers an interesting viewpoint that goes against my stance (granted, with a different sense of what is valued, which is not just the accuracy in that answer, so the two answers are perfectly compatible, depending on what is valued).
| null |
CC BY-SA 4.0
| null |
2023-04-20T17:27:50.623
|
2023-04-20T17:27:50.623
| null | null |
247274
| null |
613597
|
1
| null | null |
1
|
46
|
"Overfit" is a commonly discussed concept in ML community. However, I tend to feel that there might be abuse of using this terminology. I wonder what it means when we talk about overfit, especially when we are talking about "overfit on some data".
Specifically, I learned that the text-book definition (from Elements of Statistical Learning or Wikipedia) of overfit is that the test accuracy may drop as model complexity increases. However, I noticed many people also say "a model that overfits to some data". Here are some examples that I have seen/heard:
- In a decision tree, if a feature is duplicated, then the tree may "overfit to the duplicated feature" so that the model performance degrades. (I heard this from daily conversation with my peers.)
- In a linear model, if pure noise is added to the feature, then the model may "overfit to the noise" so that the model performance degrades.
(I heard this from daily conversation with my peers.)
- When training a neural network, more epochs will tend to make the network "overfit" because the model starts learning the noise instead of data pattern. (This is a commonly accepted conceptual explanation. For example, see this thread from StackExchange)
For sure, the model performance on the test dataset will degrade in all three scenarios above. However, is the explanation in terms of "overfit" a correct one? What exactly does it imply when we say a model is "overfitting on some data"?
|
Questions on what it means when we talk about "overfit"
|
CC BY-SA 4.0
| null |
2023-04-20T17:34:29.900
|
2023-04-20T17:48:13.077
| null | null |
288530
|
[
"machine-learning",
"overfitting",
"definition"
] |
613598
|
2
| null |
37807
|
0
| null |
If the model has calibrated probabilities, then this is exactly what the predicted probabilities tell you.
Calibration refers to the idea that, if a model predicts that an event will happen with probability $p$, then it should actually happen with probability $p$. If this happens, then the model outputs can be taken literally.
If the model lacks calibration, [there are methods to try to calibrate the outputs](https://scikit-learn.org/stable/modules/calibration.html) so you can have this interpretation of the model outputs as the desired true probabilities of event occurrence.
All of this can be done independent of the accuracy score ($60\%$ for your problem), which is based on binning the probabilities in a way that might or might not make sense for a given application. For instance, despite having two categories, there might be three decisions you make, depending on the predicted probability. Our [Stephan Kolassa](https://stats.stackexchange.com/users/1352/stephan-kolassa) gets into this idea [here](https://stats.stackexchange.com/a/469059/247274) and [here](https://stats.stackexchange.com/questions/312119/reduce-classification-probability-threshold/312124#312124).
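As a minimal sketch of what the calibration step can look like with scikit-learn (synthetic data and an arbitrary base classifier, purely for illustration):
```
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Wrap a (possibly poorly calibrated) classifier and recalibrate its outputs
# with cross-validated isotonic regression.
clf = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                             method="isotonic", cv=5)
clf.fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]  # probabilities you can take more literally
```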
| null |
CC BY-SA 4.0
| null |
2023-04-20T17:34:31.110
|
2023-04-20T17:34:31.110
| null | null |
247274
| null |
613599
|
2
| null |
93352
|
0
| null |
Putting this here for future viewers: I had the same issue come up, and it was because one of my independent variables had -Inf values. I recoded those to NA and it worked.
See if there are Inf values with:
```
table(d$incomp)
```
And then recode them with:
```
d$incomp[is.infinite(d$incomp)] = NA
```
| null |
CC BY-SA 4.0
| null |
2023-04-20T17:35:09.650
|
2023-04-20T17:35:09.650
| null | null |
331571
| null |
613600
|
1
| null | null |
0
|
19
|
I have a glmer model in R that does something similar to `correct_response ~ time*kappa_score + (1|pid)`. Because I generally prefer plotting in python, I save model coefficients and emtrends data to disc and want to plot the interaction of time & kappa at two time points as a function of kappa.
To do this, I am currently running:
```
CI = 1.96
kappa = np.linspace(-1, 1, 1000)
time = np.linspace(-2.5, 2.5, 1000)
fig, ax = plt.subplots(nrows = 1, ncols = 3, figsize = (6.6, 2.0))
# kappa interaction early
lower = 1 / (1 + np.exp(-((ß_kappa - CI*SE_kappa)*kappa + (ß_tkappa - CI*SE_tkappa)*time.min()*kappa)))
upper = 1 / (1 + np.exp(-((ß_kappa + CI*SE_kappa)*kappa + (ß_tkappa + CI*SE_tkappa)*time.min()*kappa)))
mu = 1 / (1 + np.exp(-((ß_kappa)*kappa + (ß_tkappa)*time.min()*kappa)))
ax[0].fill_between(kappa, lower, upper, alpha = .25, color = colors[0])
ax[0].plot(kappa, mu, '--', alpha = 0.9, color = colors[0], linewidth = 1)
# kappa interaction late
lower = 1 / (1 + np.exp(-((ß_kappa - CI*SE_kappa)*kappa + (ß_tkappa - CI*SE_tkappa)*time.max()*kappa)))
upper = 1 / (1 + np.exp(-((ß_kappa + CI*SE_kappa)*kappa + (ß_tkappa + CI*SE_tkappa)*time.max()*kappa)))
mu = 1 / (1 + np.exp(-((ß_kappa)*kappa + (ß_tkappa)*time.max()*kappa)))
ax[0].fill_between(kappa, lower, upper, alpha = .25, color = colors[3])
ax[0].plot(kappa, mu, '-', alpha = 0.9, color = colors[3], linewidth = 1)
```
However, the results that I obtain are confusing to me (see attached image). While the confidence interval for time = 2.5 seems reasonable to me, the one for time = -2.5 absolutely does not (given that it is barely noticeable). I have checked in R using `plot(effects("time*kappa_score"), x.var = "kappa_score")`, which produces plots that seem comparable, except that there the confidence interval for time = -2.5 is substantially larger.
[](https://i.stack.imgur.com/ibutO.png)
Now, I realise that this is due to `time.min()` being negative, but I can't help but wonder how these are calculated differently in R (see attached image). Does anybody have any insights?
[](https://i.stack.imgur.com/7DHj5.png)
|
Manually plotting confidence intervals of interaction from generalized linear mixed model
|
CC BY-SA 4.0
| null |
2023-04-20T17:37:32.637
|
2023-04-20T17:37:32.637
| null | null |
228778
|
[
"r",
"python",
"lme4-nlme"
] |
613601
|
2
| null |
613597
|
1
| null |
In statistical learning theory, a quantity of great interest is the gap between empirical risk (expected training error) and population risk (expected test error) when train and test sets are sampled IID from the same population.
In this context, overfitting refers to the size of the gap: no gap means there is no overfitting, while a large gap means the method is overfitting. This is a property of the method rather than of the model, and it refers to the expected size of the gap averaged over all train/test pairs, rather than for a specific train/test dataset.
However, you can use the gap for a specific train/test pair to infer the expected size of the gap. I.e., if you see a large difference between train and test errors, then the average over all train/test pairs is probably also large.
No overfitting + low train error also implies low test error.
| null |
CC BY-SA 4.0
| null |
2023-04-20T17:43:06.273
|
2023-04-20T17:48:13.077
|
2023-04-20T17:48:13.077
|
511
|
511
| null |
613602
|
1
| null | null |
0
|
16
|
I have taken body temperatures of 5 hens in March 2022 and body temperatures of the same hens in July 2022. I have also measured egg production, feed intake, and body weight for both months. I am trying to show that high environmental temperatures alter the hens' physiology, behavior, and egg production. Please help. I don't know what test or tests to run for my research.
|
Which statistical test should I use for my research?
|
CC BY-SA 4.0
| null |
2023-04-20T18:09:13.507
|
2023-04-20T18:13:29.903
|
2023-04-20T18:13:29.903
|
56940
|
386176
|
[
"hypothesis-testing",
"repeated-measures"
] |
613603
|
2
| null |
613276
|
1
| null |
In the linked question they were talking about negative binomial models, which by default use the $\log$ as the link-function, since it deals with counts and frequencies just like Poisson-models. Beta-models deal with stuff like probabilities which want a link-function that deals with the limits at 0 and 1, the default being $\text{logit}$.
What the default link-function is for your family of models in R can be found with `help`, e.g.: `help(nb)`, or `help(betar)`. Other link-functions can be specified, but this is advanced.
The inverse link-function is generally easy to find. `plogis` being R for $\text{logit}^{-1}$ is a little weird at first sight, but the inverse logit rises strictly monotonically from 0 to 1, so it works as a CDF (that of the standard logistic distribution) and R implemented it as one. Probit models pull the same trick in the other direction: [https://en.wikipedia.org/wiki/Probit_model](https://en.wikipedia.org/wiki/Probit_model)
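For reference, the standard definitions (not specific to any particular package) are
$$\operatorname{logit}(p)=\log\frac{p}{1-p},\qquad \operatorname{logit}^{-1}(\eta)=\frac{1}{1+e^{-\eta}},$$
and in R the pair is available as `qlogis` (the logit, i.e. the logistic quantile function) and `plogis` (its inverse, the logistic CDF).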
| null |
CC BY-SA 4.0
| null |
2023-04-20T18:12:15.663
|
2023-04-20T18:12:15.663
| null | null |
341520
| null |
613605
|
1
| null | null |
1
|
11
|
For my thesis I am analysing habitat use of animals. I want to compare habitat selection coefficients between groups, for instance between males and females or between different places, but I do not know of a statistical method that can do this. Does anyone know? Most habitat selection functions are based on clogit; however, the one I am using is a Poisson regression ([https://rdrr.io/github/ctmm-initiative/ctmm/man/RSF.fit.html](https://rdrr.io/github/ctmm-initiative/ctmm/man/RSF.fit.html)). I do not really understand it, but this is what the page says: "the output is a log-likelihood is that of the continuous Poisson point process". The class is "ctmm", so I could not directly use the model in a function, I guess. I was already looking around and thought effect modification might also be appropriate, but I do not know enough statistics to understand it all. So I was hoping someone could point me in the right direction.
Here is some example data for one individual:
low est high
Land_Use.323_12 (1/Land_Use.323_12) -1.531084e+00 -0.4869885 0.5571072
Land_Use.252_12 (1/Land_Use.252_12) -2.072346e+00 -0.9814354 0.1094758
Land_Use.251_12 (1/Land_Use.251_12) -7.794530e+00 -3.6031231 0.5882833
Land_Use.47_12 (1/Land_Use.47_12) -2.249040e+00 -1.1812521 -0.1134644
Land_Use.45_12 (1/Land_Use.45_12) -1.118897e+01 -3.9057274 3.3775193
Land_Use.38_12 (1/Land_Use.38_12) -5.000368e+00 -2.4402699 0.1198286
Land_Use.37_12 (1/Land_Use.37_12) -8.038857e+00 -3.4006010 1.2376549
Land_Use.36_12 (1/Land_Use.36_12) -5.483149e+00 -3.5370652 -1.5909819
Land_Use.35_12 (1/Land_Use.35_12) -7.526605e+00 2.7482648 13.0231348
Land_Use.30_12 (1/Land_Use.30_12) -2.786229e+01 -0.3129170 27.2364516
where a negative number means the animal selects against a land use and a positive number means it selects for it. This is the function I used: [https://rdrr.io/github/ctmm-initiative/ctmm/man/RSF.fit.html](https://rdrr.io/github/ctmm-initiative/ctmm/man/RSF.fit.html)
Thanks!
|
How to compare habitat selection coeficients between groups?
|
CC BY-SA 4.0
| null |
2023-04-20T19:22:26.430
|
2023-04-20T19:22:26.430
| null | null |
386181
|
[
"regression-coefficients",
"poisson-regression",
"group-differences"
] |
613606
|
1
| null | null |
1
|
56
|
I have two main independent variables - gender (0/1) and extraversion (continuous) - in a logit regression. In the full sample, the gender X extraversion interaction shows no statistical significance. However, when I split the sample into male vs female subgroups, extraversion is statistically significant (p<0.05) among women, but not among men (p=0.09). The coefficient of extraversion is positive in both subgroups. What does this mean? Can I conclude that extraversion significantly predicts women's outcomes, but not men's? Thank you!
|
When interaction is not significant in full sample but subgroup analysis shows significance
|
CC BY-SA 4.0
| null |
2023-04-20T19:36:50.130
|
2023-04-21T01:16:25.663
| null | null |
321893
|
[
"regression",
"logistic",
"interaction"
] |
613608
|
2
| null |
566718
|
0
| null |
A quick check is to ask if you are able to outperform a baseline model that predicts the majority category every time. You mention imbalance. Depending on how imbalanced your data are, you might be able to get better than $99.4\%$ accuracy by just predicting the majority category every time, such as if one category accounts for $99.9\%$ of the cases and you predict that category every time. This would scream out overfitting, as moving from a simple model (predict one category every time) to a complex model (whatever you do) results in a rise in the in-sample performance yet a drop in out-of-sample performance.
| null |
CC BY-SA 4.0
| null |
2023-04-20T19:42:30.670
|
2023-04-20T19:42:30.670
| null | null |
247274
| null |
613609
|
1
| null | null |
2
|
44
|
I have a question about the Diebold-Mariano test. I have different forecasting horizons (n-ahead = 1 to 40) and different forecasting origins (26). I.e. I employ a rolling origin evaluation approach. I want to compare the forecasting results of two empirical models. Now, should I:
- Compare all n-ahead=1 forecasting errors for the two models, set h=1, and perform the test. Then, I would do the same for n-ahead=2 forecasts, but set h=2, and so on.
- Alternatively, is it more common to compare the average errors over all forecasting origins? However, this would result in information loss, wouldn't it?
|
Diebold-Mariano test for evaluating 40 different forecast horizons
|
CC BY-SA 4.0
| null |
2023-04-20T19:43:59.740
|
2023-04-21T07:11:48.997
|
2023-04-21T07:11:48.997
|
53690
|
371263
|
[
"time-series",
"forecasting",
"model-evaluation",
"diebold-mariano-test"
] |
613610
|
1
| null | null |
0
|
25
|
Every day, I keep track of the processing times for each input to my CPU and create empirical cumulative distribution functions (ECDFs) based on this data. Let's assume I have 100 observations per day and I do this process for 10 days, resulting in 10 ECDFs each of which with 100 points. Now, the challenge is to estimate the underlying distribution of the data. What method or approach do you suggest?
To illustrate this, let's assume that the underlying distribution follows a normal distribution with a mean of 10, but with some noise. In other words, the underlying cumulative distribution function (CDF) for each day has a mean of 10 plus some noise. For the sake of discussion, let's assume that on colder days, the CPU works slightly slower, resulting in a higher mean processing time than 10. However, since we do not directly measure temperature or any external factor, we only observe this effect as noise in the data. The same situation applies to the standard deviation (sigma) of the underlying distribution. Please refer to the following Python code and figure for a visual representation.
```
import numpy as np
import plotly.express as px
# Generate random data from a normal distribution
np.random.seed(0)
n_samples = 100 # Number of samples
n_ecdfs = 10 # Number of ECDFs to generate
mu = 10 # Mean of the normal distribution
sigma = 1 # Standard deviation of the normal distribution
# Compute the ECDFs for each column in the data
ecdfs = []
for i in range(n_ecdfs):
    # I also change the mu and sigma of the underlying distribution of each ECDF slightly
data = np.random.normal(mu+np.random.uniform(-0.5, 0.5),
sigma+np.random.uniform(-0.2, 0.2),
n_samples)
x = np.sort(data)
y = np.arange(1, n_samples + 1) / n_samples
ecdf = np.column_stack((x, y))
ecdfs.append(ecdf)
# Create plotly figure
fig = px.line(title='Empirical Cumulative Distribution Functions (ECDFs) from Normal Distribution')
# Add ECDF traces to the figure
for i in range(n_ecdfs):
fig.add_scatter(x=ecdfs[i][:, 0], y=ecdfs[i][:, 1], mode='lines', name=f'ECDF {i + 1}')
# Set x-axis label
fig.update_xaxes(title_text='X')
# Set y-axis label
fig.update_yaxes(title_text='ECDF')
# Show plot
fig.show()
```
[](https://i.stack.imgur.com/WYDxU.png)
P.S.: One approach would certainly be to pool all the observations, draw a single ECDF, and try to fit a CDF to it. However, I am very curious to know whether there is another approach.
|
Estimation of Distribution using multiple ECDFs
|
CC BY-SA 4.0
| null |
2023-04-20T19:52:46.230
|
2023-04-21T03:19:32.377
|
2023-04-21T03:19:32.377
|
362671
|
307922
|
[
"estimation",
"cumulative-distribution-function",
"parameterization",
"empirical-cumulative-distr-fn"
] |
613611
|
1
| null | null |
2
|
60
|
I recently learned about log-transforming a variable, and that it helps when the data are skewed. But I also learned that a normality assumption on the data/variables is not required; the only requirement is that the errors be normally distributed. In that case, why would one log a variable? I used to think it was because the data need to be normally distributed, but since that is not required, I started wondering why.
|
Why do we log variables if normality is not required?
|
CC BY-SA 4.0
| null |
2023-04-20T19:54:23.147
|
2023-04-20T20:44:04.220
|
2023-04-20T20:20:00.717
|
247274
|
355204
|
[
"regression",
"data-transformation",
"logarithm"
] |
613612
|
2
| null |
613611
|
2
| null |
One good reason to take the $\log$ of a variable is to change the interpretation. [While this interpretation can break down](https://stats.stackexchange.com/a/488704/247274), using a $\log$ transformation phrases the regression in terms of percent change. That is, instead of saying that a one-unit increase in a feature leads to the outcome increasing by $3$, you can say that a one-unit increase in a feature leads to a $3\%$ increase in the outcome. This can be quite convenient. For instance, if the outcome is money, the impact a $\$3$ increase has depends on how much money you have, and a natural way to think is in terms of percent increase. Whether you are a billionaire or a child with a piggybank, increasing wealth by $3\%$ has a considerable meaning, but a $\$3$ increase is much more impactful on the child than the billionaire.
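As a small worked version of this point (a generic log-level model, not tied to the asker's data): in
$$\log y = \beta_0 + \beta_1 x + \varepsilon,$$
a one-unit increase in $x$ multiplies $y$ by $e^{\beta_1}$, and for small coefficients $e^{\beta_1}\approx 1+\beta_1$, so e.g. $\beta_1 = 0.03$ corresponds to roughly a $3\%$ increase in the outcome.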
| null |
CC BY-SA 4.0
| null |
2023-04-20T20:05:00.353
|
2023-04-20T20:05:00.353
| null | null |
247274
| null |
613613
|
1
| null | null |
2
|
26
|
Scenario: Rats with condition and treatments. The response measurement is a gold standard measure of the condition. However, uncommonly, rats can die during the study. That means, technically, those rats are "censored", since the study goes on.
It is not a survival study, so survival isn't the outcome. However, time is a variable (time-course study). The response measurement is a count.
What is the optimal way to account for this rare "censoring"? In some runs, there are no animal deaths; in other runs, one or two animals may die out of 20+.
|
Data with rare censoring
|
CC BY-SA 4.0
| null |
2023-04-20T20:09:45.983
|
2023-04-20T20:09:45.983
| null | null |
28141
|
[
"censoring"
] |
613614
|
1
| null | null |
0
|
17
|
I am new to probability, and I am struggling with the right language to ask or google this question.
I have a population of size $n$ and everyone is, say, some color. I want to verify that at least half the population is red, to within some $\epsilon$ accuracy. How many members of the population do I need to query?
I am not sure which of the main theorems of probability to use - the Chernoff bound, Markov's inequality, the central limit theorem, etc.
|
How many samples to be reasonably sure?
|
CC BY-SA 4.0
| null |
2023-04-20T20:18:33.297
|
2023-04-20T20:18:33.297
| null | null |
386184
|
[
"probability",
"sampling",
"hoeffdings-inequality"
] |
613615
|
2
| null |
329170
|
0
| null |
You can use a short sliding window of a parametrizable number of samples and compute the range over it. When this window is at the "edge" of a jump, the range statistic will change considerably. Then let A and B be the indices of successive range changes, i.e. jumps. You can shift each of the values between these indices by the value of the range, i.e. the depth of the jump, and get rid of it. A rough sketch of this idea is given below.
Another method you can use is to build an index of the range statistic as well as the mean, then use the sliding-window approach to find anomalies in the range-mean index series.
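To make the first suggestion concrete, here is a rough sketch (the series, window length, and threshold are invented, and it assumes a single up-then-down jump pair so that the first and last flagged windows bracket the jumped segment):
```
# Rough sketch of jump removal via a sliding-window range statistic (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 0.2, 300)
x[100:200] += 5.0                       # inject a level jump and a return jump

w = 5                                    # window length (a tunable parameter)
rolling_range = np.array([x[i:i + w].max() - x[i:i + w].min()
                          for i in range(len(x) - w + 1)])

# Windows whose range greatly exceeds the typical range straddle a jump edge.
threshold = 5 * np.median(rolling_range)
edges = np.where(rolling_range > threshold)[0]

cleaned = x.copy()
if len(edges) >= 2:
    # Assumes one up/down jump pair: first/last flagged windows bracket the segment.
    a, b = edges[0] + w - 1, edges[-1] + 1
    depth = np.median(x[a:b]) - np.median(np.r_[x[:a], x[b:]])
    cleaned[a:b] -= depth                # push the jumped segment back by its depth

print(round(x[100:200].mean(), 2), round(cleaned[100:200].mean(), 2))
```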
| null |
CC BY-SA 4.0
| null |
2023-04-20T20:28:48.140
|
2023-04-20T20:28:48.140
| null | null |
282477
| null |
613616
|
1
| null | null |
0
|
13
|
Originally asked on StackOverflow and now brought over here upon recommendation from another commenter.
I'm an undergrad writing a very simple lab report in which I'm analysing some data on observed numbers of birds at two different study sites. I'm using R 4.2.2 to create the code to run the analysis. The exact data frame I'm using was imported off an Excel spreadsheet but I'll replicate the data frame here.
```
Location <- c(rep("Leighton Moss",6),rep("Eaves Wood",6))
Period<-c(rep(c(rep("Midday",3),rep("Afternoon",3)),2))
Activity<-c(rep(c("Resting","Foraging","Travelling"),4))
Count<-c(88,68,54,30,41,33,13,28,13,55,47,46)
df<-data.frame(Location,Period,Activity,Count)
```
I want to know if there's an equivalent of the Chi-squared test of association for three categorical variables (in this case Location, Period and Activity) as opposed to two. In this instance, Location and Period act as predictor variables and Activity acts as the outcome variable, and at each combination of Location and Period I counted the number of birds engaged in each of the three activities.
I ideally want to avoid just doing three separate Chi-squared tests for Location vs Activity, Period vs Activity and Location vs Period, but I can do that if push comes to shove.
I think I can build a three-way contingency table like this to help:
[three-way contingency table image]
A commenter on StackOverflow (@qdread) suggested using a multinomial model, which does make sense, but I'm not entirely sure how to construct the model to best fit my data and implement said construction in R.
Any help you can offer would be great. Thanks in advance!
|
Chi-Squared Test of Association Equivalent for 3 Categorical Variables and Observed Frequency
|
CC BY-SA 4.0
| null |
2023-04-20T20:37:42.823
|
2023-04-20T20:37:42.823
| null | null |
386182
|
[
"r",
"chi-squared-test",
"multinomial-distribution",
"multinomial-logit"
] |
613617
|
2
| null |
613611
|
2
| null |
If you're modeling a response which is fundamentally multiplicative rather than additive in terms of the contributions of respective effects, consider taking the log so that you have a more meaningful summary of the difference. For instance, in a study of farms of heterogeneous acreages, the effect of a novel fertilizer should be modeled as a % increase in yields. The mean difference on the log scale, and its CI, can be exponentiated to produce proportional changes. For instance, $\exp(0.05) \approx 1.05$ which can be interpreted as a 5% increase.
| null |
CC BY-SA 4.0
| null |
2023-04-20T20:44:04.220
|
2023-04-20T20:44:04.220
| null | null |
8013
| null |
613619
|
1
| null | null |
3
|
51
|
EDIT: SOLUTION FOUND, SEE BELOW QUESTION
I'm an undergrad writing a very simple lab report in which I'm analysing some data on observed numbers of birds at two different study sites. The exact data frame I'm using was imported off an Excel spreadsheet but I'll replicate the data frame here.
```
Location <- c(rep("Leighton Moss",6),rep("Eaves Wood",6))
Period<-c(rep(c(rep("Midday",3),rep("Afternoon",3)),2))
Activity<-c(rep(c("Resting","Foraging","Travelling"),4))
Count<-c(88,68,54,30,41,33,13,28,13,55,47,46)
df<-data.frame(Location,Period,Activity,Count)
```
I want to know if there's an equivalent of Chi-squared test of association for three categorical variables (in this case Location, Period and Activity) as opposed to two.
I ideally want to avoid just doing three separate Chi-squared tests for Location vs Activity, Period vs Activity and Location vs Period, but I've found the code to do that in case I do need to.
I think I can build a three-way contingency table like this to help:
[](https://i.stack.imgur.com/uuzxu.png)
Beyond that I'm not sure what analysis to do as I think multilinear regression involves a continuous variable? Any code/function that I can use to do this would be great.
Any help y'all can offer is always much appreciated.
Solution
The answer posted below by a fellow commenter had Activity as a covariate which wasn't what I was after; however, I adapted that answer to suit my needs. In the end, I went for a two-pronged analysis.
Firstly, a multinomial logistic regression analysis using the `multinom` function of the `nnet` package, with Activity as the outcome variable and Period and Location as predictor variables. To do this I listed each observation individually rather than using count data, and used the following line of code.
```
multi_mo<-multinom(Activity ~ Location + Period + Location*Period, data=[data frame name], model=TRUE)
summary(multi_mo)
```
Secondly, I separated the count data by Activity category, and conducted Poisson regressions on each individual Activity category with Period and Location as covariates. This was just done using the `glm` function, as shown below.
```
Resting_Model <- glm(Resting_Count ~ Location + Period + Location:Period,
family="poisson",
offset=log(Total_Observations),
data=[data frame name])
summary(Resting_Model)
```
Hopefully this helps someone with the same question!
|
Chi-Squared Test of Association Equivalent for 3 Categorical Variables and Observed Frequency
|
CC BY-SA 4.0
| null |
2023-04-20T18:53:17.173
|
2023-05-21T00:18:38.283
|
2023-05-21T00:18:38.283
|
386182
|
386182
|
[
"r",
"count-data"
] |
613620
|
1
| null | null |
0
|
11
|
The question reads: "Develop a hypothesis, and constructed appropriate null and alternative/experimental hypothesis.
a) Hyacinth bulbs are sold to a retailer in packs of 100 which claim to have equal numbers of bulbs producing blue and pink flowers. Test at the 10% level of significance whether the retailer has a right to complain that there are not equal numbers of each colour. Find the critical region for the test."
Would anyone be able to help point me in the right direction or show an example of what is being asked?
Thank you and I'm sorry if this seems basic.
|
I am unsure whether this is a binomial or normal distribution question and how to go about solving it?
|
CC BY-SA 4.0
| null |
2023-04-20T21:29:35.443
|
2023-04-20T21:29:35.443
| null | null |
386186
|
[
"distributions",
"normal-distribution",
"binomial-distribution"
] |
613621
|
1
| null | null |
2
|
26
|
In a comment on [this](https://stats.stackexchange.com/questions/67443/does-the-beta-distribution-have-a-conjugate-prior) question, the user 'probabilityislogic' says "No, not MCMC this thing! Quadrature this thing! only 2 parameters - quadrature is the "gold standard" for small dimensional posteriors, both for time and accuracy".
However, I haven't been able to find anything describing how to draw random variables using quadrature. How does one do it?
|
Drawing random numbers with quadrature
|
CC BY-SA 4.0
| null |
2023-04-20T22:05:19.393
|
2023-04-20T22:05:19.393
| null | null |
161943
|
[
"random-generation"
] |
613622
|
2
| null |
317856
|
1
| null |
It seems to me we can use what we already know, provided we have heard of distribution functions and conditional probabilities. Thus, the following remarks offer nothing new, but I hope that in making them the basic simplicity and familiarity of the situation will become apparent.
---
When you have any real-valued random variable $X$ and an event $\mathcal E$ (defined on the same probability space, of course), then you can extend the definition of a (cumulative) distribution function in the most natural way possible: namely, for any number $x,$ define
$$F_X(x;\mathcal E) = \Pr(X\le x\mid \mathcal E).$$
When $\mathcal E$ has positive probability you can even avoid all technicalities and apply the elementary formula for conditional probability,
>
$$\Pr(X\le x\mid \mathcal E) = \frac{\Pr(X\le x\,\cap\,\mathcal E)}{\Pr(\mathcal E)}.$$
The numerator, which might look strange to the mathematically sophisticated reader, is the probability of the intersection of two events. The conventional shorthand "$X\le x$" stands for the set of outcomes where $X$ does not exceed $x:$ $\{\omega\in\Omega\mid X(\omega)\le x\}.$
This extends the usual distribution function very nicely in the sense that when $\Omega$ is the universal event (that is, the underlying set of all outcomes in the probability space), then since $(X\le x)\subseteq \Omega$ and $\Pr(\Omega)=1,$
$$F_X(x)= \Pr(X\le x) = \frac{\Pr(X\le x\,\cap\,\Omega)}{1} = \frac{\Pr(X\le x\,\cap\,\Omega)}{\Pr(\Omega)}= F_X(x;\Omega) .$$
---
### Comments
Note that only one random variable $X$ is needed, showing that the concept of conditional distribution does not depend on a joint distribution. As a simple example, the right-truncated Normal distribution studied at [Expected value of x in a normal distribution, GIVEN that it is below a certain value](https://stats.stackexchange.com/questions/166273) is determined by a Normally-distributed random variable $X$ and the event $X\le T$ (for the fixed truncation limit $T$).
Another example, just to make these distinctions very clear, models a population of people where we are interested in their sex and age (at a specified time, because both these properties can change!). By agreeing on a unit of measure of age (seconds, say), and (for simplicity) focusing on those people with a definite sex, we may take the sample space to be
$$\Omega = \{\text{male}, \text{female}\}\times [0,\infty).$$
Elements of $\Omega$ represent people. A sample from $\Omega$ could be represented by rows in a two-column table: one for sex, the other for age. That's what the Cartesian product $\times$ in the definition of $\Omega$ means.
The probabilities of interest will attach to intervals of ages for each sex separately (or combined). Thus, relevant events will be composed out of age intervals of the form $\{\text{male}\}\times (a,b]$ (for lower and upper ages $a$ and $b$ of males) and $\{\text{female}\}\times (a,b]$ (an interval of ages for females).
As a shorthand, "$\{\text{male}\}$" is the event $\{\text{male}\}\times [0,\infty) = \{(\text{male},x)\mid x \ge 0\},$ and similarly for "$\{\text{female}\}.$" By definition, these are both events -- or "subpopulations" if you like.
>
Let $X$ be the random variable giving the age of a person rounded to the nearest year. Then (for instance) we might be interested in $F_X$ (the distribution of all ages), of $F_X(\ \mid \{\text{male}\})$ (the distribution of male ages), or of $F_X(\ \mid \{\text{female}\}).$
This nice example shows that the conditioning event $\mathcal E$ (the sex) needn't have anything to do with $X$ (the age).
Clearly, this formulation of conditional distributions does not require us to define a random variable to condition on a characteristic like sex in the example. We could have done it that way, and there are some analytical and computational advantages to doing so, but conceptually such a construct would be artificial and superfluous.
When there are multiple random variables $(X,Y)$ (and $Y$ can be vector-valued), nothing new emerges because conditioning on $Y$ means conditioning on the events it defines.
| null |
CC BY-SA 4.0
| null |
2023-04-20T22:05:25.243
|
2023-04-20T22:05:25.243
| null | null |
919
| null |
613623
|
2
| null |
613594
|
0
| null |
If you are coming from a pure mathematical background, it might be good to read the History section of the Kalman filtering page, as its origins in control theory help put it in context. Reading the pages for state-space model, [state-observer](https://en.m.wikipedia.org/wiki/State_observer), LQR control and LQG control might be good background if they aren't familiar already.
As for the difference between smoothing in stochastic processes vs. other areas, the main issue I think they are getting at is that the data for time series should (probably) be [causal](https://en.m.wikipedia.org/wiki/Causal_system), i.e. the present can't depend on the future. In spatial problems, on the other hand, data are free to be related to any other data you like.
| null |
CC BY-SA 4.0
| null |
2023-04-20T22:06:09.457
|
2023-04-20T22:06:09.457
| null | null |
338358
| null |
613624
|
1
| null | null |
0
|
23
|
Suppose I extract $n$ samples $x_i$ from a Bernoulli distribution
$$x_i \sim Bern(p)$$
Based on the samples, I want to estimate the probability that $p$ is below a certain threshold $t$
$$P(p<t \space | \space x_1,\dots,x_n)$$
How do I go about it?
|
Estimate $p$ of a Bernoulli after $n$ samples
|
CC BY-SA 4.0
| null |
2023-04-20T22:09:40.930
|
2023-04-20T22:09:40.930
| null | null |
173505
|
[
"mean",
"binomial-distribution",
"estimators",
"bernoulli-distribution"
] |
613625
|
1
| null | null |
1
|
40
|
I've looked into the Delta method and it looks tedious. I understand that its computational performance gain helps when the dataset is quite large, but I only have 200 individuals contributing data, so I believe that the bootstrap approach should be viable.
Could I get a pseudo-code approach to defining a power analysis function for ratio metrics?
Edit:
Adding more details.
Ratio metric where I have 200 individuals who will be divided evenly between treatment and control groups. Each individual will produce their own number of trials, some of which will be successes.
My null hypothesis is that the success rate is equal in groups. My alternative hypothesis is that the success rate is not equal.
For the experiment to have any practical significance, the effect size would need to be at least 2%.
Ideally, I'd like to know how to use observed data (individuals and the successes & trials that they produce) to determine the power of an experiment where the expected effect is a 2% delta in the ratio.
Alternatively, I'd like to know what's the minimum detectable effect given power of 80%.
(Both scenarios assume alpha of 5%).
|
How can I execute power analysis for a ratio metric?
|
CC BY-SA 4.0
| null |
2023-04-20T22:18:12.047
|
2023-04-20T22:44:27.603
|
2023-04-20T22:44:27.603
|
288172
|
288172
|
[
"bootstrap",
"statistical-power",
"ratio"
] |
613626
|
2
| null |
447725
|
0
| null |
The answer is that, except for the case of a constant treatment effect where the covariance is zero, the covariance is negative.
First, note that
$$
Var(X - Y) = Var(X) + Var(Y) - 2 \cdot Cov(X, Y)
$$
It turns out the covariance term is negative. However, under Neyman's finite sampling approach, you also have the finite population correction terms so you end up with
$$
\begin{aligned}
Var(\bar{t} - \bar{c}) &= \frac{S_t^2}{n_t} \cdot (1 - \frac{n_t}{n}) + \frac{S_c^2}{n_c} \cdot (1 - \frac{n_c}{n}) + \frac{2 }{n (n - 1)}\sum_{i=1}^n (t_i - \bar{t}) \cdot (c_i - \bar{c})
\end{aligned}
$$
The covariance can't be estimated because the same unit is never observed under the treatment and control regimes simultaneously. Thus, more algebra is needed, and this gets you back to $Var(\bar{t}) + Var(\bar{c})$ minus another term that can't be estimated but which is negative. For details see my derivation [here](https://stats.stackexchange.com/a/604522/362671).
Also note that for a super-population model, the Neyman estimate is conservative because the treatment and control groups can be assumed to be independent and thus there is no covariance term.
| null |
CC BY-SA 4.0
| null |
2023-04-20T22:20:31.300
|
2023-05-27T11:31:00.373
|
2023-05-27T11:31:00.373
|
362671
|
266571
| null |
613627
|
1
| null | null |
0
|
41
|
Can you please explain why, in principal component analysis (PCA), maximum variance means minimum information loss?
|
in pca, why maximum variance means minimum information loss
|
CC BY-SA 4.0
| null |
2023-04-20T22:23:07.167
|
2023-04-21T13:02:11.323
| null | null |
382257
|
[
"pca"
] |
613629
|
2
| null |
586202
|
1
| null |
OK, here's my take on "how should we interpret them?"
The p-value generally is the probability, assuming the null hypothesis (H0) were true, that we would observe something that is as far or further away from what can be expected under H0, than what we actually observed.
If the p-value is very small, we have observed something that indicates so strongly against H0 that it would very rarely happen, were H0 in fact true. This makes the H0 look incompatible with the data, and can in this way count as evidence against it (depending on how small the p-value actually is, we can differentiate between weak, moderate, strong, (...) evidence against it).
"As far or further away from what is expected" is defined in terms of the test statistic, i.e., the test statistic defines a direction of "critical deviation" from the H0. This has three implications.
Firstly, even though a certain test may not provide evidence against the H0, it is still possible that the H0 is wrong in different ways than captured by the test statistic, which could in principle be found by other tests (i.e., violations of other aspects of the probability model chosen as H0, usually framed as "violation of the model assumptions").
Secondly, probability models are idealisations and one should not think that they can be literally "true" in reality. This means that non-rejection should never be interpreted as indication that the H0 is true. At best, it indicates that the data look more or less like data generated by the H0 can be expected to look like, considering the specific aspect measured by the test statistic. As p-values do not measure effect size, it is advisable to look at a confidence interval (CI) to see how "close" to the H0 the data are in terms of the parameter of which the CI is considered (many tests correspond to CIs, so this can be very closely connected to the test statistic).
Thirdly, rejection of the H0 can be interpreted in terms of the direction encoded in the test statistic, i.e., a significant positive z-value in logistic regression (one-sided test) indicates (with the usual error probability; the stronger the smaller the p) that it is in fact more likely to observe a positive relation between explanatory variable and outcome than encoded in the H0 ("no relation"). Note that this statement avoids the implicit assumption that the model is true, which in reality won't be the case (see above).
The value of testing an H0 if we don't believe that models are true anyway is that we need models to formally encode our thinking so that we can compare it systematically with the data. Even though the H0 is not literally true, one could think about the situation as "variable x not having any impact on the outcome" (given other variables in the model if this applies), which is encoded by the H0, and not rejecting the H0 certainly means that data are compatible with this idea (for which one possible formalisation is the exact H0 we are testing).
| null |
CC BY-SA 4.0
| null |
2023-04-20T22:43:52.547
|
2023-04-20T22:43:52.547
| null | null |
247165
| null |
613630
|
2
| null |
613518
|
1
| null |
The error bars in your case will be given by:
$\sigma_z=\frac{1}{\log {(10)}}\frac{\sigma_y}{y}$
on the $\log_{10}$ scale. This is because the log scale is already showing the relative changes of the original data. Though, I have not yet come across a book on error analysis that discusses this issue thoroughly. I learned this as many people have through the notes of Dr. Eric Stuve that are floating around on the internet. Dr. Stuve cites a book by D. C. Baird called Experimentation: An Introduction to Measurement and Theory and Experiment Design, 3rd edition. Yet, I have not read the book too closely because it appears similar to other books I have read on the topic.
From experience, I have noticed that when the uncertainty $\sigma_y$ is small enough (< 10% of the reported value), you will get a similar result using:
$\log_{10} {(y + \sigma_y)}$
$\log_{10} {(y - \sigma_y)}$
which are the absolute error bars in Dr. Stuve's notes. However, the $\sigma_y$ you provided is ~5900% of your reported value. Per your comment, it sounds like your $\sigma_y$ was determined from statistical sampling. I would try to get the original data and see why $\sigma_y$ is so large.
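As a small numeric illustration (the numbers here are made up, not the asker's data): when the relative uncertainty is small the formula above and the absolute bars nearly coincide, and when it is large they diverge and become asymmetric.
```
# Compare the error bar on log10(y) from the formula above with the "absolute"
# bars log10(y ± sigma_y). Values are illustrative only.
import numpy as np

def log10_bars(y, sigma_y):
    delta = sigma_y / (np.log(10) * y)            # sigma_z from the formula above
    upper = np.log10(y + sigma_y) - np.log10(y)   # absolute upper half-width
    lower = np.log10(y) - np.log10(y - sigma_y)   # absolute lower half-width (needs y > sigma_y)
    return delta, upper, lower

print(log10_bars(100.0, 5.0))    # ~5% uncertainty: all three nearly equal
print(log10_bars(100.0, 60.0))   # large uncertainty: clearly asymmetric
```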
| null |
CC BY-SA 4.0
| null |
2023-04-20T22:56:21.827
|
2023-04-20T22:56:21.827
| null | null |
141436
| null |
613631
|
2
| null |
315336
|
1
| null |
If you want the distribution of your sample to mirror that of the original population P, you can accomplish this even when the groups overlap. For N groups, G1...GN, divide the original population P into 2^N subsets labeled (g1, ..., gN), where gk is a boolean indicating inclusion/exclusion with respect to group Gk. You are guaranteed that none of these subsets overlap, and every sample belongs to exactly one of them. Then you apply stratified sampling to these disjoint subsets.
So, in your case, you have 5 groups, G1...G5. You would have 2^5 = 32 subsets. For each subset S, you have a vector of booleans (g1_S, g2_S, g3_S, g4_S, g5_S) associated with it, where g1_S indicates whether S is a subset of G1 or P - G1, g2_S indicates whether S is a subset of G2 or P - G2, g3_S indicates whether S is a subset of G3 or P - G3, etc. (Some of these 32 subsets may be empty. You can just drop them.) Applying stratification to these subsets will give you a sample set that mirrors the proportions of not only the groups, but their degree of overlap, in the original population.
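Here is a minimal sketch of this construction (column names, group proportions, and the sampling fraction are invented; it assumes group membership is available as boolean indicator columns):
```
# Stratified sampling over the 2^N disjoint "membership pattern" subsets (sketch).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
pop = pd.DataFrame({
    "G1": rng.random(n) < 0.30,   # possibly overlapping groups
    "G2": rng.random(n) < 0.50,
    "G3": rng.random(n) < 0.10,
})

frac = 0.05  # desired sampling fraction

# Each distinct (g1, g2, g3) pattern is one disjoint stratum; patterns that never
# occur simply do not appear in the groupby.
sample = (pop.groupby(["G1", "G2", "G3"], group_keys=False)
             .apply(lambda s: s.sample(frac=frac, random_state=0)))

# The sample should mirror both the group proportions and their overlaps.
print(pop[["G1", "G2", "G3"]].mean().round(3))
print(sample[["G1", "G2", "G3"]].mean().round(3))
```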
| null |
CC BY-SA 4.0
| null |
2023-04-20T23:03:50.737
|
2023-04-20T23:03:50.737
| null | null |
136916
| null |
613632
|
1
| null | null |
1
|
25
|
I have population data.
I observe two clusters in the distribution of one continuous variable from this population.
I can easily group my data into two clusters with a threshold on this variable.
I also have a hypothesis about these two clusters such that I expect another variable X to be significantly different between two clusters.
That’s indeed the case when I test it. However, I’m now confused whether I should also test the initial clustering step.
For example, maybe, I’m just lucky with my clustering and even two randomly selected subsets of data would be different in X.
Do I need to do another test for this, or does the significant difference in X between the groups already cover it?
|
Proper way to test threshold-based grouping of data
|
CC BY-SA 4.0
| null |
2023-04-20T23:08:26.170
|
2023-04-20T23:08:26.170
| null | null |
386187
|
[
"hypothesis-testing",
"permutation-test"
] |
613633
|
1
| null | null |
1
|
16
|
Does anyone know of any references that show a proof of Minimax Lower Bound on Mean Squared Error for the average treatment effect in a Bernoulli randomized design?
Take a population of $n$ units for which we apply a treatment $Z$ drawn from a Bernoulli distribution, independently for each unit ($Z_i \sim Ber(p)$). Then, for the average treatment effect defined as $ATE = \mathbb{E}[Y(1) - Y(0)]$, I would like to show that
$$\inf\limits_{\widehat{ATE}} \sup\limits_{M \in \mathcal{M}} \mathbb{E}_M[(\widehat{ATE} - ATE(M))^2] = \Omega(1/n),$$
where $\mathcal{M}$ denotes the set of viable instances. I'm trying to use LeCam's method (see [Chapter 7 in John Duchi's notes](https://anilkeshwani.github.io/files/John-Duchi-Statistics-311-Electrical-Engineering-377.pdf)), but am not seeing how to show this, even for a linear potential outcomes model. I'm trying to show a much simpler result than in [this paper](https://arxiv.org/pdf/2208.05553.pdf) (Theorem 2), but where there is no interference between subjects.
I appreciate any leads or references on this!
|
Minimax Lower Bound on Mean Squared Error for average treatment effect in a Bernoulli randomized design?
|
CC BY-SA 4.0
| null |
2023-04-20T23:24:16.830
|
2023-04-20T23:24:16.830
| null | null |
384581
|
[
"standard-error",
"estimators",
"treatment-effect",
"minimax"
] |
613634
|
2
| null |
613619
|
1
| null |
Here's some code from an effort to clean up some of your R syntax and show a model-comparison approach to using Poisson regression to address your count data problem:
```
df<-data.frame( Location = c(rep("Leighton Moss",6),rep("Eaves Wood",6)),
Period = rep(c(rep("Midday",3),rep("Afternoon",3)),2),
Activity = rep(c("Resting","Foraging","Travelling"),4),
Count = c(88,68,54,30,41,33,13,28,13,55,47,46))
df
#----------------------------
Location Period Activity Count
1 Leighton Moss Midday Resting 88
2 Leighton Moss Midday Foraging 68
3 Leighton Moss Midday Travelling 54
4 Leighton Moss Afternoon Resting 30
5 Leighton Moss Afternoon Foraging 41
6 Leighton Moss Afternoon Travelling 33
7 Eaves Wood Midday Resting 13
8 Eaves Wood Midday Foraging 28
9 Eaves Wood Midday Travelling 13
10 Eaves Wood Afternoon Resting 55
11 Eaves Wood Afternoon Foraging 47
12 Eaves Wood Afternoon Travelling 46
#--------------------------------
glm(Count ~ Location+Period+Activity, data=df, fam="poisson")
#------------------------------
Call: glm(formula = Count ~ Location + Period + Activity, family = "poisson",
data = df)
Coefficients:
(Intercept) LocationLeighton Moss PeriodMidday ActivityResting ActivityTravelling
3.56042 0.44113 0.04652 0.01081 -0.23133
Degrees of Freedom: 11 Total (i.e. Null); 7 Residual
Null Deviance: 125.4
Residual Deviance: 94.55 AIC: 170.1
> mod1 <- glm(Count ~ Location+Period+Activity, data=df, fam="poisson")
> summary(mod1)
Call:
glm(formula = Count ~ Location + Period + Activity, family = "poisson",
data = df)
Deviance Residuals:
Min 1Q Median 3Q Max
-4.5972 -2.2970 -0.1465 2.1749 3.6698
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.56042 0.10236 34.783 < 0.0000000000000002 ***
LocationLeighton Moss 0.44113 0.09020 4.891 0.000001 ***
PeriodMidday 0.04652 0.08807 0.528 0.5973
ActivityResting 0.01081 0.10398 0.104 0.9172
ActivityTravelling -0.23133 0.11083 -2.087 0.0369 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 125.41 on 11 degrees of freedom
Residual deviance: 94.55 on 7 degrees of freedom
AIC: 170.11
Number of Fisher Scoring iterations: 5
> mod_null <- glm(Count ~ 1, data=df, fam="poisson")
> anova(mod1,mod_null)
Analysis of Deviance Table
Model 1: Count ~ Location + Period + Activity
Model 2: Count ~ 1
Resid. Df Resid. Dev Df Deviance
1 7 94.55
2 11 125.41 -4 -30.857
> pchisq(30.857,4)
[1] 0.9999967
> mod_Loc <- glm(Count ~ Location, data=df, fam="poisson")
> anova(mod_Loc, mod1)
Analysis of Deviance Table
Model 1: Count ~ Location
Model 2: Count ~ Location + Period + Activity
Resid. Df Resid. Dev Df Deviance
1 10 100.90
2 7 94.55 3 6.352
> 1-pchisq(6.352,3)
[1] 0.0956856
```
Seems like most of the statistical "signal" is in the Location parameter. You can compare a couple of other reduced models and you find that the Activity variable is nominally "significant", but unless that was a preset hypothesis, you are really not taking into account the multiple comparisons problem if you just report the p=0.048.
```
> mod_Loc_Per <- glm(Count ~ Location+Period, data=df, fam="poisson")
> anova(mod_Loc_Per, mod_Loc)
Analysis of Deviance Table
Model 1: Count ~ Location + Period
Model 2: Count ~ Location
Resid. Df Resid. Dev Df Deviance
1 9 100.62
2 10 100.90 -1 -0.27909
> mod_Loc_Act <- glm(Count ~ Location+Activity, data=df, fam="poisson")
> anova(mod_Loc_Act, mod_Loc)
Analysis of Deviance Table
Model 1: Count ~ Location + Activity
Model 2: Count ~ Location
Resid. Df Resid. Dev Df Deviance
1 8 94.829
2 10 100.902 -2 -6.073
> 1-pchisq(6.073,2)
[1] 0.0480026
```
In an earlier answer I cited a couple of references on using Poisson regression: [What statistical test do I use to check percentage differences?](https://stats.stackexchange.com/questions/613135/what-statistical-test-do-i-use-to-check-percentage-differences/613162#613162) and in an even earlier answer I gave a description of how to interpret such regression coefficients: [How to interpret coefficients in a Poisson regression?](https://stats.stackexchange.com/questions/11096/how-to-interpret-coefficients-in-a-poisson-regression)
| null |
CC BY-SA 4.0
| null |
2023-04-20T23:59:45.163
|
2023-04-21T00:14:24.510
|
2023-04-21T00:14:24.510
|
2129
|
2129
| null |
613635
|
1
|
613668
| null |
3
|
45
|
I am analyzing adverse event rates in two groups, a treatment group and a control group. There are about 10 different adverse events. Most of the rates are not significantly different between the two groups; however, the rates are systematically higher in the treatment group. What does that mean? Is it related to sample size?
[](https://i.stack.imgur.com/pf0uy.png)
|
P value is not significant, but most rates in treatment group are systematic higher than that in control group
|
CC BY-SA 4.0
| null |
2023-04-21T00:09:32.817
|
2023-04-21T09:50:34.240
| null | null |
386190
|
[
"p-value",
"sample-size"
] |
613636
|
1
| null | null |
0
|
21
|
I know that we want to have uncorrelated variables at the end when we apply PCA. But can you please explain why we still want to keep the maximum variance?
|
why do we want to keep the maximum variance for principal component analysis?
|
CC BY-SA 4.0
| null |
2023-04-21T00:14:37.047
|
2023-04-21T00:20:24.707
|
2023-04-21T00:20:24.707
|
805
|
382257
|
[
"pca"
] |
613637
|
1
| null | null |
1
|
24
|
I want to classify 120k customers into 5-6 clusters based on product usage, say, hundreds of product features clicked and hundreds of product pages viewed. The data will be like: a customer_id has clicked feature_1: 500 times, feature_2: 300 times, etc., and viewed page_1: 200 times, page_2: 300 times, and so on, over a timeframe of 1 year. How can I reduce hundreds of features into a few relevant ones which can be fed into a clustering technique? What could be a few possible data pre-processing steps here? Can a sample (30k?) from the 120k customers be representative of the population for the clustering exercise? Thanks!
|
Clustering on thousands of product feature clicks and pages viewed
|
CC BY-SA 4.0
| null |
2023-04-21T00:18:07.313
|
2023-04-21T00:18:07.313
| null | null |
49560
|
[
"clustering",
"dimensionality-reduction",
"large-data",
"data-preprocessing"
] |
613638
|
2
| null |
134811
|
0
| null |
I am reproducing @[amoeba's](https://stats.stackexchange.com/users/28666/amoeba) comment as an answer:
>
The answer to your first question is yes (and you do realize that $a_1^\top a_1 = 1$ means that the vector $a_1$ has length $1$, right?). To answer your second question, denote the centered data matrix $X$ (rows are samples, columns are variables). Then projection of the data onto the vector $a_1$ is given by $Xa_1$. The variance of this projection is given by $$
\begin{align}\frac{1}{n-1}(Xa_1)^\top (Xa_1) &= \frac{1}{n-1}a_1^\top X^\top X a_1\\
&= a_1^\top \left(\frac{X^\top X}{n-1}\right)a_1 \\
&= a_1^\top \Sigma a_1,
\end{align}$$ QED.
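A quick numerical check of the quoted identity, with an arbitrary simulated, centered data matrix:
```
# Verify var(X a1) == a1' Sigma a1 for a unit vector a1 and centered data X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # correlated columns
X = X - X.mean(axis=0)                                     # center the data

a1 = rng.normal(size=4)
a1 /= np.linalg.norm(a1)                                    # enforce a1' a1 = 1

Sigma = X.T @ X / (X.shape[0] - 1)
print(np.var(X @ a1, ddof=1), a1 @ Sigma @ a1)              # equal up to float error
```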
| null |
CC BY-SA 4.0
| null |
2023-04-21T01:10:05.457
|
2023-04-21T01:10:05.457
| null | null |
22311
| null |
613639
|
2
| null |
613606
|
2
| null |
>
Can I conclude that extraversion significantly predicts women's outcomes, but not for men?
Yes and no. Yes it is significant in one subgroup. But, is the association significantly different between men and women? No - the interaction is not significant. What is going on? You didn't tell us the main effect of extraversion, but probably it is also significant? The fact that you don't have a significant interaction means that this association doesn't differ between men and women. Probably it is just non-significant in men due to the loss of power, because the sample is smaller.
| null |
CC BY-SA 4.0
| null |
2023-04-21T01:16:25.663
|
2023-04-21T01:16:25.663
| null | null |
288142
| null |
613640
|
1
| null | null |
0
|
22
|
I would like to ask whether it is necessary to consider the correlation of variables when using a Gaussian mixture model for clustering, since covariance is already accounted for in the model. A further question: MFA (multivariate factor analysis) adds a rotation on top of PCA (principal component analysis), which can make the reduced variables easier to interpret but at the cost of their independence. Therefore, if the Gaussian mixture model does not need to consider the correlation of variables, does that indicate the dataset is more suitable for factor analysis rather than PCA before clustering?
|
Clustering correlated variables in Gaussian Mixture Models
|
CC BY-SA 4.0
| null |
2023-04-21T02:10:34.460
|
2023-04-21T02:10:34.460
| null | null |
386193
|
[
"correlation",
"pca",
"gaussian-mixture-distribution"
] |
613641
|
1
|
613642
| null |
3
|
76
|
One can find the distribution of $X+Y$ where $X$ and $Y$ are independent random variables using this formula
$$f_{X+Y}(a)=\int_{-\infty}^\infty f_X(a-y) f_Y(y) dy$$
I'm wondering how to adapt this formula to find the distribution of $X-Y$. Is that correct?
$$f_{X-Y}(a)=\int_{-\infty}^\infty f_X(a+y) f_Y(-y) dy$$
|
Calculating the distribution of $X-Y$
|
CC BY-SA 4.0
| null |
2023-04-21T02:27:27.457
|
2023-04-21T04:38:06.027
|
2023-04-21T03:16:14.517
|
362671
|
386194
|
[
"probability",
"distributions",
"convolution"
] |
613642
|
2
| null |
613641
|
5
| null |
Unfortunately, this is incorrect. The correct formula should be
\begin{align}
f_{X - Y}(a) = \int_{-\infty}^\infty f_X(a + y)f_Y(y)dy, \tag{1}
\end{align}
which is the consequence of
\begin{align}
P[X - Y \leq a] = \int_{-\infty}^\infty P[X \leq a + y]f_Y(y)dy =
\int_{-\infty}^{\infty}F_X(a + y)f_Y(y)dy
\end{align}
and then taking derivative under the integral with respect to $a$ (this is also how the convolution formula for $f_{X + Y}$ is derived). Of course, when $f_Y$ is symmetric about $0$, then $(1)$ coincides with what you proposed.
Alternatively, suppose you already know the convolution formula
\begin{align}
f_{X + Y}(a) = \int_{-\infty}^\infty f_X(a - y)f_Y(y)dy \tag{2}
\end{align}
for $X + Y$ with independent $X$ and $Y$, and want to derive $f_{X - Y}$ directly from it without computing the CDF then taking derivative as above (that's probably what you really intended given how the question is presented). You can write $X - Y = X + (-Y)$, it then follows by $(2)$ that (note $X$ and $-Y$ are independent given $X$ and $Y$ are):
\begin{align}
f_{X - Y}(a) = \int_{-\infty}^\infty f_X(a - y)f_{-Y}(y)dy. \tag{3}
\end{align}
Since $f_{-Y}(y) = f_Y(-y)$ (to see this, consider the 1-1 transformation $g(y) = -y$ with Jacobian determinant $-1$), $(3)$ becomes
\begin{align}
f_{X - Y}(a) = \int_{-\infty}^\infty f_X(a - y)f_Y(-y)dy. \tag{4}
\end{align}
By variable substitution $-y = t$, $(4)$ and $(1)$ are identical.
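As a quick numerical sanity check of $(1)$, take $X, Y \sim N(0,1)$ independently, so that $X - Y \sim N(0, 2)$; evaluating the integral numerically reproduces that density:
```
# Numerically evaluate (1) for X, Y ~ N(0, 1) and compare with the N(0, 2) density.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def f_diff(a):
    # integral of f_X(a + y) f_Y(y) dy over the whole real line
    return quad(lambda y: stats.norm.pdf(a + y) * stats.norm.pdf(y),
                -np.inf, np.inf)[0]

for a in (-1.5, 0.0, 2.0):
    print(a, round(f_diff(a), 6), round(stats.norm.pdf(a, scale=np.sqrt(2)), 6))
```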
| null |
CC BY-SA 4.0
| null |
2023-04-21T02:46:40.160
|
2023-04-21T04:38:06.027
|
2023-04-21T04:38:06.027
|
20519
|
20519
| null |
613644
|
2
| null |
613641
|
2
| null |
Retreat to the basics: consider $U:= X-Y, ~X, Y$ being independent. Consider the set $S:=\{(x,y)\mid x-y\leq u\}.$ Therefore the cdf of $U$ is
\begin{align}F_U(u)&=\Pr[U\leq u]\\&=\iint_S f(x, y) ~\mathrm dx\mathrm dy\\&=\int_{-\infty}^\infty\left(\int_{x-u}^\infty f(x, y) ~\mathrm dy\right) ~\mathrm dx\\&=\int_{-\infty}^\infty\left(\int_{-\infty}^u f(x, x-z) ~\mathrm dz\right) ~\mathrm dx, ~~~z:= x-y\\&= \int_{-\infty}^u \left( \int_{-\infty}^\infty f(x, x-z) ~\mathrm dx\right) ~\mathrm dz~~~\textrm{(Fubini's Theorem) };\tag 1\label 1\end{align} from $\eqref 1,$ the pdf of $U$ is \begin{align} f_U(u) &=\int_{-\infty}^\infty f(x, x-u) ~\mathrm dx \\&=\int_{-\infty}^\infty f(u+y, y) ~\mathrm dy\\&=\int_{-\infty}^\infty f_X(x) \cdot f_Y(x-u) ~\mathrm dx\\&= \int_{-\infty}^\infty f_X(u+y) \cdot f_Y(y) ~\mathrm dy. \end{align}
| null |
CC BY-SA 4.0
| null |
2023-04-21T03:07:08.400
|
2023-04-21T03:07:08.400
| null | null |
362671
| null |
613645
|
1
| null | null |
0
|
49
|
I am confused about how to evaluate and compare the quality of different regression models.
- For example, I understand that classification models are more straightforward to compare and evaluate, as metrics such as the F-score, AUC/ROC and the confusion matrix are all bounded between 0 and 1.
- However, in regression models, comparison metrics such as RMSE, cross-validation error, AIC and BIC are all unbounded - if several regression models are compared, the model with the lowest RMSE, AIC or BIC might still be an overall bad model even though it's better than all the other models! (e.g. a turtle is faster than a snail but both animals are still slow!)
This being said, is there any general advice on how to compare different regression models fit on the same dataset? For example, are really large values of AIC and BIC (e.g. over 1000000) an overall indicator of poor performance?
|
"Bounded" Metrics for Comparing Regression Models?
|
CC BY-SA 4.0
| null |
2023-04-21T03:08:54.067
|
2023-04-23T20:10:57.500
|
2023-04-21T04:09:03.153
|
247274
|
77179
|
[
"regression"
] |
613646
|
1
| null | null |
1
|
28
|
An energy-based model parametrized by $\theta$ is defined as
$$
p(x; \theta) \propto \exp(-f(x; \theta))
$$
For my specific case, it is that $f(x; A, y) = - \langle x, Ax \rangle - \langle x, y \rangle$ and $x$ is over the hypercube $\{ -1, +1 \}^N$.
I am looking for an efficient way of computing the expected value (barycenter) of this model without MCMC/sampling. Is it possible to do so?
|
Expectation of energy-based model
|
CC BY-SA 4.0
| null |
2023-04-21T03:32:21.600
|
2023-04-25T03:29:53.910
|
2023-04-25T03:29:53.910
|
310805
|
310805
|
[
"machine-learning",
"mathematical-statistics",
"generative-models"
] |
613648
|
1
| null | null |
0
|
10
|
let's say I have a project which involves predicting the likelihood of whether a specific project will meet their sales target or not.
So, I have 2000 project records from Jan 2020 to Apr 2023. Some of these have already met their sales target in different months of 2022 and some are still in progress. So, now I am in the process of extracting features for these projects and building a predictive model.
So, my question is: let's say I have project_id = 2, which met the sales target on Dec 10th, 2022, and I have another project_id = 4 which is still in progress toward meeting the sales target (it has met only 40% of the sales target).
So, now when I engineer features, should my features be made up of data up to the latest time point for both project_id = 2 and project_id = 4?
Or, since project_id = 2 met the target by Dec 10th, 2022, should my feature values only go up to Dec 10th, 2022 for project_id = 2? And for project_id = 4, we can have feature values based on the latest data up to 21st Apr 2023.
But the problem is that I have 2000 projects. So, for all these projects, do I have to compute engineered features and their values based on the date when they met their target? Because each project could have met its target on a different date. So, I have to compute the values for each feature based on their outcome date/target-met date? Am I right?
|
extract features till date of outcome or latest date
|
CC BY-SA 4.0
| null |
2023-04-21T04:41:49.777
|
2023-04-21T04:41:49.777
| null | null |
241460
|
[
"machine-learning",
"probability",
"classification",
"data-transformation",
"data-mining"
] |
613649
|
2
| null |
249874
|
1
| null |
Placing a regularizing restriction sometimes helps prevent crossing.
[https://ieeexplore.ieee.org/document/9548806](https://ieeexplore.ieee.org/document/9548806)
| null |
CC BY-SA 4.0
| null |
2023-04-21T05:04:22.177
|
2023-04-21T05:04:22.177
| null | null |
386207
| null |
613650
|
1
| null | null |
0
|
9
|
I have time series data at weekly level with 4 levels.
- Level 0 - Time: week on week
- Level 1 - Job type: Day, Night, etc.
- Level 2 - Location: the location to which the job belongs
- Level 3 - County: the county under which the location falls
Population data is available at the location level; unemployment data is available at the county level.
I am trying to predict weekly applications (Y) based on wage rate, population in the location, and unemployment rate. There are other factors, but I am not listing them all.
I have created the model below:
```
model = lmer(applications ~ wage_rate + population + unemployment_rate +
               (1 + wage_rate + population + unemployment_rate | county/location/jobtype),
             REML = TRUE,
             data = data4)
```
I would like to confirm whether this model structure makes sense.
I am also confused about how to calculate the prediction at the week level. I couldn't find any example where such 4-level time series data are analysed; most of the examples are maths-oriented and I am unable to follow them.
Please advise.
[](https://i.stack.imgur.com/STr10.png)
|
Mixed model with 4 levels of data
|
CC BY-SA 4.0
| null |
2023-04-21T05:13:18.447
|
2023-04-21T05:13:18.447
| null | null |
386206
|
[
"r",
"machine-learning",
"hypothesis-testing",
"mixed-model",
"multilevel-analysis"
] |
613651
|
1
| null | null |
1
|
41
|
Suppose we have a table of a group of friends' preferences for food. Each person will receive a unique meal. What is the approach to sampling from this preference table so that each time this group visits the restaurant, we satisfy their preferences based on these probabilities? The sums of the rows add up to 1, but since pizza and pasta are out of stock, I zero out their probabilities. I want this to be a sample-based method (i.e. Person3 will get bread most of the time, but very infrequently they might receive eggs; Person0 is likely to get tuna, but Person1 and Person2 will probably get it quite often too, just not as frequently as Person0). The frequency should reflect both their own preferences and the trade-offs of someone else receiving the meal. How would I approach this problem?
| |Tuna |Pizza |Bread |Pasta |Cake |Eggs |
|---|----|-----|-----|-----|----|----|
|Person0 |0.33 |0 |0.1 |0 |0.1 |0.01 |
|Person1 |0.3 |0 |0.2 |0 |0.1 |0.01 |
|Person2 |0.25 |0 |0.2 |0 |0.1 |0.01 |
|Person3 |0 |0 |0.9 |0 |0 |0.01 |
|
Sampling from a preference table
|
CC BY-SA 4.0
| null |
2023-04-21T06:07:54.567
|
2023-04-23T02:47:06.440
|
2023-04-23T02:47:06.440
|
386208
|
386208
|
[
"probability",
"distributions",
"multivariate-analysis",
"conditional-probability"
] |
613653
|
2
| null |
613609
|
1
| null |
The answer depends on what forecasts you actually want to compare.
- If you are interested in each of the 40 horizons $h=1,\dots,40$, you could do 40 separate Diebold-Mariano tests. E.g. each horizon's forecast leads to a separate decision, and consequences of each decision can be identified using a loss function defined on the forecast error. (The loss functions could be different for each horizon.) Thus you would get a specific answer for each horizon.
You may want to adjust the significance level to account for multiple testing.
You may also consider treating this as a test of a joint hypothesis.
- If you treat the 40 horizons as one multivariate forecast, you could do a single test. E.g. there is a single decision based on the 40 forecasts and you can define a loss function on the multivariate forecast error such as $L(e_1,\dots,e_{40})=\sum_{h=1}^{40} w_h|e_h|$ with nonnegative weights $w_h$ for $h=1,\dots,40$; here $e_h$ for $h=1,\dots,40$ are forecast errors. Thus you would get a single answer covering all the horizons.
The choice between these approaches should be guided by what you really are interested in from the subject-matter perspective.
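For the first approach, a minimal R sketch using `dm.test()` from the forecast package; here `e1_mat` and `e2_mat` are assumed to be matrices of forecast errors from the two models, with one column per horizon:
```
library(forecast)

# e1_mat, e2_mat: assumed n x 40 matrices of forecast errors,
# column h holding the h-step-ahead errors of model 1 and model 2
H <- 40
pvals <- sapply(seq_len(H), function(h) {
  dm.test(e1_mat[, h], e2_mat[, h], h = h, power = 2)$p.value
})

# adjust for multiple testing across the 40 horizons
p.adjust(pvals, method = "holm")
```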
| null |
CC BY-SA 4.0
| null |
2023-04-21T07:11:26.110
|
2023-04-21T07:11:26.110
| null | null |
53690
| null |
613655
|
1
| null | null |
1
|
24
|
When correlating two variables that have a different number of observations each, how do I find the degrees of freedom?
E.g. Sample A has 35 observations and Sample B has 22 observations.
How do I calculate df = N - 2?
Should it be 35 - 2 or 22 - 2?
|
What would be the sample number if two variables have different number of samples?
|
CC BY-SA 4.0
| null |
2023-04-21T07:26:49.607
|
2023-04-21T07:26:49.607
| null | null |
386212
|
[
"correlation",
"statistical-significance",
"pearson-r",
"degrees-of-freedom"
] |
613656
|
1
| null | null |
1
|
20
|
One metric used to measure the efficiency of Monte Carlo sampling (given a raw set of $N$ samples) is the effective sample size $N_{eff}$.
*The efficiency of a sampling procedure depends partly on the effective sample size ESS for a raw set of N samples. The ESS is denoted by $N_{eff}$.
- In inverse transform sampling: all samples are direct samples from the target distribution so $N_{eff} = N$ (hence, the sampling process is inefficient)
- In rejection sampling, the raw set of samples are from the proposal distribution but $N_{eff}$ is explicit - it is simply the number of accepted samples.
- In importance sampling, the ESS is $N_{eff} = \frac{1}{\sum_{i}^{N}w_{i}^{2}}$*.
I struggle to understand the above. A sequence $S$ of $N$ samples with correlation between the samples leads to redundancies, so the sequence $S$ would have an effective size less than $N$.
Shouldn't this imply that inverse transform sampling is efficient?
Edit: to provide more context.
|
Why is inverse sampling inefficient?
|
CC BY-SA 4.0
| null |
2023-04-21T07:27:56.090
|
2023-04-21T08:08:58.247
|
2023-04-21T08:08:58.247
|
109101
|
109101
|
[
"importance-sampling"
] |
613657
|
1
| null | null |
0
|
19
|
Assuming a sample $X_1, X_2, ..., X_n$, the sample variance is calculated as
$s^2 = \frac{1}{n-1} \sum (X_i-\bar{X})^2$
The fact that there is $n-1$ in the denominator instead of $n$ is called the Bessel correction and ensures that $s^2$ is an unbiased estimator of the variance $\sigma_X^2 = \langle X^2 \rangle -\langle X\rangle^2$, that is
$\langle s^2 \rangle = \sigma^2_X$
However, during the derivation of the Bessel correction (e.g. [here](https://www.uio.no/studier/emner/matnat/math/MAT4010/data/forelesningsnotater/bessel-s-correction---wikipedia.pdf)), one makes use of the fact that the sample elements are independent, thus $\sigma^2_\bar{X} = \sigma_X^2/n$, that is, the sample mean variance is $1/n$ times the $X$ variance.
What happens if this is not the case and the sample elements are actually statistically dependent (correlated)? Since we only have a single sample, we do not actually know what the covariances $\langle X_i X_j\rangle - \langle X_i\rangle\langle X_j\rangle$ would be. How would one then estimate $\sigma^2_X$ from the sample? Is $s^2$ still an unbiased estimator (i.e. does $\langle s^2 \rangle = \sigma_X^2$ still hold), and if so, how can I see that?
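As a quick check of where the independence assumption enters (a sketch under the stated setup, with identically distributed $X_i$, common variance $\sigma_X^2$ and pairwise covariances $c_{ij} = \langle X_i X_j\rangle - \langle X_i\rangle\langle X_j\rangle$):
$\langle \sum_i (X_i-\bar{X})^2 \rangle = n\sigma_X^2 - n\,\sigma^2_\bar{X}$, with $\sigma^2_\bar{X} = \frac{1}{n^2}\big(n\sigma_X^2 + \sum_{i\neq j} c_{ij}\big)$,
so that
$\langle s^2 \rangle = \sigma_X^2 - \frac{1}{n(n-1)}\sum_{i\neq j} c_{ij}$,
which reduces to $\sigma_X^2$ only when the pairwise covariances sum to zero (as they do under independence).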
|
Bessel correction for the variance of dependent sample
|
CC BY-SA 4.0
| null |
2023-04-21T07:32:18.607
|
2023-04-21T07:32:18.607
| null | null |
153176
|
[
"anova",
"covariance",
"covariance-matrix",
"sample",
"bessels-correction"
] |
613659
|
2
| null |
613575
|
6
| null |
Here is a possible solution to your problem. Essentially, the idea is to estimate a bivariate density function nonparametrically from the data at hand and then plot it by means of contour levels. In order to compare the distribution across different groups, you can pick some representative contour levels for each group and draw them on the same plot.
Needless to say, the choice of the estimation method is crucial and the sample size should be sufficiently large. In my answer, I'll use a kernel smoother whose theory is explained in the book [Multivariate Kernel Smoothing and Its Applications](https://doi.org/10.1201/9780429485572) by José E. Chacón and Tarn Duong, and which is implemented in the `R` package `ks`. Other possible solutions are provided in packages `sm` and `KernSmooth`, but I'm not pursuing them further here.
```
# generate some data first
set.seed(12)
n = 5000
x_ = rgamma(n = n, shape=10, scale=0.01)
x = 1/x_
y = rnorm(n, 22, sd = sqrt(x))
library(ks)
# compute the kernel density estimate
# using the default parameters (playing with it
# a bit may give better solutions)
fhat = kde(cbind(x,y), compute.cont=TRUE)
# figure
plot(fhat,display="filled.contour",border=1,
alpha=0.8, lwd=1)
```
[](https://i.stack.imgur.com/XkOyE.png)
```
# another figure
plot(fhat, lwd= 2, col = 1)
points(x, y, pch=20, cex=0.1, col = 'gray')
plot(fhat, lwd=2, add = TRUE, col = 1)
```
[](https://i.stack.imgur.com/2XoQn.png)
The plotted curves are approximate probability contours. Here you see the contour levels of approximate probability content equal to 25, 50 and 75.
Now, since your aim is to compare different distributions, that is, distributions from different groups, I suggest picking a few contours for each distribution, e.g. 50 and 75 for each group and placing them on the same picture, using different colours.
Let's apply this to a real example. In particular, let's consider the `iris` dataset, and compute a kernel density estimate for each type of flower (setosa, versicolor, virginica) using the variables sepal length and sepal width. To show the estimated bivariate densities pictorially, I've selected the contour plots with approximate probability coverage 0.25, 0.5, 0.75 and 0.95. Because the aim here is group comparison, we can compare one level at a time between the three groups. In the figure below, the contours with level 0.25 are shown in panel (a), the contours with level 0.50 are shown in panel (b) and so on.
We note some degree of separation between the three groups, especially between the setosa and the other two groups.
[](https://i.stack.imgur.com/Qfbhe.png)
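A minimal sketch of how such a per-group overlay can be produced with `ks` (the contour level, colours and axis limits are only illustrative, and the plotting arguments may need tweaking depending on the package version):
```
library(ks)

# split the two sepal measurements by species
groups <- split(iris[, c("Sepal.Length", "Sepal.Width")], iris$Species)

# one kernel density estimate per group
fhats <- lapply(groups, function(g) kde(as.matrix(g), compute.cont = TRUE))

# overlay a chosen contour level (e.g. 50%) for the three groups
cols <- c("red", "darkgreen", "blue")
plot(fhats[[1]], cont = 50, col = cols[1],
     xlim = c(4, 8), ylim = c(1.5, 4.5),
     xlab = "Sepal length", ylab = "Sepal width")
for (k in 2:3) plot(fhats[[k]], cont = 50, col = cols[k], add = TRUE)
legend("topright", legend = names(fhats), col = cols, lty = 1)
```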
| null |
CC BY-SA 4.0
| null |
2023-04-21T08:13:45.957
|
2023-04-24T20:50:47.510
|
2023-04-24T20:50:47.510
|
56940
|
56940
| null |
613660
|
2
| null |
575198
|
0
| null |
In autoencoding variational bayes and other variational methods, the reparametrization trick is used to sample from an approximate posterior in a way that is amenable to gradient based optimization.
Sampling directly from $p_\phi(z|x)= \mathcal N(\mu(x), \sigma(x))$ requires more complex automatic differentiation, while sampling via $z= \mu(x) + \sigma(x) \cdot \epsilon$, $\epsilon \sim \mathcal N(\mathbf 0, \mathbb I)$, uses only operations that can be easily differentiated.
Discrete latent variables, on the other hand, even if defined taking subderivatives into account, would lead to zero derivatives almost everywhere, thus making it impossible to optimise through them.
There is an approximate solution to that, however, in the form of the Straight-Through estimators, which basically treat the variables as continuous for the backward pass. See for example the Straight-Through Gumbel Softmax.
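As a minimal numerical illustration of the sampling identity itself (not of the gradient machinery), a sketch in R:
```
set.seed(1)
mu    <- 2
sigma <- 3
n     <- 1e5

# reparameterized draw: deterministic transform of standard-normal noise
eps <- rnorm(n)
z_reparam <- mu + sigma * eps

# direct draw from N(mu, sigma^2)
z_direct <- rnorm(n, mean = mu, sd = sigma)

# both have (up to Monte Carlo error) the same mean and sd
c(mean(z_reparam), mean(z_direct))
c(sd(z_reparam), sd(z_direct))
```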
| null |
CC BY-SA 4.0
| null |
2023-04-21T08:48:16.403
|
2023-04-21T08:48:16.403
| null | null |
60613
| null |
613661
|
1
| null | null |
1
|
24
|
I am starting a meta-analysis on the effect of different raw materials on pellet quality (for bioenergy purposes). I already have the database of studies, and my initial idea was to calculate standardized mean difference effect sizes using the value of the "alternative biomass" pellet (treatment) compared with that of a "typical/commercial" pellet (control). However, many studies in my database do not report any control pellet. Therefore I am thinking about using as control the values taken from the technical ISO standard for pellet quality. In this way I would compare the value of the "alternative" pellet with the value from the standard. The problem is that the standard does not report any variance (SE, SD, CI); for example, the heating value simply has to be higher than XX MJ/kg. Do you know how to handle this issue and calculate effect sizes that can subsequently be pooled in a meta-analysis?
|
Meta-analysis when the control value is taken from a standard (ISO, EN, etc.) and does not have any variance
|
CC BY-SA 4.0
| null |
2023-04-21T09:08:01.870
|
2023-04-21T09:08:01.870
| null | null |
346455
|
[
"meta-analysis",
"effect-size",
"standardized-mean-difference"
] |
613663
|
1
| null | null |
2
|
28
|
I am running a generalised mixed effect model with a binomial response for my thesis.
The model estimates the chance that a subject in a certain group is female. My final goal is to compare the groups by doing pairwise comparisons between the estimated marginal means.
I use hunting season as a random effect since I assume the population structure differs over the seasons. Based on contingency tables, there is a relationship between sex structure (and age structure) and hunting season. If I add hunting seasons as a fixed effect, it also plays a significant role.
However, when I run my model, it seems that the random effect doesn't play a very big role, which seems counterintuitive to me.
Code:
```
e = glmer(Sexb ~ `Age class` + group_d + `Age class`:group_d + (1| `Hunting season`),
data = modeldata,
family = binomial,
control=glmerControl(optimizer = "bobyqa")
)
summary(e)
```
Output:
```
Show in New Window
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: Sexb ~ `Age class` + group_d + `Age class`:group_d + (1 | `Hunting season`)
Data: modeldata
Control: glmerControl(optimizer = "bobyqa")
AIC BIC logLik deviance df.resid
2616.8 2689.3 -1295.4 2590.8 1944
Scaled residuals:
Min 1Q Median 3Q Max
-4.9410 -0.8933 -0.6638 1.0896 1.5446
Random effects:
Groups Name Variance Std.Dev.
Hunting season (Intercept) 0.01505 0.1227
Number of obs: 1957, groups: Hunting season, 6
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.3214 0.2589 1.241 0.2146
`Age class`1 -0.9958 0.9021 -1.104 0.2697
`Age class`2 0.2525 0.6777 0.373 0.7095
`Age class`3+ 1.0863 0.5227 2.078 0.0377 *
group_dFamily Pack -0.3705 0.3095 -1.197 0.2313
group_dSolo -0.6988 0.2881 -2.425 0.0153 *
`Age class`1:group_dFamily Pack 14.6728 39.8115 0.369 0.7125
`Age class`2:group_dFamily Pack -13.7172 26.3269 -0.521 0.6023
`Age class`3+:group_dFamily Pack 2.2253 1.1592 1.920 0.0549 .
`Age class`1:group_dSolo 1.2148 0.9164 1.326 0.1850
`Age class`2:group_dSolo -0.6274 0.7002 -0.896 0.3703
`Age class`3+:group_dSolo -0.8991 0.5481 -1.640 0.1009
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Following [https://data.library.virginia.edu/getting-started-with-binomial-generalized-linear-mixed-models/](https://data.library.virginia.edu/getting-started-with-binomial-generalized-linear-mixed-models/) I perform following steps to interpret my random effect:
Extract the estimated standard deviation:
```
VarCorr(e)
output:
Groups Name Std.Dev.
Hunting season (Intercept) 0.1227
```
As far as I understand, this 0.12 is the standard deviation corresponding to the variance estimate of 0.015.
If a calculate confidence intervals I get the following:
```
2.5 % 97.5 %
sd_(Intercept)|Hunting season 0 0.3211868
```
0 is part of the confidence interval, suggesting that there may not be random variability between subjects (caused by differences in hunting season?). This is counterintuitive to me; however, using ordinary generalised linear models would make life much easier :)
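For a sense of scale, a rough sketch assuming the usual latent-scale ICC formula for logistic mixed models (residual variance $\pi^2/3$):
```
# random-intercept variance for hunting season, from VarCorr(e)
var_season <- 0.01505

# latent-scale ICC for a logistic mixed model
icc <- var_season / (var_season + pi^2 / 3)
icc  # roughly 0.005, i.e. less than 1% of the latent variance
```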
|
Problems interpreting random effects
|
CC BY-SA 4.0
| null |
2023-04-21T09:19:15.673
|
2023-04-21T09:19:15.673
| null | null |
382882
|
[
"regression",
"mixed-model",
"random-variable",
"interpretation",
"binomial-distribution"
] |
613664
|
1
| null | null |
0
|
24
|
Good morning,
I am currently taking a class on biostatistics and am struggling with an exercise. I am given two graphs, $G1$ and $G2$.
Then, assuming that $G1$ represents an NPSEM-IE, I have to either show that $G2$ also represents an NPSEM-IE or give a counter-example.
First of all, I am not sure about the model assumptions implied by saying that $G1$ represents an NPSEM-IE, because I think I would then need to show that the model assumptions for $G2$ are satisfied under $G1$.
Thank you in advance for any clarification you could give.
|
Given a graph representing a NPSEM-IE, how can I show that another graph also represents one?
|
CC BY-SA 4.0
| null |
2023-04-21T09:23:48.577
|
2023-04-21T19:01:38.403
|
2023-04-21T19:01:38.403
|
28500
|
386162
|
[
"nonparametric",
"biostatistics",
"graph-theory",
"causal-diagram"
] |
613666
|
1
|
613702
| null |
0
|
23
|
I have the dataset:
```
structure(list(n = c(0L, 3L, 0L, 0L, 1L, 1L, 4L, 1L, 0L, 1L,
0L, 1L, 2L, 0L, 2L, 2L, 1L, 1L, 2L, 0L, 1L, 1L, 1L, 2L, 3L, 1L,
0L, 2L, 5L, 1L, 0L, 1L, 4L, 0L, 0L, 3L, 0L, 2L, 2L, 1L, 2L, 2L,
1L, 4L, 1L, 0L, 1L, 2L, 0L, 1L, 1L, 3L, 0L, 0L, 1L, 0L, 0L, 1L,
0L, 0L, 2L, 0L, 0L, 2L, 0L, 2L, 3L, 1L, 2L, 3L, 2L, 2L, 1L, 0L,
1L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 4L, 2L, 2L, 1L, 2L, 0L, 0L, 0L,
0L, 0L, 0L, 5L, 1L, 2L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 2L, 3L, 0L,
1L, 3L, 0L, 0L, 1L, 1L, 0L, 3L, 4L, 2L, 4L, 1L, 0L, 3L, 0L, 0L,
2L, 1L, 2L, 1L), year = c(1979, 1980, 1981, 1982, 1983, 1984,
1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995,
1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006,
2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017,
2018, 2019, 2020, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986,
1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997,
1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008,
2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019,
2020, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988,
1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999,
2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020),
location = c("London", "London", "London", "London", "London",
"London", "London", "London", "London", "London", "London",
"London", "London", "London", "London", "London", "London",
"London", "London", "London", "London", "London", "London",
"London", "London", "London", "London", "London", "London",
"London", "London", "London", "London", "London", "London",
"London", "London", "London", "London", "London", "London",
"London", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Liverpool", "Liverpool",
"Liverpool", "Liverpool", "Liverpool", "Cork", "Cork", "Cork",
"Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork",
"Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork",
"Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork",
"Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork",
"Cork", "Cork", "Cork", "Cork", "Cork", "Cork", "Cork")), row.names = c(NA,
-126L), class = c("tbl_df", "tbl", "data.frame"))
```
where I have applied Poisson Regression:
```
summary(p2 <- glm(n ~ location+year, family="poisson", data=test2))
```
and then lsmeans for post hoc analysis:
```
emmeans(p2, pairwise ~ location)
```
I have the following questions:
(1) In the results of lsmeans I see that in the contrast the degrees of freedom are Inf. Why does this happen?
(2) How are the individual lsmeans computed? Can you maybe give a formula? I know that lsmeans are the estimated expected values for an effect assuming all other covariates are at their average level. Does that mean that year is held at its average level?
|
LSMEANS for Poisson Regression in R
|
CC BY-SA 4.0
| null |
2023-04-21T09:34:30.590
|
2023-04-21T14:45:15.953
| null | null |
355434
|
[
"r",
"poisson-regression",
"lsmeans"
] |
613667
|
1
| null | null |
1
|
26
|
What is the critical value for the Pearson correlation with df = 190? The total number of samples is 192, so df = N - 2 = 190. The tables have critical values for df 100, 150, 300, 500, 1000. If the df (N - 2) falls in between (e.g. 145, 395, etc.), which critical value should be used to judge whether the correlation is significant or not? Thanks.
I analyzed the correlation between two variables, each with 192 observations, and could not find the critical value for df = 190.
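A minimal R sketch of how the exact critical value can be computed for any df via the usual relationship between $r$ and the $t$ statistic, so no table interpolation is needed (the 5% two-sided level is just an example):
```
# two-sided critical value of Pearson's r for given df = N - 2
r_crit <- function(df, alpha = 0.05) {
  t_crit <- qt(1 - alpha / 2, df)
  t_crit / sqrt(df + t_crit^2)
}

r_crit(190)  # approximately 0.142
```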
|
What is the critical value for those df not listed in Critical value table for Pearson correlation?
|
CC BY-SA 4.0
| null |
2023-04-21T09:39:24.120
|
2023-04-21T09:39:24.120
| null | null |
386212
|
[
"correlation",
"degrees-of-freedom"
] |
613668
|
2
| null |
613635
|
0
| null |
Yes, the power of the test depends on sample size, but more importantly on the number of events. All of the event proportions are very low (e.g., 0.35% of 4000 = 0.0035 * 4000 = 14 events). This considerably limits the power of your statistical test. Therefore, there is much uncertainty about whether these differences are truly systematic.
| null |
CC BY-SA 4.0
| null |
2023-04-21T09:50:34.240
|
2023-04-21T09:50:34.240
| null | null |
385890
| null |
613669
|
1
| null | null |
1
|
26
|
I have the following model (simplified here for the description). The observed variable is $y_i$, which is a linear function of some random variable $\eta_i$ and a random error term $\epsilon_i$:
$y_{i} = \eta_i + \epsilon_i$
Now, $\eta_i$ is also a linear function of some random variable plus another random error term:
$\eta_{i} = b \cdot \tau_i + \delta_i$
and finally $\tau_i$ is something like
$\tau_i = c \tau_i + \xi_i \;\Leftrightarrow\; \tau_i = c' \cdot \xi_i$
The goal is to estimate the coefficients $b$ and $c$, and the variances of $\epsilon_i$, $\delta_i$, and $\tau_i$ with a Bayesian approach. Say $\theta = (b,c,\sigma^2_{\epsilon},\sigma^2_{\tau}, \sigma^2_{\xi})$. The posterior is proportional to
$f(\theta, \eta_i, \tau_i | y_i ) \propto f(y_i|\eta_i,\theta)f(\eta_i|\tau_i,\theta)f(\tau_i|\theta)f(\theta)$
I am not an expert in Bayesian estimation, but my understanding is that for the Gibbs sampler I would need the conditional distributions $f(\eta_i| \cdot)$ and $f(\tau_i| \cdot)$, that is, I need to sample $\eta_i$ and $\tau_i$. I was wondering whether I really need this, or whether there is a way - given the construction of the model - to circumvent sampling one of the two variables. For instance, I can easily derive the marginal distribution $f(\eta_i|\theta)$ where $\tau_i$ is integrated out. Any hints - pointers to papers/books - are appreciated (and sorry if this is a silly question).
Thanks
Stefan
|
Gibbs sampler: Conditional distribution with nested latent variable distributions
|
CC BY-SA 4.0
| null |
2023-04-21T09:57:55.517
|
2023-04-21T09:57:55.517
| null | null |
187586
|
[
"bayesian",
"sampling",
"gibbs"
] |
613670
|
1
| null | null |
2
|
95
|
Suppose I have 2 random variables:
$X\sim \textrm{Bin}(m,p_1)$ and $Y\sim \textrm{Bin}(n,p_2).$
I want to find the distribution of $S=X-Y$ using the probability generating function ($PGF$) treating $S$ as a [function of random variables](https://en.wikipedia.org/wiki/Probability-generating_function#Functions_of_independent_random_variables).
$$G_{S}(z) = G_X(z) G_Y(1/z)$$
Using the [PGF](https://en.wikipedia.org/wiki/Probability-generating_function#Functions_of_independent_random_variables) for a single binomially distributed random variable:
$$G_{S}(z) = {[1-p_1+p_1 z]}^m \space {\left[1-p_2+\frac{p_2}{z}\right]}^n$$
Then to find the probability mass function [PMF](https://en.wikipedia.org/wiki/Probability-generating_function#Power_series) of the resulting distribution I use:
$$p(k) = \Pr(S=k) = \frac{G_{S}^{(k)} (0)}{k!}$$
where the numerator is the $k$-th derivative of the $PGF$.
My question is: how is it possible to evaluate this expression when $G_{S}^{(k)}$ contains $1/z$ terms that would be undefined at $0$? (This problem doesn't exist for the sum of random variables.)
(I'm aware there are other ways to find this distribution, but I just wanted to understand how to find it using the PGF).
|
Probability Generating Function for The Difference of Two Binomially Distributed Random Variables?
|
CC BY-SA 4.0
| null |
2023-04-21T10:41:12.403
|
2023-04-21T21:46:36.263
|
2023-04-21T15:08:48.967
|
375525
|
375525
|
[
"probability",
"random-variable",
"binomial-distribution",
"derivative",
"probability-generating-fn"
] |
613673
|
1
| null | null |
0
|
9
|
I'm trying to understand how a continuous Hidden Markov Model works, and I am confused by the difference between the hidden states of the discrete latent variable and the mixture components in the emission probabilities. Initially, I was under the impression that the hidden states of the discrete variable correspond to the mixture components of the Gaussian mixture, but I discovered that this is wrong. I thought that when we transition from state $i$ to state $j$, we move from mixture component $i$ to mixture component $j$.
An explanation of the difference, as well as how they are related, would be highly appreciated. Moreover, given an observation sequence from which we want to learn the parameters of the CHMM, how can we determine how many states and how many mixture components there are?
Any help would be very much appreciated.
|
What is the difference between hidden states and mixture components in a continuous Hidden Markov Model?
|
CC BY-SA 4.0
| null |
2023-04-21T11:03:57.077
|
2023-04-21T11:19:19.260
|
2023-04-21T11:19:19.260
|
362671
|
386227
|
[
"machine-learning",
"stochastic-processes",
"markov-process",
"gaussian-mixture-distribution",
"hidden-markov-model"
] |
613674
|
1
|
613685
| null |
2
|
42
|
I am trying to use the Mahalanobis transformation on a single variable that is autocorrelated with itself in a time series that runs week to week over several years. The Mahalanobis transformation is defined as follows: "If S > 0 then S^(-1) has a unique symmetric positive definite square root S^(-1/2). The Mahalanobis transformation is defined by z = S^(-1/2)(x - mean x). Then the covariance of z is S = I, so this transformation eliminates the correlation between the variables and standardizes the variance of each variable." Currently these are the steps I am doing:
- calculate the mean data
- calculate the covariance matrix
- calculate the inverse square root of the covariance matrix.
- Then calculate (x - mean x) * inverse square root of the covariance matrix.
My results are exactly the same as standardizing the variable (I have 600 values, if that matters), so the data are not decorrelated. Am I doing something obviously wrong, or is it that the Mahalanobis transformation just isn't suited to a single variable?
I am using Python for this calculation; the code is below:
```
def mahalanobis_transform(data):
# Calculate the mean of the data
mean_data = np.mean(data, axis=0)
# Calculate the covariance matrix
cov_matrix = np.cov(data, rowvar=False)
# Calculate the inverse square root of the covariance matrix
inv_sqrt_cov = 1 / np.sqrt(cov_matrix)
# Perform the Mahalanobis transformation for a scalar covariance matrix
transformed_data = (data - mean_data) * inv_sqrt_cov
return transformed_data
```
All answers appreciated.
|
Would using a Mahalanobis transformation be the same as standardizing the variable?
|
CC BY-SA 4.0
| null |
2023-04-21T11:19:24.063
|
2023-04-21T12:50:35.307
| null | null |
385918
|
[
"time-series",
"python",
"autocorrelation",
"mahalanobis"
] |
613676
|
1
| null | null |
0
|
15
|
Suppose we observe the vector-matrix pair $(y,\mathbf{X})\in\mathbb{R}^n\times\mathbb{R}^{n\times d} $ which is linked by the observation model:
\begin{equation}
y=\mathbf{X}\theta^*+\epsilon
\end{equation}
where $\epsilon\in \mathbb{R}^n$ is the noise vector. In this setting the constrained LASSO is expressed as follows:
\begin{equation}
\hat{\theta}=\min_{\theta \in \mathbb{R}^d}\left\{\frac{1}{2n}\lVert y-\mathbf{X}\theta\rVert_2^2 \right\},\quad\text{such that} \lVert \theta \rVert_1 \leq R
\end{equation}
for some radius $R>0$.
Since $\hat{\theta}$ is optimal, from the theory of M-estimators, we have $\frac{1}{2n}\lVert y-\mathbf{X}\hat{\theta}\rVert_2^2 \leq \frac{1}{2n}\lVert y-\mathbf{X}\theta^*\rVert_2^2$, which with some basic algebraic manipulation:
\begin{eqnarray}
\frac{1}{2n}\lVert y-\mathbf{X}\hat{\theta}\rVert_2^2 &\leq& \frac{1}{2n}\lVert y-\mathbf{X}\theta^*\rVert_2^2\\
\frac{1}{2n}\lVert y-\mathbf{X}\theta^*+\mathbf{X}\theta^*-\mathbf{X}\hat{\theta}\rVert _2^2&\leq& \frac{1}{2n}\lVert \epsilon\rVert_2^2\\
\frac{1}{2n}\lVert \epsilon-\mathbf{X}\hat{\Delta}\rVert_2^2&\leq& \frac{1}{2n}\lVert \epsilon\rVert_2^2\\
\frac{1}{2n}\lVert \epsilon\rVert_2^2-\frac{1}{n}\epsilon^{\top}\mathbf{X}\hat{\Delta}+\frac{1}{2n}\lVert \mathbf{X}\hat{\Delta}\rVert_2^2&\leq& \frac{1}{2n}\lVert \epsilon\rVert_2^2
\end{eqnarray}
which leads to the basic inequality:
\begin{equation}
\frac{\lVert\mathbf{X}\hat{\Delta}\rVert_2^2}{n}\leq\frac{2\epsilon^{\top}\mathbf{X}\hat{\Delta}}{n}
\end{equation}
Now suppose I have a matrix $\mathbf{X}^c \in \mathbb{R}^{(n-1)\times d}$, where now essentially
\begin{equation}
\lVert \mathbf{X}\hat{\Delta}\rVert_2^2=\lVert \mathbf{X}^c\hat{\Delta}\rVert_2^2+(x_n^{\top}\hat{\Delta})^2
\end{equation}
where $x_n$ is the last row of the matrix $\mathbf{X}$. Given that $\hat{\Delta}:=\hat{\theta}-\theta^*$ and $\hat{\theta}$ is the optimal estimator for all $n$ observations, can the claim below also be made under certain conditions and assumptions?
\begin{equation}
\frac{\lVert\mathbf{X}^c\hat{\Delta}\rVert_2^2}{n-1}\underbrace{\leq}_{?}\frac{2\epsilon^{c\top}\mathbf{X}^c\hat{\Delta}}{n-1}
\end{equation}
where $\epsilon^c\in\mathbb{R}^{n-1}$.
|
Is there any theory pertaining to an assumption that would allow the following inequality to hold? (constrained LASSO)
|
CC BY-SA 4.0
| null |
2023-04-21T11:23:31.153
|
2023-04-21T11:31:41.947
|
2023-04-21T11:31:41.947
|
53690
|
84309
|
[
"lasso",
"high-dimensional",
"inequality"
] |