Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
613927
|
1
| null | null |
0
|
12
|
My issue is that Grad-CAM often highlights the wrong areas. My task is to perform weakly supervised semantic segmentation (WSSS) of malignant lung tumors using image-level labels. Although I achieved excellent performance on both the training and testing sets, with accuracy, precision, recall, and F1 scores all close to 100%, the Grad-CAM results are not very accurate. I used a basic ResNet18 model and the pytorch-grad-cam library to generate the Grad-CAM visualizations. My dataset consists of around 1000 CT images, with 50% normal lungs and 50% malignant tumors, and I split the data 90:10 into training and testing sets. I suspect that the reason for the inaccurate Grad-CAM results is that the dataset may not be sufficient for the model to learn meaningful information.
The sample Grad-CAMs from my data are displayed below. As you can see, the Grad-CAM visualizations are significantly inaccurate.
[](https://i.stack.imgur.com/fP4jJ.png)
[](https://i.stack.imgur.com/mx1bn.png)
|
Research Problem about Weakly Supervised Learning for CT Image Semantic Segmentation
|
CC BY-SA 4.0
| null |
2023-04-24T11:06:58.157
|
2023-04-24T11:06:58.157
| null | null |
366439
|
[
"neural-networks",
"classification",
"computer-vision",
"image-segmentation"
] |
613928
|
1
|
613938
| null |
8
|
173
|
I have a dataset with some variables that have MANY categories (one has about 20000, another about 2000, a third about 200). I need to make a multi-class prediction (not binary, but 3 values).
How can I manage them? If I wanted to try, for example, a Random Forest, I think the maximum number of categories is about 50. I suppose creating 200 dummy variables is not an option (2000 or 20000 even less so).
What would be a better idea: to try grouping them somehow, or to try another algorithm?
EDIT: Maybe I need to indicate the NUMBER of observations. My data has about 100000 observations, so a category with 20000 levels may be too much (only 5 observations per level). Even 2000 levels (50 observations per level) could be too much?
|
How to manage categorical variable with MANY categories
|
CC BY-SA 4.0
| null |
2023-04-24T11:09:13.730
|
2023-05-03T07:59:19.153
|
2023-05-03T07:59:19.153
|
381118
|
381118
|
[
"r",
"classification",
"categorical-data",
"many-categories"
] |
613929
|
1
| null | null |
0
|
46
|
I have an outcome which I measure between two dates for three different cohorts. The treatment whose effect I want to measure is COVID.
My control group is observed in 2016 (t=0) and 2019 (t=1), my first treatment group is observed in 2017 (t=0) and 2020 (t=1), and my second treatment group in 2018 (t=0) and 2021 (t=1).
|Group |t=0 |t=1 |
|-----|---|---|
|Control |2016 |2019 |
|Treated 1 |2017 |2020 |
|Treated 2 |2018 |2021 |
I have a difference-in-differences setting between the control group and each of the treatment groups, but do I have a triple-difference setting across these 3 groups, each observed twice? I think so, but I am not sure.
I tried to apply a triple-difference regression in Stata but it doesn't work.
```
clear
set obs 9999
egen id = seq(), from(1) to(9999) block(1)
gen outcome = 1 + rnormal()
generate year=2016 if id<=3333
replace year=2017 if id>3333 & id <=6666
replace year=2018 if id>6666 & id <=9999
generate t=0
save samplet0, replace
replace year=2019 if id<=3333
replace year=2020 if id>3333 & id <=6666
replace year=2021 if id>6666 & id <=9999
replace t=1
append using samplet0
generate treated1=0
replace treated1=1 if year==2020
generate treated2=0
replace treated2=1 if year==2021
reg outcome t##treated1##treated2
```
Does anyone have an idea?
Thank you in advance!
|
2 treatment groups and 1 control group: is my setting a triple difference?
|
CC BY-SA 4.0
| null |
2023-04-24T11:24:33.740
|
2023-04-24T11:24:33.740
| null | null |
18638
|
[
"econometrics",
"stata",
"difference-in-difference",
"covid-19"
] |
613930
|
2
| null |
185402
|
1
| null |
>
As I have heard and learned, feature selection is for decrease the complexity and improve the accuracy.
This is a common way of thinking, and it seems to follow this logic:
- Including many features gives the model considerable flexibility.
- That flexibility puts us at risk of overfitting and achieving pitiful generalizability that keeps models from making useful predictions on new data (when we’re truly interested in the predictions).
- Therefore, reduce the feature count to reduce the overfitting potential.
It is true that reducing the feature count helps quell overfitting concerns. However, leaving out a feature deprives the model of the unique information contained in that feature, information that might be a critical determinant of the outcome. Thus, while your feature selection probably quells overfitting concerns, it does so at the risk of introducing underfitting concerns.
You seem to have underfit your data by depriving the model of useful features for making predictions, leading to the decrease in accuracy (setting aside the [known issues](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp) with accuracy). Because this can happen, [not every statistician](https://stats.stackexchange.com/a/18245/247274) is such a fan of feature selection.
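To make the tradeoff concrete, here is a minimal simulation sketch (my own toy illustration, not tied to any particular dataset): dropping a genuinely informative feature reduces flexibility but worsens out-of-sample error.
```
# Simulate data where both features carry signal, then compare test RMSE
# with and without x2 ("feature selection" drops x2).
set.seed(1)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(n)
dat   <- data.frame(y, x1, x2)
train <- dat[1:100, ]
test  <- dat[101:200, ]
full    <- lm(y ~ x1 + x2, data = train)
reduced <- lm(y ~ x1, data = train)
rmse <- function(fit) sqrt(mean((test$y - predict(fit, newdata = test))^2))
c(full = rmse(full), reduced = rmse(reduced))
# expect roughly 1 for the full model and roughly 3 for the reduced one: underfitting
```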
| null |
CC BY-SA 4.0
| null |
2023-04-24T11:28:50.557
|
2023-04-24T11:28:50.557
| null | null |
247274
| null |
613931
|
1
| null | null |
0
|
17
|
This is a general question, which has come up in two recent analyses. I have two datasets in which a predictor's value was very uncommon, yet that predictor ended up being significant in the analysis.
In one it was female sex that was tied to the outcome (blood clots) in a cohort that included about 500 males and 4 females. In the other it was use of a medication that was tied to the outcome (low bone density) but only 6% of patients were on this medication.
Is there an established rationale for excluding these variables when regression models include them as significant?
|
Should I remove rare variables from my multivariate analyses and how?
|
CC BY-SA 4.0
| null |
2023-04-24T11:29:14.830
|
2023-04-24T15:10:26.417
| null | null |
386404
|
[
"regression",
"feature-selection",
"biostatistics"
] |
613932
|
2
| null |
613922
|
0
| null |
Let's look at the probability for player 1:
$P(n = 1) = 4/40 $ (probability that player 1 wins in the 1st round)
$P(n = 2) = (36/40) \times (35/39) \times (34/38) \times (33/37) \times (4/36) $ (probability that player 1 wins in the 2nd round)
Generalizing for $n = k$, with $k \in \{2, 3, \ldots, 10\}$:
$$P(n = k) = \dfrac{36 \times 35 \times 34 \times \cdots (41 - 4k)}{40 \times 39 \times 38 \times \cdots(45-4k)} \times \dfrac{1}{11 - k}$$
Therefore, probability that player 1 wins is:
$$P(\text{player 1 wins}) = \left(\dfrac{1}{10} + \sum_{k=2}^{k=10} \dfrac{36 \times 35 \times 34 \times \cdots (41 - 4k)}{40 \times 39 \times 38 \times \cdots (45-4k)} \times \dfrac{1}{11 - k}\right)$$
Probabilities for other players can be calculated similarly.
| null |
CC BY-SA 4.0
| null |
2023-04-24T11:31:50.283
|
2023-04-29T03:26:59.127
|
2023-04-29T03:26:59.127
|
362671
|
369552
| null |
613933
|
2
| null |
602062
|
0
| null |
The decoder computes the probability of the next word given the words that have already been decoded. Given words $1, \ldots, n$, the model computes a categorical distribution over the output vocabulary and decides the next word. This word is then appended to the input, so we have words $1, \ldots, n+1$, which can be used to predict the $(n + 2)$-nd word. This is repeated until the `[EOS]` token is generated. There are various strategies for deciding the next token: the most common are greedily taking the most probable word or using beam search.
At training time, this autoregressive self-feeding is simulated by providing the ground-truth sentence as the input with a prepended `[SOS]` symbol and the same sentence as the output with an appended `[EOS]` symbol. Therefore, we know the decoder's entire input and output at training time. This way of training is called teacher forcing. It means that for the $(n-1)$-th input word, the probability of the $n$-th word is predicted. Additionally, at training time there is a mask so that self-attention only considers the left context.
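As a rough illustration of the inference-time loop, here is a toy greedy-decoding sketch in R, with a made-up next-token distribution standing in for the decoder (nothing below is an actual Transformer):
```
# Toy greedy decoding: `next_token_probs` stands in for the decoder's categorical
# distribution over a tiny vocabulary given the current prefix.
vocab <- c("the", "cat", "sat", "[EOS]")
next_token_probs <- function(prefix) {
  probs <- switch(min(length(prefix) + 1, 4),
                  c(0.70, 0.20, 0.05, 0.05),   # after "[SOS]"
                  c(0.10, 0.60, 0.20, 0.10),
                  c(0.10, 0.10, 0.60, 0.20),
                  c(0.05, 0.05, 0.10, 0.80))
  setNames(probs, vocab)
}
prefix <- character(0)            # conceptually just "[SOS]" so far
repeat {
  p   <- next_token_probs(prefix)
  nxt <- names(which.max(p))      # greedy choice; beam search would keep several prefixes
  prefix <- c(prefix, nxt)        # append the chosen word and feed the longer prefix back in
  if (nxt == "[EOS]" || length(prefix) >= 10) break
}
prefix
#> [1] "the"   "cat"   "sat"   "[EOS]"
```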
| null |
CC BY-SA 4.0
| null |
2023-04-24T11:39:01.117
|
2023-04-24T11:39:01.117
| null | null |
249611
| null |
613934
|
2
| null |
351536
|
1
| null |
[Undersampling at the data-collection phase can make sense](https://stats.stackexchange.com/a/613921/247274), as you might be able to do more efficient data-collection. This is the topic of that King and Zeng paper discussed in the link.
However, once you have the data, unless you have computational constraints (cannot fit a huge dataset into memory, modeling is too slow even if you can, etc.), undersampling makes no sense to me. I have my qualms with oversampling, too, but undersampling discards precious data. Since it turns out that [class imbalance is rarely a problem for proper statistical methods](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he), and the rare exceptions are quite different from the usual reasons given in many machine learning circles for why imbalance is a problem, discarding precious data to fix a non-problem seems like the worst idea of all.
Regarding oversampling, there probably is no need to do this. Proper statistical methods handle class imbalance just fine in most cases. However, at least you aren’t wasting data solving a non-problem when you oversample.
| null |
CC BY-SA 4.0
| null |
2023-04-24T11:42:42.503
|
2023-04-24T11:42:42.503
| null | null |
247274
| null |
613935
|
1
|
613936
| null |
0
|
32
|
In deriving the unbiasedness of OLS Estimators,
$$\hat{\beta_1} = \beta_1 + \frac {\sum_{i=1}^{n} (x_i - \bar{x})u_i}{\sum_{i=1}^n (x_i - \bar{x})^2}$$
My professor changes the above to:
$$\hat{\beta_1} = \beta_1 + \sum_{i=1}^{n} (\frac {(x_i - \bar{x})}{\sum_{i=1}^n (x_i - \bar{x})^2})u_i$$
How are these two equal?
|
Statistical Properties of OLS Estimators (Unbiasedness)
|
CC BY-SA 4.0
| null |
2023-04-24T11:46:18.087
|
2023-04-24T12:02:40.073
|
2023-04-24T12:02:40.073
|
362671
|
386405
|
[
"regression",
"least-squares",
"unbiased-estimator"
] |
613936
|
2
| null |
613935
|
1
| null |
Note: The denominator $ \sum_{i=1}^n (x_i - \bar{x})^2 := s$ is constant; it doesn't change when you are summing over $i,$ i.e. the summation would be $\sum_i \left(\frac{(x_i-\bar x)}{s}\right)u_i.$
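A quick numeric check of this with simulated numbers (my own sketch):
```
# Check that sum((x - xbar) * u) / sum((x - xbar)^2)
# equals sum(((x - xbar) / sum((x - xbar)^2)) * u).
set.seed(42)
x <- rnorm(10); u <- rnorm(10)
s   <- sum((x - mean(x))^2)              # the constant denominator
lhs <- sum((x - mean(x)) * u) / s
rhs <- sum(((x - mean(x)) / s) * u)
all.equal(lhs, rhs)
#> [1] TRUE
```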
| null |
CC BY-SA 4.0
| null |
2023-04-24T11:51:30.427
|
2023-04-24T11:51:30.427
| null | null |
362671
| null |
613937
|
2
| null |
573998
|
0
| null |
Yes, models with enough flexibility can learn this.
This is not conceptually different from any other kind of interaction. Yes, here you consider the specific interaction between some feature and time, but think in terms of interactions in general. In particular, think in terms of classifying the colors of the points below, knowing just their coordinates.
[](https://i.stack.imgur.com/YOiiw.png)
A sufficiently flexible model will figure out that being on the right suggests red when the point is down low but suggests blue when the point is up high. In that sense, the way that left/right (and up/down) is predictive changes according to the values of the other features.
You would be doing the same, just with time as one of those features.
```
# To make the plot
library(ggplot2)
library(MASS)
set.seed(2023)
N <- 1000
X0 <- MASS::mvrnorm(N, c(0,0), matrix(c(1, 0.95, 0.95, 1), 2, 2))
X1 <- MASS::mvrnorm(N, c(0,0), matrix(c(1, -0.95, -0.95, 1), 2, 2))
d0 <- data.frame(
  x = X0[, 1],
  y = X0[, 2],
  Group = "Slash"
)
d1 <- data.frame(
  x = X1[, 1],
  y = X1[, 2],
  Group = "Backslash"
)
d <- rbind(d0, d1)
ggplot(d, aes(x, y, col = Group)) +
  geom_point()
```
| null |
CC BY-SA 4.0
| null |
2023-04-24T12:05:56.847
|
2023-05-03T22:22:38.003
|
2023-05-03T22:22:38.003
|
247274
|
247274
| null |
613938
|
2
| null |
613928
|
6
| null |
Until Breiman and the 21st century, the historic barrier to working with massively categorical features was computational; for example, in ANOVA, inverting a cross-products matrix with too many categories was infeasible.
That said, it's useful to distinguish between massively categorical features vs targets. It's true that random forests (RFs) have trouble modeling targets with more than a few dozen levels. This is not true for massively categorical features.
Breiman's intent with RFs was to redress criticisms of his original 'single iteration' approach to classification and regression trees as being unstable and inaccurate.
What Breiman didn't realize was that any multivariate modeling engine could be plugged into his RF framework, e.g., ANOVA, multiple regression, logistic regression, k-means, and so on, to arrive at an approximating, iterative solution.
Breiman did his work in the late 90s on a single CPU when massive data meant a few dozen gigs and a couple of thousand features processed over a couple of thousand iterations of bootstrapped resampling of observations and features. Each iteration built a mini-RF model, the predictions from which were aggregated into an ensemble prediction of the target.
Today there are dozens of workarounds to modeling massively categorical features which extend Breiman's approach to breaking a large model down to many bite-sized, smaller models, sometimes known as divide and conquer algorithms.
A paper by Chen and Xie discusses D&Cs, A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data [https://www.jstor.org/stable/24310963](https://www.jstor.org/stable/24310963)
Another good review is McGinnis' Beyond One-Hot: an exploration of categorical variables [https://www.kdnuggets.com/2015/12/beyond-one-hot-exploration-categorical-variables.html](https://www.kdnuggets.com/2015/12/beyond-one-hot-exploration-categorical-variables.html)
Related to this is the suggestion of impact coding, e.g., Zumel's Modeling Trick: Impact Coding of Categorical Variables with Many Levels [https://win-vector.com/2012/07/23/modeling-trick-impact-coding-of-categorical-variables-with-many-levels/](https://win-vector.com/2012/07/23/modeling-trick-impact-coding-of-categorical-variables-with-many-levels/)
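As a rough sketch of the idea behind impact coding (my own minimal illustration with made-up data, not Zumel's implementation): each level of the factor is replaced by the deviation of its outcome mean from the grand mean, so one numeric column stands in for thousands of dummies.
```
# Minimal impact-coding sketch for a numeric outcome y and a high-cardinality factor.
# In practice you would smooth these estimates and compute them on held-out folds
# (or via cross-fitting) to avoid target leakage; for classification, use log-odds.
impact_code <- function(level, y) {
  grand  <- mean(y)
  means  <- tapply(y, level, mean)        # per-level outcome mean
  impact <- means - grand                 # deviation from the grand mean
  unname(impact[as.character(level)])     # map each observation to its level's impact
}
set.seed(7)
lev <- sample(letters, 500, replace = TRUE)   # stand-in for a many-level category
y   <- rnorm(500, mean = as.integer(factor(lev)))
x_impact <- impact_code(lev, y)               # one numeric feature replaces many dummies
head(x_impact)
```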
A completely different, non-frequentist approach was made in marketing science wrt hierarchical Bayesian modeling: Ainslie and Steenburgh's Massively Categorical Variables: Revealing the Information in Zip Codes. Their model is easily programmed in software such as STAN. [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=961571](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=961571)
Hope this helps address your query.
Afterthought, FWIW: having poked around with a few of the approaches to massively categorical information, including one-hot encoding, hierarchical Bayes, and impact coding, I came to the opinion that impact coding offered the best results, though not by a significant margin, in several ways: strongest holdout metrics w.r.t. dependence, smallest metrics of dispersion, and easiest to code.
| null |
CC BY-SA 4.0
| null |
2023-04-24T12:09:04.447
|
2023-04-26T12:24:32.410
|
2023-04-26T12:24:32.410
|
78229
|
78229
| null |
613939
|
2
| null |
613822
|
4
| null |
First question: given a vector $\lambda = (\lambda_1,\ldots,\lambda_n)$ of eigenvalues of a variance matrix $\Sigma,$ what are the possible eigenvalues $(\tau_1, \ldots, \tau_n)$ of a covariance matrix $P$ having correlation matrix $\Sigma$?
Writing the diagonal elements of $\Sigma$ as $\Sigma_{ii} = \sigma_i^2$ (for nonnegative square roots $\sigma_i$), we need to know that the relationship between the matrices is
$$\operatorname{Diag}(\sigma)P\operatorname{Diag}(\sigma) = \Sigma.\tag{*}$$
To interpret this question, I suppose you know the eigenvalues but you don't know $\Sigma.$ Thus, it's possible $\Sigma$ is diagonal, $\Sigma = \operatorname{Diag}(\lambda).$ Then all the matrices in $(*)$ are diagonal and it reduces to $n$ simultaneous equations
$$\sigma_i \tau_i \sigma_i = \lambda_i,$$
for which we find the unique solution
$$\sigma_i = \sqrt{\frac{\lambda_i}{\tau_i}}$$
when all the $\tau_i\gt 0.$ When $\tau_i=0,$ necessarily $\lambda_i=0,$ too, and $\sigma_i$ can be any positive number.
Consequently, the zeros of $\tau$ must match the zeros of $\lambda$ and otherwise the components of $\tau$ can be anything.
Second question: given an orthonormal frame $\mathbf e_1, \ldots, \mathbf e_n$ of eigenvectors of a covariance matrix $\Sigma,$ what are the possible frames of eigenvectors of its correlation matrix $P$?
Again, to interpret this reasonably we must suppose the frame is known but $\Sigma$ is not known.
A dimension-counting argument provides insight. There are only $n$ unknown parameters $\sigma_i$ connecting $\Sigma$ and $P.$ Consequently, the dimension of the manifold of frames of eigenvectors associated with $P$ can be at most $n.$ Because the dimension of the manifold of all orthonormal frames, $\mathbb F_n,$ is $(n-1)+(n-2)+\cdots+1=n(n-1)/2,$ when $n\gt 3$ the possible frames of $\Sigma$ cannot be all possible frames: there is a definite restriction.
There appears to be no simple way of characterizing that restriction, because the map
$$\phi_\Sigma : \mathbb R_+^n \to \mathbb F_n$$
that sends $\sigma$ to the frame of eigenvectors of $P(\sigma,\Sigma) = \operatorname{Diag}(\sigma^{-1})\Sigma \operatorname{Diag}(\sigma^{-1})$ depends on $\Sigma.$
| null |
CC BY-SA 4.0
| null |
2023-04-24T12:10:14.527
|
2023-04-24T12:10:14.527
| null | null |
919
| null |
613940
|
1
| null | null |
0
|
13
|
I am currently conducting a study where the outcome variable is measured only at level 2 (the group level). The dependent variable is the proportion of all municipal spending that is spent on a particular type of service. However, I would like to investigate to what extent this corresponds to the attitudes that municipal residents have about such spending. For this purpose I make use of individual-level survey data. Unfortunately the survey data available to me is a national sample, and thus does not take the clustering of residents into municipalities into consideration, meaning that municipalities with a small population also have a small number of respondents in the sample.
In total I have >250 observations of municipalities (level 2). However, the cluster size (the number of respondents for each municipality) ranges from 1 to 160.
My approach is to aggregate the individual-level data, using averages, to the municipal level and then run OLS regression (in accordance with guidelines provided by [Foster-Johnson and Kromey, 2018](https://link.springer.com/article/10.3758/s13428-018-1025-8)). My gut feeling is that even though the municipal averages of attitudes will not be accurately assessed for each specific municipality, the relationship between the municipal average attitude toward spending (on XYZ) and the municipal proportion of spending (on XYZ) should be accurate, considering the fair number of observations at the municipal level.
I base this thinking on the principle of reversion to mediocrity, applied to the true relationship between the municipal average of attitudes and the municipal proportion of spending. Am I wrong to assume this? And is there any literature that addresses this particular problem of aggregating from scarce level-1 data and then running a wholly macro-level analysis to predict a level-2 outcome? (I have searched for quite some time now, but I generally only find papers addressing sparseness of data in multilevel modeling.)
|
Conducting macro-level OLS regression using aggregated data with few level 1 observations
|
CC BY-SA 4.0
| null |
2023-04-24T12:12:03.537
|
2023-04-24T13:19:00.977
|
2023-04-24T13:19:00.977
|
386406
|
386406
|
[
"least-squares",
"multilevel-analysis",
"aggregation"
] |
613942
|
2
| null |
605611
|
0
| null |
It is not clear that splitting the data like this is a good idea. In particular, it has been noted that [thousands of observations are usually required for such splitting to be stable](https://stats.stackexchange.com/a/86761/247274). Since you only have $700$ observations, such a split is likely to present problems.
Thus…
- From a statistical standpoint, no, you do not need to do an official train/validate/test split of your data.
- The venue hosting your data might require such a split anyway, making it a requirement no matter what objections others might have. You are allowed to treat this requirement as a dealbreaker, and they are allowed to tell you to host your data elsewhere if you do not want to play by their rules (no matter how bad you might think those rules are).
| null |
CC BY-SA 4.0
| null |
2023-04-24T12:14:12.000
|
2023-04-24T12:14:12.000
| null | null |
247274
| null |
613944
|
1
| null | null |
0
|
52
|
This is a problem that I have experienced in multiple domains but most recently in identifying bird species from audio data. I have 300 five-minute audio files with corresponding species labels (10 species).
To build a classifier, I split the audio files into 10-second windows and build a CNN on the spectrograms. This achieves 60% accuracy. To get an overall label for each 5-minute segment, I take the modal predicted label of all windows in the segment, which gives an accuracy of about 70%.
My question: is there a 'smarter' way to combine the window labels into the overall label? And is there a name for this problem, so that I can find research on it? (It seems like such a common problem that I assume it is well researched.)
Methods like HMMs and RNNs work, but as far as I know they require treating the entire segment as one training example, which means I only have 300 samples, which is not enough.
A simplistic approach I have tried is creating a second classifier that takes the predicted class of the closest n samples as features, but this doesn't increase performance.
|
How can I utilise known label consistency in classification
|
CC BY-SA 4.0
| null |
2023-04-24T12:19:05.403
|
2023-04-24T12:19:05.403
| null | null |
386409
|
[
"machine-learning",
"time-series",
"labeling"
] |
613945
|
1
| null | null |
0
|
17
|
I'm working with a dataset where all I have is the similarity matrix (with values 0 to 1, 0 being no similarity, 1 being identical). After I assign the labels, I loop through the matrix and, for each pair of points:
1a) If the points are in the same cluster, add (1 - similarity(p1, p2)) to a tracker variable. 1b) If the points are in different clusters, add similarity(p1, p2) to the tracker variable.
For 1a, if the points are similar, then the tracker is only incremented a little bit. For 1b, if the points are different, then the tracker is only incremented a little bit (which means the clustering is doing well).
However, if the points are different and in the same cluster OR they're similar and in different clusters, then the tracker gets increment by a lot.
In the end, the tracker is normalized by dividing out the size of the similarity matrix. Scores near 0 are "good," while scores near 1 are "bad"
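For reference, a minimal sketch of the score as described (illustrative code only; `sim` is the similarity matrix, `labels` the cluster assignments, and here I normalize by the number of pairs rather than the full matrix size):
```
# Penalize dissimilar pairs placed in the same cluster and similar pairs placed
# in different clusters, then normalize; near 0 is "good", near 1 is "bad".
cluster_disagreement <- function(sim, labels) {
  n <- nrow(sim)
  tracker <- 0
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      if (labels[i] == labels[j]) {
        tracker <- tracker + (1 - sim[i, j])  # same cluster: want high similarity
      } else {
        tracker <- tracker + sim[i, j]        # different clusters: want low similarity
      }
    }
  }
  tracker / choose(n, 2)
}
```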
Does this metric for measuring the effectiveness of clustering exist in the literature? Are there any faults with this metric such that it isn't useful in determining cluster effectiveness?
Thank you!
|
Does this metric exist for clustering data based off its similarity matrix?
|
CC BY-SA 4.0
| null |
2023-04-24T12:23:17.207
|
2023-04-24T12:23:17.207
| null | null |
386237
|
[
"clustering",
"similarities",
"metric"
] |
613946
|
2
| null |
611418
|
2
| null |
If you are content with linear small rotation angle approximation, then you have
$$ R \approx \mathbb{I} + d\theta \begin{pmatrix} 0 & -1 \\ 1 &0 \end{pmatrix}$$
so
$$ Rp \approx p + d\theta \begin{pmatrix} -y \\ x \end{pmatrix} .$$
Assuming that $d\theta$ has a normal distribution with variance $\sigma_\theta^2$, then the covariance matrix of $Rp$ is given by
$$ Cov(Rp) = \begin{pmatrix} -y \\ x \end{pmatrix} \sigma_\theta^2 \begin{pmatrix} -y & x \end{pmatrix} = \sigma_\theta^2 \begin{pmatrix} y^2 & -xy \\ -xy & x^2 \end{pmatrix}.$$
If you further assume that $t_x$ and $t_y$ are independent, having normal distribution with variance $\sigma_t^2$, then the covariance matrix of $t$ is $\sigma_t^2$ times the unit matrix. The covariance matrix of the sum is just the sum of covariance matrices:
$$Cov(p')=Cov(Rp)+Cov(t)= \sigma_\theta^2 \begin{pmatrix} y^2 & -xy \\ -xy & x^2 \end{pmatrix} + \sigma_t^2 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
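A quick Monte Carlo check of this formula (my own sketch, with an arbitrary point and made-up noise scales):
```
# Compare the empirical covariance of p' = R(dtheta) p + t with the linearized formula.
set.seed(1)
p <- c(x = 2, y = 1)
sigma_theta <- 0.01
sigma_t     <- 0.05
n  <- 1e5
dtheta <- rnorm(n, 0, sigma_theta)
tx <- rnorm(n, 0, sigma_t)
ty <- rnorm(n, 0, sigma_t)
px <- cos(dtheta) * p["x"] - sin(dtheta) * p["y"] + tx   # exact rotation, not the approximation
py <- sin(dtheta) * p["x"] + cos(dtheta) * p["y"] + ty
empirical <- cov(cbind(px, py))
analytic  <- sigma_theta^2 * matrix(c(p["y"]^2, -p["x"] * p["y"],
                                      -p["x"] * p["y"], p["x"]^2), 2, 2) +
  sigma_t^2 * diag(2)
empirical
analytic   # the two should agree closely for small sigma_theta
```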
| null |
CC BY-SA 4.0
| null |
2023-04-24T12:49:42.650
|
2023-04-24T12:49:42.650
| null | null |
348492
| null |
613947
|
1
|
614264
| null |
2
|
51
|
I am using ROC curves for multi-label classification. I have a classifier that produces a score for each label, say a Logistic Regression that produces a probability. I understand that an ROC curve is parameterized by a discrimination threshold and assigns to a class the observations where a class with the highest probability is above the threshold.
If so, imagine these predictions for 5 observations with labels A or B:
```
Observation #   Label   Prob(A)   Prob(B)
1               A       0.9       0.1
2               A       0.51      0.49
3               B       0.51      0.49
4               A       0.49      0.51
5               B       0.49      0.51
```
The first observation is a freebie. With a discrimination threshold of 0.9, we assign that observation correctly and no observation incorrectly. So True Positives are 1 and all others are zero (True Negatives, False Positives, False Negatives). The True Positive Rate is 1 and the False Positive Rate is 0, which is the ideal point at the top left in an ROC curve. We never see that point in an ROC curve, so I suspect my reasoning is wrong, or my concept of True/False Positive Rates is wrong.
Another possibility is to assign only observations with a probability above a threshold to the most likely class, and all others to the negative class. But that approach lumps together an observation that we are sure is in the negative class and one that we're not sure is in the positive class. A consequence is that it is not invariant under re-labeling (positive to negative and vice-versa).
How exactly does an ROC curve use the discrimination threshold?
|
ROC curve and thresholds: why does it never have the ideal point at the top left for observations close to certainty?
|
CC BY-SA 4.0
| null |
2023-04-24T12:51:28.127
|
2023-04-27T06:30:23.927
|
2023-04-26T07:32:05.423
|
241968
|
241968
|
[
"roc",
"threshold",
"true-positive-rate"
] |
613949
|
1
| null | null |
0
|
15
|
I am currently building a solution that can reduce the overall electric consumption of a household by remotely controlling some appliances (for a period of 60 minutes).
And to prove that it works I will run a pilot program where I will equip a few people with my solution, and test it over the course of a few months. More specifically, we plan to test it 20-30 times over the course of 5 months.
However, I am struggling to understand what the minimum sample size should be so that my test has scientific value.
Indeed, the only data I will have is the overall consumption of a household, on 30-minute periods, and our effect may be quite low compared to the overall consumption (for example if someone starts their oven during the time where we reduce the consumption of a few appliances we wouldn't be able to see the effect of our solution on that specific household)
As a result, how would you suggest determining the smallest sample to prove that what we do really works?
Also, depending on that size, we might be able to recruit twice as many users and simply have a control group that will believe they have our solution but won't use it.
Thank you very much and have a great day,
Matthieu
|
Determine sample size to evaluate the effect of my solution?
|
CC BY-SA 4.0
| null |
2023-04-24T12:57:24.943
|
2023-04-24T12:57:24.943
| null | null |
386411
|
[
"statistical-significance",
"confidence-interval",
"sample-size"
] |
613950
|
2
| null |
613799
|
3
| null |
No -- there are simple counterexamples.
I take it that $X:(\Omega,\mathfrak F, \mathbb P)\to (E,\mathcal E)$ is a (generalized) random variable, that $E$ is endowed with a partial order $\le,$ and that $f:(E,\le)\to(E,\le)$ is strictly monotone.
As an example, let $\Omega = \{a,b\},$ $\mathfrak F=\mathcal P(\Omega) = \{\emptyset, \{a\},\{b\},\Omega\}$ is the discrete algebra, and $\mathbb P(\{a\})=1/2$ determines the uniform probability distribution. Let $E = \mathcal P(\{0,1\}) = \{\emptyset, \{0\}, \{1\}, \{0,1\}\}$ and let $\le = \subseteq$ be the subset relation. In this partial order $\{0\}$ and $\{1\}$ are incomparable: neither is a subset of the other.
Let the random variable $X:\Omega\to E$ be given by
$$X(a)=\{0\},\ X(b) = \{1\}.$$
The sigma-algebra $\sigma(X)$ is $\mathfrak F$ itself.
We only need to define $f$ on the image of $X.$ (But see the remarks below for why even this is not necessary to make the counterexample work.) To this end, let $f:E\to E$ be the constant function $f(y)=\{0,1\}.$ It is vacuously strictly monotone, because there are no strict inequalities to check (due to the incomparability of all the elements of $X(\Omega)$).
Define the random variable $Y = f\circ X.$ It is constant, making it trivial to check the independence of $(X,Y).$
Thus, although neither $X$ nor $f$ are constant, all the conditions hold yet $X$ and $f\circ X$ are independent.
---
If you intended $f$ to be defined on all of $E$ and to be strictly monotone there, simply extend this example. For instance, let $E$ be the integers with $0$ doubled: $$E = \{(n,i)\in \mathbb Z\times \{0,1\} \mid n\ne 0 \implies i=0\}.$$
The usual order $\le$ on $\mathbb Z$ induces an order on $E.$ We use the usual order on the first components but we make $(0,0)$ and $(0,1)$ incomparable. This extends the previous example, which is isomorphic to the order on $\{(-1,0),(0,0),(0,1),(1,0)\}.$ The pairs $(0,0)$ and $(0,1)$ play the roles of $\{0\}$ and $\{1\}.$ Define $f$ as
$$f((n,i)) = (n+1,0)$$
for all $n\in\mathbb Z$ and $i\in\{0,1\}.$
| null |
CC BY-SA 4.0
| null |
2023-04-24T13:06:20.913
|
2023-04-24T13:06:20.913
| null | null |
919
| null |
613952
|
2
| null |
604567
|
0
| null |
I know this is old, but if you do not have an answer you could try something like the Rank-Biased Overlap (RBO) described in "A Similarity Measure for Indefinite Rankings" (Webber, Moffat, Zobel, 2010). Some features:
- it assigns higher weights to the higher ranked genes
- if the lists are disjoint (for example, you use your 0.01 cutoff so that the lists contain potentially different genes), this is also allowed
If you only have the two lists, it may be hard to interpret the size of the value, but you could probably get a p-value for example by permutations of one of the lists.
| null |
CC BY-SA 4.0
| null |
2023-04-24T13:12:56.547
|
2023-04-24T13:12:56.547
| null | null |
72174
| null |
613953
|
1
| null | null |
0
|
10
|
I am studying returns on advertising, here defined as [Customer LifeTime Value]/[Customer Acquisition Cost]. I have obtained values and respective CIs for both variables and now i have to combine them to form a ratio and its CIs.
Customer lifetime value is an average of 10k+ observations and it is a log-normally distributed variable.
The average Customer acquisition cost is obtained from a GAM regression where 'ad spend' explains changes in Signup volumes. The reciprocal of the regression coefficient 'ad spend' represents Customer acquisition cost. (The GAM regression is as follows: signups = ad spend + 's(controls)', family=nb())
Any ideas on how to obtain the CIs for the ratio? As far as I know, the Fieller method requires the variables to be normally distributed.
|
Confidence intervals for a ratio of a log-normally distributed variable and a regression estimator
|
CC BY-SA 4.0
| null |
2023-04-24T13:16:55.017
|
2023-04-24T13:16:55.017
| null | null |
386417
|
[
"confidence-interval",
"regression-coefficients",
"nonlinear-regression"
] |
613954
|
1
|
613958
| null |
0
|
26
|
I'm looking for help clarifying the concept of parameters in statistics. Mainly, I'm having trouble understanding the commonality or difference between distribution parameter and model parameter.
When describing distributions (e.g., the normal distribution, the binomial distribution, etc.), we use the word "parameters": for the normal, the mean and standard deviation; for the binomial, the number of trials and the success probability.
At the same time, when we talk about a model, we also use the word "parameters". For linear regression, the slope and intercept are examples, and these days, more frequently, the parameters of a neural net would be another example.
The reason I feel confused is that the targets each kind of parameter describes seem to be different in these two cases. The former seems to be more focused on describing the "distribution" and the latter on the "model". For example, consider a neural-net classifier: the input data can have their own distribution (e.g., a normal distribution parameterized by their sample mean and standard deviation), while the model has its own parameters that define it (e.g., the weights and biases that form each layer).
Are they essentially identical or just share the same terminology?
|
Clarifications on parameters in models and distribution
|
CC BY-SA 4.0
| null |
2023-04-24T13:17:01.573
|
2023-04-24T13:49:15.917
| null | null |
311146
|
[
"regression",
"distributions",
"estimation"
] |
613955
|
1
| null | null |
0
|
33
|
I have encountered a situation where someone has run many simple linear regressions, each on a separate subset of data, to test for changes in slope in some time-series data. Each month, a new data value is added to the end of the dataset and the data point at the start of the dataset is removed. The idea, with some sample data, looks like the following:
```
library(ggplot2)
df = data.frame(Month = rep(1:12,3),
                Value = c(rep(50:39,2),runif(n=12,min=40.98,max=50.02)),
                Attribute = rep(c("Var1","Var2","Var3"),each=12))
df$Value = ifelse(df$Attribute == "Var1",df$Value + rnorm(n=12,mean=0,sd=2),
                  ifelse(df$Attribute == "Var2", df$Value + rnorm(n=12,mean=0,sd=5),
                         df$Value))
ggplot(data=df, aes(x = Month, y = Value))+
  geom_point()+
  geom_smooth(method='lm',se=F)+
  facet_wrap(~Attribute,ncol=3,scales="free_y")
#> `geom_smooth()` using formula 'y ~ x'
```

Where the results of simple linear models are:
```
summary(lm(Value~Month,data = subset(df,df$Attribute=="Var1")))
#>
#> Call:
#> lm(formula = Value ~ Month, data = subset(df, df$Attribute ==
#> "Var1"))
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -5.0317 -1.0829 0.1094 0.9651 4.4806
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 49.1870 1.6113 30.526 3.34e-11 ***
#> Month -0.7823 0.2189 -3.573 0.00507 **
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.618 on 10 degrees of freedom
#> Multiple R-squared: 0.5608, Adjusted R-squared: 0.5169
#> F-statistic: 12.77 on 1 and 10 DF, p-value: 0.005068
```
```
summary(lm(Value~Month,data = subset(df,df$Attribute=="Var2")))
#>
#> Call:
#> lm(formula = Value ~ Month, data = subset(df, df$Attribute ==
#> "Var2"))
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -7.1860 -2.9841 -0.2995 2.9611 8.8702
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 54.8803 3.0021 18.281 5.16e-09 ***
#> Month -1.4304 0.4079 -3.507 0.00566 **
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 4.878 on 10 degrees of freedom
#> Multiple R-squared: 0.5515, Adjusted R-squared: 0.5067
#> F-statistic: 12.3 on 1 and 10 DF, p-value: 0.005663
```
```
summary(lm(Value~Month,data = subset(df,df$Attribute=="Var3")))
#>
#> Call:
#> lm(formula = Value ~ Month, data = subset(df, df$Attribute ==
#> "Var3"))
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.1464 -1.5691 0.7905 1.3470 2.8691
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 45.706605 1.297132 35.237 8.04e-12 ***
#> Month -0.006244 0.176246 -0.035 0.972
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.108 on 10 degrees of freedom
#> Multiple R-squared: 0.0001255, Adjusted R-squared: -0.09986
#> F-statistic: 0.001255 on 1 and 10 DF, p-value: 0.9724
```
Here, we see that the relationship between `Value` and `Month` does not have a significant slope within the `Var3` subset. Though this example includes only 3 simple linear regressions, the real situation uses many more.
Question: Do the p-values associated with each of these simple linear regression require adjustment for multiple comparisons?
It seems that this is a point of discussion, but generally pertaining to the use of multiple regression, rather than say, a t-test. However, using simple linear regression in this way seems to behave like a hypothesis test.
|
Do many simple linear regressions require adjustment for multiple comparisons?
|
CC BY-SA 4.0
| null |
2023-04-24T13:22:45.827
|
2023-04-24T13:52:02.923
| null | null |
267286
|
[
"regression",
"hypothesis-testing",
"multiple-comparisons"
] |
613956
|
1
| null | null |
1
|
17
|
Suppose I have a data set with $(Y,X)$ where $Y$ has some missing entries(say 20% missing). From $p(Y,X|\lambda)$ and prior $p(\lambda)$, I can conduct MCMC by data augmentation to impute missing $Y$'s.
Assume I have $n$ such data points. If I had no missing entries in $Y$, then according to Bernstein-von Mises theorem, $\lambda$ should be asymptotically normal as $n\to \infty$.
During data augmentation, I have to impute $Y$ by simulating from the corresponding $p(Y_{missing}|Y_{ob},X_{ob},\lambda)$, where $Y_{ob},X_{ob}$ are observed and $\lambda$ is given in an MCMC iteration. It seems that in this case $Y_{missing}$ (the missing entries) becomes a parameter to be estimated. To apply the Bernstein-von Mises theorem, I need the number of parameters to be fixed.
$Q:$ What are the parameters in data augmentation? Are missing entries counted as parameters?
|
Are missing values parameters in a data augmentation procedure?
|
CC BY-SA 4.0
| null |
2023-04-24T13:37:56.277
|
2023-04-24T13:37:56.277
| null | null |
79469
|
[
"bayesian",
"markov-chain-montecarlo",
"missing-data",
"asymptotics",
"data-augmentation"
] |
613957
|
1
|
613986
| null |
1
|
79
|
I am having difficulty finding the correct location and scale parameters for a PDF diagram that I need to validate my data. [](https://i.stack.imgur.com/yvHna.png)
I have already calculated the location parameter to be -1.01, but I am unsure about the scale parameter.
I would appreciate any guidance or correction you could provide.
I am currently conducting a simulation that involves population balance modeling to analyze particle size distribution. In my research, I have found that most of the reference papers describe the size distribution using a PDF diagram that appears to resemble a log-normal distribution, given its skewed nature to the right.
This diagram is not mine, but it belongs to Huang, 2019 and I am trying to run a simulation in order to obtain the same results as shown in the diagram.
I have previously attempted to adjust the location and scale parameter values that I obtained from the graph, but the results I obtained were always different from the graph. Thank you for the comments.
|
Difficulties finding location and scale parameters from PDF
|
CC BY-SA 4.0
| null |
2023-04-24T13:38:23.823
|
2023-04-26T05:20:40.867
|
2023-04-25T11:37:14.613
|
386218
|
386218
|
[
"density-function",
"lognormal-distribution"
] |
613958
|
2
| null |
613954
|
1
| null |
One way to reconcile the notions of model and of distribution is to see they are both ultimately used to compute the probability of an observation.
For instance, let's assume that an observation $y$ was generated from a normal distribution with parameters $\theta = [\mu,\sigma]$. Hence, the probability of the observation given the parameters $p(y|\theta)$ is the density function of the normal distribution:
$$
p(y|\theta) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)
$$
Now, let us assume a linear regression model with slope and intercept $\theta = [a,b]$. Similarly, it is possible to compute the probability of an observation $(x,y)$ (where $x$ is the independent variable and $y$ the dependent variable) given the parameters $\theta$, i.e. $p(y|x,\theta)$. If you disregard noise in your model, then $p(y|x,\theta) = 1$ if $y = ax+b$, and $0$ otherwise. If you assume a level of noise $\sigma$ in your linear model, then $p(y|x,\theta)$ is the density of a normal distribution with mean $ax+b$ and standard deviation $\sigma$.
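As a small sketch with made-up numbers (my own illustration), both computations are just density evaluations:
```
# Probability (density) of an observation under a distribution vs. under a model.
y <- 1.3; mu <- 1; sigma <- 0.5
dnorm(y, mean = mu, sd = sigma)          # p(y | theta) with theta = [mu, sigma]

a <- 2; b <- -0.5; x <- 0.9              # linear model with noise level sigma
dnorm(y, mean = a * x + b, sd = sigma)   # p(y | x, theta) with theta = [a, b]
```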
In both cases, the goal is the same: to compute $p(y|\theta)$. A distribution usually refers to [classical canonical distributions](https://en.wikipedia.org/wiki/List_of_probability_distributions), while a model usually has an ad-hoc probability function (e.g. can be a function of several distributions).
| null |
CC BY-SA 4.0
| null |
2023-04-24T13:49:15.917
|
2023-04-24T13:49:15.917
| null | null |
271601
| null |
613959
|
2
| null |
613955
|
1
| null |
Well...maybe.
It seems like you're estimating the slope for three groups within your data. No matter what test one performs (the associated tests for OLS coefficients are t-tests, so your point about t-tests is moot), the probability of making at least one Type I error will increase as you perform more of them.
That being said, if you're interested in performing these tests, I wonder if just doing an F test would be more appropriate, but it would depend on what question you are trying to answer.
| null |
CC BY-SA 4.0
| null |
2023-04-24T13:52:02.923
|
2023-04-24T13:52:02.923
| null | null |
111259
| null |
613960
|
2
| null |
613898
|
0
| null |
>
Would I be able to apply a time dependent covariate only analyze the at risk population on that date at the time of the event.
That's what the counting-process data format for time-dependent covariates allows. With outcomes expressed as `Surv(startTime, stopTime, event)`, at each event time the software examines all the data rows and only picks out the rows representing cases that are at risk at that particular time. Technically, for each data row, the `startTime` is a left-truncation time, meaning that the data row provides no information about risks prior to that time. The `stopTime` is either an event time or a right-censoring time, depending on the `event` marker.
If your choice of a calendar-date time scale makes sense and your time-varying covariates are coded accordingly, then you can analyze such data with a Cox model.
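As a minimal sketch of that data format (hypothetical column names and a made-up time-varying covariate, not taken from your data):
```
# Counting-process (start, stop] format: one row per interval over which the
# covariate is constant; `event` is 1 only on the row where the event occurs.
library(survival)
d <- data.frame(
  id        = c(1, 1, 2, 2, 3, 4),
  startTime = c(0, 30, 0, 20, 0, 0),     # left-truncation time for each row
  stopTime  = c(30, 55, 20, 60, 42, 70), # event or censoring time for each row
  event     = c(0, 1, 0, 0, 1, 0),
  drug      = c(0, 1, 0, 1, 0, 0)        # time-varying covariate, constant within a row
)
fit <- coxph(Surv(startTime, stopTime, event) ~ drug, data = d)
summary(fit)
```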
| null |
CC BY-SA 4.0
| null |
2023-04-24T13:52:12.637
|
2023-04-24T13:52:12.637
| null | null |
28500
| null |
613961
|
1
| null | null |
1
|
32
|
Can I use predicted outcomes from one model as the dependent variable in another model to make causal claims? Put differently, is there something equivalent to the Frish-Waugh-Lovell theorem for working with predicted outcomes rather than residuals?
To explain and make things tangible, let's take something like the Boston housing data as an example and assume prices are actually only driven by the number of bathrooms and distance from nearest metro station, and I want to know the causal effect of an extra bathroom on prices, so the regression
$$price = \beta_0 + \beta_1 n\_bathrooms + \beta_2 distance$$
correctly identifies the causal effect $\beta_1$.
Now we know from FWL that running the regression on distance only first, and then using the residuals from that as the dependent variable in a univariate regression on $n\_bathrooms$ will recover the coefficient for $\beta_1$ from the bivariate regression.
Now let's assume I know that the linear model with $n\_bathrooms$ and $distance$ is the correct one, but I don't actually have access to distance data, only to predicted outcomes given a fixed distance. That is, someone else has run the "univariate" regression on distance already and has provided me with the predicted house prices for each house given a distance of 1 mile from the metro station. Now I want to run the regression
$$\hat{y} = \beta_0 + \beta_1 n\_bathrooms$$
and my question is: what can or can't I guarantee about the estimate $\beta_1$ given my assumptions?
I know the 2017 Mullainathan Spiess paper where they argue that one of the good use cases for ML methods in economics is 2SLS, where the first step is pure prediction and the estimator improves with the predictive power of the first step. It also seems to me conceptually that if I assume that the conditional independence assumption holds given distance and bathrooms, then I should be able to recover a causal effect given data on bathrooms and a distance-adjusted outcome measure (i.e. $\hat{y}$ in my example).
Some simple simulation studies I tried suggest that in simple cases the coefficient from the full regression is recoverable albeit not exactly (as there isn't a direct algebraic correspondence between the two like with the full and partial models in FWL), and I assume in general it depends on what the true model is, how well the partial model approximates it, whether there are interaction effects etc.
I haven't been able to find any references discussing this question, so any pointers would be appreciated.
|
Using predicted outcomes to address selection bias in causal inference
|
CC BY-SA 4.0
| null |
2023-04-24T13:58:06.310
|
2023-04-26T09:58:19.767
| null | null |
149657
|
[
"predictive-models",
"econometrics",
"causality"
] |
613962
|
1
| null | null |
0
|
34
|
My book presents the following derivation of the variance of the mean estimator $\bar{X_n} = \frac{1}{n} \sum_{i=1}^{n} X_{i}$ for a stationary process $(X_t)_{t}$ with autocovariance function $\gamma(\cdot)$:
[](https://i.stack.imgur.com/D6uqD.png)
I do not understand the last equality in the calculation above. Could someone explain how they have rewritten the sum (from summing over $i,j$ to summing over $i-j$) and why there is a factor $n-|i-j|$ multiplying the autocovariance function?
|
Why is $\sum_{i=1}^{n} \sum_{j=1}^{n} Cov(X_i,X_j) = \sum_{i-j=-n}^{n} (n-|i-j|)\gamma(i-j)$ for a stationary time series
|
CC BY-SA 4.0
| null |
2023-04-24T13:59:31.643
|
2023-04-24T14:05:16.487
|
2023-04-24T14:05:16.487
|
20519
|
386415
|
[
"time-series",
"stochastic-processes",
"stationarity"
] |
613963
|
1
| null | null |
0
|
12
|
I am trying to interpret my results from decomposing my time series and the ACF/PACF plots. The ADF test gave a p-value less than 0.05, so is it OK to assume that my time series is stationary with no trend or seasonality? Is it OK if values in the ACF plot are above the thresholds?
[](https://i.stack.imgur.com/ZLvpx.png)
[](https://i.stack.imgur.com/tX3cC.png)
[](https://i.stack.imgur.com/aVSaN.png)
|
interpret acf plot
|
CC BY-SA 4.0
| null |
2023-04-24T14:04:17.517
|
2023-04-24T14:04:17.517
| null | null |
386424
|
[
"arima",
"seasonality",
"trend",
"acf-pacf"
] |
613965
|
1
| null | null |
1
|
47
|
Assume we plot $f(x) = ax + b$, a linear regression for some data points.
We have already calculated the 95% confidence intervals for both $a$ and $b$. So in total we have 6 values given: $a, b, a_{lower}, a_{upper}, b_{lower}, b_{upper}$.
Given these values, how do we plot these alongside the linear function in this way:
[](https://i.stack.imgur.com/Iwv1d.png)
|
Plot confidence interval given confidence interval for parameters of linear function
|
CC BY-SA 4.0
| null |
2023-04-24T14:09:47.443
|
2023-04-24T19:57:27.237
| null | null |
153217
|
[
"regression",
"confidence-interval",
"data-visualization"
] |
613968
|
2
| null |
173501
|
0
| null |
If you did a more routine regression model, such as OLS linear regression, your approach would be a standard designed experiment. Your work would be described as a full-factorial ANOVA.
A nice aspect of (multinomial) logistic regression is that the way to think of the predictor variables (often called "features") does not really change from the linear regression case. In linear regression, you use the features to help you distinguish between values on a continuum. In binary logistic regression, you use the features to distinguish between two categories and determine their relative probabilities. In multinomial logistic regression, you use the features to distinguish between three (or more) categories and determine their relative probabilities.
Overall, it sounds like you have performed a full-factorial ANOVA with a categorical outcome and that you have done so correctly.
| null |
CC BY-SA 4.0
| null |
2023-04-24T14:24:52.747
|
2023-04-24T14:24:52.747
| null | null |
247274
| null |
613969
|
2
| null |
613761
|
1
| null |
That's indeed an error. The denominator in (2.10) should be $s_p$, that is, the square-root of $s_p^2$, which is the pooled variance (which is given correctly by 2.11).
| null |
CC BY-SA 4.0
| null |
2023-04-24T14:26:55.080
|
2023-04-24T14:26:55.080
| null | null |
1934
| null |
613970
|
2
| null |
85398
|
4
| null |
First, probably the best way to approach such predictions is as they are. That is, deal with the raw outputs of your model, and evaluate those predicted probabilities using proper scoring rules. See [Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models) and [Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp) for details.
However, if you insist on using hard classifications (there can be legitimate reasons), the idea that makes the most sense to me is to randomize. If you get a prediction that is right on the nose of $0.5$, randomly assign a label according to a $\text{Bernoulli}(0.5)$ distribution, such as via `numpy.random.binomial(1, 0.5, 1)` in Python. You might have to set `numpy.random.seed` earlier in your script, but handling the predictions of $0.5$ this way avoids biasing your categorical predictions toward either category. Under most circumstances, a probability right on the nose of $0.5$ should be rather rare, so this should not make much of a difference, but perhaps you can feel comfortable having covered such a scenario.
EDIT
If you decide to use a threshold $p$ other than $0.5$ (which you are allowed to do), you can randomize according to $\text{Bernoulli}(p)$ distribution, such as via `numpy.random.binomial(1, p, 1)` in Python.
| null |
CC BY-SA 4.0
| null |
2023-04-24T14:32:15.733
|
2023-04-24T16:02:07.907
|
2023-04-24T16:02:07.907
|
247274
|
247274
| null |
613971
|
1
| null | null |
0
|
31
|
I am trying to learn how to use r to perform an ordinal logistic regression, in order to look for correlation between survey questions with categorical answers and another question asking about level of confidence.
I am very new to this, so I would really appreciate any help so that I can understand how to do this correctly. Here is my code to extract the survey responses from a .csv file and make them into factors, then to run the model:
```
conf_levels <- c("Mycket stort", "Ganska stort", "Inte särskilt stort", "Inget alls")
cc_worry_levels <- c("1: Inte alls orolig", "2", "3", "4", "5", "6", "7", "8", "9", "10: Väldigt orolig")
pol_levels <- c("Instämmer helt", "Instämmer huvudsakligen", "Instämmer huvudsakligen inte")
sex_levels <- c("Kvinna", "Man", "Icke-binär")
infl_levels <- c("Mycket stort inflytande", "Ganska stort inflytande", "Litet inflytande")
M_dat3 <- M_dat %>%
  dplyr::select(1, 59, 86, 10, 73, 52) %>%
  mutate(AC_conf = factor(X14d, levels = conf_levels)) %>%
  mutate(CC_worry = factor(X21a, levels = cc_worry_levels)) %>%
  mutate(pol = factor(X13h, levels = pol_levels)) %>%
  mutate(sex = factor(X5, levels = sex_levels)) %>%
  mutate(infl = factor(X18c, levels = infl_levels))
m <- polr(AC_conf ~ CC_worry + pol + sex + infl, data = M_dat3, Hess=TRUE)
ctable <- coef(summary(m))
p <- pnorm(abs(ctable[, "t value"]), lower.tail = FALSE) * 2
(ctable <- cbind(ctable, "p value" = p))
```
I then get the following warning message (which I do not understand):
```
Warning message:
In polr(AC_conf ~ CC_worry + pol + sex + infl, data = M_dat3, Hess = TRUE) :
design appears to be rank-deficient, so dropping some coefs
```
And this output:
```
Value Std. Error t value p value
CC_worry2 24.6311742 2.0687289 11.9064295 1.095694e-32
CC_worry3 19.7021684 1.7270242 11.4081600 3.806972e-30
CC_worry5 20.4673985 0.8801048 23.2556373 1.247464e-119
CC_worry6 19.7466790 1.0450020 18.8963065 1.223215e-79
CC_worry7 18.3371264 0.9931632 18.4633565 4.072300e-76
CC_worry8 19.0915938 0.7664600 24.9087942 5.974652e-137
CC_worry9 19.0753845 0.8396835 22.7173506 3.018750e-114
CC_worry10: Väldigt orolig 18.8678001 0.5643129 33.4349945 4.253030e-245
polInstämmer huvudsakligen -0.0770514 0.5740080 -0.1342340 8.932175e-01
polInstämmer huvudsakligen inte 0.1363698 0.9431398 0.1445913 8.850336e-01
sexMan -0.4852808 0.5830296 -0.8323434 4.052151e-01
sexIcke-binär 2.8706326 2.1272160 1.3494787 1.771833e-01
inflGanska stort inflytande 1.7455581 1.2145067 1.4372568 1.506450e-01
inflLitet inflytande 2.3363109 1.2635429 1.8490159 6.445552e-02
Mycket stort|Ganska stort 20.0629038 1.1448351 17.5247104 9.281299e-69
Ganska stort|Inte särskilt stort 22.8893452 1.1789329 19.4153087 5.729075e-84
Inte särskilt stort|Inget alls 25.6752159 1.5726172 16.3264242 6.402883e-60
```
I'm not sure if I have the data in the right format, if I am running the model correctly, and then how to interpret the output. If anybody could make any suggestions, it would be much appreciated.
I am happy to create a reproducible example if that would be useful.
|
How to do ordinal logistic regression (OLR) on survey data with categorical answers?
|
CC BY-SA 4.0
| null |
2023-04-24T14:39:10.390
|
2023-04-26T09:26:35.333
|
2023-04-26T09:26:35.333
|
386431
|
386431
|
[
"r",
"categorical-data",
"survey",
"ordered-logit",
"polr"
] |
613972
|
1
| null | null |
2
|
30
|
O'Brien (1988) has shown that a strong method for doing multivariate testing is to reverse the problem. That is, instead of seeing if the category impacts the measured values, see how the measured values impact that category. These are logically equivalent notions, [and approaching the problem this way, such as with a logistic regression, has advantages over, say, Hotelling's $T^2$ test](https://stats.stackexchange.com/a/66422/247274).
In that link, Harrell writes:
>
If there is not just a difference in means but a difference in variance for a response across the groups, you include a square term in the logistic model for that response. I suppose that if skewness differs you could include a cube term.
The comments about squaring and cubing make sense to me, and I suppose raising a feature to a power would correspond to testing for differences in the moment corresponding to that power (though perhaps not central moments).
Is this thinking correct? What would be the interpretation of other basis functions of the original features, such as $x_1x_2$ or $x_1^2x_2^3x_3?$ Can we test for particular differences in copulas by examining interactions like these?
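For concreteness, the kind of reversed model I have in mind looks like this (a sketch with simulated data; the squared term follows the variance suggestion quoted above):
```
# "Reverse" the problem: model group membership from the responses.
set.seed(1)
n  <- 200
g  <- rbinom(n, 1, 0.5)
y1 <- rnorm(n, mean = 0.3 * g, sd = 1 + 0.5 * g)  # mean and variance differ by group
y2 <- rnorm(n)

fit_mean <- glm(g ~ y1 + y2, family = binomial)             # mean differences only
fit_var  <- glm(g ~ y1 + I(y1^2) + y2, family = binomial)   # adds a variance-type term for y1
anova(fit_mean, fit_var, test = "LRT")                      # does the squared term help?
```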
REFERENCE
O'Brien, Peter C. "Comparing two samples: extensions of the t, rank-sum, and log-rank tests." Journal of the American Statistical Association 83.401 (1988): 52-61.
|
Interpretation of basis functions in a logistic regression: can we test for univariate and multivariate/copula differences between the categories?
|
CC BY-SA 4.0
| null |
2023-04-24T14:42:44.540
|
2023-04-24T15:27:35.223
|
2023-04-24T15:27:35.223
|
247274
|
247274
|
[
"regression",
"multivariate-analysis",
"regression-coefficients",
"copula",
"basis-function"
] |
613974
|
1
| null | null |
0
|
24
|
I'm trying to run a beta regression to predict my dependent variable Consistency, which has values between 0 and 1.
Here is the distribution of Consistency values in my dataset:
[](https://i.stack.imgur.com/v9RW7.png)
I originally tried linear mixed models with different transformations, but none of them seemed satisfactory (none of the Q-Q plots looked good), so I decided to try a beta regression, using the following command:
```
l_cons <- glmmTMB(as.formula(paste(var_predicted, ' ~ ', paste(effets_model, collapse = ' + '), '+ ', nom_random, sep = '')), data = d, family=beta_family())
```
(the model is quite massive with a huge number of predictor variables and interactions, hence this construction rather than adding each predictor variable manually).
Now, I wanted to test if overdispersion might be a problem. First of all, by calling
```
summary(l_cons)
```
I get
```
Data: d
AIC BIC logLik deviance df.resid
-6861.3 -6465.1 3499.7 -6999.3 2235
Random effects:
Conditional model:
Groups Name Variance Std.Dev.
Sujet (Intercept) 0.04931 0.2221
Number of obs: 2304, groups: Sujet, 72
Dispersion parameter for beta family (): 26.5
```
Moreover by calling (from the DHARMa package):
```
simulateResiduals(l_cons, plot = T)
```
I get the following output:
[](https://i.stack.imgur.com/smAmM.png)
as well as
```
testDispersion(simulateResiduals(l_cons, plot = T))
```
with the following output:
[](https://i.stack.imgur.com/OJiKh.png)
Can somebody explain to me what this means? Does it mean that the distribution of my data is altogether unfit for a beta regression? Are there some transformations that should be done? If the conclusion from this information is that there is indeed overdispersion, what does that mean exactly?
|
Overdispersion in a beta regression? (DHARMa package)
|
CC BY-SA 4.0
| null |
2023-04-24T15:01:30.807
|
2023-04-24T15:01:30.807
| null | null |
386430
|
[
"r",
"mixed-model",
"overdispersion",
"beta-regression",
"glmmtmb"
] |
613975
|
2
| null |
613931
|
1
| null |
Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/) provides a lot of guidance on how to design regression models. As @whuber nicely puts it in a comment, "Any general rule would clearly be incorrect in the wrong circumstances." This essentially comes down to the tradeoffs that you are willing to make. A few principles to apply come to mind.
First, omitting any outcome-associated predictor is typically not a good idea, as it leads to a risk of [omitted-variable bias](https://en.wikipedia.org/wiki/Omitted-variable_bias). In ordinary linear regression, that will occur if the omitted variable is correlated with the included predictors. In survival analysis and binary regression, omitting any outcome-associated predictor can lead to downward bias in the magnitudes of coefficient estimates for included predictors.
Second, although the first point would suggest including all potentially outcome-associated predictors, as you increase the number of predictors you run a risk of overfitting: getting a model that very well predicts your particular data set but doesn't generalize well to other data samples. I might even worry a bit that, with some very rare predictors "being significant on the analysis," you might already be overfitting your data. Make sure that you have done careful validation of the model before you put too much faith in those results.
Third, the formula for the estimated variance of the estimated linear regression coefficient for predictor $j$, $\hat \beta_j$, provides some clues. From [Wikipedia](https://en.wikipedia.org/wiki/Variance_inflation_factor#Definition):
$$ \widehat{\operatorname{var}}(\hat{\beta}_j) = \frac{s^2}{(n-1)\widehat{\operatorname{var}}(X_j)}\cdot \frac{1}{1-R_j^2},$$
where $\widehat{\operatorname{var}}(X_j)$, is the estimated variance in predictor $j$, $R_j^2$ is the multiple $R^2$ of the regression of predictor $j$ on the other predictors, $n$ is the number of observations, and $s^2$ is the mean squared residual error.
If you have a rare binary predictor, $\widehat{\operatorname{var}}(X_j)$ is necessarily very small. For a fraction $f_j$ having the rare value of predictor $j$, it's $f_j(1-f_j)$. In your example of 4 females and 500 males, that's only 0.008. That's much less than the maximum variance at a 50/50 distribution, 0.25. The formula means that if you have a rare predictor you will only find a "significant" association with outcome if the magnitude of its association is very large and its correlation with other predictors is small.
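As a quick numerical illustration (using the 4-female / 500-male split mentioned above):
```
f <- 4 / 504       # fraction with the rare value of the binary predictor
f * (1 - f)        # ~0.008, the estimated variance of the 0/1 predictor
0.5 * (1 - 0.5)    # 0.25, the maximum possible variance at a 50/50 split
```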
That helps define the tradeoffs. Do you want to miss a potentially large association of predictor $j$ with outcome by omitting it from the model? Or do you want to risk overfitting by including more predictors in the model than the size of your data set can reasonably allow?
| null |
CC BY-SA 4.0
| null |
2023-04-24T15:10:26.417
|
2023-04-24T15:10:26.417
| null | null |
28500
| null |
613976
|
1
| null | null |
1
|
17
|
Let $X$ be a matrix with $n$ rows and $d$ columns. We know that there exist matrices $U, S, V$, with $U$ of dimensions $(n, d)$, $S$ of dimensions $(d, d)$ and $V$ of dimensions $(d, d)$, which form the singular value decomposition of $X$, such that:
- $X = USV$
- $S$ is a diagonal matrix. Assume for this problem that $S$ has full rank.
- $U^T U = V^TV = VV^T = I_d$.
We also know the formula for multiple linear regression, if we use a response variable $Y$ of $n$ rows and 1 column: $\widehat{\beta} = (X^T\cdot X)^{-1}X^T Y$.
Further, we know the formula for the prediction from this linear regression: $X\widehat{\beta} = X\cdot (X^T\cdot X)^{-1}X^T Y$.
This formula can be rewritten in terms of $U, S, V$: $X\widehat{\beta} = U\cdot U^TY$.
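As a quick numerical sanity check of that identity (a sketch in R; note that R's `svd()` returns `u`, `d`, `v` with $X = U \operatorname{diag}(d) V^T$, so the $V$ above corresponds to `t(v)`):
```
set.seed(1)
n <- 50; d <- 3
X <- matrix(rnorm(n * d), n, d)
Y <- rnorm(n)

U <- svd(X)$u
hat_direct <- X %*% solve(t(X) %*% X, t(X) %*% Y)   # X (X'X)^{-1} X' Y
hat_svd    <- U %*% (t(U) %*% Y)                    # U U' Y
all.equal(c(hat_direct), c(hat_svd))                # TRUE
```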
My question is the following:
Say that I want to run a very similar regression, but now, I am trying to predict Y by using a new matrix $X'$, which has the same observations as $X$ except it also has a new column, which is made entirely of 1s. From this regression we obtain a new vector of coefficients, and let's call it $\widehat{\beta}'$.
Is there a formula expressing this new prediction, $ X'\widehat{\beta}'$ in terms of the original values U, S, V from the SVD decomposition of X? Thank you!
|
Prediction of Multiple Linear Regression With Constant
|
CC BY-SA 4.0
| null |
2023-04-24T15:21:02.177
|
2023-04-24T15:21:02.177
| null | null |
74056
|
[
"multiple-regression",
"least-squares",
"regression-coefficients",
"svd",
"matrix-decomposition"
] |
613979
|
1
| null | null |
1
|
53
|
I have run an Interrupted Time Series Analysis based upon the below code:
```
glm(`Subject Total` ~ Quarter + int2 + time_since_intervention2 ,
df, family = "poisson")
```
I have used the `emmeans` package to estimate the pairwise difference between the counterfactual and point estimate and get the below output:
```
contrast estimate SE df z.ratio p.value
Quarter20 int21 time_since_intervention24 - Quarter20 int20 time_since_intervention20 -0.341 0.160 Inf -2.140 0.1406
```
The above estimate (0.341) cross-checks against manually derived outcomes. However, I had a question about the p-value included within the above output versus the manually derived equivalent [undertaken as a means of checking]. The p-value in the above output is 0.1406 [non-significant]. However, when calculated directly from the z-ratio (`2*pnorm(-2.140)`) I get a p-value of about 0.03. Is the `emmeans` output correct (am I doing something wrong by assuming `pnorm`?), or is the manually calculated value more likely to be accurate?
UPDATE:
The data frame is as below. Quarters represent time. Subject Total the outcome. Int2 a dummy variable to identify the point of intervention (0 pre/1 post). Time_since_intervention2 another dummy variable 0 prior to intervention 1:8 after.
```
> df[,c(1,2,9,11)]
Quarter Subject Total int2 time_since_intervention2
1 1 33 0 0
2 2 32 0 0
3 3 35 0 0
4 4 34 0 0
5 5 23 0 0
6 6 34 0 0
7 7 33 0 0
8 8 24 0 0
9 9 31 0 0
10 10 32 0 0
11 11 21 0 0
12 12 26 0 0
13 13 22 0 0
14 14 28 0 0
15 15 27 0 0
16 16 22 0 0
17 17 14 1 1
18 18 16 1 2
19 19 20 1 3
20 20 19 1 4
21 21 13 1 5
22 22 15 1 6
23 23 16 1 7
24 24 8 1 8
```
```
Call:
glm(formula = `Subject Total` ~ Quarter + int2 + time_since_intervention2,
family = "poisson", data = df)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.4769 -0.5111 0.1240 0.6103 0.9128
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.54584 0.09396 37.737 <0.0000000000000002 ***
Quarter -0.02348 0.01018 -2.306 0.0211 *
int2 -0.23652 0.21356 -1.108 0.2681
time_since_intervention2 -0.02624 0.04112 -0.638 0.5234
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 63.602 on 23 degrees of freedom
Residual deviance: 13.368 on 20 degrees of freedom
AIC: 140.54
Number of Fisher Scoring iterations: 4
```
The above suggests that the level change at the intervention point was non-significant.
We want to report the difference between the point estimate at Quarter 20 with the corresponding counterfactual (extrapolation of pre-policy trends/patterns) at the same point. At the moment I have done that using emmeans pairwise comparison.
```
emmeans(
object = fit1a,
specs = c("Quarter", "int2", "time_since_intervention2"),
at = list(
Quarter = c(20),
int2 = c(0, 1),
time_since_intervention2 = c(0,4)
)
) |>
contrast(method = "revpairwise")
contrast estimate SE df
Quarter20 int21 time_since_intervention20 - Quarter20 int20 time_since_intervention20 -0.237 0.214 Inf
Quarter20 int20 time_since_intervention24 - Quarter20 int20 time_since_intervention20 -0.105 0.164 Inf
Quarter20 int20 time_since_intervention24 - Quarter20 int21 time_since_intervention20 0.132 0.346 Inf
Quarter20 int21 time_since_intervention24 - Quarter20 int20 time_since_intervention20 -0.341 0.160 Inf
Quarter20 int21 time_since_intervention24 - Quarter20 int21 time_since_intervention20 -0.105 0.164 Inf
Quarter20 int21 time_since_intervention24 - Quarter20 int20 time_since_intervention24 -0.237 0.214 Inf
z.ratio p.value
-1.108 0.6849
-0.638 0.9197
0.380 0.9813
-2.140 0.1406
-0.638 0.9197
-1.108 0.6849
```
The only outcome that can exist is Quarter20 int21 time_since_intervention24 - Quarter20 int20 time_since_intervention20.
Not sure I'm doing this the right way.
|
p-Value from Z-ratio
|
CC BY-SA 4.0
| null |
2023-04-24T14:57:08.120
|
2023-04-25T08:00:52.323
|
2023-04-25T08:00:52.323
|
343051
|
343051
|
[
"r",
"p-value",
"lsmeans"
] |
613980
|
1
| null | null |
0
|
23
|
Could you please help me understand why the following reasoning is wrong:
let's say there are two random independent samples:
x1=56, n1=250, so that phat1=0.224, and
x2=60, n2=300, so that phat2=0.2.
Suppose I want a 95% confidence interval for the difference in population proportions. Using the standard technique, I would get phat1-phat2=0.024, se=0.0351, and the interval is (-0.045, 0.093).
However, if I constructed two separate 95% confidence intervals for p1 and p2, I would get (0.172, 0.276) for p1, and (0.155, 0.245) for p2, so that (p1-p2) has to be in (-0.073, 0.121).
What is the problem with the second approach? I understand the math part: the second approach adds up the standard errors, which is wrong. But why is the logic wrong? If p1 is in (x1, y1), and p2 is in (x2, y2), shouldn't the difference be in (x1-y2, y1-x2)?
|
Confidence Interval for the difference in two proportions
|
CC BY-SA 4.0
| null |
2023-04-24T15:31:00.737
|
2023-04-24T15:31:00.737
| null | null |
241576
|
[
"confidence-interval",
"proportion"
] |
613981
|
2
| null |
85398
|
2
| null |
The question's focus on 0.5 conceals an important fact: each and every threshold applied to a continuous prediction implies some number of errors (false positives or false negatives). The question "How do I set a threshold?" is not answerable in a vacuum, but instead depends on the application & the cost of errors. It is important to consider the cost of an error alongside the probability of the error -- amputating a limb is dramatically different from administering an unnecessary dose of antibiotics.
Even if you are compelled to choose a cutoff for some reason, it is worthwhile to consider what error rates you can tolerate. Receiver Operating Characteristic ([roc](/questions/tagged/roc)) curves are a partial answer to that question, framing the choice of a cutoff as achieving a higher (lower) true positive rate at the cost of a higher (lower) false positive rate. That said, deciding on the appropriate TPR/FPR tradeoff is also contextual & depends on the goals of the model and how it is applied.
| null |
CC BY-SA 4.0
| null |
2023-04-24T15:35:36.827
|
2023-04-24T17:45:39.753
|
2023-04-24T17:45:39.753
|
22311
|
22311
| null |
613982
|
2
| null |
613979
|
0
| null |
In general, since `emmeans` is a widely used package you should probably assume that whatever computation it's doing is being done correctly: the danger is that it might not be doing what you want, or you might not understand what it's doing (or both). The most obvious thing that occurs to me is that computing pairwise contrasts automatically invokes a multiple-comparisons correction. From the [vignette](https://cran.r-project.org/web/packages/emmeans/vignettes/comparisons.html#pairwise):
>
In its out-of-the-box configuration, pairs() sets two defaults for summary(): adjust = "tukey" (multiplicity adjustment), and infer = c(FALSE, TRUE) (test statistics, not confidence intervals). You may override these, of course, by calling summary() on the result with different values for these.
So if the contrast you're showing us is one of several pairwise contrasts, the p-value will be larger than that computed from a single Z-statistic ([google provides lots of good hits for "tukey pairwise contrasts correction" ](https://www.google.com/search?client=firefox-b-d&q=tukey+pairwise+contrasts+correction), I wasn't sure which one to pick).
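A minimal sketch of how the default adjustment changes the reported p-values (toy data, not the questioner's model; the key argument is `adjust = "none"`):
```
library(emmeans)
set.seed(1)
d   <- data.frame(y = rpois(40, 5), g = gl(4, 10))
fit <- glm(y ~ g, family = poisson, data = d)
emm <- emmeans(fit, "g")

summary(contrast(emm, method = "revpairwise"))                   # Tukey-adjusted (the default)
summary(contrast(emm, method = "revpairwise"), adjust = "none")  # unadjusted z-tests
```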
| null |
CC BY-SA 4.0
| null |
2023-04-24T15:42:15.940
|
2023-04-24T15:42:15.940
| null | null |
2126
| null |
613983
|
1
| null | null |
0
|
27
|
PDF of a random variable $X$ is,
$$
\begin{equation}
f\left(x|\gamma\right)=
\begin{cases}
\frac{1}{\gamma} \exp(-\frac{x}{\gamma}) & x > 0 \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
$$
I obtained the estimator by setting the derivative of the negative log-likelihood function equal to zero:
$$\hat{\gamma}_{\text{MLE}}=\frac{\sum_{i=1}^n x_i}{n}$$
But can I get an estimator with better (i.e. smaller) MSE compared to $\hat{\gamma}_{\text{MLE}}$? And what would be the general procedure to come up with another estimator, given that the mathematical framework will always give me the fixed $\hat{\gamma}_{\text{MLE}}$ here?
|
What is the general procedure to come up with different estimator with smaller MSE?
|
CC BY-SA 4.0
| null |
2023-04-24T15:50:40.330
|
2023-04-24T16:20:40.057
|
2023-04-24T16:20:40.057
|
386294
|
386294
|
[
"maximum-likelihood",
"estimation",
"sampling",
"inference",
"mse"
] |
613984
|
2
| null |
85398
|
6
| null |
The two very standard things you can do are (i) to assign to one class if the probability is greater than or equal to 0.5 (or whatever threshold is appropriate for your task) and to the other class if the probability is less than 0.5; and (ii) to have some zone of probability where the uncertainty is too great to make a decision on that basis, i.e. to have a "reject" option (for multi-class problems, reject if the difference in probability between the most probable and the second most probable class is below some cut-off value).
I have to say I disagree with those who argue against having a threshold. It depends on the needs of the application; it isn't a statistical issue. In some applications you have to make a decision, and the quality of that decision may be something we need to measure. In some applications it is acceptable to have a reject option (for instance it may be a screening test to triage cases sent for a more expensive evaluation) and in some it isn't. In some applications the operational class frequencies or misclassification costs are unknown or variable, in which case we are better off focussing on probability estimation in a way that is independent of the threshold (because we don't know the appropriate value of the threshold). Unfortunately there are cases where probabilistic models give worse decisions (for a fixed threshold) than purely discriminative "hard" classifiers, such as the SVM, so we can't assume that probabilistic classifiers are a panacea - they aren't. To make the correct modelling and evaluation decisions, you need to think about the needs of the particular application, and make the choices that meet those requirements.
Having said which, I am very much in favour of probabilistic models and proper scoring rules, it is just that they are not the (full) answer to every classification problem (and neither are SVMs or DNN).
| null |
CC BY-SA 4.0
| null |
2023-04-24T15:52:19.583
|
2023-04-25T15:54:39.990
|
2023-04-25T15:54:39.990
|
887
|
887
| null |
613985
|
1
| null | null |
0
|
43
|
In microbial research, a common way to check growth rates of bacteria is by performing a dilution of the bacterial population and then plating the resulting dilution on a petri dish. After some time, the cells on the petri dish grow into visible colonies which you can then count to arrive at an estimate of the original population size ('colony forming unit' or CFU counts). For this method, it is important to use the right dilution so that your plates contain somewhere between 30-300 cells upon plating, which ensures that your counts are not too low (so that they are too affected by the stochasticity of your dilution process), nor too high (in which case the colonies are too close together, making it difficult to count them).
My question is quite fundamental: what is an appropriate statistical method to compare CFU counts? Say I have inoculated bacterial populations in test tubes and grown them under two experimental conditions (say, 25 and 30 degrees Celsius). How do I estimate (and test) the impact of temperature on bacterial growth? I have counts, so a Poisson GLM may be appropriate. However, what I measured (CFUs) is only a proxy for ACTUAL population size, for which I have no accurate measure. Also, even though I have counts, it is not quite clear to me whether they are generated by a Poisson process, where events happen with a certain constant probability per unit time. Cell divisions do happen with some probability per unit time, but every division generates more cells that can again divide - it intuitively seems to me that this exponential relationship complicates things.
Any insights on whether these considerations render a Poisson GLM problematic, and if so, what a better approach might be?
|
How to appropriately compare colony-forming units (CFUs)?
|
CC BY-SA 4.0
| null |
2023-04-24T15:53:33.903
|
2023-04-24T17:47:34.120
| null | null |
128527
|
[
"generalized-linear-model",
"poisson-process"
] |
613986
|
2
| null |
613957
|
1
| null |
First up, it is worth pointing out that this is a terrible plot. The labelling on the y-axis is completely disingenuous and outright wrong. The smooth density (lognormal, apparently) shown is surely some scaled version of the density: as drawn, it could not possibly integrate to 1 across its support.
The histogram bars seem to represent the probability mass for intervals incrementing by 0.1 from 0 to 1. Nothing is shown for (0, 0.1]* for some reason.
In any case, we can ignore the density and simply utilize the histogram bars to solve for the parameters of the distribution. Take any two bars. Accurately(?) read off the probability masses occurring in those intervals. Also note the intervals themselves. If we define the masses as
$$(m_{1},m_{2})$$
and the corresponding two intervals as
$$(a_{1},b_{1})\quad\text{and}\quad(a_{2},b_{2})$$
and further define
$$G(\mu, \sigma;a,b)=F(b;\mu,\sigma)-F(a;\mu,\sigma)$$
where $F(x;\mu, \sigma)$ is the cumulative distribution function of a lognormal distribution.
Your problem reduces to solving the system of equations
$$\begin{align}
G(\mu, \sigma;a_{1},b_{1})=m_{1}\\
G(\mu, \sigma;a_{2},b_{2})=m_{2}
\end{align}$$
for $(\mu, \sigma)$.
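A rough R sketch of solving that system numerically (the interval endpoints and bar heights below are made-up placeholders -- substitute whatever you read off the plot):
```
G <- function(mu, sigma, a, b) plnorm(b, mu, sigma) - plnorm(a, mu, sigma)

ab <- rbind(c(0.2, 0.3), c(0.3, 0.4))  # two intervals (hypothetical)
m  <- c(0.40, 0.25)                    # masses read off the two bars (hypothetical)

obj <- function(par) {                 # squared residuals of the two equations
  mu <- par[1]; sigma <- exp(par[2])   # log-parameterise sigma to keep it positive
  sum((c(G(mu, sigma, ab[1, 1], ab[1, 2]),
         G(mu, sigma, ab[2, 1], ab[2, 2])) - m)^2)
}
fit <- optim(c(-1, 0), obj)
c(mu = fit$par[1], sigma = exp(fit$par[2]))
```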
In any case, this is what I replicated:
[](https://i.stack.imgur.com/Z9IPB.png)
giving $(\mu, \sigma)=(-1.2950867,0.3694235)$. Of course your exact results will depend on the values you read off the plot.
---
- Or is it [0, 0.1) or (0, 0.1) or [0, 0.1]? Who knows.
| null |
CC BY-SA 4.0
| null |
2023-04-24T15:56:37.253
|
2023-04-26T05:20:40.867
|
2023-04-26T05:20:40.867
|
102399
|
102399
| null |
613987
|
2
| null |
613249
|
2
| null |
Given that both $X$ and $Y$ are integer valued, e.g. with a finite number of values, a likely trick in deriving the marginals of $X$ and $Y$ stands in writing enough linear relations between $\mathbb P(X=x)$ and $\mathbb P(Y=y)$ to identify the marginals. (After writing the derivation below, I checked [the referenced book](https://amzn.to/3HbY1Ed) and found out that the authors reach the same conclusion in eqn (7.38).)
For instance, assume that $S(X) =S(Y) = \{0,1,2,...,n\}$. Then, for $0\le \imath,\jmath\le n$,
$$ \mathbb P(Y=\jmath|X=\imath) = \frac{\mathbb P(X=\imath|Y=\jmath)\mathbb P(Y=\jmath)}{\mathbb P(X=\imath)}=\frac{\jmath^\imath c(\imath)/c^*(\jmath)\overbrace{\mathbb P(Y=\jmath)}^{q(\jmath)}}{\underbrace{\mathbb P(X=\imath)}_{p(\imath)}}$$ Therefore, for $0\le \imath\le n$,
$$\psi(\imath) = \sum_{\jmath=0}^n \jmath\mathbb P(Y=\jmath|X=\imath)=\sum_{\jmath=0}^n \jmath^{\imath+1} \frac{c(\imath)q(\jmath)}{c^*(\jmath)p(\imath)} = \frac{c(\imath)}{p(\imath)} \sum_{\jmath=0}^n \jmath^{\imath+1} \frac{q(\jmath)}{c^*(\jmath)}$$
or
$$p(\imath)\frac{\psi(\imath)}{c(\imath)}=\sum_{\jmath=0}^n \jmath^{\imath+1} \frac{q(\jmath)}{c^*(\jmath)}\tag{1}$$
Furthermore,
$$p(\imath)=\sum_{\jmath=0}^n \mathbb P(X=\imath|Y=\jmath) \mathbb P(Y=\jmath)=\sum_{\jmath=0}^n \frac{\jmath^\imath c(\imath)}{c^*(\jmath)}q(\jmath)\tag{2}$$
Hence, merging (1) and (2),
$$\sum_{\jmath=0}^n \frac{\jmath^\imath c(\imath)}{c^*(\jmath)}q(\jmath)\frac{\psi(\imath)}{c(\imath)}=\sum_{\jmath=0}^n \jmath^{\imath+1} \frac{q(\jmath)}{c^*(\jmath)}$$
or
$$\sum_{\jmath=0}^n \frac{\jmath^\imath q(\jmath)\psi(\imath)}{c^*(\jmath)}=\sum_{\jmath=0}^n \jmath^{\imath+1} \frac{q(\jmath)}{c^*(\jmath)}$$
or again
$$\sum_{\jmath=0}^n \left\{{\psi(\imath)}-{\jmath}\right\}\frac{\jmath^\imath}{c^*(\jmath)}q(\jmath)=0\tag{3}$$
which leads to a system of $n+1$ equations in $\mathbf q=(q(0),\ldots,q(n))$ and, along with the normalisation constraint
$$\sum_{\jmath=0}^n q(\jmath)=1$$
it should lead to a unique derivation of the marginal distribution of $Y$.
Now, this does not directly answer the question about an MCMC resolution since solving (3) produces a marginal distribution and hence a way to directly simulate from the joint. (An unsubstantiated suggestion is to move $\mathbf q$ at each MCMC iteration by one gradient descent step when starting from an arbitrary value.)
| null |
CC BY-SA 4.0
| null |
2023-04-24T16:05:00.270
|
2023-04-24T17:13:54.203
|
2023-04-24T17:13:54.203
|
7224
|
7224
| null |
613988
|
1
| null | null |
0
|
13
|
A concept I'm struggling with is the type 1 error and the level of a test. For me the type 1 error is $P(W_n \mid H_0)$, where $W_n$ is the rejection zone. Is it not a way to measure $W_n$ given that $H_0$ is true? So why do we reject $H_0$ when the level of a test is too low? (In my opinion, if the level of the test is too low, so is the measure of $W_n$ under $P(\,\cdot \mid H_0)$, so we have less chance of falling into the rejection zone.)
Thank you
|
Critical Region and level of a test
|
CC BY-SA 4.0
| null |
2023-04-24T16:06:12.287
|
2023-04-24T16:06:12.287
| null | null |
386423
|
[
"hypothesis-testing",
"mathematical-statistics",
"sampling",
"inference"
] |
613989
|
1
| null | null |
3
|
36
|
I am using the normalized confusion matrix to aid in quantifying the uncertainty in related observations over time. More specifically, I have a classifier that returns the confidence of each class via the softmax function. I can build a confusion matrix by taking the arg max of the predictions. When I see an event, I use my classifier to produce a confidence distribution over the various classes. I then use the arg max to get the most probable class and then use the column associated with that class of the normalized confusion matrix to get the likelihood that the object is actually seen. This follows the method outlined in Tracking with Classification-Aided Multiframe Data Association (2005, Bar-Shalom).
Is there a standard way to consider the confidence of the predictions, both when building the confusion matrix and when updating the uncertainty?
One thought I had was to add the probabilities instead of the threshold value when building the confusion matrix.
The standard way of doing it
To build the normalized confusion matrix, you make a matrix of size CxC, where C is the number of classes. For each prediction, you take the arg max and add a vector that is all zeros except for a 1 at the arg-max entry to the column of the confusion matrix that corresponds to the correct class. You then normalize the matrix so that every column sums to 1.
The update step takes the arg max of the predictions and uses that column of the confusion matrix as the prediction probability.
Proposed way
When building the confusion matrix, add the confidence vector to the column in the confusion matrix instead of the vector with a single non-zero value. When making a prediction, multiply the confusion matrix by the confidence vector to get the probability of detection.
|
Confidence Informed Confusion Matrix (Threshold Free)
|
CC BY-SA 4.0
| null |
2023-04-24T16:08:04.597
|
2023-04-24T16:39:34.150
|
2023-04-24T16:39:34.150
|
386444
|
386443
|
[
"uncertainty",
"confusion-matrix"
] |
613990
|
1
| null | null |
0
|
17
|
I have a list of genes and I tested whether these genes are associated with a disease by using a genome-wide association summary statistics dataset. A chi-square statistic was used and, after performing permutations, I could calculate the empirical P value (the number of times the simulated statistic value exceeds the calculated chi-square statistic) and found that 14 genes were significantly associated with the disease. I need advice on whether there is a way to find out if these 14 genes are more than expected by chance. Can a hypergeometric test be used to find out if these 14 genes are more than expected by chance?
I used the UCLA calculator [https://systems.crump.ucla.edu/hypergeometric/](https://systems.crump.ucla.edu/hypergeometric/) as shown in the snapshot by assuming the values as follows
```
Number of genes in the genome-wide association summary statistics dataset = 18820
Number of genes significantly associated with disease = 416
Number of genes (from my list) to be tested= 49
Number of genes (from my list) found to be associated in the genome-wide association summary statistics dataset =14
Expected value = 49*416/18820
```
I am not sure if this is the right way to do it. The expected value, which is about 1.083, is much lower than the observed value of 14. Can someone explain how this could be calculated and interpreted?
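For what it's worth, here is how the corresponding upper-tail hypergeometric probability could be computed in R from the numbers above (my own sketch, not output from the calculator):
```
phyper(14 - 1,        # observed overlap minus 1, to get P(X >= 14)
       416,           # disease-associated genes in the dataset
       18820 - 416,   # remaining genes
       49,            # genes from my list that were tested
       lower.tail = FALSE)
```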
Thank you
|
Test to find expected outcome
|
CC BY-SA 4.0
| null |
2023-04-24T16:12:48.963
|
2023-04-24T16:30:42.247
|
2023-04-24T16:30:42.247
|
56940
|
386442
|
[
"chi-squared-test",
"permutation-test",
"hypergeometric-distribution",
"gwas",
"empirical-likelihood"
] |
613992
|
2
| null |
249399
|
0
| null |
It would probably make more sense to predict either the probability of CD purchase or at least the relative rankings of probability to purchase.
If you have the relative rankings, you know (or predict) the top $N$ people most likely to make a purchase. If you have a certain budget to spend on advertising, you can spend it on the customers most likely to make a purchase. Harrell discusses that on his [blog](https://www.fharrell.com/post/classification/) when he refers to a "lift curve". An advantage of using the rankings over the probabilities is that you do not have to waste resources trying to pin down the exact probabilities.
If you have the probabilities, you can do this ranking. However, it gives additional information. For instance, if there is a precipitous drop in probability to make a purchase before you have exhausted your budget, you can choose to save money instead of advertising to people who are unlikely to respond. Conversely, you can produce evidence that you should have a larger budget if you have people with a high predicted probability to respond yet not enough budget to advertise to them. Finally, if you get a result that no one is likely to make a purchase, it seems valuable to know that an upcoming release is likely to be a flop.
Neither the rankings nor the full probability predictions are heavily affected by this kind of class imbalance. Consequently, discarding precious data is, probably, unwise. Your result of no one being predicted to buy an Adele CD comes from the fact that, under the hood, your model is applying a threshold, likely requiring a predicted probability of at least $0.5$ to be classified as making a purchase. When you model the probability of purchase (or at least the relative rankings), there is no such threshold.
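A minimal sketch of the ranking idea (simulated data and made-up feature names; swap in your own fitted model):
```
set.seed(1)
n <- 1000
d <- data.frame(age = rnorm(n, 40, 10), prior_purchases = rpois(n, 2))
d$bought <- rbinom(n, 1, plogis(-4 + 0.05 * d$age + 0.5 * d$prior_purchases))

fit     <- glm(bought ~ age + prior_purchases, family = binomial, data = d)
d$p_hat <- predict(fit, type = "response")      # predicted purchase probabilities

budget_N <- 100
targets  <- d[order(-d$p_hat), ][1:budget_N, ]  # the N customers most likely to buy
head(targets)
```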
| null |
CC BY-SA 4.0
| null |
2023-04-24T16:43:05.013
|
2023-04-24T16:43:05.013
| null | null |
247274
| null |
613993
|
2
| null |
557974
|
0
| null |
Pearson's correlation, at least its magnitude, between two numerical variables is equivalent to taking the square root of the $R^2$ from an ordinary least squares (OLS) linear regression of one variable on the other. If you can develop a regression with categorical variables and calculate something like $R^2$ for that regression, you are set.
Fortunately, such regressions and metrics do exist. Logistic regression (binary outcome) and multinomial logistic regression ($3+$ categories in the outcome) are reasonable analogues of OLS linear regression with numerical variables. Further, both models can use categorical or numerical variables. From such regressions, [pseudo $R^2$ values](https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/) can be calculated. My favorite from the link would be the McFadden $R^2$ that uses the likelihood (in the technical sense) of the model compared to the likelihood of a model that always predicts the pooled ([prior](https://stats.stackexchange.com/questions/229968/the-usage-of-word-prior-in-logistic-regression-with-intercept-only/583115#583115)) probability. You could then take the square root of the McFadden $R^2$ to quantify the strength of the relationship between your feature and categorical outcome.
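A small sketch of that calculation (simulated data: a three-level categorical feature and a binary outcome):
```
set.seed(1)
x <- factor(sample(c("a", "b", "c"), 500, replace = TRUE))
y <- rbinom(500, 1, c(a = 0.2, b = 0.5, c = 0.7)[as.character(x)])

fit  <- glm(y ~ x, family = binomial)
null <- glm(y ~ 1, family = binomial)   # intercept-only (pooled probability) model

mcfadden <- 1 - as.numeric(logLik(fit)) / as.numeric(logLik(null))
sqrt(mcfadden)                          # analogue of the magnitude of a correlation
```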
[Note, however, that univariate variable screening presents problems.](https://stats.stackexchange.com/a/473059/247274)
| null |
CC BY-SA 4.0
| null |
2023-04-24T16:55:16.653
|
2023-04-24T16:55:16.653
| null | null |
247274
| null |
613994
|
2
| null |
613718
|
0
| null |
In general, problems like this can be solved by writing out the likelihood function: the likelihood of seeing a particular set of observations as a function of the parameters of the CDF, which I presume is Gaussian with parameters $\mu$ and $\sigma$.
In R, this would look as follows
```
likelihood_function = function(parameters, asking_price, number_asked){
mu = parameters[1]
sigma = parameters[2]
# Probability of any one person accepting the asking price
p_accept = pnorm(asking_price, mu, sigma, lower.tail = T)
# Number of people asked BEFORE someone accepts
# follows geometric distribution (pdf = p_accept * (1 - p_accept)^number),
number_of_rejections = number_asked - 1
loglikelihood = dgeom(number_of_rejections, p_accept, log = T)
loglikelihood_adj = loglikelihood + 1e-6 # Add tiny correction to prevent underflow
sum(loglikelihood_adj)
}
```
You can then use optimisation to find the parameters that maximise this function: the maximum-likelihood parameters.
```
starting_values = c(mu = 50, sigma = 100)
result = optim(starting_values, likelihood_function,
asking_price = data$asking_price,
number_asked = data$number_asked,
control = list(fnscale = -1)) # Find max, not min
result$par
```
You can find plenty of information on this site on how to calculate the uncertainty around these estimates, e.g. using Fisher information, or Bayesian methods.
While we're here, the code below simulates data from the model you describe.
```
library(tidyverse)
true_pars = c(mu = 10, sigma = 10)
simulate_trials = function(n_trials, mu, sigma){
# Distribution of asking prices is uniform [0, 100]
asking_price = round(runif(n_trials, 0, 100))
# Probability of any one person accepting the asking price
p_accept = pnorm(asking_price, mu, sigma, lower.tail = T)
# Number of people asked BEFORE someone accepts
# follows geometric distribution (pdf = p_accept * (1 - p_accept)^number),
number_of_rejections = rgeom(n_trials, prob = p_accept)
# so total number asked is that +1
number_asked = number_of_rejections + 1
data.frame(asking_price, p_accept, number_asked) %>%
arrange(asking_price)
}
data = simulate_trials(1000, true_pars[1], true_pars[2])
# ggplot(data, aes(asking_price, p_accept)) + geom_path()
ggplot(data, aes(asking_price, number_asked)) +
stat_smooth() +
scale_y_log10()
```
[](https://i.stack.imgur.com/ILtrM.png)
| null |
CC BY-SA 4.0
| null |
2023-04-24T16:56:51.960
|
2023-04-24T16:56:51.960
| null | null |
42952
| null |
613995
|
2
| null |
423576
|
0
| null |
There are a number of different aspects to this question.
- "Can Kruskal-Wallis test be used in groups of different size?"
Absolutely. The test does not require the sample sizes to be the same. ref
- "Would I be right to argue then that although the sample is not representative of the underlying population, there does not appear to be a difference in responses from the 3 cities?"
This depends on the conclusion trying to be drawn. If the conclusion seeks to generalize the survey to a broader population or to apply the survey results to the experience of a customer population, then weighting the sample might be appropriate. If the conclusion is just about the differences between the three cities, then weighting would not be required.
There was a statement made in the answer from @BruceET that "A Kruskal-Wallis test in R does not require sample sizes to be the same in all groups. That said, if you have resources to use $kn$ subjects in $k$ groups, power is generally greater if each group has nearly $n$ subjects."
This statement is true in that it is always better to have more sample, and always better to balance, if possible, in the sample design. The statement should not be interpreted to mean that the test would have higher power if the larger group was sub-sampled to be the same size as the smaller group. A procedure like that does not have more power than the original sample.
Simulation below to show that result for a continuous distribution and for the discrete distribution used in that answer.
```
# does a smaller balanced sample have more power than a larger unbalanced sample?
set.seed(103893493)
X <- list()
n1 <- 70 # A
n2 <- 20 # B
n3 <- 10 # C
sdn <- 3 # standard deviation
alpha <- 0.05
# differences in medians of 1 unit each
X[[1]] <- rnorm(n1, 0, sdn)
X[[2]] <- rnorm(n2, 1, sdn)
X[[3]] <- rnorm(n3, 2, sdn)
kruskal.test(X)
#>
#> Kruskal-Wallis rank sum test
#>
#> data: X
#> Kruskal-Wallis chi-squared = 16.06, df = 2, p-value = 0.0003255
# Power to detect any difference (1 or 2 units) at full sample and alpha = 0.05
N <- 1000
p <- numeric(N)
for (i in 1:N)
{
X[[1]] <- rnorm(n1, 0, sdn)
X[[2]] <- rnorm(n2, 1, sdn)
X[[3]] <- rnorm(n3, 2, sdn)
p[i] <- kruskal.test(X)$p.value
}
length(which(p < alpha)) / N
#> [1] 0.47
# Power to detect at a reduced sample to create balance
p <- numeric(N)
for (i in 1:N)
{
X[[1]] <- rnorm(n3, 0, sdn)
X[[2]] <- rnorm(n3, 1, sdn)
X[[3]] <- rnorm(n3, 2, sdn)
p[i] <- kruskal.test(X)$p.value
}
length(which(p < alpha)) / N
#> [1] 0.19
# power is reduced at lower sample size. Creating balance in in the Kruskal Groups does not add power post-hoc
################################################################################
# Examples from the original answer
set.seed(825)
X <- list()
X[[1]] <- sample(1:5, 250, rep=T, p = c(1,1,2,3,4))
X[[2]] <- sample(1:5, 125, rep=T, p = c(1,1,3,3,3))
X[[3]] <- sample(1:5, 125, rep=T, p = c(1,2,2,3,3))
lapply(X, table)
#> [[1]]
#>
#> 1 2 3 4 5
#> 25 17 46 68 94
#>
#> [[2]]
#>
#> 1 2 3 4 5
#> 11 15 29 38 32
#>
#> [[3]]
#>
#> 1 2 3 4 5
#> 17 19 33 33 23
kruskal.test(X)
#>
#> Kruskal-Wallis rank sum test
#>
#> data: X
#> Kruskal-Wallis chi-squared = 17.697, df = 2, p-value = 0.0001436
wilcox.test(X[[1]],X[[2]])$p.val
#> [1] 0.03922562
wilcox.test(X[[2]],X[[3]])$p.val
#> [1] 0.05193156
wilcox.test(X[[1]],X[[3]])$p.val
#> [1] 4.034746e-05
# Power to detect any difference at full sample and alpha = 0.05
N <- 1000
p <- numeric(N)
for (i in 1:N)
{
X[[1]] <- sample(1:5, 250, rep=T, p = c(1,1,2,3,4))
X[[2]] <- sample(1:5, 125, rep=T, p = c(1,1,3,3,3))
X[[3]] <- sample(1:5, 125, rep=T, p = c(1,2,2,3,3))
p[i] <- kruskal.test(X)$p.value
}
length(which(p < alpha)) / N
#> [1] 0.518
# Power to detect at reduced sample and alpha = 0.05
p <- numeric(N)
for (i in 1:N)
{
X[[1]] <- sample(1:5, 125, rep=T, p = c(1,1,2,3,4))
X[[2]] <- sample(1:5, 125, rep=T, p = c(1,1,3,3,3))
X[[3]] <- sample(1:5, 125, rep=T, p = c(1,2,2,3,3))
p[i] <- kruskal.test(X)$p.value
}
length(which(p < alpha)) / N
#> [1] 0.373
```
Created on 2023-04-24 with [reprex v2.0.2](https://reprex.tidyverse.org)
| null |
CC BY-SA 4.0
| null |
2023-04-24T17:00:39.033
|
2023-04-24T17:00:39.033
| null | null |
212798
| null |
613996
|
2
| null |
613871
|
1
| null |
TLDR: Glen_b is correct. What are known are the sample standard deviation 's' and the sample mean 'x bar'. The population standard deviation sigma and population mean mu are unknown.
That is the basic difference between mathematics and statistics. In math we are calculating known and exact quantities. In statistics, we estimate things we call parameters (e.g. mean and standard deviation) of an inaccessible population that are of course unknown, based on an accessible sample. This is because for most problems in science, social science, economics, and especially behavioral and health sciences, population parameters are generally unknown and the data we need to find them are impractical to obtain. (Think the average weight of all men in a given country or average size of all stars in the universe or how often the average person uses profanity)? It is not practical to get the data we need to calculate that.
What is practical is to take fairly selected representative sample to estimate a range for where the population's parameters we are estimating probably falls and how likely that parameter is outside that range. We consider a few things to do this:
- The sample mean
- sample standard deviation (a measure of how inconsistent the data is). Low standard deviation means we may be able to put the parameter in a narrower range and / or have higher confidence the parameter is in that range.
- sample size (More samples may mean we can have a narrower range and / or higher confidence the parameter is in that range.)
- overall distribution of the data
Most ways to extrapolate a range and confidence for a population parameter that are taught in an introductory stats class will presume that the data are normally distributed, and if the data are not, it won't give you a correct range and confidence.
This means that examples equally above and below the mean are equally common, and most examples are close to the mean while examples far from the mean are unusual in the data you have.
For example, you may get incorrect extrapolations if you have a data set of the weights of citizens of a country where obesity is common. The data will be skewed to the right relative to that bell curve. In other words, there will be a long tail of values far above the mean rather than a symmetric fall-off on both sides.
| null |
CC BY-SA 4.0
| null |
2023-04-24T17:01:57.180
|
2023-04-24T20:24:08.430
|
2023-04-24T20:24:08.430
|
134553
|
134553
| null |
613998
|
1
| null | null |
0
|
33
|
I was reading briefly about the field of EVT - extreme value theory, and the associated distributions that arise from modeling the maximum of a finite sample. The nature of the data and estimation problems that this field treats is not quite clear to me. Does it concern deriving the exact distribution of a maximum (or minimum), and/or does it concern analyzing samples that are maxima or minima from unobserved processes?
The practical need for this field is crystal clear. What I don't understand is why this should be considered a theoretically distinct field. On one hand, if the density of a distribution is known, one can trivially derive the density of a maximum or minimum. On the other, if one observes the extreme values of a sequence of independent processes (like maximum daily rainfall over consecutive years in the Texas flats), intuitively, one would again use basic probability laws to identify a method for estimating the density function of a single observation, and again derive the density of an extreme for a given sample of size $n$.
Basically, I don't see an application of EVT that doesn't require an estimate or assumption regarding the density of the whole sample. Have I missed a scenario that should be considered?
|
Is modeling the extreme value of a distribution a basic probability result?
|
CC BY-SA 4.0
| null |
2023-04-24T17:13:17.563
|
2023-04-24T17:13:17.563
| null | null |
8013
|
[
"probability",
"estimation",
"extreme-value",
"order-statistics"
] |
614000
|
2
| null |
572329
|
1
| null |
[The sklearn documentation discusses calibration.](https://scikit-learn.org/stable/auto_examples/calibration/plot_calibration_curve.html#sphx-glr-auto-examples-calibration-plot-calibration-curve-py) One of the example models (in orange) is a naïve Bayes model that has a descending calibration curve, meaning that observations with larger estimated probabilities of occurrence actually occur less often.
[](https://i.stack.imgur.com/yv4od.png)
A member of this community has posted an [example](https://stats.stackexchange.com/questions/570095/how-to-calibrate-models-if-we-dont-have-enough-data) where the same phenomenon occurs but is even more visually extreme.
[](https://i.stack.imgur.com/ALnHt.png)
Those seem to be examples where the rankings are wrong.
| null |
CC BY-SA 4.0
| null |
2023-04-24T17:31:29.673
|
2023-04-24T18:08:17.597
|
2023-04-24T18:08:17.597
|
247274
|
247274
| null |
614001
|
2
| null |
570141
|
1
| null |
Brier score can be [decomposed](https://en.wikipedia.org/wiki/Brier_score#Decompositions) into measures of calibration and discrimination. Calibration describes the extent to which predicted probabilities align with true event occurrence. That is, if an event that is predicted to happen with probability $0.5$ actually happens $90\%$ of the time, the calibration is poor. Discrimination describes the extent to which model predictions for the two categories can be separated, and the Brier score does well here when the predicted distributions for the two categories are easy to separate (hence the relationship to the ROC AUC discussed in the link).
You have a poor Brier score despite good calibration. This must mean that the ability for a model to discriminate between the two categories is poor.
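A rough sketch of that decomposition on binned forecasts (simulated, well-calibrated predictions; the decomposition is only approximate unless forecasts are constant within bins):
```
set.seed(1)
p <- runif(1000)           # predicted probabilities
y <- rbinom(1000, 1, p)    # outcomes drawn so the forecasts are well calibrated

brier <- mean((p - y)^2)

bins  <- cut(p, breaks = seq(0, 1, 0.1), include.lowest = TRUE)
n_k   <- tapply(y, bins, length)
f_k   <- tapply(p, bins, mean)    # mean forecast per bin
o_k   <- tapply(y, bins, mean)    # observed frequency per bin
o_bar <- mean(y)

reliability <- sum(n_k * (f_k - o_k)^2) / length(y)    # calibration term (smaller is better)
resolution  <- sum(n_k * (o_k - o_bar)^2) / length(y)  # discrimination term (larger is better)
uncertainty <- o_bar * (1 - o_bar)

c(brier = brier, decomposition = reliability - resolution + uncertainty)
```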
| null |
CC BY-SA 4.0
| null |
2023-04-24T17:38:22.427
|
2023-04-24T17:38:22.427
| null | null |
247274
| null |
614002
|
2
| null |
613985
|
2
| null |
>
what I measured (CFUs) is only a proxy for ACTUAL population size...
That's true of almost all technical measurements. A chemical concentration might be estimated by evaluating the optical absorbance at a particular wavelength. Expression of messenger RNA (mRNA) from a gene might be estimated by reverse-transcription into complementary DNA (cDNA) followed by a real-time quantitative polymerase chain reaction that is then monitored by the fluorescence of a probe.
In that respect, CFUs might be one of the assays closest in kind to what is most directly of interest. So long as the plating doesn't kill any bacteria, it provides a count that can be back-calculated to estimate the number of bacteria in the culture from which they were plated.
>
it is not quite clear to me whether they are generated by a Poisson process, where events happens with a certain constant probability per unit time...
The Poisson distribution is not limited to events occurring over time. It can be applied to any event that is rare over some extensive property: for example, few animals per unit area, or few cells per milliliter of volume.
The latter (few cells per milliliter, spread out to be a few cells per unit surface area) are how you think about using a Poisson distribution to describe your CFU data. You have taken a sample of the full bacterial culture and diluted it until it has only a small number of bacteria in the volume that you apply to the Petri dish. You then quickly spread out that volume over the entire surface of the dish so that no 2 bacteria are close to each other. Compared to the hour or so typical of a bacterial cell cycle, the few seconds needed for the dilution and spreading mean that any continuing replication of cells does not substantially affect the results.
The exponential growth in numbers over time comes into play after you have spread out the individual bacteria. You can't see a micrometer-scale single bacterium with the naked eye. After overnight culture on an adequately rich medium, the exponential growth means that each individual bacterium has produced a visible colony to count, separate from other colonies if you have the right dilution.
A Poisson model is thus an appropriate choice for such data. To get estimates for the original bacterial cultures from which you sampled, you should use an appropriate log-offset to represent the effective volume of original culture that you spread over the dish. [This page](https://stats.stackexchange.com/q/175349/28500) and its links explain offsets in the context of time as the extensive variable, but the principle applies similarly to volumes or other extensive variables.
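A minimal sketch of such a model (simulated counts and made-up variable names; the offset is the effective volume of the original culture that ends up on each plate):
```
set.seed(1)
cfu <- data.frame(
  temp       = factor(rep(c(25, 30), each = 6)),
  vol_plated = rep(c(1e-6, 1e-7), 6)   # mL of original culture per plate, after dilution
)
lambda <- ifelse(cfu$temp == 25, 5e7, 2e8) * cfu$vol_plated  # true densities (CFU/mL)
cfu$colonies <- rpois(nrow(cfu), lambda)

fit <- glm(colonies ~ temp + offset(log(vol_plated)), family = poisson, data = cfu)
exp(coef(fit))  # intercept ~ CFU/mL at 25 C; temp30 ~ fold-change in density at 30 C
```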
| null |
CC BY-SA 4.0
| null |
2023-04-24T17:47:34.120
|
2023-04-24T17:47:34.120
| null | null |
28500
| null |
614003
|
1
| null | null |
2
|
117
|
If I generate a uniform distribution of $X$ ranging from $0$ to $2\pi$ (so $X\sim U(0, 2\pi)$), then the probability distribution of $1-\cos(X)$ appears to be this function:
[](https://i.stack.imgur.com/MkUlI.png)
Is this an analytical function that I can work with directly? And if so, how do I prove this?
|
Analytical form of a Histogram of $1-\cos(X)$, $X\sim U(0, 2\pi)$
|
CC BY-SA 4.0
| null |
2023-04-24T17:49:59.437
|
2023-04-25T14:53:20.787
|
2023-04-24T19:00:36.037
|
247274
|
89265
|
[
"distributions",
"histogram"
] |
614005
|
1
| null | null |
1
|
35
|
I am struggling to choose the correct test procedure for the following data and hypothesis:
Variable A: 3 levels, fixed effect, signifies the grammatical gender of a noun
Variable B: 2 levels, fixed effect, signifies gender of the participant who used that noun
Variable C: 30 unique values, random effect, signifies different nouns which each have a certain grammatical gender
response variable Y: ordinal values (1-5)
So there are 10 instances of C per level of A with various numbers of observations each.
The Null hypthesis are that A has no effect on Y, B has no effect on Y, and that there is no interaction between A and B.
Can I model this via ordinal regression with mixed effects, using random slopes for C, and testing for significance of A and B and the interaction? Or should I use some different hypthesis testing, maybe a non-parametric test?
Kind regards,
Josephine
|
Is ordinal regression with mixed effects in this design the correct choice?
|
CC BY-SA 4.0
| null |
2023-04-24T17:54:53.513
|
2023-04-24T17:54:53.513
| null | null |
386232
|
[
"regression",
"hypothesis-testing",
"ordinal-data"
] |
614006
|
1
| null | null |
1
|
25
|
I'm using the Granger test from package lmtest in R
This is the test
```
grangertest(dados$Publico_Total ~ dados$Classificacao)
```
I receive this result
```
Granger causality test
Model 1: dados$Publico_Total ~ Lags(dados$Publico_Total, 1:1) + Lags(dados$Classificacao, 1:1)
Model 2: dados$Publico_Total ~ Lags(dados$Publico_Total, 1:1)
Res.Df Df F Pr(>F)
1 64
2 65 -1 3.1352 0.08138 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
How I should interpret the results? I'm new doing this kind of est and not familiarized with the output
|
Help to understand the result in Granger Test
|
CC BY-SA 4.0
| null |
2023-04-24T18:07:42.753
|
2023-04-25T07:35:58.610
|
2023-04-25T06:19:57.037
|
53690
|
386454
|
[
"r",
"interpretation",
"granger-causality"
] |
614007
|
2
| null |
557974
|
1
| null |
Assumption: You are developing in Python.
In the true StackOverflow fashion, I suggest not using correlation and using [Mutual Information](https://www.kaggle.com/code/ryanholbrook/mutual-information) instead, as it captures both linear and non-linear relationships and is more general than correlation coefficients. Note that mutual information calculation is comparatively computationally intensive.
- Mutual Information:
Library: scikit-learn
mutual_info_score: Used for measuring mutual information between two categorical variables.
mutual_info_regression: Used for measuring mutual information between a continuous target variable and one or more continuous or categorical predictor variables, typically in the context of regression problems.
mutual_info_classif: Used for measuring mutual information between a categorical target variable and one or more continuous or categorical predictor variables, typically in the context of classification problems.
Nevertheless, here are some methods for detecting correlation/association/dependence between variables:
- Binary & Continuous: Point-biserial correlation coefficient -- a special case of Pearson's correlation coefficient, which measures the linear relationship's strength and direction.
Library: SciPy (pointbiserialr)
- Binary & Binary: Phi coefficient or Cramér's V -- based on the chi-squared statistic and measures the association between them.
Library: scikit-learn (matthews_corrcoef) and custom function for Cramér's V with pandas
- Categorical & Continuous: ANOVA -- tests if their mean is significantly different across the categories.
Library: SciPy (f_oneway)
- Categorical & Categorical: Chi-squared test or Cramér's V -- measure of association between them.
Library: SciPy (chi2_contingency) and custom function for Cramér's V with pandas
| null |
CC BY-SA 4.0
| null |
2023-04-24T18:09:51.463
|
2023-04-24T18:09:51.463
| null | null |
141021
| null |
614008
|
2
| null |
614003
|
2
| null |
You can get the result by integrating against a Dirac delta function: $p(z)=\int_{\mathbb{R}} p(x)\, \delta(z-(1-\cos(x)))\,dx$. [https://en.m.wikibooks.org/wiki/Probability/Transformation_of_Probability_Densities](https://en.m.wikibooks.org/wiki/Probability/Transformation_of_Probability_Densities)
| null |
CC BY-SA 4.0
| null |
2023-04-24T18:16:17.453
|
2023-04-24T18:16:17.453
| null | null |
298651
| null |
614010
|
2
| null |
613718
|
5
| null |
As you pointed out, each observation $(x_i,t_i)$ gives you information on the value of the cumulative distribution at the point $x_i$, however notice that the probability of this observation is $\Phi(x_i)^{t_i-1}(1-\Phi(x_i))$ (the first $t-1$ offers rejected and the last one accepted).
So, without additional assumption on $\Phi$ (such as smoothness), you can only directly estimate from the data the set of values $\{\Phi(x_i)\}$. Now there are many possible methods of statistical inference, but most are based on the likelihood function, so that's a good place to start. Denote $p_i = \Phi(x_i)$ the set of unknown parameters and assume that the $x_i$'s are sorted such that $0 \le p_1 \le p_2 \le ...\le p_n \le 1$. The likelihood function is then
$$\log \mathcal L(p_1,...,p_n) = \sum_{i=1}^n (t_i-1)\log p_i + \log(1-p_i).$$
The constraint on the $p_i$'s being monotonically increasing makes this a slightly less trivial problem, but we can still find the [maximum likelihood estimators](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation) (MLE) with a bit of work. The unconstrained MLE's are easily found by setting the respective derivatives of the likelihood to zero:
$$ \frac{\partial \log \mathcal L}{\partial p_i} = \frac{t_i - 1}{p_i} - \frac{1}{1-p_i}=0 $$
which gives
$$ \hat p_i = \frac{t_i-1}{t_i}.$$
If this set of MLE's satisfy the constraint $0 \le \hat p_1 \le \hat p_2 \le ... \le \hat p_n \le 1$ then we are done. Otherwise, since there is only a single local maximum at the interior of the parameter space, the global maximum must be on the boundary, namely there must be some $i$ such that $\hat p_i = \hat p_{i+1}$. One can test all possible $n$ pairs, but it is quite clear the highest value of the likelihood will be achieved by applying this to a pair that is in the "wrong" order. Applying this additional constraint only affects the terms in the likelihood involving $p_i$ and $p_{i+1}$ so the equation for the MLE becomes:
$$ \frac{\partial \log \mathcal L}{\partial p_i} = \frac{t_i + t_{i+1}-2}{p_i} - \frac{2}{1-p_i}=0 $$
Which is, understandably, the same equation we got before just with the average $(t_i+t_{i+1})/2$ replacing $t_i$.
Repeating this procedure for all out of order pairs will give us the constrained MLE's. If there are three or more consecutive estimators that are out of order, the number of possibilities we need to test becomes larger. In general there are $2^n$ ways of choosing which consecutive pair of estimators are equal, so this might become non-feasible if all the estimators are completely out of order, but in that case we probably can't say much about $\Phi(x)$ anyway.
Finding the MLE's is just the tip of the iceberg. You may also want to estimate the uncertainty of the estimators, add assumptions on the shape on the distribution $\Phi$ and so on. Describing all methods of doing it will require a full course in statistics. The particular methods that are best suited to your case will depend on the nature of your assumptions and data, and what purpose you want to use it for.
UPDATE
As a simple example we can apply this to the same input data as given in @Ben's answer: $t=(2,7,6,12,15)$. There is one pair in the wrong order $(7,6)$ so we conclude that $p_2=p_3$, and we proceed by replacing those values with the average $6.5$ and simply calculating $(t-1)/t$ for the set $(2,6.5,6.5,12,15)$: this results in $\hat p = (0.5000, 0.8462, 0.8462, 0.9167, 0.9333)$, In complete agreement with @Ben's numerical calculation.
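A small R sketch of that procedure for the example (it handles isolated out-of-order pairs, as here; longer runs of violations need the fuller treatment described above):
```
pool_and_estimate <- function(t) {
  repeat {
    p   <- (t - 1) / t
    bad <- which(diff(p) < 0)                 # first adjacent pair in the wrong order
    if (length(bad) == 0) return(p)
    i <- bad[1]
    t[c(i, i + 1)] <- mean(t[c(i, i + 1)])    # replace both by their average
  }
}
pool_and_estimate(c(2, 7, 6, 12, 15))
# 0.5000000 0.8461538 0.8461538 0.9166667 0.9333333
```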
| null |
CC BY-SA 4.0
| null |
2023-04-24T18:24:15.813
|
2023-04-25T08:57:33.050
|
2023-04-25T08:57:33.050
|
348492
|
348492
| null |
614011
|
1
|
614016
| null |
1
|
30
|
I am working with a data set with the sequence identity (a value in [0,1] representing the conservation between sequences) of many genes for many bacterial strains. I would like to be able to draw conclusions about how statistically significant a value for one gene is by comparing it to the distribution of all genes for a given pair of strains. These distributions are strongly skewed towards 1.
While the sequences were identified using prediction algorithms that likely missed a few genes, we are operating under the assumption that almost all genes have been captured, and thus I have sequence identity values for the full population in question, not just a sample. In order to conclude that a given value is significant, are statistical tests needed here, or is comparing the value with the cumulative distribution to calculate the p-value sufficient?
|
Statistical significance in known population
|
CC BY-SA 4.0
| null |
2023-04-24T18:34:12.010
|
2023-04-24T19:36:16.513
| null | null |
386457
|
[
"p-value",
"population",
"bioinformatics",
"sequence-analysis"
] |
614012
|
1
| null | null |
2
|
65
|
Some studies report regression results and, when explaining them, say something like "a one standard deviation increase in the x variable increases the y variable by something". How do we run this model in statistical software? Do we simply do the Z-score transformation and re-run the original model again?
|
How do I calculate the change in the y variable for one std increase in the x variable?
|
CC BY-SA 4.0
| null |
2023-04-24T18:47:25.313
|
2023-04-24T19:28:48.060
|
2023-04-24T18:49:19.413
|
247274
|
355204
|
[
"regression",
"standard-deviation",
"z-score"
] |
614013
|
2
| null |
614012
|
1
| null |
The regression coefficient gives the change in $y$ for a one-unit change in the feature. That is, multiply the coefficient by $1$.
How many units is one standard deviation of that feature? Multiply by that value instead of $1$.
For instance, if you have a model $\hat y = 3 - 2x$, and $x$ has a standard deviation of $4$, when $x$ changes by one standard deviation, $x$ changes by $4$, so $\hat y$ changes by $2\times 4 = 8$.
| null |
CC BY-SA 4.0
| null |
2023-04-24T18:51:58.870
|
2023-04-24T18:51:58.870
| null | null |
247274
| null |
614015
|
2
| null |
614012
|
0
| null |
I see two easy ways:
- Standardization, i.e. z-transforming your explanatory variable
$x$ before running the regression will allow you to interpret its
coefficient as expected increase in $y$ if $x$ increases by one
standard deviation.
- Multiply the coefficient for (untransformed) $x$ by the standard deviation of $x$.
| null |
CC BY-SA 4.0
| null |
2023-04-24T19:28:48.060
|
2023-04-24T19:28:48.060
| null | null |
21054
| null |
614016
|
2
| null |
614011
|
2
| null |
I am not sure what the question is actually addressing, but I think you have the sequence identity values for all the genes of interest, so there is no need for statistical tests to understand the significance of a specific gene value. Instead, I guess you could, for example, simply analyze the distribution of these values to see where a particular gene stands compared to the others.
To check whether a gene's value is statistically significant, you can simply calculate the p-value by comparing it to the empirical cumulative distribution function (ECDF).
If the p-value is low, it suggests that the gene's sequence identity value is significantly different from the rest. This could mean that the gene is more or less conserved between the bacterial strains than the other genes.
I hope this clears it up.
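A minimal `R` sketch of this idea (the vector names and toy numbers below are assumptions for illustration only):
```
set.seed(1)
# identities for all genes of one strain pair (toy data, skewed towards 1)
ident <- c(runif(900, 0.9, 1), runif(100, 0.5, 0.9))
gene_value <- 0.62                 # identity of the gene of interest
Fn <- ecdf(ident)                  # empirical CDF of the population of genes
Fn(gene_value)                     # proportion of genes with identity <= this gene
```
A small value of `Fn(gene_value)` indicates the gene is unusually poorly conserved relative to the rest, while `1 - Fn(gene_value)` plays the analogous role on the upper tail.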
| null |
CC BY-SA 4.0
| null |
2023-04-24T19:36:16.513
|
2023-04-24T19:36:16.513
| null | null |
375558
| null |
614017
|
2
| null |
614003
|
5
| null |
We can evaluate the distribution function $F(y)$ of $Y := 1 - \cos(X)$ directly as follows.
To begin with, note that as $X$ ranges over $[0, 2\pi]$, the range of $Y$ is from $0$ (achieved at $X = 0$ or $2\pi$) to $2$ (achieved at $\pi$). Therefore, for $y < 0$, $F(y) = 0$ and for $y \geq 2$, $F(y) = 1$.
For $y \in [0, 2)$ (it is helpful to draw the graph of $x \mapsto \cos(x)$ on $[0, 2\pi]$ to determine the region $\{x \in [0, 2\pi]: \cos(x) \geq 1 - y\}$. Also keep in mind that the domain of $x \mapsto \arccos(x)$ is $[-1, 1]$ with range $[0, \pi]$, so mirroring is needed for angles greater than $\pi$):
\begin{align}
& F(y) = P[1 - \cos(X) \leq y] = P[\cos(X) \geq 1 - y] \\
=& P[X \in [0, \arccos(1 - y)] \cup [2\pi - \arccos(1 - y), 2\pi]] \\
=& \frac{\arccos(1 - y)}{\pi}.
\end{align}
To summarize, the distribution of $Y$ is given by
\begin{align}
F(y) = \begin{cases}
0 & y < 0, \\
\frac{\arccos(1 - y)}{\pi} & 0 \leq y < 2, \\
1 & y \geq 2.
\end{cases}
\end{align}
Taking derivative of $F$ yields the pdf of $Y$:
\begin{align}
f(y) = \begin{cases}
\frac{1}{\pi\sqrt{1 - (1 - y)^2}} & 0 < y < 2, \\
0 & \text{ otherwise.}
\end{cases}
\end{align}
A graph of $f$ looks as follows, which matches the histogram you simulated. As @Sycorax pointed out in the comment, this is [Arcsine distribution](https://en.wikipedia.org/wiki/Arcsine_distribution) with support $(0, 2)$.
[](https://i.stack.imgur.com/7bA9F.png)
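For completeness, a minimal `R` sketch that overlays this density on a simulated histogram (it simply reproduces the comparison described above):
```
set.seed(3)
x <- runif(1e5, 0, 2 * pi)
y <- 1 - cos(x)
hist(y, breaks = 100, freq = FALSE, xlab = "y", main = "Y = 1 - cos(X)")
curve(1 / (pi * sqrt(1 - (1 - x)^2)), from = 0.001, to = 1.999,
      add = TRUE, col = "red", lwd = 2)
```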
| null |
CC BY-SA 4.0
| null |
2023-04-24T19:40:39.567
|
2023-04-25T14:53:20.787
|
2023-04-25T14:53:20.787
|
20519
|
20519
| null |
614018
|
2
| null |
614006
|
2
| null |
If I look at the outputs generated, I can offer some inference, but do cross-check just to be sure:
- Residual degrees of freedom for Model 1 and Model 2: Model 1 has 64 residual degrees of freedom, while Model 2 has 65. The difference in residual degrees of freedom between Model 1 and Model 2 is therefore -1 (64 - 65).
- The F-statistic for the Granger causality test is 3.1352; this is the test statistic used to compare the two models, and its p-value is 0.08138.
- Pr(>F): the p-value associated with the F-statistic, here 0.08138. If I take the threshold to be 0.05, then we cannot reject the null hypothesis, since there is not enough evidence to conclude that Classificacao Granger-causes Total.
- Another thing I am worried about is that the p-value is only marginally significant at the 0.1 level. This is generally considered a weak level of significance, and you should interpret it with caution.
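For reference, output of this form is typically produced by a call along the following lines (the data and lag order here are made-up placeholders; your actual series and settings will differ):
```
library(lmtest)
set.seed(5)
Total         <- as.numeric(arima.sim(list(ar = 0.5), n = 70))
Classificacao <- as.numeric(arima.sim(list(ar = 0.3), n = 70))
# "Model 1 vs. Model 2" comparison as in the output discussed above
grangertest(Total ~ Classificacao, order = 1)
```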
| null |
CC BY-SA 4.0
| null |
2023-04-24T19:48:22.413
|
2023-04-25T07:35:58.610
|
2023-04-25T07:35:58.610
|
375558
|
375558
| null |
614019
|
2
| null |
613965
|
3
| null |
First, let's recall the necessary formulas (with a slight change in notation with respect to the post). As a bonus to your query, I'm considering also prediction intervals. Let $\mu_f = \beta_1 + \beta_2 x_f$ be the conditional average of $Y$ at $X = x_f$. An estimator for this is $\hat Y_f = \hat \beta_1 + \hat \beta_2 x_f$. Thus $\hat Y_f = \bar Y + \hat\beta_2(x_f-\bar x)$.
Under the linear regression assumptions, we have the pivot
$$
\frac{\hat Y_f-\mu_f}{S\sqrt{\frac{1}{n}+\frac{(x_f-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}}}\sim t_{n-2},\tag{*}
$$
where $S^2 = \frac{1}{n-2}\sum (Y_i-\hat\beta_1-\hat\beta_2x_i)^2$ is the corrected residual variance.
From (*), we can build a confidence interval of $1-\alpha$ for $\mu_f$ as
$$\left[\hat Y_f \pm t_{n-2, 1-\alpha/2}S\sqrt{\frac{1}{n}+\frac{(x_f-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}}\right].$$
On the other hand, the prediction interval for $\mu_f$ is given by
$$\left[\hat Y_f \pm t_{n-2, 1-\alpha/2}S\sqrt{1+\frac{1}{n}+\frac{(x_f-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}}\right].$$
The following `R` code illustrates how to compute all these quantities and combine them to obtain the desired plot. Instead of computing the necessary quantities "by hand" using the above expression for the confidence/prediction interval, I'll show how to do it by taking advantage of suitable `R` functions, e.g. `predict`.
```
# fit the model
my_lm <- lm(mpg~wt, data = mtcars)
# generate some new data
new_data = data.frame(wt = seq(1,6,len=length(mtcars$wt)))
# compute the confidence interval for the regression line
ci <- predict(my_lm, newdata = new_data, interval = "confidence")
# compute the prediction interval
ci_pred <- predict(my_lm, newdata = new_data, interval = "prediction")
# combine everything
plot(mpg~wt, data = mtcars, xlim = range(new_data$wt),
ylim =range(ci_pred))
matlines(new_data$wt, ci, lty = c(1,2,2), col=c(1,1,1))
matlines(new_data$wt, ci_pred[,-1], lty = c(3,3), col=c(2,2))
```
[](https://i.stack.imgur.com/42HJw.png)
| null |
CC BY-SA 4.0
| null |
2023-04-24T19:50:47.787
|
2023-04-24T19:57:27.237
|
2023-04-24T19:57:27.237
|
56940
|
56940
| null |
614021
|
1
| null | null |
3
|
71
|
I have created a Poisson regression model with robust error variance ([https://academic.oup.com/aje/article/159/7/702/71883](https://academic.oup.com/aje/article/159/7/702/71883)) to calculate relative risks.
This is the Poisson regression model:
```
glm.poisson <- glm(new_anydiagnosis ~ gender + age + socioeco_status+dsdm_category+family_history+mental_before_hiv+relations+sexual_life+stigma_discrimination,
family = poisson(link=log),
data=finaldata)
```
To calculate the robust standard errors, I have used the package "sandwich" and "lmtest":
```
library("sandwich")
library("lmtest")
glm.robust <- coeftest(glm.poisson, vcov = sandwich)
```
And I get the following estimates:
```
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.6661719 0.6595846 -2.5261 0.011534 *
gender2 0.2785553 0.2568445 1.0845 0.278130
age -0.0039597 0.0087759 -0.4512 0.651846
socioeco_status2 -0.2993100 0.2941404 -1.0176 0.308880
socioeco_status3 -0.2157970 0.3404303 -0.6339 0.526149
socioeco_status5 -14.2642777 0.5838599 -24.4310 < 2.2e-16 ***
dsdm_category2 0.4010045 0.2631345 1.5240 0.127521
dsdm_category3 0.3511796 0.5260824 0.6675 0.504429
dsdm_category4 0.6580096 0.3078882 2.1372 0.032584 *
family_history2 -0.5735175 0.2567557 -2.2337 0.025502 *
mental_before_hiv2 0.8308926 0.5205853 1.5961 0.110472
mental_before_hiv3 1.4173136 0.5843978 2.4253 0.015298 *
relations2 0.0348879 0.2586927 0.1349 0.892721
relations3 27.5722112 1.6086563 17.1399 < 2.2e-16 ***
sexual_life2 0.5890264 0.2753578 2.1391 0.032425 *
sexual_life3 -13.7536832 0.7330256 -18.7629 < 2.2e-16 ***
stigma_discrimination2 0.7728856 0.2483147 3.1125 0.001855 **
stigma_discrimination3 -13.9276696 1.0337112 -13.4735 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
However, now I want to check if multicollinearity between the variables of the model exists. For this, I was considering using the variance inflation factor ("vif").
But when trying this, I do not get estimates of the "vif" for all of the parameters in the model. I only get one estimate instead of estimates of the vif for all variables of the model:
```
> vif(glm.robust)
Estimate Std. Error z value Pr(>|z|)
11.381583 2.220626 10.048533 1.309115
```
The problem (I think) is that my glm.robust model is not defined as a model, but regarded as a vector (?), since it is defined based on the "coeftest".
Do you know how I can test for multicollinearity in the Poisson regression model with robust error variance?
I was also considering finding the "vif" for the parameters of the original Poisson model (glm.poisson), but as this model does not use the robust error variance, I am not sure if it would be fine to do.
I am not at all experienced in using Poisson regression, so maybe I am missing something.
Thanks for your help!
|
How to check for multicollinearity in Poisson regression model with robust error variance
|
CC BY-SA 4.0
| null |
2023-04-24T13:07:43.447
|
2023-04-25T14:29:11.943
| null | null | null |
[
"r",
"robust",
"multicollinearity"
] |
614022
|
1
| null | null |
2
|
37
|
I am trying to use the Akaike Information Criterion with the small sample correction (AICc) as a method for determining how many data points to use in a linear approximation of a non-linear function; the goal is to make an extrapolation, with error bars, from the end of my data set. In working on this project I ran into a problem where the AICc does not seem to be invariant with respect to the scale of the y-variable, which I had expected it to be. By this, I mean that if I take some non-linear function, fit a series of linear regressions with different numbers of data points, and plot the AICc scores for these regressions, I get a different curve than if I take the same data set and multiply the values by, say, 10. Obviously, the magnitude of the two AICc curves should be different, but I was surprised that the minimums of the two curves are at different locations.
My thought in using AICc to determine the ideal number of data points to use in the linear approximation is that it could balance the information added by including more data points with the information lost by fitting a linear regression to an increasingly non-linear set of data points. Typically, AICc is used to determine how many parameters to include in a model to avoid the risk of overfitting, but the un-simplified equation for AICc for least squares suggests that it can also be used to pick a number of data points to include in the model.
Here I will define AICc as
$$
AICc=2k
+n\ln\left(2\pi\right)
+n\ln\left(\widehat{\sigma}^2\right)
+\frac{RSS}{\widehat{\sigma}^2}
+2\frac{k^2+k}{n-k-1}
$$
where $k$ is the number of parameters in the model, including the noise parameter (for 1st order OLS $k=3$); $n$ is the number of data points in the regression (for AICc $n\ge5$); $RSS$ is the residual sum of squares, $RSS=(\widehat{y}-y)^2$; and $\widehat{\sigma}^2$ is the reduced chi-squared statistic, $\widehat{\sigma}^2=RSS / (n-2)$ for 1st order OLS (I know some people just use $\widehat{\sigma}^2=RSS / n$, which simplifies the AICc equation).
In the code below, I fit the right half of a bell curve ($y=\exp{(-x^2)}$) with a series of linear regressions using a different number of data points, starting at the right tail and progressively looking at more data points with smaller $x$ values; this is the same manner in which I would consider linear approximations to my actual data set. To simplify sharing this MWE I have used the linear regression library from `statsmodels` which I believe uses the full AIC equation but does not include the small sample correction (AIC vs AICc). The code has the ability to include noise in the data (the real data I am working with has noise), but I have set the noise level to 0 so that the fundamental behavior is more obvious. I ran the same fitting sequence twice, once calculating the y-values with the base function, and a second time where I multiply these y-values by some arbitrary coefficient, in this case 10.
```
#preamble
import numpy as np
from numpy.random import default_rng
import matplotlib.pyplot as plt
import statsmodels.api as sm
%matplotlib widget
#%matplotlib inline
#set x range and step size
x_min=0
x_max=10
x_step=.01
#set y values for magnitude coefficient, steepness coefficient, and noise level
#uses random range with fixed seed for consistent results
y_coeff=0.1
steepness_coeff=.1
y_noise_level=0
rng = default_rng(0)
#creates range of x-values and evaluates the function
x=np.arange(x_min,x_max+np.spacing(x_max),x_step)
y=y_coeff * np.exp(-steepness_coeff * x ** 2) + rng.normal(loc=0,scale=y_noise_level,size=len(x))
#fits the data set with 1st order OLS models with varying number of data points
#y_mult_list is a multiplier that is put on the y-value right before the model is evaluated so that it includes noise
#store as a dictionary of lists of models where there is a list of models for each multiplier value
#and each model in list has a different number of data points
#use statsmodel because it is well constained and has AIC calculation built-in
y_mult_list=[1,10]
min_ols_points=5 #start at 5 b/c that is min for AICc
max_ols_points=len(x)
stats_models_dict={y_mult_list_val: [sm.OLS(endog=y_mult_list_val*y[-ols_points:],
exog=sm.add_constant(x[-ols_points:])).fit()
for ols_points in range(min_ols_points,max_ols_points+1)]
for y_mult_list_val in y_mult_list}
#computes AIC (not AICc) for the models in the dictionary
#stores AIC values in the same structure as the models dictionary
model_aic_dict={key: [model.aic for model in model_list]
for key, model_list in stats_models_dict.items()}
#creates figure and labels axes
fig1, ax1a = plt.subplots()
ax1a.set_title('AIC score for models with different number of data point')
ax1a.set_xlabel('number data points to look back at for OLS')
ax1a.set_ylabel('AIC score (y multiplier = %d)'%(y_mult_list[0]))
#creates secondary y-axis
ax1b = ax1a.twinx()
ax1b.set_ylabel('AIC score (y multiplier = %d)'%(y_mult_list[1]))
#plots AIC values for both series of models on the two axes
ax1a_obj=ax1a.scatter(range(min_ols_points,max_ols_points+1), model_aic_dict[y_mult_list[0]], color='b', label =y_mult_list[0])
ax1b_obj=ax1b.scatter(range(min_ols_points,max_ols_points+1), model_aic_dict[y_mult_list[1]], color='r', label =y_mult_list[1])
#merges legends for the different axes into a single legend
#https://stackoverflow.com/questions/14344063/single-legend-for-multiple-axes
axes_obj=[ax1a_obj, ax1b_obj]
ax1a.legend(axes_obj, [axis_obj.get_label() for axis_obj in axes_obj],
title='y multiplier value')
#reverse x-axis so that including more data points
#indicates you are looking back at more data
ax1a.set_xlim(ax1a.get_xlim()[::-1])
```
The output of this function is a plot that shows the AIC curves for the two sets of values. To highlight the fact that including more data involves looking further back into the data set I have reversed the x-axis.
[](https://i.stack.imgur.com/ZsQaj.png)
I assumed that AIC was scale-invariant, and that simply putting a coefficient in front of the input data would not materially affect the AIC curve. I was expecting these two curves to have different magnitudes, but I was surprised that the minimizer was not at the same number of data points. The AIC equation suggests that you can just as easily use it for determining the number of data points to include, $n$, as you can for the more traditional approach of ensuring your model is not overly complex, as measured by the number of fitted parameters $k$. Is this interpretation wrong? Am I violating some fundamental assumption of the AICc? Am I missing some scaling factor that is typically ignored in the more common application, but that is needed in the way I am using it?
I am also open to any other suggestions for an approach to selecting the number of data points to include in the linear approximation. I am working with projecting time series data, which an N-fold Cross-Validation approach is not typically suited for. I haven't tried a Generalized Cross-Validation (GCV) approach because I was not using regularization, but that would be fairly easy to implement (the Unbiased Predictive Risk Estimator (UPRE) would require an uncertainty estimate that I don't have). I could also try finding the minimum value of the reduced chi-squared statistic, but I haven't spent the time to consider if that is theoretically sound.
|
Is AIC scale invariant for problems concerning the number of data points in regression?
|
CC BY-SA 4.0
| null |
2023-04-24T20:04:25.717
|
2023-04-24T20:04:25.717
| null | null |
386446
|
[
"regression",
"machine-learning",
"model-selection",
"aic",
"scale-invariance"
] |
614023
|
2
| null |
613900
|
4
| null |
Mostly an expanded version of my comment. To
>
find if the mean difference between the current and starting salary of
employees is greater than 15,000
you have to apply a simple $t$-test for $H_0: \mu \leq 15k$ against $H_1:\mu>15k$ to the variable
`current income` - `starting income`
This would be a test where the rejection region is single-tailed and situated on the right tail.
>
I was told that paired t-test should always have a null of equal to
zero so I'm confused.
No, that's not right. Under the null goes whatever value is meaningful for the problem at hand$^{(*)}$.
(*) As per [Alexis](https://stats.stackexchange.com/users/44269/alexis)'s comment, the null and the alternative can also be written as
$H_0: \mu- 15k\leq 0 $ against $H_1:\mu-15k>0.$
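A minimal `R` sketch of this one-sided test (the paired salary vectors are simulated purely to show the call):
```
set.seed(4)
starting <- rnorm(50, mean = 40000, sd = 5000)
current  <- starting + rnorm(50, mean = 17000, sd = 6000)
t.test(current - starting, mu = 15000, alternative = "greater")
# equivalently: t.test(current, starting, paired = TRUE, mu = 15000, alternative = "greater")
```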
| null |
CC BY-SA 4.0
| null |
2023-04-24T20:11:56.003
|
2023-04-24T21:04:48.320
|
2023-04-24T21:04:48.320
|
56940
|
56940
| null |
614024
|
1
| null | null |
1
|
29
|
I have a staggered adoption of a policy across states. I expected the mean of my outcome to increase in the post period for all treated states; however, I do not observe this consistently. Furthermore, the t-statistic comparing pre/post means for most states is not significant.
Does this imply that there is no effect?
|
Is it necessary to observe a significant mean difference in post/pre for the treated group in DiD?
|
CC BY-SA 4.0
| null |
2023-04-24T20:15:32.213
|
2023-04-24T20:15:32.213
| null | null |
321435
|
[
"t-test",
"causality",
"difference-in-difference"
] |
614026
|
1
| null | null |
0
|
22
|
I have time series data with a variable that gives the electricity price week to week. The variable has severe autocorrelation: it starts in the high 90s at the first lag and then slowly decays. My question is this: I have tried decorrelating/removing the autocorrelation and thought I had done it with an ARIMA model, but I then realized that what I was checking the autocorrelation on was the residuals and not the actual series I will have to use, which still showed quite a bit of autocorrelation. Even when I set the lag to 52, i.e. year to year, it still showed severe autocorrelation. I have tried different types of transformations, such as Mahalanobis and pre-whitening, but everything gives the same result. Where do I go wrong, or is it even possible to decorrelate data like this? Or, hopefully, have I just misunderstood things, and since my residuals are now decorrelated, everything is fine with the data?
|
Is the data actually decorrelated?
|
CC BY-SA 4.0
| null |
2023-04-24T21:01:55.897
|
2023-04-24T21:15:23.447
|
2023-04-24T21:15:23.447
|
385918
|
385918
|
[
"data-transformation",
"autocorrelation"
] |
614028
|
1
| null | null |
0
|
11
|
Assume I have a regression where y and one of the independent variables (may) share a predictor. For example, suppose I want to check for a correlation between occurrences of heart attacks (age-dependent) and occurrences of cancer (also age-dependent). My aim is to check whether these are inherently connected, and (in this example) independently so of age.
Would the simple inclusion of age as a covariate be sufficient correction for this shared predictor? i.e.
`heart_attack ~ age + cancer`
or would it be in any way more correct to first run a regression of
`cancer ~ age`
and then use the residual in a second regression?
`heart_attack ~ age + cancer_residual`
For more context, I'm planning to run a multitude of regressions (both linear and logistic) and the common predictor may or may not apply for the independent variable.
Any insights are greatly appreciated.
|
Regression with common predictor for independent variable
|
CC BY-SA 4.0
| null |
2023-04-24T21:34:10.717
|
2023-04-24T21:34:10.717
| null | null |
386463
|
[
"regression",
"correlation",
"predictor"
] |
614029
|
1
| null | null |
2
|
16
|
Given 3 data points (-1,1), (0,0), and (1,1), I am asked to apply PCA and find the projected data points. Then calculate the variance of the projected data.
I applied eigenvalue decomposition and found the projected data points as -sqrt(2),0, sqrt(2) i.e., the 2d points are now projected onto 1d space. Should I use the population variance formula or the sample variance formula to calculate the variance?
|
PCA: Calculating variance of projected data - sample variance or population variance?
|
CC BY-SA 4.0
| null |
2023-04-24T21:36:55.710
|
2023-04-24T21:53:57.933
|
2023-04-24T21:53:57.933
|
386464
|
386464
|
[
"pca"
] |
614030
|
1
| null | null |
3
|
49
|
For a stochastic process $\{X_t\}_{t \geq 0}$ with a filtration $\left\{\mathcal{F}_t\right\}_{t \geq 0}$, where $\cdots \mathcal{F}_{t-1} \subseteq \mathcal{F}_{t} \subseteq \cdots \subseteq \mathcal{F}_{\infty}$ and $\mathcal{F}_t = \sigma\left(X_s, s \leq t\right)$ (i.e. $\mathcal{F}_t$ is the smallest $\sigma$-algebra with respect to which all the variables $X_s$, $s \leq t$, are measurable), let us consider a Gaussian process
\begin{equation}
Y_{t+1}=f(X_t, t+1)+\epsilon_{t+1}, \quad f(X_t,t+1) \sim \mathcal{G} \mathcal{P}\left(m_{t+1}(X_t), k_{t+1}\left(X_t, X^{\prime}_{t}\right)\right)
\end{equation}
e.g. consider a set of data $(Y_{t+1}, X_t)$ so that we would like to estimate mean function $f(X_t, t+1)$, in which $f(X_t, t+1)$ is the prediction at $t+1$ with input $X_t$.
I was wondering if the $f(X_t, t+1)$ is still a random under the filtration $\mathcal{F}_{t}$? e.g., can we have the conditional expectation $E(f(X_t, t+1) | \mathcal{F}_{t}) = m_{t+1}(X_t)$?
My guess is no, as there is no uncertainty after conditioning on the realization of $X_t$ at time $t$?
What confuses me is that, if I treat $f(X_t,t+1)$ as the prediction of the unknown mean function of $Y_t$, $$Y_{t+1} = g(X_{t}, t+1) + \epsilon_{t+1}$$
with $\hat{g}(X_{t}, t+1 | \mathcal{F}_{t} ) = f(X_t, t+1)$. In this case, can we have $E(f(X_t,t+1) | \mathcal{F}_{t}) = m_{t+1}(X_t)$?
|
Conditional Expectation for a Gaussian Process
|
CC BY-SA 4.0
| null |
2023-04-24T21:37:19.893
|
2023-04-25T12:41:13.480
|
2023-04-25T12:41:13.480
|
309576
|
309576
|
[
"conditional-probability",
"stochastic-processes",
"gaussian-process",
"conditional-expectation"
] |
614031
|
1
| null | null |
0
|
22
|
In a number of online articles (e.g. [1](https://towardsdatascience.com/understanding-gaussian-process-the-socratic-way-ba02369d804), [2](https://peterroelants.github.io/posts/gaussian-process-tutorial/)) I see the $x$ values over which a Gaussian Process is used written as $X$, e.g. the training points are $X$, test points are $X_*$, and (in the 1D case) the observed data or points to predict $Y = f(X)$.
However, my understanding is that we know the values of $x$ over which we have observed data and pre-define the $x$ values over which we want to predict $f(x)$. This makes me think $X$ is not a random variable, though of course $f(X)$ is.
Is it okay to write these values as $x$, $x_*$ and define the random variables $Y = f(x), \ Y_* = f(x_*)$? This is clearer for me, but I don't want to have misunderstood something important.
|
Random variable notation in Gaussian Process
|
CC BY-SA 4.0
| null |
2023-04-24T22:12:47.587
|
2023-04-24T22:12:47.587
| null | null |
336682
|
[
"normal-distribution",
"random-variable",
"gaussian-process",
"notation"
] |
614032
|
1
| null | null |
1
|
24
|
I am given the question: "What is the probability that both the mean is in its confidence interval for confidence level a and the variance is in its confidence interval for confidence level a?"
Note that these CIs are the t-distribution interval for the mean and the chisq-distribution interval for the variance. I am very confused by how to begin as I am under the impression that CIs are not probabilistic.
|
Probability that both the mean and sample variance are both covered by their respective confidence intervals?
|
CC BY-SA 4.0
| null |
2023-04-24T22:20:52.630
|
2023-04-24T22:20:52.630
| null | null |
386465
|
[
"probability",
"confidence-interval",
"variance",
"mean",
"joint-distribution"
] |
614033
|
2
| null |
239898
|
1
| null |
To supplement [cbeleites' answer](https://stats.stackexchange.com/a/240063/337906), here is a simulation experiment which provides evidence for this claim:
>
Coming from an applied field with very low sample sizes, I have the experience that also unsupervised pre-processing steps can introduce severe bias. In my field that would be most frequently PCA for dimensionality reduction before a classifier is trained.
## Methodology
The experiment procedure:
- Generate a sample of features and labels according to a linear model. The rank of these features is an input.
- Split the sample into 3 subsamples:
  - train: 100 observations for supervised training
  - test: an inputted number of observations for testing (this will range from 25 to 450)
  - extra: same number of observations as the test subsample, for unsupervised training.
- Fit a PCA on test features, apply it to train features and test features, train a linear model on PCA'd train features and labels, and compute the RMSE of this model on PCA'd test features and labels. Call this RMSE $\text{error}_{\text{test}}$. The number of PCA components is less than the effective rank of the features generated in (1), as is usually the case in practice.
- Same as (3) except that the PCA is fit on extra features. Call this RMSE $\text{error}_{\text{extra}}$.
$\text{error}_{\text{test}}$ and $\text{error}_{\text{extra}}$ are paired, as the supervised training and test sets are identical. The only difference is the source (but not the size) of unsupervised training data. The sources of randomness are the particular splits which determine the subsamples, so the experiment procedure will be repeated 300 times.
$\text{error}_{\text{extra}}$ is clearly an unbiased estimator of out-of-sample RMSE, as it's never trained on features or labels which depend on test set features or labels. It's unclear whether $\text{error}_{\text{test}}$ is unbiased, as it's trained on test set features (but not labels). If The Elements of Statistical Learning [1] is right—
>
initial unsupervised screening steps can be done before samples are left out . . . Since this filtering does not involve the class labels, it does not give the predictors an unfair advantage.
—then $\text{E}[\text{error}_{\text{extra}} - \text{error}_{\text{test}}] = 0$, i.e., there is no underestimation of out-of-sample RMSE despite (unsupervised) training on `test`.
## Results
(Code to reproduce the results of this experiment is [here](https://github.com/kddubey/stats-stackexchange/blob/main/train_on_test_features/train_on_test_features_pca.ipynb). If you don't have Python installed on your computer, but you have a Google account, then open [this link](https://drive.google.com/file/d/1Z-jRUTAtoDTjqSpgDn5_Ko4SJLgj-xke/view?usp=sharing) and click "Open in Google Colaboratory" at the top of the page.)
The first two plots demonstrate that the degree of underestimation may depend on the sample size: as the sample size increases, there's less underestimation.
[](https://i.stack.imgur.com/4WcAl.png)
[](https://i.stack.imgur.com/jAUY6.png)
Interestingly, there's no evidence of underestimation if the features are full rank (note that the scale of the y-axis is `1e-16`).
[](https://i.stack.imgur.com/HPctQ.png)
It's tempting to conclude that the amount of underestimation depends on how much the unsupervised training procedure helps. But [my question here](https://stats.stackexchange.com/q/611877/337906) provides some empirical evidence against that.
## References
- Hastie, Trevor, et al. The elements of statistical learning: data mining, inference, and prediction. Vol. 2. New York: springer, 2009.
| null |
CC BY-SA 4.0
| null |
2023-04-24T22:23:08.960
|
2023-04-25T07:01:58.313
|
2023-04-25T07:01:58.313
|
337906
|
337906
| null |
614034
|
2
| null |
184657
|
1
| null |
Here is the way I understood it; I hope it helps:
On-policy learning updates the policy currently in use while off-policy learning updates a different policy using the data collected from a different policy.
- On-policy learning is a type of RL that updates the policy being used to take actions as the agent interacts with the environment. Specifically, the agent learns by following the current policy and then updates the policy based on the rewards received from those actions. (It is often used in situations where the agent's exploration of the environment is limited and the learning must be done with the current policy).
- Off-policy learning updates a different policy than the one being used to take actions. This approach involves learning from the behavior of an "older" policy (or another one), while simultaneously interacting with the environment using a newer, improved policy (the one currently being learned). (The main advantage of off-policy learning is that it allows for greater exploration of the environment, which can lead to better policies.) Technically, the "book" in the chess example below corresponds to what is usually called a (replay) buffer.
Example: an agent playing a game of chess.
With on-policy learning, the agent would learn by playing the game using its current policy and then update its policy based on the rewards it receives (experience is sampled from the updated policy). But with off-policy learning, the agent might study a chess book to learn new strategies and then incorporate those strategies into its policy while still playing games using its original policy (experience is sampled from the "book" policy here).
I recommend to read this following article : [https://medium.com/@sergey.levine/decisions-from-data-how-offline-reinforcement-learning-will-change-how-we-use-ml-24d98cb069b0](https://medium.com/@sergey.levine/decisions-from-data-how-offline-reinforcement-learning-will-change-how-we-use-ml-24d98cb069b0)
| null |
CC BY-SA 4.0
| null |
2023-04-24T22:46:31.343
|
2023-04-24T22:46:31.343
| null | null |
366703
| null |
614035
|
1
| null | null |
1
|
21
|
For simple estimators (e.g. sample mean, OLS), we typically establish asymptotic normality by appealing to some type of Central Limit Theorem
$$ \sqrt{n}(\hat{\beta} - \mathbb{E}[\hat{\beta}]) \to^d \mathcal{N}(0, \Sigma) $$
If the estimator is asymptotically unbiased s.t. $\mathbb{E}[\hat{\beta}] = \beta$ is the parameter we care about, it is then straightforward to construct confidence intervals.
However suppose that $\mathbb{E}[\hat{\beta}] = \beta + \nu$, where $\nu$ is some non-zero bias term that does not tend to zero as $n \to \infty$. How would we perform inference in such a case?
---
An example is the kernel regression estimator, where it can be shown that the optimal choice of bandwidth (in the squared error sense) has an asymptotic bias. The solution in the literature seems to be to "undersmooth" the estimator, by choosing a bandwidth smaller than the optimal one, thereby removing the bias but at the expense of increasing variance (loss of efficiency). Are there other ways to handle inference on biased estimators?
|
Inference on asymptotically biased estimators
|
CC BY-SA 4.0
| null |
2023-04-24T22:57:47.723
|
2023-04-24T22:57:47.723
| null | null |
269723
|
[
"mathematical-statistics",
"econometrics",
"asymptotics"
] |
614036
|
1
| null | null |
0
|
11
|
I'm familiar with inter-annotator agreement (or inter-rater reliability) metrics for data that has categorical annotations, but what about a set of data samples that are ranked by several annotators? Would coefficients like Cohen's kappa still apply here?
The specific context that I'm talking about is within machine learning (text generation to be precise). For a set of items, several machine learning models generate explanations about what the items are in textual form. A set of human annotators then rank the explanations made by the different models.
There seems to be some subjectivity and disagreement among the annotators, but I'm wondering if there's a way to quantify that.
|
Is there such thing as inter-annotator agreement for a set of rankings?
|
CC BY-SA 4.0
| null |
2023-04-24T23:26:30.323
|
2023-04-24T23:26:30.323
| null | null |
211707
|
[
"agreement-statistics"
] |
614037
|
1
|
614087
| null |
1
|
55
|
What is the purpose of the `lower` element in the `control` argument of the `factanal` function?
It says "The lower bound for uniquenesses during optimization" from the documentation. I realized that the loading scores from a factor (ex. 3rd factor) in the latent model can be extremely different depending on if lower is low (0.005) or high (0.6), so I have had trouble deciding how many latent factors I should include in my model. It seems that when `lower` is higher, the loading score of the latter factor gets lower, but I do not know why.
|
factanal() argument: lower
|
CC BY-SA 4.0
| null |
2023-04-25T00:15:12.780
|
2023-04-25T19:40:13.327
|
2023-04-25T10:57:28.087
|
56940
|
315073
|
[
"r",
"factor-analysis",
"dimensionality-reduction"
] |
614039
|
2
| null |
458197
|
0
| null |
This sounds like a job for an interaction term and a chunk test.
For two variables $x_1$ and $x_2$ in each regression, the math is as follows; extending to more than just two variables is straightforward. Let $g(p) = \log\left(\dfrac{p}{1 - p}\right)$. Let $x_3$ be an indicator variable that takes $0$ for one group and $1$ for the other.
$$
g\left(\mathbb E\left[y_i\right]\right) = \beta_0 + \beta_1x_{i1} + \beta_2x_{i2} + \beta_3x_{i3} +
\beta_4x_{i1} x_{i3} + \beta_5x_{i2} x_{i3}
$$
The model includes the original two features, an indicator variable for the group, and interactions between the group indicator and the original two variables.
Nested within such a model, by setting $\beta_3=\beta_4=\beta_5=0$, is the following model that you would have used for each group separately.
$$
g\left(\mathbb E\left[y_i\right]\right) = \beta_0 + \beta_1x_{i1} + \beta_2x_{i2}
$$
You can fit the above model to each group, or you can fit the first model to both groups simultaneously. Then the $x_3$ variable acts like an on/off switch, showing how the regression parameters differ between the two groups.
To test if the groups differ in intercept, test $\beta_3=0$. To test if the groups differ in the slope on $x_1$, test $\beta_4=0$. To test if the groups differ in the slope on $x_2$, test $\beta_5=0$. You also can test $\beta_3=\beta_4=\beta_5=0$ to see if the groups differ in their regressions at all. This testing is no different from any other testing of logistic regressions.
Below, I demonstrate how to do this in `R`. The final line of `lmtest::lrtest` uses a likelihood ratio test of nested models to calculate a p-value, either for one coefficient (giving the difference between the parameters for each group regressed separately) or for the three coefficients involving $x_3$ (testing if group membership affects the regression, what I have learned to call a [chunk test](https://stats.stackexchange.com/questions/27429/what-are-chunk-tests)).
```
library(lmtest)
set.seed(2023)
N <- 1000
x1 <- runif(N, 0, 1)
x2 <- runif(N, 0, 1)
x3 <- rbinom(N, 1, 0.5)
z <- x1 - x2 + x3 - x1*x3 + x2*x3
p <- 1/(1 + exp(-z))
y <- rbinom(N, 1, p)
L_full <- glm(y ~ x1 + x2 + x3 + x1:x3 + x2:x3, family = binomial)
L0 <- glm(y ~ x1 + x2, family = binomial)
L4 <- glm(y ~ x1 + x2 + x3 + x1:x3, family = binomial)
L5 <- glm(y ~ x1 + x2 + x3 + x2:x3, family = binomial)
L3 <- glm(y ~ x1 + x2 + x3, family = binomial)
lmtest::lrtest(L0, L_full) # I get p = 0.00181 for the full chunk test
lmtest::lrtest(L0, L4) # I get p = 0.03512 for testing beta 4
lmtest::lrtest(L0, L5) # I get p = 0.9272 for testing beta 5
lmtest::lrtest(L0, L3) # I get p = 0.5902 for testing beta 3
```
| null |
CC BY-SA 4.0
| null |
2023-04-25T01:54:49.727
|
2023-04-25T01:54:49.727
| null | null |
247274
| null |
614040
|
1
| null | null |
2
|
42
|
I have a set of forty predictors, each of which is binary. I'm using elastic net logistic regression in addition to a random forest.
Is there any reason why you could not use binary inputs for a Keras neural network?
I can’t find anything in the documentation that indicates that this is not an option.
|
Binary input variables for a Keras Neural Network
|
CC BY-SA 4.0
| null |
2023-04-25T02:16:22.890
|
2023-04-26T20:45:40.500
|
2023-04-25T02:22:19.887
|
247274
|
138931
|
[
"machine-learning",
"neural-networks",
"binary-data",
"keras"
] |
614041
|
2
| null |
614040
|
1
| null |
Nothing statistical keeps you from doing this.
[I do not see much value to the nonlinearity](https://stats.stackexchange.com/a/611903/247274) that a neural network will introduce. However, a neural network might do a good job of capturing variable interactions, and those interactions could be key. My example [here](https://stats.stackexchange.com/a/613937/247274), for instance, gives an example where you need the interaction to distinguish between outcomes.
| null |
CC BY-SA 4.0
| null |
2023-04-25T02:27:07.393
|
2023-04-26T20:45:40.500
|
2023-04-26T20:45:40.500
|
247274
|
247274
| null |
614044
|
1
|
614192
| null |
2
|
65
|
I was thinking about the following question today:
Suppose I visit a university and want to determine the average height of a student at this university. Let's say I spend the whole week at the university and measure the height of every student on the entire list of students at the university (i.e. population). By the end of the week, I have measured every student on this list - I then take the average height from all these measurements.
In this case, I was lucky enough to have access to the population of students instead of a random sample of students. Therefore, in a theoretical sense, there should not be any "risk" or "uncertainty" associated with my average measurement.
However, in reality, there is likely always going to be some source of error. For instance, it's possible that I might have made some measurements incorrectly, I didn't notice that some students were wearing shoes thus adding to their height, perhaps in reality the list might only contain 99% of the students and some students were not on this list, etc.
Thus, in such instances, even though I believe I am dealing with the population, there still might be errors associated with my data - some of these errors might be related to sampling errors because I might be only dealing with 99% of the population whereas some of the errors might be caused by other reasons (e.g. experiment related, measurement error, etc.).
This leads me to my question: In such cases when you think you are dealing with the entire population, does it still make sense to calculate the Confidence Interval for what you believe to be the population estimate ... as doing this might serve to somehow add a useful level of uncertainty to your knowledge? Or would calculating a Confidence Interval in such an example still be meaningless and this sense of uncertainty would be both misleading and meaningless as Confidence Intervals do not "magically safeguard" your estimates from all possible sources of error?
Thanks!
References:
- Confidence interval for a proportion when sample = entire population?
|
Confidence Intervals for Population Estimates: Just In Case?
|
CC BY-SA 4.0
| null |
2023-04-25T03:41:54.893
|
2023-04-26T12:43:42.130
| null | null |
77179
|
[
"confidence-interval",
"mean"
] |
614045
|
2
| null |
570176
|
1
| null |
Because you are not splitting the data into training and testing sets. Bagging will always give a better in-sample fit because it includes all variables, while random forest gives better out-of-sample predictions.
| null |
CC BY-SA 4.0
| null |
2023-04-25T03:48:47.580
|
2023-04-25T03:48:47.580
| null | null |
386477
| null |
614046
|
1
| null | null |
1
|
35
|
I'm doing curve fitting, but my error is non-stationary. The variance decreases:

I'm looking for a signal in the noise (In this case at x=90, y=50).
I'd like to calculate the "standard error", and choose my signal to be the point with the largest "standard error".
How do I do this when the variance is changing?
I'd appreciate someone providing the correct terminology/title/flags.
|
How to calculate heteroskedastic standard errors
|
CC BY-SA 4.0
| null |
2023-04-25T03:53:20.053
|
2023-04-25T06:35:33.673
|
2023-04-25T06:35:33.673
|
356970
|
356970
|
[
"confidence-interval",
"stationarity",
"heteroscedasticity",
"curve-fitting"
] |
614047
|
2
| null |
613718
|
6
| null |
To examine the estimator for $\Phi$ we will treat $\mathbf{x}$ as a control variable, meaning that the auction house sets their own prices for each round of auctions. Without loss of generality, let us suppose that the values in this control vector are in non-decreasing order and let $\Phi_i \equiv \Phi(x_i)$ denote the corresponding unknown CDF values at the chosen points, so that we have a set of ordered unknown points $0 < \Phi_1 \leqslant \cdots \leqslant \Phi_n < 1$.$^\dagger$ This gives us the following likelihood function for the data:
$$\begin{align}
L_{\mathbf{x},\mathbf{t}}(\Phi)
&= \prod_{i=1}^n \text{Geom}(t_i| 1-\Phi(x_i)) \\[6pt]
&= \prod_{i=1}^n \Phi(x_i)^{t_i-1}(1-\Phi(x_i)) \\[6pt]
&= \prod_{i=1}^n \Phi_i^{t_i-1} (1-\Phi_i) \\[6pt]
\end{align}$$
As you can see, the likelihood depends on the distribution $\Phi$ only through the specific points used for the control variable, so without any additional structural assumption on the distribution, you can only learn about the distribution through inference about the CDF at these points.
It is possible to find the MLE by solving the multivariate optimisation problem that maximises the above function over the constrained space $0 < \Phi_1 \leqslant \cdots \leqslant \Phi_n < 1$. This is a constrained multivariate optimisation problem, which will typically require some transformation and numerical methods. Below I show how you can construct an algorithm to compute the MLE by writing the CDF points as a transformation of an unconstrained parameter vector, thereby converting the problem to an unconstrained optimisation.
Solving to obtain the MLE gives us an estimator of the form $\hat{\boldsymbol{\Phi}} = (\hat{\Phi}_1,...,\hat{\Phi}_n)$, and with some further work we can obtain the estimated asymptotic variance matrix for this estimator (and therefore obtain the standard errors of each of the elements). To estimate the CDF at points other than those used as control variables you could use interpolation, noting that this entails some implicit assumptions about the structure of the distribution (e.g., if you were to estimate the median using linear interpolation from the MLE then this would involve an implicit assumption that the CDF is linear between the relevant control points used in the interpolation). It is also possible to use alternative modelling methods where you assume a particular parametric distributional form for $\Phi$ and then estimate the parameters of the distribution using the MLE or another estimator.
---
Computing the MLE: As noted above, the MLE involves a constrained multivariate optimisation problem, and this can be solved by conversion to an unconstrained multivariate optimisation problem. To facilitate this analysis it is useful to write the parameters $\Phi_1,...,\Phi_n$ as transformations of an unconstrained parameter vector $\boldsymbol{\gamma} = (\gamma_1, ..., \gamma_n) \in \mathbb{R}^n$, given by:$^\dagger$
$$\Phi_i = \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)}.$$
This transformation allows us to write the likelihood function in terms of the parameter $\boldsymbol{\gamma}$ and obtain the MLE through this parameter. If we let $\bar{t}_n \equiv \sum_{i=1}^n t_i$ then we can write the likelihood function as:
$$\begin{align}
L_{\mathbf{x},\mathbf{t}}(\boldsymbol{\gamma})
&= \prod_{i=1}^n \bigg( \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg)^{t_i-1} \bigg( 1 - \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg) \\[6pt]
&= \prod_{i=1}^n \bigg( \frac{\sum_{r=1}^i \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg)^{t_i-1} \bigg( \frac{1+\sum_{r=i+1}^n \exp(\gamma_r)}{1+\sum_{r=1}^n \exp(\gamma_r)} \bigg) \\[6pt]
&= \frac{\prod_{i=1}^n (\sum_{r=1}^i \exp(\gamma_r))^{t_i-1} (1+\sum_{r=i+1}^n \exp(\gamma_r))}{(1+\sum_{r=1}^n \exp(\gamma_r))^{n \bar{t}_n}} \\[6pt]
\end{align}$$
and the corresponding log-likelihood is:
$$\begin{align}
\ell_{\mathbf{x},\mathbf{t}}(\boldsymbol{\gamma})
&= \sum_{i=1}^n (t_i-1) \log \bigg( \sum_{r=1}^i \exp(\gamma_r) \bigg)
+ \sum_{i=1}^n \log \bigg( 1 + \sum_{r=i+1}^n \exp(\gamma_r) \bigg) \\[6pt]
&\quad \quad \quad - n(1+\bar{t}_n) \log \bigg( 1+\sum_{r=1}^n \exp(\gamma_r) \bigg) \\[6pt]
&= \sum_{i=1}^n (t_i-1) \text{LSE}(\gamma_1,...,\gamma_i)
+ \sum_{i=1}^n \text{LSE}(0,\gamma_{i+1},...,\gamma_n) \\[6pt]
&\quad \quad \quad - n \bar{t}_n \text{LSE}(0, \gamma_1,...,\gamma_n). \\[6pt]
\end{align}$$
(The function $\text{LSE}$ here is the [logsumexp function](https://en.wikipedia.org/wiki/LogSumExp).) The partial derivatives of the log-likelihood (which are the elements of the score function) are:
$$\begin{align}
\frac{\partial \ell_{\mathbf{x},\mathbf{t}}}{\partial \gamma_k}(\boldsymbol{\gamma})
&= \sum_{i \geqslant k} \frac{(t_i-1) \exp(\gamma_k)}{\sum_{r=1}^i \exp(\gamma_r)}
+ \sum_{i < k} \frac{\exp(\gamma_k)}{1+\sum_{r=i+1}^n \exp(\gamma_r)}
- \frac{n \bar{t}_n \exp(\gamma_k)}{1+\sum_{r=1}^n \exp(\gamma_r)} \\[12pt]
&= \sum_{i \geqslant k} (t_i-1) \exp(\gamma_k - \text{LSE}(\gamma_1,...,\gamma_i))
+ \sum_{i < k} \exp(\gamma_k - \text{LSE}(0,\gamma_{i+1},...,\gamma_n)) \\[6pt]
&\quad \quad \quad - n \bar{t}_n \exp(\gamma_k - \text{LSE}(0, \gamma_1,...,\gamma_n)) \\[12pt]
&= \exp( \text{LSE}(\log(t_k-1) + \gamma_k - \text{LSE}(\gamma_1, ..., \gamma_k) , ..., \log(t_n-1) + \gamma_k - \text{LSE}(\gamma_1,...,\gamma_n)) ) \\[6pt]
&\quad \quad + \exp( \text{LSE}(\gamma_k - \text{LSE}(0,\gamma_{2},...,\gamma_n), ..., \gamma_k - \text{LSE}(0,\gamma_{k},...,\gamma_n)) ) \\[6pt]
&\quad \quad - n \bar{t}_n \exp(\gamma_k - \text{LSE}(0, \gamma_1,...,\gamma_n)) \\[6pt]
\end{align}$$
Setting all partial derivatives to zero and solving for $\boldsymbol{\gamma}$ gives a critical point of the log-likelihood function, which will give the MLE $\hat{\boldsymbol{\gamma}}$. From the invariance property of the MLE we can then easily obtain the MLE $\hat{\boldsymbol{\Phi}}$. In the code below we create a function `MLE.auction` which computes the MLE for any input vectors `x` and `t` (with some other arguments to control the optimisation). The function produces a list giving the MLE and some associated outputs relating to the optimisation.
```
MLE.auction <- function(x, t, gradtol = 1e-7, steptol = 1e-7, iterlim = 1000) {
#Check inputs x and t
if (!is.vector(x)) stop('Error: Input x should be a numeric vector')
if (!is.numeric(x)) stop('Error: Input x should be a numeric vector')
if (!is.vector(t)) stop('Error: Input t should be a numeric vector')
if (!is.numeric(t)) stop('Error: Input t should be a numeric vector')
if (any(t != as.integer(t))) stop('Error: Input t should contain only integers')
if (min(t) < 1) stop('Error: Input t should contain only positive integers')
if (length(x) != length(t)) stop('Error: Inputs x and t should have the same length')
#Check input gradtol
if (!is.vector(gradtol)) stop('Error: Input gradtol should be numeric')
if (!is.numeric(gradtol)) stop('Error: Input gradtol should be numeric')
if (length(gradtol) != 1) stop('Error: Input gradtol should be a single numeric value')
if (min(gradtol) <= 0) stop('Error: Input gradtol should be positive')
#Check input steptol
if (!is.vector(steptol)) stop('Error: Input steptol should be numeric')
if (!is.numeric(steptol)) stop('Error: Input steptol should be numeric')
if (length(steptol) != 1) stop('Error: Input steptol should be a single numeric value')
if (min(steptol) <= 0) stop('Error: Input steptol should be positive')
#Check input iterlim
if (!is.vector(iterlim)) stop('Error: Input iterlim should be numeric')
if (!is.numeric(iterlim)) stop('Error: Input iterlim should be numeric')
if (length(iterlim) != 1) stop('Error: Input iterlim should be a single numeric value')
if (iterlim != as.integer(iterlim)) stop('Error: Input iterlim should be an integer')
if (min(iterlim) <= 0) stop('Error: Input iterlim should be positive')
#Set preliminary quantities
ORD <- order(x)
x <- x[ORD]
t <- t[ORD]
n <- length(t)
tbar <- mean(t)
#Set negative log-likelihood function
NEGLOGLIKE <- function(gamma) {
TT <- matrixStats::logSumExp(c(0, gamma[1:n]))
T1 <- rep(0, n)
T2 <- rep(0, n)
for (i in 1:n) {
T1[i] <- matrixStats::logSumExp(gamma[1:i])
if (i < n) { T2[i] <- matrixStats::logSumExp(c(0, gamma[(i+1):n])) } }
NLL <- - sum((t-1)*T1) - sum(T2) + n*tbar*TT
#Set derivative
SS <- n*tbar*exp(gamma - TT)
S1 <- rep(0, n)
S2 <- rep(0, n)
for (k in 1:n) {
S1[k] <- exp(matrixStats::logSumExp(log(t[k:n]-1) + gamma[k] - T1[k:n]))
if (k > 1) { S2[k] <- exp(matrixStats::logSumExp(gamma[k] - T2[1:(k-1)])) } }
DERIV <- - S1 - S2 + SS
attr(NLL, 'gradient') <- DERIV
#Give output
NLL }
#Compute optima
OPT <- nlm(NEGLOGLIKE, p = rep(0, n),
gradtol = gradtol, steptol = steptol, iterlim = iterlim)
#Convert to MLE for phi
GAMMA.MLE <- OPT$estimate
MAXLOGLIKE <- -OPT$minimum
TTT <- matrixStats::logSumExp(c(0, GAMMA.MLE))
TT1 <- rep(0, n)
for (i in 1:n) { TT1[i] <- matrixStats::logSumExp(GAMMA.MLE[1:i]) }
PHI.MLE <- exp(TT1 - TTT)
MLE.OUT <- data.frame(x = x, t = t, MLE = PHI.MLE)
rownames(MLE.OUT) <- sprintf('Phi[%s]', 1:n)
#Give output
list(MLE = MLE.OUT, maxloglike = MAXLOGLIKE, mean.maxloglike = MAXLOGLIKE/n,
code = OPT$code, iterations = OPT$iterations,
gradtol = gradtol, steptol = steptol, iterlim = iterlim) }
```
We can test this function on the set of input data $\mathbf{x} = (4, 6, 12, 15, 21)$ and $\mathbf{t} = (2, 7, 6, 12, 15)$. As you can see from the output, the computed MLE respects the CDF ordering for the input values and it also tells you the maximised value of the log-likelihood function.
```
#Set data
x <- c(4, 6, 12, 15, 21)
t <- c(2, 7, 6, 12, 15)
#Compute MLE
MLE.auction(x = x, t = t)
$MLE
x t MLE
Phi[1] 4 2 0.5000001
Phi[2] 6 7 0.8461538
Phi[3] 12 6 0.8461538
Phi[4] 15 12 0.9166667
Phi[5] 21 15 0.9333334
$maxloglike
[1] -14.08348
$mean.maxloglike
[1] -2.816695
...
```
---
$^\dagger$ Here we use the simplifying assumption that $\Phi_1 > 0$ and $\Phi_n < 1$. It is simple to proceed without this simplifying assumption by allowing $\boldsymbol{\gamma} \in \bar{\mathbb{R}}^n$, which is the extended Euclidean space.
| null |
CC BY-SA 4.0
| null |
2023-04-25T03:54:09.250
|
2023-04-26T04:54:35.747
|
2023-04-26T04:54:35.747
|
173082
|
173082
| null |
614048
|
1
| null | null |
1
|
15
|
Let's say we have a variable X1 that measures the percentage of C in the total population, and X2, a variable that measures the growth rate of C over the period t to t+1. I wonder if it is correct to use both variables in the same linear regression. I am still studying statistics, so any explanation would be a real help for me! Thanks!
PS: for example, X1 is the mortality rate of men and X2 is the annual growth rate of men's mortality.
|
Can't figure out if it is correct to use these types of variables in linear regression
|
CC BY-SA 4.0
| null |
2023-04-25T04:00:38.087
|
2023-04-25T04:00:38.087
| null | null |
386480
|
[
"regression",
"linear"
] |