Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
616839 | 1 | null | null | 0 | 19 |
#### Question
I was just reading the article [here](https://link.springer.com/article/10.1007/s11336-023-09910-z) regarding the use of generalized additive latent and mixed models (GALAMMs), which are purportedly a form of multi-level structural equation model (SEM) that allows more flexible nonlinear components. However, I also recently discovered piecewise SEMs, which are discussed in some detail [here](https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.12512). In a piecewise SEM, it seems possible to construct a structural equation model with both random effects and splines by feeding either `lme4` or `mgcv` fitted regressions into a `piecewiseSEM` model fit. So on the surface, both of these methods, GALAMMs and piecewise SEMs, seem able to achieve the same thing (a GAMM-based SEM), albeit with different computational approaches.
Regarding piecewise SEMs, I am aware of some of the following pros and cons with this method:
- Relatively longer history of development/proofing
- Less need for larger samples than covariance-based SEM
- P-values from lme4 objects tend to be unstable, though nlme fits seem to circumvent that issue
- Lack of latent variable modeling, reciprocal relationships, or sophisticated correlation of errors
However, this is my only exposure to GALAMMs, and they are fairly new based on the article above. Some highlights of their strengths from this article include:
- An algorithm that makes estimation of large and complicated data fairly quick
- Simultaneous confidence bands seem to have too "low coverage"
- An ability to incorporate latent variables (unlike piecewise SEM)
My main question is this: what other pros and cons may be associated with each method? Given how technical the GALAMMs article is, it is not immediately clear to me what issues may arise with their use, and their infancy leads me to believe there are still some kinks to be worked out before they are fully implemented (which is cautiously advised by the authors anyway).
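For concreteness, here is the kind of piecewise SEM fit I was referring to above (a minimal sketch only; the variable names `y1`, `y2`, `x`, `group` and the data frame `dat` are placeholders, not taken from either paper):
```
library(lme4)
library(piecewiseSEM)

# Two mixed-effects submodels, fitted separately and then assembled into a piecewise SEM
m1 <- lmer(y1 ~ x + (1 | group), data = dat)
m2 <- lmer(y2 ~ y1 + x + (1 | group), data = dat)

psem_fit <- psem(m1, m2)
summary(psem_fit)  # path coefficients plus tests of directed separation
```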
#### Citations
- Lefcheck, J. S. (2016). PiecewiseSEM: Piecewise structural equation modelling in R for ecology, evolution, and systematics. Methods in Ecology and Evolution, 7(5), 573–579. https://doi.org/10.1111/2041-210X.12512
- Sørensen, Ø., Fjell, A. M., & Walhovd, K. B. (2023). Longitudinal modeling of age-dependent latent traits with generalized additive latent and mixed models. Psychometrika, 88(2), 456–486. https://doi.org/10.1007/s11336-023-09910-z
| What is the difference between a piecewise SEM versus a GALAMM for GAMM-based SEMs? | CC BY-SA 4.0 | null | 2023-05-24T22:04:23.307 | 2023-05-24T22:04:23.307 | null | null | 345611 | [
"regression",
"mixed-model",
"generalized-additive-model",
"piecewise-sem",
"galamm"
]
|
616841 | 5 | null | null | 0 | null |
#### Description
Piecewise structural equation modeling (SEM) uses a local estimation approach to SEM by estimating the data points themselves rather than the covariance structure of the data (the approach historically applied to SEM). In doing so, one can estimate a variety of flexible models, which include spline-based methods as well as random effects, in an automated way. More information about this method can be found in Chapter 8 of Rex Kline's book on structural equation modeling as well as an article from the author of `piecewiseSEM`, which is a package freely available in the R statistical programming language.
#### Citations
- Kline, R. B. (2023). Principles and practice of structural equation modeling (5th ed.). The Guilford Press.
- Lefcheck, J. S. (2016). PiecewiseSEM: Piecewise structural equation modelling in R for ecology, evolution, and systematics. Methods in Ecology and Evolution, 7(5), 573–579. https://doi.org/10.1111/2041-210X.12512
| null | CC BY-SA 4.0 | null | 2023-05-24T22:17:05.500 | 2023-05-25T02:31:25.983 | 2023-05-25T02:31:25.983 | 345611 | 345611 | null |
616842 | 4 | null | null | 0 | null | Piecewise structural equation models (SEMs) are SEMs that do not require a covariance-based approach to fitting, which allows more flexible estimation. For other topics related to SEM, please use other tags related to structural equation modeling. | null | CC BY-SA 4.0 | null | 2023-05-24T22:17:05.500 | 2023-05-25T02:31:02.893 | 2023-05-25T02:31:02.893 | 345611 | 345611 | null |
616843 | 5 | null | null | 0 | null |
#### Description
Generalized additive latent and mixed models (GALAMMs) are a semiparametric extension of GLLAMMs in which both the linear predictor and latent variables may depend smoothly on observed variables. This allows the incorporation of both nonlinear approaches to SEM model construction and the analysis of clustered data. At present, this approach also differs from piecewise SEMs in that it may include latent variables rather than only path analyses built from directly observed variables. For more information, see the article below from Sørensen et al. (2023) on the methodology involved.
#### Citation
Sørensen, Ø., Fjell, A. M., & Walhovd, K. B. (2023). Longitudinal modeling of age-dependent latent traits with generalized additive latent and mixed models. Psychometrika, 88(2), 456–486. [https://doi.org/10.1007/s11336-023-09910-z](https://doi.org/10.1007/s11336-023-09910-z)
| null | CC BY-SA 4.0 | null | 2023-05-24T22:22:46.573 | 2023-05-25T03:38:49.713 | 2023-05-25T03:38:49.713 | 345611 | 345611 | null |
616844 | 4 | null | null | 0 | null | Generalized additive latent and mixed models (GALAMMs) are structural equation modeling (SEM) based models which freely include nonlinear and clustered data analysis. | null | CC BY-SA 4.0 | null | 2023-05-24T22:22:46.573 | 2023-05-25T03:38:43.217 | 2023-05-25T03:38:43.217 | 345611 | 345611 | null |
616845 | 1 | null | null | 0 | 7 | So I have a dataset with about 6x features as I have samples, which are balanced across 8 classes. I set out to figure out which features are important for each label. I've been approaching this using Logistic Regression in the multinomial setting. Initially I had used all ~300 features but knew there was strong colinearity in some of the features, and opted to combine them using some correlation cut off - this reduced things to ~150 features, but I'm still in a bind as it comes to 'feature importance'.
I'm currently using Nested Cross validation - an inner loop which tunes the C hyperparameter, and an outer loop for testing. After integrating a 'KBestFeatures' module into the pipeline, I noticed different 'KBestFeatures' are better for different classes - for example, lets say 30 features allows for reliable classification of Class 1, while all 150 features makes for strong classification of Class 6 while bringing Class 1 much closer to chance.
I feel a bit overwhelmed with all this. The whole reason I'd set up the nested CV was to avoiding the kind of tinkering I'm doing. Is there some more straight forward approach for arriving at the minimal feature set for each class I'm looking to identify? I also haven't gotten to the point of doing significance testing which will obviously be important. I've been looking for ISL and ESL and nothing is quite hitting me as the natural approach to this problem, outside of perhaps training individual models 'manually', but that seems wrong as well... Any tips would be great.
| Multi-class classification, KBestFeatures, different scores best for different labels - intelligent way to approach? | CC BY-SA 4.0 | null | 2023-05-24T22:40:13.283 | 2023-05-24T22:40:13.283 | null | null | 245715 | [
"logistic",
"scikit-learn",
"multi-class"
]
|
616846 | 2 | null | 616752 | 5 | null | $Z$ converges to point mass at zero. We know $M_n\stackrel{p}{\to}\infty$ since the Normal distribution is unbounded.
So, for any fixed $\epsilon$ and any $i$, $$P(|Z_i|>\epsilon)=P(|X_i/M_n|>\epsilon)\to 0$$
implying $Z_i\stackrel{p}{\to} 0$ and $Z_i\stackrel{d}{\to} 0$.
The convergence will be quite slow, because $M_n$ increases roughly as $\sqrt{\log n}$. It's going to be relatively hard to get convincing simulations for the Gaussian case.
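A quick simulation sketch of this slow convergence (here I assume $M_n = \max_{i \le n} X_i$ for iid standard normal $X_i$, which is how I read the question; `eps` is arbitrary):
```
set.seed(1)
eps <- 0.1
for (n in c(1e2, 1e3, 1e4, 1e5)) {
  # Monte Carlo estimate of P(|Z_1| > eps) = P(|X_1 / M_n| > eps)
  p_hat <- mean(replicate(1000, {
    x <- rnorm(n)
    abs(x[1] / max(x)) > eps
  }))
  cat("n =", n, " estimated P(|Z_1| > eps) =", round(p_hat, 3), "\n")
}
```
The estimated probability shrinks as $n$ grows, but only very slowly, consistent with $M_n \approx \sqrt{2\log n}$.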
| null | CC BY-SA 4.0 | null | 2023-05-24T22:59:05.380 | 2023-05-24T22:59:05.380 | null | null | 249135 | null |
616847 | 1 | 617195 | null | 5 | 273 | I’m trying to informally derive the chi-squared test statistic using a combination of basic geometry and algebra. I’m successfully able to obtain a system of equations that contain Karl Pearson’s chi-squared test statistic. But I need help showing that the test statistic = chi^2 from my equations.
My approach:
I have a 3-sided die.
I roll the die a number of times and record the frequency of each face.
This system has 2 degrees of freedom (we only need to know the frequencies of any 2 faces to infer the frequency of the remaining one).
Therefore, we can describe the distance, chi, between the observed and expected values as a formula with 2 dimensions via the Pythagorean theorem:
$$
\chi^2 \quad=\quad Z^2 \quad+\quad Z_{prime}^2
$$
...where Z is the (standardized) difference between the observed and expected values for any one face, and Z_prime is the remaining side of our triangle in 2D space (Z_prime also implies a transformation of the distribution of the 2nd face from a joint distribution into an independent distribution, making the combined distribution circular).
Note that:
$$
Z^2 \quad=\quad p.Z^2\quad+\quad(1-p).Z^2
$$
...and similarly:
$$
Z_{prime}^2 \quad=\quad p.Z_{prime}^2\quad+\quad(1-p).Z_{prime}^2
$$
...therefore:
$$
\chi^2 \quad= p.Z^2\quad+\quad(1-p).Z^2 + \quad p.Z_{prime}^2\quad+\quad(1-p).Z_{prime}^2
$$
So for all 3 faces (A, B, C) we have the following system of equations:
$$
\chi^2 \quad= p_{A}.Z_{A}^2\quad+\quad(1-p_{A}).Z_{A}^2 + \quad p_{A}.Z_{A.prime}^2\quad+\quad(1-p_{A}).Z_{A.prime}^2
\\
\chi^2 \quad= p_{B}.Z_{B}^2\quad+\quad(1-p_{B}).Z_{B}^2 + \quad p_{B}.Z_{B.prime}^2\quad+\quad(1-p_{B}).Z_{B.prime}^2
\\
\chi^2 \quad= p_{C}.Z_{C}^2\quad+\quad(1-p_{C}).Z_{C}^2 + \quad p_{C}.Z_{C.prime}^2\quad+\quad(1-p_{C}).Z_{C.prime}^2
\\
$$
[Equations 1-3]
Now, since:
$$
Z = \frac{(O-E)}{\sigma}
$$
…and:
$$
Z^2 = \frac{(O-E)^2}{np(1-p)}
$$
…then, if we multiply Z^2 by (1-p) we get:
$$
(1-p).\frac{(O-E)^2}{np(1-p)} = \frac{(O-E)^2}{np}
$$
Therefore, the sum of the 2nd column from Equations 1-3 is identical to Pearson's chi-square test statistic for 2 degrees of freedom, i.e.:
$$
(1-p_{A}).Z_{A}^2\quad +\quad (1-p_{B}).Z_{B}^2 \quad+ \quad(1-p_{C}).Z_{C}^2
\\=
\frac{(O_{A}-E_{A})^2}{E_{A}}\quad+\quad\frac{(O_{B}-E_{B})^2}{E_{B}}\quad+\quad\frac{(O_{C}-E_{C})^2}{E_{C}}
$$
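As a sanity check on this identity, here is a small numerical illustration in R (a fair 3-sided die is assumed; this only verifies the algebra above, not Equation 4):
```
set.seed(1)
n <- 300
p <- c(1/3, 1/3, 1/3)                  # fair 3-sided die
O <- as.vector(rmultinom(1, size = n, prob = p))
E <- n * p
Z2 <- (O - E)^2 / (n * p * (1 - p))    # squared standardized differences
sum((1 - p) * Z2)                      # left-hand side above
sum((O - E)^2 / E)                     # Pearson's statistic (right-hand side)
```
The two printed values agree, since $(1-p)\,\frac{(O-E)^2}{np(1-p)} = \frac{(O-E)^2}{np} = \frac{(O-E)^2}{E}$.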
My question is: how can I demonstrate from Equations 1-3 that:
$$
\chi^2\quad=\quad(1-p_{A}).Z_{A}^2\quad +\quad (1-p_{B}).Z_{B}^2 \quad+ \quad(1-p_{C}).Z_{C}^2
$$
[Equation 4]
By the way, it's already straightforward to demonstrate that:
$$
\chi^2 =
\\
p_{A}.Z_{A}^2\quad+\quad p_{A}.Z_{A.prime}^2 +
\\
p_{B}.Z_{B}^2\quad+\quad p_{B}.Z_{B.prime}^2 +
\\
p_{C}.Z_{C}^2\quad+\quad p_{C}.Z_{C.prime}^2
\\
$$
...since the sum of probabilities = 1.
Similarly, we can show that:
$$
2*\chi^2 =
\\
(1-p_{A}).Z_{A}^2\quad+\quad (1-p_{A}).Z_{A.prime}^2 +
\\
(1-p_{B}).Z_{B}^2\quad+\quad (1-p_{B}).Z_{B.prime}^2 +
\\
(1-p_{C}).Z_{C}^2\quad+\quad (1-p_{C}).Z_{C.prime}^2
\\
$$
...since the sum of (1-probabilities) = 2.
But I'm not sure that these identities help me arrive at Equation 4.
| Obtaining the chi-squared test statistic via geometry | CC BY-SA 4.0 | null | 2023-05-24T23:03:52.880 | 2023-05-30T22:59:55.647 | 2023-05-30T22:59:55.647 | 235662 | 235662 | [
"probability",
"chi-squared-test",
"multinomial-distribution"
]
|
616848 | 1 | null | null | 1 | 24 | Suppose we have a VAR(1) model:
$$
\mathbf{X}_t = \boldsymbol{\Phi} \mathbf{X}_{t-1} + \mathbf{Z}_t, \hspace{10mm} \mathbf{Z}_t \sim WN(0,\Sigma)
$$
If we can keep plugging that equation into itself, we get
$$
\mathbf{X}_t = \sum_{j=0}^{\infty} \boldsymbol{\Phi}^j \mathbf{Z}_{t-j} \tag{2}
$$
**as long as the eigenvalues of $\boldsymbol{\Phi}$ have a modulus of less than $1$.**
[This textbook](https://link.springer.com/book/10.1007/978-3-319-29854-2?source=shoppingads&locale=en-us&gclid=CjwKCAjw67ajBhAVEiwA2g_jEKf_1qiyDaEgxdLOytGHj8Jz8C3igzcpeydVnSsFLdMLszCbcqYSgRoCkKQQAvD_BwE) mentions that this condition in bold is sufficient to guarantee that "the coefficients $\boldsymbol{\Phi}^j$ are absolutely summable." I assume the book really means "matrices with absolutely summable components" and uses "coefficients" to mean elements of the matrix, not the matrix itself.
In other words, I am thinking
$$
\sum_{j=0}^\infty |\left[\boldsymbol{\Phi}^j\right]_{kl}| < \infty \tag{1}
$$
I understand how this implies that each component of the [vector] $\sum_{j=0}^n \boldsymbol{\Phi}^j \mathbf{Z}_{t−j}$
converges in the mean square sense.
My question is why is (1) a consequence of the eigenvalues being between $-1$ and $1$?
---
The way I think about it appears to be incomplete. I suppose that $\boldsymbol{\Phi}$ is diagonalizable, so
$$
\boldsymbol{\Phi}^k = \mathbf{P} \boldsymbol{\Lambda}^k \mathbf{P}^{-1}
$$
where $\boldsymbol{\Lambda}$ is the diagonal matrix of eigenvalues, and $\mathbf{P}$ is an invertible matrix of eigenvectors arranged as columns. In this case,
$$
\boldsymbol{\Phi}^j \mathbf{Z}_{t-j} = \mathbf{P} \boldsymbol{\Lambda}^j \mathbf{P}^{-1} \mathbf{Z}_{t-j}
$$
and the right hand side goes to $0$ elementwise because $\boldsymbol{\Lambda}^j \mathbf{P}^{-1} \mathbf{Z}_{t-j}$ goes to $0$ by virtue of the fact that every element in
$\boldsymbol{\Lambda}^j$ vanishes.
Notice that my intuition a.) assumes diagonalization is possible, and b.) that it doesn't have anything to do with the individual components of the matrix of a power of $\boldsymbol{\Phi}$.
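For what it's worth, here is a small numerical illustration (not a proof) with an arbitrary non-symmetric $\boldsymbol{\Phi}$ whose eigenvalues have modulus less than $1$, checking that the elementwise absolute sums in (1) stay bounded:
```
Phi <- matrix(c(0.5, 0.3,
                -0.2, 0.4), nrow = 2, byrow = TRUE)
Mod(eigen(Phi)$values)        # both moduli are about 0.51 < 1

S <- matrix(0, 2, 2)          # running sum of |[Phi^j]_kl|
P <- diag(2)                  # Phi^0
for (j in 0:200) {
  S <- S + abs(P)
  P <- P %*% Phi
}
S                             # stabilizes at a finite matrix
```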
| Eigenvalues of VAR(1) coefficient matrix | CC BY-SA 4.0 | null | 2023-05-24T23:43:11.377 | 2023-05-25T02:49:31.160 | 2023-05-25T00:11:38.230 | 8336 | 8336 | [
"vector-autoregression",
"eigenvalues"
]
|
616850 | 1 | null | null | 0 | 38 | I'm still pretty new to modelling but essentially, the dataset is consists of the following (the first observation only):
```
SeedAttachment.Number Seed NO.seed Species Syndrome Mountain Individual Method
<dbl> <dbl> <dbl> <chr> <fct> <chr> <chr> <fct>
1 5 1 4 SpeciesC Intermediate Bluff Knoll 11690-000 Separate
```
So the data was collected and contains almost 1000+ observations. I want to determine whether different syndromes have different levels of production of seeds. The SeedAttachment.Number represents an appendage in the plant connecting to the "Seed" and sometimes, the Seed number is less than the SeedAttachment.Number. Syndrome is the pollination syndrome and is the suite of plant traits - it is either bee, bird or both. Mountain is the location of plant sample. Individual is the plant ID. Method can be ignored.
What is the best way to build out the formula for a Generalised Linear Mixed Effect model using a binomial distribution?
I have identified fixed effects as Syndrome. The random effect would probably be Mountain, Species and Individual.
```
model <- glmer(cbind(Seed, NO.seed) ~ Syndrome + (1|Species/Individual) + (1|Mountain),family = binomial, data=d1)
```
```
model2 <- glmer(cbind(Seed, NO.seed) ~ Syndrome + (1|Species / Individual), family = binomial, data = d1)
```
As I understand it; `(1|Species / Individual)` means that it's a nested random effect. ie. Individual is nested within Species cluster.
When I try fitting both models, can someone explain why I'm getting the following error?
```
boundary (singular) fit: see help('isSingular')
```
Also, how do I determine which variables are best to include? I don't understand the effect of leaving some variables out or in. Which ones do I select to answer the research question above?
How do I plot the data so I can visualise the slopes within each random effect cluster and the fixed slope for the overall data?
| How do I determine which variables to pick in a binomial GLMM using lme4? | CC BY-SA 4.0 | null | 2023-05-25T00:07:00.527 | 2023-05-25T00:07:00.527 | null | null | 388777 | [
"r",
"mixed-model",
"generalized-linear-model",
"lme4-nlme"
]
|
616851 | 2 | null | 616741 | 0 | null | I am afraid you are out of luck here. Although you have extra variables sec, race and age, you cannot reconstruct how go_out and transportation relate to each other without making extra assumptions. The data were certainly not measured on the same persons.
Example: Denote the two variables that interest you as A and B. For the sake of simplicity, assume that A has three levels and B has two levels. To get the chance that an Asian male over 25 does A and B, you would extract all Asian males over 25. Let us say there are only 7 individuals, and for variable A you count the number of outcomes in each level: #$A_1=4$, #$A_2=2$ and #$A_3=1$, and for B, #$B_1=3$, #$B_2=4$. Then you cannot find out how many of them had a particular combination $A_i,B_j$; there are many possible combinations of joint outcomes with the same marginal distribution, for example
| |A1 |A2 |A3 |sum |
|---|---|---|---|---|
|B1 |2 |1 |0 |3 |
|B2 |2 |1 |1 |4 |
|sum |4 |2 |1 |7 |
or
| |A1 |A2 |A3 |sum |
|---|---|---|---|---|
|B1 |0 |2 |1 |3 |
|B2 |4 |0 |0 |4 |
|sum |4 |2 |1 |7 |
| null | CC BY-SA 4.0 | null | 2023-05-25T00:20:45.773 | 2023-05-25T00:20:45.773 | null | null | 237561 | null |
616853 | 1 | null | null | 0 | 13 | I want to apply the Mann-Kendall (MK) test to identify relevant trends in multiple time series, however literature on the matter tells me one hard requirement is for no serial correlation to exist in the data.
In turn, conventional autocorrelation analysis using ACF/PACF plots requires stationarity, which is usually achieved by differencing the time series.
My question is:
is this approach compatible with the requirements of the MK test (as differencing removes the underlying trends)? Or rather, can I conclude that the original series is not serially correlated from the analysis of its differenced series?
| Serial correlation requirements for Mann-Kendall test | CC BY-SA 4.0 | null | 2023-05-25T00:44:15.357 | 2023-05-25T00:44:15.357 | null | null | 388119 | [
"autocorrelation",
"stationarity",
"trend"
]
|
616854 | 2 | null | 616848 | 0 | null |
- The eigenvalues of $\Phi^\intercal \Phi$ are the eigenvalues of $\Phi$ squared, and so also have moduli less than $1$. Proof here
- For square $\Phi$, $\begin{Vmatrix} \cdot \end{Vmatrix}_2$ is a submultiplicative norm and
$$
\begin{Vmatrix} \Phi \end{Vmatrix}_2 = \sqrt{\max \{|\lambda| : \lambda \text{ is an eigenvalue of }\Phi^\intercal \Phi\}} := c
$$
Proof here
- $\| \sum_{k=0}^n \Phi^k \|_2 \le \sum_{k=0}^n \|\Phi\|_2^k \le \frac{1}{1 - c} < \infty$ so $\left[\sum_{k=0}^n \Phi^k\right]$ is invertible and in particular has finite elements. Proof is here
| null | CC BY-SA 4.0 | null | 2023-05-25T02:49:31.160 | 2023-05-25T02:49:31.160 | null | null | 8336 | null |
616856 | 1 | null | null | 0 | 9 | I do statistics for a restaurant in which people arrive at random times and end up paying a random amount of money when they are finished. I am using a Compound Poisson Process to model this, but I have an issue with fixing the rates of our guests visits: people do not arrive at the same rates for each hour, does this mean I have to asume a Non Homogeneous PP? What would be the difference betweeen this and assuming it as homogeneous, analyzing and concluding for each different time interval? Also, is a pairwise multiple mean comparison hypothesis test for contiguous intervals adecuate for proving a difference in rates of arrival and thus the process non homogeneity for those intervals? The operation goes for 12 hours a day.
Thanks in advance. Perhaps this is equivalent to killing a fly with a bat haha
| Multiple mean comparison test for a Non homogeneous Poisson Process | CC BY-SA 4.0 | null | 2023-05-25T03:04:33.063 | 2023-05-25T03:04:33.063 | null | null | 388424 | [
"multiple-comparisons",
"poisson-process",
"timeseries-segmentation"
]
|
616860 | 1 | null | null | 0 | 10 | i've tried to find the equation for lq likelihood equation is:
- lq_likelihood <- ((f^(1-q))-1)/(1-q) and with f is pdf of lognormal distribution and q is distortion parameter(i'm using q=1-(1/n))
- define u <- logf(x) u' is first derivative of each parameter
- the first derivation of lq_likelihood: sum (f^(1-q) * u')
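For reference, here is a rough numerical sketch of maximizing this Lq-likelihood directly with `optim()` rather than solving the derivative equations (the simulated data, starting values, and the threshold parameterisation below are placeholders, not my real data):
```
lq <- function(u, q) (u^(1 - q) - 1) / (1 - q)          # Lq transform of the density

neg_lq_lik <- function(par, x, q) {
  gamma_ <- par[1]; mu <- par[2]; sigma <- exp(par[3])  # sigma kept positive via log-parameterisation
  if (any(x <= gamma_)) return(1e10)                    # threshold must lie below all observations
  f <- dlnorm(x - gamma_, meanlog = mu, sdlog = sigma)
  -sum(lq(f, q))
}

set.seed(1)
x <- 2 + rlnorm(200, meanlog = 1, sdlog = 0.5)          # simulated 3-parameter lognormal data
q <- 1 - 1 / length(x)
fit <- optim(c(min(x) - 0.5, 0, 0), neg_lq_lik, x = x, q = q)
fit$par                                                  # estimates of (gamma, mu, log sigma)
```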
| how to estimate the lognormal distribution 3 parameters using maximum lq likelihood method in R/Rstudio | CC BY-SA 4.0 | null | 2023-05-25T05:08:46.193 | 2023-05-25T05:08:46.193 | null | null | 373818 | [
"r",
"maximum-likelihood",
"lognormal-distribution"
]
|
616861 | 1 | null | null | 4 | 71 | Usually there are clear advancements in ML methodologies in research especially, where I can say X method is essentially better than Y method for most datasets.
However, I recently accidentally stumbled upon SOMs, and they seem pretty cool, like an interpretable form of UMAP or tSNEs, with the ability to offer perhaps a little more topological consistency in a way that is trained in a non-error correction way. Seems like a neat idea!
But when I look around for research on this, I hardly ever see people talk about it, and there isn't much in the way of Python libraries. And I can't see any "clear" reason why this is so. Is it truly outclassed by competing methods? If so, why is it used competitively within the flow cytometry community?
Just wondering if anyone here can shed light on this apparent "drop-off" in SOM usage, or on general research directions/applications. It's just not so clear to me.
| Is there any reason why there appears to be not much modern research into self organizing maps (SOMs)? | CC BY-SA 4.0 | null | 2023-05-25T05:15:17.417 | 2023-05-27T05:50:02.030 | null | null | 117574 | [
"feature-selection",
"dimensionality-reduction",
"unsupervised-learning",
"self-organizing-maps",
"topological-data-analysis"
]
|
616862 | 1 | 616868 | null | 1 | 28 | I am learning the Gaussian process and feel confused about how three lines were generated in Fig 2.2(A) in the book "Gaussian Process For Machine Learning". As described by the author: "To see this, we can draw samples from the distribution of functions evaluated at any number of points; in detail, we choose a number of input points, $X_∗$, and write out the corresponding covariance matrix using eq. (2.16) elementwise. Then we generate a random Gaussian vector with this covariance matrix".
$$
\operatorname{cov}\big(f(\mathbf{x}_p), f(\mathbf{x}_q) \big) =
k(\mathbf{x}_p, \mathbf{x}_q) =
\exp\big(-\tfrac{1}{2} |\mathbf{x}_p- \mathbf{x}_q|^2 \big)
\tag{2.16}
$$
- What does it mean by drawing samples from the distribution of functions?
Does this function refer to $f(x)=W^Tx$ in linear Bayesian regression, meaning we are sampling from the prior $p(w)$? But if we use (2.16) and assume the mean is zero, we don't need this distribution, do we?
- As shown in Fig 2.2(A), there are three samples corresponding to 3 lines. How was each line generated? Are they the "Gaussian vectors" described in the text; in other words, is a vector containing all $f(x)$ values at the selected $x$ generated each time, giving one line, and then another vector generated from the same joint distribution? Thank you
[](https://i.stack.imgur.com/lzBoi.png)
| Sampling from Gaussian Process | CC BY-SA 4.0 | null | 2023-05-25T05:27:46.943 | 2023-05-25T08:09:43.083 | 2023-05-25T08:09:43.083 | 35989 | 388783 | [
"normal-distribution",
"gaussian-process",
"random-generation"
]
|
616863 | 2 | null | 451473 | 1 | null | The group that has the mean that is in the "better" direction did better on average. The t-test lets you assess the strength of evidence against the null hypothesis, and so if you got a small enough p-value then it is reasonable (in some sort of proportion to the smallness of the p-value) to think that one group did better.
Seeing which group did better on average needs no statistics beyond the mean. Just graph the data. Never make inferences on the basis of data without examining the data in some sort of visual display.
| null | CC BY-SA 4.0 | null | 2023-05-25T06:29:01.783 | 2023-05-25T06:29:01.783 | null | null | 1679 | null |
616864 | 2 | null | 451473 | 1 | null | It helps to check the means and standard deviations of your groups as well as to visualize the differences. I really like the Datanovia approach to things, where you also just add test data onto plots. An example is given below using R. First we can load the required libraries
```
#### Load Libraries ####
library(datarium)
library(tidyverse)
library(rstatix)
#### Check Mean/SD of Groups ####
genderweight %>%
group_by(group) %>%
summarise(mean.weight = mean(weight),
sd.weight = sd(weight))
```
giving us these summary statistics
```
# A tibble: 2 × 3
group mean.weight sd.weight
<fct> <dbl> <dbl>
1 F 63.5 2.03
2 M 85.8 4.35
```
We can see from the descriptive data here that the female group has a lower weight on average and varies less than the male group, so if we have a significant t-test, we should expect that it is because the female group is lower on average than the male group. We can run a t-test to check quickly:
```
#### Run T-Test ####
t <- genderweight %>%
t_test(formula = weight ~ group)
```
And indeed this is true...
```
# A tibble: 1 × 8
.y. group1 group2 n1 n2 statistic df p
* <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl>
1 weight F M 20 20 -20.8 26.9 4.30e-18
```
The negative sign indicates that the female group (the reference group here) has a lower mean than the male group. We can look at this difference directly with a boxplot and even add our t-test data to contextualize these differences:
```
#### Visualize Differences ####
genderweight %>%
ggplot(aes(x=group,
y=weight,
fill=group))+
geom_boxplot()+
labs(title = "Weight By Gender",
subtitle = get_test_label(t, detailed = T))
```
And now you can see which group has the smaller and which the larger mean, and that the female group has a much narrower box because of its lack of variation compared with the males:
[](https://i.stack.imgur.com/soxjH.png)
#### Edit
Nick brings up a very compelling point about boxplots in the comments. While this gives some perspective on what the average weights are with each group, they are more based around the interquartile range and medians of the data, which are not directly comparable to means and standard deviations (especially in the case of skewed data). Some other visualizations may serve this purpose better (of which he mentions quantile plots with overlayed means), but at the minimum, inspecting the means and SDs as well as giving yourself graphical representations of the data help.
| null | CC BY-SA 4.0 | null | 2023-05-25T06:54:12.600 | 2023-05-25T07:45:36.530 | 2023-05-25T07:45:36.530 | 345611 | 345611 | null |
616865 | 2 | null | 268027 | 0 | null | LSTMs are not very good at extrapolating to unseen regions in data. The main problem is that a traditional LSTM relies a lot on sigmoidal functions (gates are typically logistic sigmoids and the activation function is $\tanh$ by default).
These functions tend to saturate when values become too small or too large.
As a result, the parameters will be optimised in such a way that large activations map to saturation on one side and small values map to saturation on the other side.
If the numerical values happen to be outside of the range of values seen during training, this will typically result in more (unforeseen) saturation of these sigmoid functions and therefore poor performance.
If you can get rid of the sigmoidal functions somehow, you might be able to get reasonable predictions for values outside of the training range.
It seems like the MC-LSTM architecture from [this paper](http://proceedings.mlr.press/v139/hoedt21a.html) aims to do exactly that.
| null | CC BY-SA 4.0 | null | 2023-05-25T06:59:49.853 | 2023-05-25T06:59:49.853 | null | null | 95000 | null |
616867 | 1 | null | null | 0 | 43 | Good morning all together,
as a statistics rookie, I'm facing the task how to test for a difference in the mean of two random variables that are bound between 0 and 1. Both their means are positive and in one case (potentially) autocorrelated as those random variables are time series.
In detail, these random variables are generated in two settings, where in setting 1 both variables are the result of a calibration and consequently time series of observations.
In the another setting, these variables are the result of a monte carlo simulation, where autocorrelation can be excluded by construction. In both settings, these random variables are possibly correlated (but not correlated by construction but by mere chance).
My question is: Which test can i use to check whether the mean of one random variable is larger than the other mean for either setting? In addition, how can i test this if only one random variable is autocorrelated and the other one is not?
Thank you very much for your suggestions,
Thomas
| How to test for a equality of the mean of two bounded random variables | CC BY-SA 4.0 | null | 2023-05-25T07:14:10.410 | 2023-05-25T10:50:12.093 | null | null | 357274 | [
"hypothesis-testing",
"statistical-significance",
"mean"
]
|
616868 | 2 | null | 616862 | 1 | null | Consider the definition from the [book by Rasmussen](http://gaussianprocess.org/gpml/)
>
Definition 2.1 A Gaussian process is a collection of random variables, any
finite number of which have a joint Gaussian distribution.
So Gaussian Process is a [distribution over functions](https://stats.stackexchange.com/questions/502531/elementary-explanation-of-gaussian-processes) $f(x_1), f(x_2), \dots, f(x_N)$ such that each of them is considered as a Gaussian random variable, where the relations between them are governed by the mean $m(\mathbf{x})$ and covariance $k(\mathbf{x}, \mathbf{x}')$ functions.
$$
\left[ {\begin{array}{c}
f(x_1) \\
f(x_2) \\
\vdots \\
f(x_N) \\
\end{array}} \right] \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})
= \mathcal{GP}\left(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}')\right)
$$
How the plot was generated is described on p. 14 of [the book](http://gaussianprocess.org/gpml/),
>
The specification of the covariance function implies a distribution over functions. To see this, we can draw samples from the distribution of functions evalu-
ated at any number of points; in detail, we choose a number of input points, $X_∗$
and write out the corresponding covariance matrix using eq. (2.16) elementwise.
Then we generate a random Gaussian vector with this covariance matrix
$$
\mathbf{f}_* \sim \mathcal{N}(\boldsymbol{0}, K(X_*, X_*)) \tag{2.17}
$$
So "drawing a sample from Gaussian Process" [means](https://math.stackexchange.com/questions/1218718/how-do-we-sample-from-a-gaussian-process) sampling from a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal) with mean being the output of the mean function and covariance being the output of the covariance function.
The example above describes how the samples were drawn from the Gaussian Process prior but in the next sections on p. 15-16 the book discusses how to draw the samples from the posterior. The difference is that you simply take the posterior mean and covariance functions there. If you have a non-zero mean function, you just use it in the place of $\boldsymbol{0}$ in 2.17.
The nice smooth lines on the plots are just points from the samples connected with interpolated lines.
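To make Fig. 2.2(a) concrete, here is a minimal sketch in R of drawing three such prior samples (the small jitter added to the diagonal is a standard numerical fix and is not part of eq. 2.17; `MASS::mvrnorm` is just one of several ways to sample a multivariate normal):
```
library(MASS)
x_star <- seq(-5, 5, length.out = 100)                              # chosen input points X_*
K <- outer(x_star, x_star, function(p, q) exp(-0.5 * (p - q)^2))    # eq. (2.16), elementwise
K <- K + 1e-8 * diag(length(x_star))                                # jitter for numerical stability
f_samples <- mvrnorm(n = 3, mu = rep(0, length(x_star)), Sigma = K) # eq. (2.17), three draws

matplot(x_star, t(f_samples), type = "l", lty = 1,
        xlab = "input, x", ylab = "output, f(x)")                   # three functions from the prior
```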
| null | CC BY-SA 4.0 | null | 2023-05-25T07:30:47.880 | 2023-05-25T07:30:47.880 | null | null | 35989 | null |
616869 | 1 | null | null | 0 | 18 | I understand how sine is defined using triangles and/or the unit circle.
Sine is also defined by a power series that is found using a Taylor series and solving f(x)=c0+c1x+c2x2+... for the coefficients.
But how do we know that sine can be accurately represented by this formula? This what I don't understand. (I believe it, I just don't understand it.)
I understand how to solve the Taylor series for sine. I know how to use a point "a" and use "x-a" instead of "a". In other words, I don't need help with the math commonly used in this.
I'm just trying to figure out how we know that sine can be represented by a power series. The step from triangles to sine is easy. The step from sine to a power series is not so obvious to me.
| How do we know that the sine function can be represented as a solution to a Taylor series? | CC BY-SA 4.0 | null | 2023-05-25T07:45:22.740 | 2023-05-25T08:11:41.063 | null | null | 101761 | [
"taylor-series",
"trigonometry"
]
|
616871 | 2 | null | 616867 | 2 | null | The autocorrelation makes this difficult.
For the sample with no autocorrelation, the Central Limit Theorem applies, so the sample mean is approximately normally distributed. Comparing the means of two such samples is a standard problem.
But for the sample with potential autocorrelation, the mean could drift over time. The question you ask is only meaningful if the duration of the sample is much longer than the duration of the autocorrelation. So first, you need to model the autocorrelation, so you can satisfy yourself on this issue.
| null | CC BY-SA 4.0 | null | 2023-05-25T08:16:41.543 | 2023-05-25T10:50:12.093 | 2023-05-25T10:50:12.093 | 247165 | 188928 | null |
616872 | 1 | 617154 | null | 1 | 59 | As shown below and per the R code at the bottom, I plot a base survival curve for the `lung` dataset from the `survival` package using a fitted gamma model (plot red line) with `flexsurvreg()` and run simulations (plot blue lines) by drawing on random samples for the $W$ random error term of the standard gamma distribution equation $Y=logT=α+W$ per section 1.5 of the Rodríguez notes at [https://grodri.github.io/survival/ParametricSurvival.pdf](https://grodri.github.io/survival/ParametricSurvival.pdf). In drawing on random samples for $W$, I am trying to illustrate the fundamental variability imposed by fitting a model to the data by using a generalized standard minimum extreme value distribution per the Rodríguez notes, which states: "$W$ has a generalized extreme-value distribution with density...controlled by a parameter $k$ as excerpted below"
[](https://i.stack.imgur.com/rkaaV.png)
In the plot below, I run 5 simulations by repeatedly running the last line of code, and as you can see, the simulations are far from the fitted gamma curve in red. Those simulations should surround the fitted gamma, loosely following its shape. The best I've been able to do at generating random samples $W$ that loosely follow the shape of the fitted gamma curve is with `rgamma()`, as shown in the code and plot below; the best for getting closer to the fitted gamma curve, although not following its curvature, is `rexp()` (commented out in the code), which is a bit nonsensical, and neither of these two is a generalized extreme-value distribution either. I also tried `rgpd()` from the `evd` package (commented out) for generating random samples from a generalized extreme value distribution, and those results are nonsensical. Similar bad outcomes occur using `evd::rgev()`.
What am I doing wrong in my interpretation of sampling from the generalized extreme-value distribution? And how do I get the simulations to form a broad band around the gamma curve?
[](https://i.stack.imgur.com/5UUAj.png)
Code:
```
library(evd)
library(flexsurv)
library(survival)
time <- seq(0, 1000, by = 1)
fit <- flexsurvreg(Surv(time, status) ~ 1, data = lung, dist = "gamma")
shape <- exp(fit$coef["shape"])
rate <- exp(fit$coef["rate"])
survival <- 1-pgamma(time, shape = shape, rate = rate)
# Generate random distribution parameter estimates for simulations
simFX <- function(){
W <- log(rgamma(100,shape = shape, scale = 1/rate))
# W <- log(rexp(100, rate))
# w <- log(evd::rgpd(100,shape = shape, scale = 1/rate))
newTimes <- exp(log(shape) + W)
newFit <- flexsurvreg(Surv(newTimes) ~ 1, data = lung, dist = "gamma")
newFitShape <- exp(newFit$coef["shape"])
newFitRate <- exp(newFit$coef["rate"])
return(1-pgamma(time, shape=newFitShape, rate=newFitRate))
}
plot(time,survival,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival (gamma)")
lines(survival, type = "l", col = "red", lwd = 3) # plot base fitted survival curve
lines(simFX(), col = "blue", lty = 2) # run this line repeatedly for SIMULATION
```
Edit: [There was previously an Edit 1 reflecting an error and Edit 2 correcting Edit 1. Edit 1 has been deleted and Edit 2 switched to the final "Edit"]. This Edit adds a Weibull example showing the simulation of $W$ when characterizing Weibull by $Y = log T = α + σW$,
where $W$ has the extreme value distribution, $α = − logλ$ and $p = 1/σ$. This example generates samples by: (1) randomizing $W$, (2) combining the randomized $W$ with the estimates of $α$ and $σ$ from the model fit to the `lung` data following the above equation form, (3) exponentiating the result per the above equation form to get survival-time simulations (note in `newTimes <- exp(fit$icoef[1] + exp(fit$icoef[2])*W)` in the code below how $σ$ (`fit$icoef[2]`) is exponentiated, since in its native form it is on the log scale, and the whole thing $α+σW$ is exponentiated again as a unit), (4) running each new `survreg()` fit. Code:
```
library(survival)
time <- seq(0, 1000, by = 1)
fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "weibull")
weibCurve <- function(time, survregCoefs){exp(-(time/exp(survregCoefs[1]))^exp(-survregCoefs[2]))}
survival <- weibCurve(time, fit$icoef)
simFX <- function(){
W <- log(rexp(165))
newTimes <- exp(fit$icoef[1] + exp(fit$icoef[2])*W)
newFit <- survreg(Surv(newTimes)~1,dist="weibull")
params <- c(newFit$icoef[1],newFit$icoef[2])
return(weibCurve(time, params))
}
plot(time,survival,type="n",xlab="Time",ylab="Survival Probability",
main="Lung Survival (Weibull) by sampling from W extreme-value distribution")
invisible(replicate(500,lines(simFX(), col = "blue", lty = 2))) # run this line to add simulations to plot
lines(survival, type = "l", col = "yellow", lwd = 3) # plot base fitted survival curve
```
Illustration of running above `simFX()` 500 times (using `replicate()` function):
[](https://i.stack.imgur.com/gpbGP.png)
| How to simulate variability (errors) in fitting a gamma model to survival data by using a generalized minimum extreme value distribution in R? | CC BY-SA 4.0 | null | 2023-05-25T08:17:50.063 | 2023-05-30T14:47:33.133 | 2023-05-30T14:47:33.133 | 378347 | 378347 | [
"r",
"regression",
"survival",
"simulation",
"extreme-value"
]
|
616873 | 1 | null | null | 0 | 19 | How do you know when a SEM shows as unidentified not because of the path diagram but because of the actual variables that you put in?
The following model is well-identified according to AMOS, but when I simply replace variable X7 with variable X10 (which literally should change nothing), all of a sudden the model is unidentified and I am asked to add 1 constraint.
Also, when I remove X5 and X9 from the model to keep only 2 observed variables per latent, all of a sudden I am asked to add not 1 but 2 constraints.
[](https://i.stack.imgur.com/tM7zp.png)
All variables have 1000+ observations and are samples of uniform independent distributions.
The very small correlations are the only reason I can think of for the models failing, but it is not a very satisfying explanation.
[](https://i.stack.imgur.com/II8U4.png)
| How do you know when the reason for a SEM being unidentified is because of the very variables that you put in? | CC BY-SA 4.0 | null | 2023-05-25T08:41:48.927 | 2023-05-25T10:50:31.120 | null | null | 201461 | [
"structural-equation-modeling",
"identifiability",
"amos"
]
|
616875 | 1 | null | null | 0 | 8 | I want to compare two ranked lists with items and their confidence scores assigned by the model. The lists are non-conjoint, i.e. can have different items. The number of items is the same for both lists.
For example;
List A
- some 0.5
- whole 0.32
- great 0.26
- all 0.14
- same 0.08
List B
- whole 0.62
- all 0.32
- great 0.19
- some 0.18
- new 0.02
I know I can use Kendall's tau for a similar problem. But the problem here is that I don't want to consider only the ranking but also the confidence score assigned to each item. Is there a way to do that?
| Comparing two ranked list with scores | CC BY-SA 4.0 | null | 2023-05-25T08:56:09.580 | 2023-05-25T08:56:09.580 | null | null | 386421 | [
"ranking",
"kendall-tau"
]
|
616876 | 2 | null | 616873 | 1 | null | The very small correlations in your correlation matrix are indeed a concern. These look like they are correlations between random numbers (many are essentially zero). The correlations are so small that they could lead to empirical underidentification of an otherwise identified model. For example, factor variances depend on there being substantial covariances/correlations between the indicators of that factor. With only two indicators per factor, the situation gets worse.
With such small correlations, it does not seem to make much sense to test any covariance structure model at all (other than a null/independence model). Are you sure the correlations are correct? Could there be a data error?
| null | CC BY-SA 4.0 | null | 2023-05-25T09:51:21.773 | 2023-05-25T09:51:21.773 | null | null | 388334 | null |
616877 | 2 | null | 616780 | 0 | null | If you know how many people fall into each income bracket (and what the boundaries of the brackets are) and you have some distribution for the population distribution of income you want to assume, you could treat this as interval censored data (i.e. if there's 200 people in the 0-699 bracket, you have 200 records with income interval censored to lie in (0, 699)).
| null | CC BY-SA 4.0 | null | 2023-05-25T09:55:14.770 | 2023-05-25T09:55:14.770 | null | null | 86652 | null |
616879 | 1 | 616889 | null | 1 | 41 | Say I have a set $\mathcal{X}$ and a partition $A := (A_{1}, \dots, A_{M})$, i.e. subsets such that $\dot{\cup}_{i=1}^M A_{i} = \mathcal{X}$ and the $A_{i}$s are pairwise disjoint.
Say further I have a function $f: \mathcal{X} \to \mathcal{Y}$ that is piecewise constant on the cells of $A$, i.e.
$$
f (x) = \begin{cases}
f_{1} & x \in A_{1} \\
\dots \\
f_{M} & x \in A_{M}
\end{cases}
$$
Consider now a random variable $X$ that is evenly distributed on $\mathcal{X}$, i.e. its probability density function is $p(x) = \varphi ~ ,\forall x$.
I am interested in expressing the expectation of $f(x)$ over $\mathcal{X}$ in terms of the function values $f_{1}, \dots, f_{m}$ at the cells of the partition.
Here's what I have so far, but I am not sure it's correct. Mostly, I am unsure whether, in the last line, the individual summands should have a weight factor that corresponds to the sizes of the cells.
$$
\begin{align}
\mathbb{E}_{X}\left[ f(X) \right] &= \int f(x) \varphi \, dx \\
&= \int_{A_{1}} f_{1} \varphi \, dx ~ ~ + \dots + \int_{A_{M}} f_{M}\varphi \, dx
\end{align}
$$
Now let $X_{|i}$ be a random variable that has pdf of $X$ in cell $A_{i}$ and $0$ everywhere else. Then
$$
\begin{align}
\dots &= \mathbb{E}_{X_{|1}}\left[ f_{1} \right] ~ ~ + \dots + \mathbb{E}_{X_{|M}}\left[ f_{M} \right]
\end{align}
$$
and, since the expectation over a constant is just that constant, we'd have
$$
\begin{align}
\dots &= f_{1} + \dots + f_{M}
\end{align}
$$
This seems weird since the expectation $\mathbb{E}_{X}\left[ f(X) \right]$ would then be the sum of the individual values whereas intuitively I'd expect it to be more like a weighted mean.
| Expectation of piecewise constant function | CC BY-SA 4.0 | null | 2023-05-25T10:39:01.060 | 2023-05-26T07:51:28.230 | null | null | 178468 | [
"expected-value"
]
|
616880 | 1 | null | null | 0 | 26 | If the time series process is linear, then the ARIMA model is specified.
The residuals from this model are $(1.)$ no autocorrelation $(2.)$ mean equals zero $(3.)$ constant variance. We say that this process is a white noise process. But under white Gaussian noise, how to proof no autocorrelation = independent? And how to proof the white noise from Gaussian distribution?
| Prove that white noise + normality = independence | CC BY-SA 4.0 | null | 2023-05-25T10:41:15.857 | 2023-05-25T15:57:34.480 | 2023-05-25T15:57:34.480 | 53690 | 295888 | [
"time-series",
"normal-distribution",
"arima",
"independence",
"white-noise"
]
|
616881 | 2 | null | 616873 | 1 | null | Structural equation modeling (SEM) is covariance structure modeling (for the most part). It does not make sense to model covariance structures of independent (uncorrelated) variables. Perhaps generate a data set/covariance matrix by simulating a specific SEM model using Monte Carlo simulation techniques.
Also, Bollen (1989) discusses general SEM identification rules:
Bollen, K. A. (1989). Structural equations with latent variables. John Wiley & Sons. [https://doi.org/10.1002/9781118619179](https://doi.org/10.1002/9781118619179)
| null | CC BY-SA 4.0 | null | 2023-05-25T10:50:31.120 | 2023-05-25T10:50:31.120 | null | null | 388334 | null |
616882 | 1 | null | null | 0 | 29 | Please run this code in order to create a reproducible example:
```
set.seed(1)
n <- 10
dat <-data.frame(thesis = rep(c('Yes', 'No'), each = n / 2),
satisfaction = c(sample(5, replace = TRUE, size = n / 2), rep(NA, n / 2)))
dat$satisfaction_without_NA <- ifelse(is.na(dat$satisfaction),
0,
dat$satisfaction)
(X <- model.matrix(~ thesis * satisfaction_without_NA, data = dat)[, -3])
beta <- c(8, 7, 1)
dat$y <- X %*% beta + rnorm(n)
```
The data are supposed to be the result of a survey, where the competence of students, denoted `y` is measured in a test. The following data are recorded for each participant:
- Has the student started a thesis project already? (Column thesis)
- If the student has started a thesis project, how satisfied is he or she with the guidance from the supervisor? (Column satisfaction, NA when the student has not started yet)
I am considering the following model:
- For people who have not started yet:
$$
y = \beta_0 + \epsilon
$$
- For people who have already started:
$$
y = \beta_0 + \beta_1 + \beta_2 \text{satisfaction} + \epsilon
$$
With the `offset` command I can only force coefficients to the value 1 as far as I can see, and, furthermore, I was not successful in applying the `offset` command to force a coefficient to 1 in this case.
The following workarounds seem to work:
```
# option 1:
lm.fit(X, dat$y)$coefficients
# option 2:
new_dat <- data.frame(X, y = dat$y)
lm(y ~ . - 1, data = new_dat)
# option 3, most simple approach:
lm(y ~ thesis + satisfaction_without_NA, data = dat)
lm(y ~ thesis * satisfaction_without_NA, data = dat)
```
So here are my questions:
- Does the proposed statistical model make sense? Are there any statistical problems / caveats associated with this approach?
- Does anyone know a more elegant way how to specify the model in R?
- Does anyone know pointers to resources that give more information how such a situation can be handled reasonably?
Many thanks in advance and best greetings,
Sebastian
PS: Here a [link](https://gist.github.com/sebastian-gerdes/e781fe08922ce68481353eae35b13a73) to the quarto file of this question.
| How to accomodate for missingness in column dependent on value of other column | CC BY-SA 4.0 | null | 2023-05-25T10:52:24.450 | 2023-05-30T07:11:12.637 | 2023-05-25T10:58:05.897 | 388806 | 388806 | [
"r",
"interaction",
"linear-model",
"missing-data"
]
|
616883 | 1 | null | null | 0 | 86 | I would like to request some help from the data science community.
My task with machine learning is to achieve the following in `pyTorch`:
I have an equation given by:
$$
\frac{\mathrm{d} s}{\mathrm{d} t}=4a−2s+\lambda(s)
$$
Where $a$ is an input constant and $\lambda$ is a non-linear term that depends on $s$.
I know that the true solution for $\lambda\left(s\right)$ is $\sin \left( s\right) \cos\left(s\right)$
I have generated data from [0, 5]s with the true solution and with the equation excluding the non-linear term for a range of initial conditions and a range of $a$'s.
```
import numpy as np
import pandas as pd
from scipy.integrate import solve_ivp
def foo_true(t, s, a):
ds_dt = 4*a -2*s + np.sin(s)*np.cos(s)
return ds_dt
def foo(t, s, a):
ds_dt = 4*a -2*s
return ds_dt
# Settings:
t = np.linspace(0, 5, 1000)
a_s = np.arange(1, 10)
s0_s = np.arange(1, 10)
# Store the data
df_true = pd.DataFrame({'time': t})
df = pd.DataFrame({'time': t})
# Generate the data
for a in a_s:
for s0 in s0_s:
sol_true = solve_ivp(foo_true, (t[0], t[-1]), (s0,), t_eval=t, args=(a,))
df_true[f'{a}_{s0}'] = sol_true.y.T
for a in a_s:
for s0 in s0_s:
sol = solve_ivp(foo, (t[0], t[-1]), (s0,), t_eval=t, args=(a,))
df[f'{a}_{s0}'] = sol.y.T
```
In the above code, `df_true` is a data frame containing the true dynamics of the system and `df` the dynamics with a discrepancy.
The figure below shows an example of the discrepancy in the data:
[](https://i.stack.imgur.com/0sQW7.png)
Given that I know part of the physics, how can I model $\lambda$?
Does anyone know of a paper/repo with a conceptually similar example that I can look into?
Best regards
| Modeling uncertainty from known physics | CC BY-SA 4.0 | null | 2023-05-25T10:57:07.160 | 2023-05-31T12:02:55.583 | 2023-05-31T12:02:55.583 | 388802 | 388802 | [
"neural-networks"
]
|
616884 | 1 | null | null | 1 | 57 | Context:
...
The public school system in the city where I live has high demand and low supply. You apply to some schools in order of preference and there is a scoring system to give priority to all applicants based on some factors (having another child in the same school, home proximity, job proximity, social factors, etc.).
For every year group in every school there is a variable number of available spots.
Despite the scoring system, many applicants end up with the same score, and to resolve this, there is a special lottery draw every year. This single draw resolves the order of access for all ties in all classes of all schools in the city.
...
We are currently waiting for the lottery draw and we already know how many applicants and vacancies there are for our chosen school and class.
Some vacancies will be filled by applicants with a higher score than ours, and many applicants won't get access since they have fewer points than us. We are currently 8 applicants tied for 7 remaining vacancies.
According to them, the lottery procedure is the following:
9 withdrawals are made from a bag with the numbers 0 to 9, reintroducing the ball after each extraction. From these, we obtain the first, second, through ninth digits of a number between 000,000,000 and 999,999,999. This number is divided by the total number of requests to obtain the quotient and the remainder. The result of the draw will be the next integer after the remainder of the division. This number will be the starting position from which to order the applicants' draw numbers.
This lottery will be public and live streamed. We have already been given a "random" draw number; no one can tell me how it was generated. I have also asked what the total number of requests is, since I believe the draw number range should be the same as the total number of requests. Their response is that they won't know the total number of requests until the day of the lottery draw.
Here is the list of tied applicants and the draw numbers assigned. Applicants with a higher score have already been removed (as well as the vacancies they will fill). Applicants with a lower score have also been removed from the list, since there are no vacancies for them. I have ordered the rest by draw number:
|applicant_id |draw_number |
|------------|-----------|
|We |163 |
|n_2 |190 |
|n_3 |412 |
|n_4 |595 |
|n_5 |691 |
|n_6 |738 |
|n_7 |907 |
|n_8 |1157 |
Apparently, we are in good shape since with 7 positions and 8 applicants there are just 27 unfavorable results for us (results from 164 to 190). However, if the total number of requests would be lower than the draw number range, it could mean some applicants would already be accepted BEFORE this lottery draw.
For instance, if the total number of requests were 900, n_7 would effectively be accepted, since the remainder of the division can't be higher than the divisor (the total number of requests), making a result above 907 impossible no matter the outcome of the lottery. Conversely, a total number of requests higher than 1157 would benefit us, since all higher results would put us first in the list.
I'm not at all an expert, but I find it odd that they put so much effort into making this lottery draw transparent (public, live streamed, etc.) yet say nothing about how the draw numbers are assigned or about knowing the total number of requests. Of course, I could be missing something.
| Is this lottery draw fair? | CC BY-SA 4.0 | null | 2023-05-25T11:13:41.717 | 2023-05-25T19:08:56.470 | 2023-05-25T15:12:01.730 | 96022 | 96022 | [
"probability"
]
|
616885 | 1 | null | null | 1 | 50 | I have 4 data points that represent the average cycle time for a process in one month. The data is of the full population, not a sample. I am trying to determine if there has been a significant change over those 4 months. Here are the data points, with 't' representing the month and 'ct' the cycle time:
```
t ct
1 1 48.33
2 2 42.71
3 3 33.42
4 4 34.17
```
I have tried a linear regression model. Here is my code and the results from that model:
```
#create data set
ct <- c(48.33, 42.71, 33.42, 34.17)
t <- c(1,2,3,4)
df <- data.frame(t,ct)
#linear model
mod1 <- lm(ct ~ t, data = df)
summary(mod1)
```
Model results:
```
Call:
lm(formula = ct ~ t, data = df)
Residuals:
1 2 3 4
0.907 0.464 -3.649 2.278
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 52.600 3.828 13.739 0.00526 **
t -5.177 1.398 -3.703 0.06580 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.126 on 2 degrees of freedom
Multiple R-squared: 0.8727, Adjusted R-squared: 0.8091
F-statistic: 13.71 on 1 and 2 DF, p-value: 0.0658
```
While this shows a p-value close to 0.05, I don't think this approach is valid since I only have 4 data points and the diagnostic plots indicate violations of the assumption of normality.
I have also created a time series object with the data and tried the MannKendall test:
```
#create a time series object
ct_ts <- ts(ct, start = c(2023,1), frequency = 12)
MannKendall(ct_ts)
```
Results:
```
tau = -0.667, 2-sided pvalue =0.30818
```
This also shows a p-value > 0.05, however I have read that you should have at least 8-10 data points for the test.
Lastly, I have considered creating confidence intervals for each mean and seeing if there is any overlap. The basis for this is a paper that I read by [David Edelman](https://doi.org/10.2307/2684348) on "A Confidence Interval for the Center of an Unknown Unimodal Distribution Based on a Sample of Size 1." The application here, I think, would be to treat each data point as our sample of size 1 and calculate the respective CI. If they overlap, then we could suggest that there are not significant differences and that the points come from the same distribution with any difference attributable to chance variation.
Is there a better way to approach this? Perhaps with a one-sample T-test?
| Can you test for significant changes across 4 data points that represent the average cycle time in 1 month? | CC BY-SA 4.0 | null | 2023-05-25T11:49:10.243 | 2023-06-01T08:51:04.250 | 2023-06-01T08:51:04.250 | 388797 | 388797 | [
"time-series",
"confidence-interval"
]
|
616886 | 2 | null | 616667 | 3 | null |
- You're likely better off modelling the residence time itself, not it's log transform, using a non-Gaussian family, but that could well be tricky if you do actually need the autocorrelation structure.
- You can't use the correlation with gam(); you can only use this with gamm(), FYI; gam() will silently ignore this argument if you provide it.
- There shouldn't really be a relationship between day_ and year_, their effects are unlikely to be similar; one is a within-year effect, the other a between-year effect. There may be an interaction between these two variables, if say the seasonal pattern has changed over time due to say climate change. These variables are not dependent however.
- The correlation structure available via gamm() (provided by the nlme package) don't account for correlation or dependence among covariates. They account for un-modelled correlation among observations.
- The correlation structures available do not cover seasonal ARMA models, only ARMA models.
- corARMA(form = ~ 1 | year_, p = 3) and corARMA(form = ~ day_ | year_, p = 3) should give the same results if the data are in time order within year_. Both will describe an AR(3) nested within year, using the same autocorrelation parameter $\rho$.
- As mentioned in 4. you only need it if there is unmodelled temporal autocorrelation. With 40 df for the day_ term, you can model quite a complex seasonal effect so maybe you won't need the AR(3)? You might want to consider using te(day_, year_, ...) to allow the seasonal signal to vary with the trend, and then you might not need the autocorrelation.
- Whether you need the autocorrelation structure will depend on how wiggly you want the smooths of day_ and year_ to be; all else equal, you can model the autocorrelation with wiggly smooth functions and then you won't need the autocorrelation structure. But if you are trying to estimate the seasonal and between-year trends without this autocorrelation (so you want simpler, smoother functions) then it is likely that you'll need the autocorrelation structure if you have data recorded at sufficient time resolution to separate the larger scales of temporal variation (seasonal and long-term trend) from the short-scale variation of the autocorrelation.
- You can fit models with no AR(p), an AR(1), an AR(2), and an AR(3), and then use tools to select among these 4 models; a minimal sketch of such a comparison follows below. I have some example code showing how to do this (for models with AR(1) vs AR(2), for example): https://fromthebottomoftheheap.net/2016/03/25/additive-modeling-global-temperature-series-revisited/
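To make the last point concrete, here is a minimal sketch of that comparison. The response `resid_time`, the covariates `day_` and `year_`, the basis dimension and the data frame `dat` are placeholders based on the discussion above, so adapt them to your actual model (and extend to an AR(3) in the same way):
```
library(mgcv)   # attaches nlme, which provides corARMA()

## no autocorrelation, AR(1), and AR(2) versions of the same model
m0 <- gamm(resid_time ~ s(day_, k = 40) + s(year_), data = dat)
m1 <- gamm(resid_time ~ s(day_, k = 40) + s(year_), data = dat,
           correlation = corARMA(form = ~ 1 | year_, p = 1))
m2 <- gamm(resid_time ~ s(day_, k = 40) + s(year_), data = dat,
           correlation = corARMA(form = ~ 1 | year_, p = 2))

## gamm() returns a list with $gam and $lme parts; compare the $lme parts
anova(m0$lme, m1$lme, m2$lme)
```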
| null | CC BY-SA 4.0 | null | 2023-05-25T11:59:23.813 | 2023-05-25T11:59:23.813 | null | null | 1390 | null |
616887 | 1 | null | null | 0 | 36 | I am currently working on a very imbalanced dataset:
- 24 million transactions (rows of data)
- 30,000 fraudulent transactions (0.1% of total transactions)
The dataset is split by year into three sets: training, validation and test. I am using XGBoost as the model to predict whether a transaction is fraudulent or not. After tuning some hyperparameters via Optuna, I obtained the following results.
Model parameters and loss
```
from sklearn.metrics import accuracy_score, classification_report, precision_score, recall_score, f1_score, roc_auc_score, precision_recall_curve, auc, average_precision_score, ConfusionMatrixDisplay, confusion_matrix
import matplotlib.pyplot as plt
evalset = [(train_X, train_y), (val_X,val_y)]
params = {'lambda': 4.056095667860487, 'alpha': 2.860539790760471, 'colsample_bytree': 0.4, 'subsample': 1, 'learning_rate': 0.03, 'n_estimators': 300, 'max_depth': 44, 'random_state': 42, 'min_child_weight': 27}
model = xgb.XGBClassifier(**params, scale_pos_weight = estimate, tree_method = "gpu_hist")
model.fit(train_X,train_y,verbose = 10, eval_metric='logloss', eval_set=evalset)
```
```
[0] validation_0-logloss:0.66446 validation_1-logloss:0.66450
[10] validation_0-logloss:0.45427 validation_1-logloss:0.45036
[20] validation_0-logloss:0.32225 validation_1-logloss:0.31836
[30] validation_0-logloss:0.23406 validation_1-logloss:0.22862
[40] validation_0-logloss:0.17265 validation_1-logloss:0.16726
[50] validation_0-logloss:0.13003 validation_1-logloss:0.12363
[60] validation_0-logloss:0.09801 validation_1-logloss:0.09230
[70] validation_0-logloss:0.07546 validation_1-logloss:0.06987
[80] validation_0-logloss:0.05857 validation_1-logloss:0.05278
[90] validation_0-logloss:0.04581 validation_1-logloss:0.04001
[100] validation_0-logloss:0.03605 validation_1-logloss:0.03058
[110] validation_0-logloss:0.02911 validation_1-logloss:0.02373
[120] validation_0-logloss:0.02364 validation_1-logloss:0.01859
[130] validation_0-logloss:0.01966 validation_1-logloss:0.01472
[140] validation_0-logloss:0.01624 validation_1-logloss:0.01172
[150] validation_0-logloss:0.01340 validation_1-logloss:0.00927
[160] validation_0-logloss:0.01120 validation_1-logloss:0.00752
[170] validation_0-logloss:0.00959 validation_1-logloss:0.00616
[180] validation_0-logloss:0.00839 validation_1-logloss:0.00515
[190] validation_0-logloss:0.00725 validation_1-logloss:0.00429
[200] validation_0-logloss:0.00647 validation_1-logloss:0.00370
[210] validation_0-logloss:0.00580 validation_1-logloss:0.00324
[220] validation_0-logloss:0.00520 validation_1-logloss:0.00284
[230] validation_0-logloss:0.00468 validation_1-logloss:0.00253
[240] validation_0-logloss:0.00429 validation_1-logloss:0.00226
[250] validation_0-logloss:0.00391 validation_1-logloss:0.00205
[260] validation_0-logloss:0.00362 validation_1-logloss:0.00191
[270] validation_0-logloss:0.00336 validation_1-logloss:0.00180
[280] validation_0-logloss:0.00313 validation_1-logloss:0.00171
[290] validation_0-logloss:0.00291 validation_1-logloss:0.00165
[299] validation_0-logloss:0.00276 validation_1-logloss:0.00161
```
Learning curve
[](https://i.stack.imgur.com/vHfsE.png)
F1 and PR AUC scores
```
F1 Score on Training Data : 0.8489783532267853
F1 Score on Test Data : 0.7865990990990992
PR AUC score on Training Data : 0.9996174980952233
PR AUC score on Test Data : 0.9174896435002448
```
Classification reports of training/testing sets
```
Training report
precision recall f1-score support
0 1.00 1.00 1.00 20579668
1 0.74 1.00 0.85 25179
accuracy 1.00 20604847
macro avg 0.87 1.00 0.92 20604847
weighted avg 1.00 1.00 1.00 20604847
Test report
precision recall f1-score support
0 1.00 1.00 1.00 2058351
1 0.95 0.67 0.79 2087
accuracy 1.00 2060438
macro avg 0.98 0.83 0.89 2060438
weighted avg 1.00 1.00 1.00 2060438
```
Confusion matrices (1st is training set, 2nd is testing set)
[](https://i.stack.imgur.com/Bzthq.png)
[](https://i.stack.imgur.com/nRvhK.png)
I see that the PR AUC on the training dataset is nearly 1 and it has a perfect recall score, so I suspect that my model is overfitting. However, when I evaluate the model on the validation and test sets, the results are not too far off and still achieve what I believe to be decent scores.
I would love to hear your thoughts on this. Thank you all in advance; I would appreciate any response!
| Is my model overfitting? | CC BY-SA 4.0 | null | 2023-05-25T11:59:40.043 | 2023-05-25T14:32:47.487 | null | null | 383080 | [
"machine-learning",
"classification",
"unbalanced-classes",
"overfitting"
]
|
616888 | 2 | null | 616828 | 0 | null | Upon further reflection, I think this is a case where the two tests of significance may disagree.
While I had only constructed a CI for the percent change, it's also reasonable to test whether that percent change differs from 0 with a z-score:
z = abs((-47.12-0)/sqrt((37.66/1.645)^2+0^2)) = 2.06...
Since 2.06 > 1.645, we see that this percent change is significant (as confirmed by its confidence interval not crossing 0).
It is surprising to me that the same set of numbers can produce significance or non-significance depending on the measure you choose to test (raw difference vs. percentage change). Statistics proves to be an ever-manipulatable science.
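For anyone who wants to reproduce the arithmetic, here is the same calculation transcribed into R (reading 37.66/1.645 as backing a standard error out of a margin of error is my interpretation of the formula above, not something stated explicitly):
```
## transcription of the z calculation above
pct_change <- -47.12
se <- 37.66 / 1.645                       # implied standard error
z  <- abs((pct_change - 0) / sqrt(se^2 + 0^2))
z            # about 2.06
z > 1.645    # TRUE, matching the conclusion above
```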
| null | CC BY-SA 4.0 | null | 2023-05-25T12:04:39.223 | 2023-05-25T12:04:39.223 | null | null | 388763 | null |
616889 | 2 | null | 616879 | 1 | null | As @J. Delaney noticed, you didn't normalize it properly.
$$
\Pr(X \in A_i) = \int_{A_i} p(x) dx
$$
so
$$
\int_{A_i} f(x) p(x) dx
$$
is not the expected value as it integrates over just a part of the support for $x$. So you are not summing the expected values. The integration [is correct](https://www.khanacademy.org/math/ap-calculus-ab/ab-integration-new/ab-6-8c/v/definite-integrals-of-piecewise-functions), you just misunderstood the integrals over the $A_i$ subsets as expected values.
If you had [conditional expectations](https://en.wikipedia.org/wiki/Conditional_expectation)
$$
E[f(X) | X \in A_i] = \int f(x) \frac{P(X = x, X \in A_i)}{P(X \in A_i)} dx
$$
then indeed you would have to use the weighted average of those with the weights equal to $P(X \in A_i)$ to cancel the denominator above (see also the [law of total expectation](https://en.m.wikipedia.org/wiki/Law_of_total_expectation)).
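Here is a small numerical illustration of the point; the choices $X \sim N(0,1)$, $f(x) = x^2$ and this particular partition are arbitrary:
```
## X ~ N(0,1), f(x) = x^2, partition A1 = (-Inf,0], A2 = (0,1], A3 = (1,Inf)
f <- function(x) x^2
integrand <- function(x) f(x) * dnorm(x)
breaks <- c(-Inf, 0, 1, Inf)

pieces <- sapply(1:3, function(i) integrate(integrand, breaks[i], breaks[i + 1])$value)
sum(pieces)               # = E[f(X)] = 1: the pieces already add up, no weights needed

## conditional expectations do need the probability weights
probs <- sapply(1:3, function(i) pnorm(breaks[i + 1]) - pnorm(breaks[i]))
cond  <- pieces / probs   # E[f(X) | X in A_i]
sum(cond)                 # not E[f(X)]
sum(probs * cond)         # = 1 again (law of total expectation)
```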
| null | CC BY-SA 4.0 | null | 2023-05-25T12:15:27.147 | 2023-05-26T07:51:28.230 | 2023-05-26T07:51:28.230 | 35989 | 35989 | null |
616890 | 2 | null | 616880 | 0 | null | No, the absence of autocorrelation does not imply independence. To statistically check whether a given time series is autocorrelated, you can use the Ljung-Box test; there are also many other tests with the same aim. You can also plot the histogram, the normal Q-Q plot, and the residuals-vs-predicted graph; with those you can visually check whether the residuals follow a normal distribution.
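A minimal sketch in R of the checks mentioned above; the vector `res` is a placeholder for the residual series you actually want to examine:
```
set.seed(1)
res <- rnorm(100)                            # placeholder residual series

Box.test(res, lag = 10, type = "Ljung-Box")  # Ljung-Box test for autocorrelation

hist(res)                                    # histogram
qqnorm(res); qqline(res)                     # normal Q-Q plot
## for a fitted model, the residuals-vs-predicted plot would be
## plot(fitted(fit), residuals(fit))
```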
| null | CC BY-SA 4.0 | null | 2023-05-25T12:31:53.923 | 2023-05-25T12:31:53.923 | null | null | 388814 | null |
616891 | 1 | null | null | 1 | 21 | I want to fit a mixed linear model with two hierarchical random effects. I want to determine whether these random effects are worth including in the model.
I have tried so far doing :
```
library(nlme)
# Model with random effects but no fixed effects
lme1 <- lme(var ~ 1, random=~1|r.var1/r.var2,
correlation = corCompSymm(form=~1|var1/var2),data)
# Model without random effects
gls.data <- gls( var ~ 1, data)
# Compare the two
anova(lme1 , gls.data)
```
Which gives :
```
call Model df AIC BIC logLik Test L.Ratio p-value
lme1 1 5 -3776601 -3776542 1888305
gls.data 2 2 -3718476 -3718452 1859240 1 vs 2 58130 <.0001
```
The problem is: I have a very large dataset (>1 million observations). Statistical tests tend to always give significant results on big datasets. My question is: do I include these random effects in my model or not? (And also, am I doing the right test?)
Thanks
P.S.: I am not really a statistician, so if the answer must contain a lot of matrices and statistical calculations, would it be possible to explain it in detail?
| Testing signifiance of random effects on large dataset | CC BY-SA 4.0 | null | 2023-05-25T12:57:45.413 | 2023-05-25T12:57:45.413 | null | null | 384253 | [
"r",
"mixed-model",
"model-selection"
]
|
616892 | 2 | null | 256203 | 1 | null | For the differential entropy there also exists another, more mathematical interpretation, which is closely related to the bit-interpretation for the entropy.
The differential entropy describes the equivalent side length (in logs) of the set that contains most of the probability of the distribution.
This is nicely illustrated and explained in Theorem 8.2.3 in Elements of Information Theory by Thomas M. Cover, Joy A. Thomas
## Intuitive Explanation
In non-rigorous terms, this statement means the following:
Let's assume we have a multivariate probability distribution with entropy $h$.
The region that contains most of the probability mass of this distribution (apart from a negligible amount) can be described by some volume.
If we assume we describe this volume by some hypercube with sides of equal length (= equivalent side lengths), then this side length is equal to $2^h$.
Intuitively this means, that if we have a low entropy, the probability mass of the distribution is confined to a small area.
Vice versa, high entropy tells us that the probability mass is spread widely across a large area.
## Mathematical View
In actual notation, the theorem states the following
$(1 - \epsilon) 2^{n(h(X) - \epsilon)} \leq \text{Vol}(A_{\epsilon}^{(n)}) \leq 2^{n(h(X) + \epsilon)}$,
where $X$ is a random variable with the distribution of interest, $\epsilon$ is a real number, $A_{\epsilon}^{(n)}$ is a set, $h(X)$ is the differential entropy of $X$ and $n$ (required to be large) is the dimension of $X$.
This implies that "$A_{\epsilon}^{(n)}$ is the smallest volume set with probability $1-\epsilon$, to first order in the exponent." (Elements of Information Theory by Thomas M. Cover, Joy A. Thomas, Wiley, Second Edition, 2006)
## Relation to entropy of discrete probability distributions
This interpretation of differential entropy is closely related to the entropy for discrete distributions.
Discrete Case: As OP stated, the entropy tells us how many bits are needed to encode a message given a probability distribution over words.
Continuous case: Here we are dealing with continuous support. For example, let's assume the support is on the real line $\mathbb{R}$. The differential entropy tells us, how long the interval on the real line has to be to capture almost all information contained in the probability distribution.
- If we have a widely spread distribution -> the entropy will be high
- If we have a sharp distribution, most probability mass will be in a small interval -> the entropy will be low.
## Example with $N(0,1)$
The entropy of a standard normal distribution with $\sigma^2 = 1$ is
$\frac{1}{2}\text{ln}(2\pi \sigma^2) + \frac{1}{2} = \frac{1}{2}\text{ln}( 2 \pi) + \frac{1}{2}$
We can visualize this with a small code example in Python:
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
ys = np.random.normal(size = 10000)
h = 0.5*np.log(2*np.pi*np.exp(1))
side_length = 2**h
sns.kdeplot(ys, fill = True)
plt.vlines(x = side_length/2, ymin = 0, ymax = 0.4, color = 'red', linestyles = 'dashed')
plt.vlines(x = -side_length/2, ymin = 0, ymax = 0.4, color = 'red', linestyles = 'dashed')
```
This side length captures a large portion of the probability mass in this distribution:
[](https://i.stack.imgur.com/hnX7I.png)
The interval between the red lines is $2^h$. As in this case $n$ is only 1 (and the Theorem above requires $n$ to be large), we can clearly see that the entropy is not exactly the equivalent side length of the volume that captures almost all probability mass.
This graph also explains why for the Gaussian, the mean does not affect the differential entropy: No matter where I shift the distribution to - the equivalent side length will stay the same and is only influenced by the variance.
| null | CC BY-SA 4.0 | null | 2023-05-25T13:00:28.227 | 2023-05-25T13:00:28.227 | null | null | 326617 | null |
616893 | 1 | null | null | 1 | 18 | I'd like to ask if somebody can help me understand these plots, in particular what the dispersion of the dots means. I can't work out how to read these plots.
Link to the article [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7768686/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7768686/)
Thank you! [](https://i.stack.imgur.com/TalVs.jpg)
| Interpretation of plots (PCA, Volcano) | CC BY-SA 4.0 | null | 2023-05-25T13:15:29.743 | 2023-05-25T13:15:29.743 | null | null | 388816 | [
"pca"
]
|
616896 | 1 | null | null | 0 | 8 | I want to compare two interval-scale independent variables with a t-test. How can I compare them?
| Comparing two interval independent variables with a two-sample t-test | CC BY-SA 4.0 | null | 2023-05-25T13:44:39.783 | 2023-05-25T13:44:39.783 | null | null | 388817 | [
"t-test",
"variable"
]
|
616897 | 2 | null | 571779 | 0 | null | I had the same question. My understanding is:
- when the distribution linked to your test-statistics (e.g. z-value, t-value, chi-square, incidence rate ratios) depends on the number of degrees of freedom, then your p-value will not only depend on the type of distribution (e.g. Gaussian, Student's t, Poisson,...) and the test-statistics, but (of course) also on the number of degrees of freedom.
- examples of distributions that depend on the number of degrees of freedom: Chi-Square distribution, Student's t distribution (here you should report test-statistics, degrees of freedom and p-values)
- examples of distributions that do not depend on the number of degrees of freedom: Gaussian distribution, Poisson distribution (here you should just report test-statistics and p-values).
| null | CC BY-SA 4.0 | null | 2023-05-25T13:46:10.643 | 2023-05-25T13:46:10.643 | null | null | 181921 | null |
616898 | 1 | null | null | 0 | 17 | I calculated a directionality map, where at each point `(x,y)` I have a jackknife sample (n=8) of 2d vector displacements $\vec{D}_k$ ($k=1,...,8$ are the black vectors in the figure below; red vector -> "centroid" or average weighted by magnitude; magenta -> simple mean vector). The figure represents a single `(x,y)` position in the map.
[](https://i.stack.imgur.com/ICKnu.png)
I want to know if these vectors have a "directionality trend" and if this trend is significant.
For example: I could assume that if the sample vectors are (in average) pointing within a $\Delta\theta$ angle about the mean or centroid with `p<0.05`, then there is a trend and the mean direction is significant. $\Delta\theta$ is a free parameter, for example.
I have very little knowledge of statistics (although I have implemented t-tests before, and I understand correlation, covariance, and can calculate it in the context of Statistical Physics). So the more details the better (I'm using `MATLAB` for this project).
PS1: ChatGPT gave me many suggestions, although most of them didn't seem to directly apply here. One seemed useful: it suggested I could use the Wilcoxon signed rank test for each of the vectors' components separately; the nice thing is that it's implemented in matlab as `signrank`. The problem I see here is that this would compare to a zero median, and then it can only tell me if the trend is in one of the four quadrants relative to a median of the x or y coordinates of $\vec{D}$.
PS2: I thought of calculating the eigenvalues and eigenvectors of the covariance matrix between the x and y coordinates of the $\vec{D}_k$ sample. It gives the ellipse within which the vectors lie... But then what?
PS3: I found these two similar questions, but none of them exactly matches my problem, nor they have solutions that would be directly applicable here:
- Statistical test for comparing directional data
- How to test if two sets of vectors are statistically different from each other
| How to test the directionality trend and significance of a set of vectors? | CC BY-SA 4.0 | null | 2023-05-25T13:46:49.113 | 2023-05-25T13:46:49.113 | null | null | 13542 | [
"hypothesis-testing",
"statistical-significance",
"multivariate-analysis",
"trend",
"vector-fields"
]
|
616899 | 1 | 616918 | null | 10 | 1107 | I'm comparing two groups of people on a binary variable. The number of successes is really small for the two groups (which wasn't expected, due to a lack of previous quantitative research on the subject) and the number of total observations is not so great (450), so we have group A with 2 successes on 225 observations, and group B with 10 successes on 225 observations.
The p-value from a chi-squared test is significant but really close to the pre-set alpha level (the significance level was 0.05 and the p-value is about 0.04). A problem I see is that for example just one more success in group A would make the test non significant.
For these two reasons (small numbers + p-value really close to the significance level), I have serious doubts about the "significant" result being in any way reliable.
Are there some good references that can confirm (or infirm, for that matter) my intuition, and that I could refer my readers to as a form of caveat?
I still want to report the p-value but attach a caveat to it, to show to interested people that a larger sample size would be required to reproduce the study and get a more reliable result. There's some qualitative research data hinting to a difference between the two groups, but for the moment there's a lack of reliable statistical/quantitative research that would show the same thing or contradict it.
Thanks for any pointer!
| References on how to interpret significant but dubious results (i.e. small numbers, plus borderline p-value) | CC BY-SA 4.0 | null | 2023-05-25T13:56:04.233 | 2023-06-01T11:56:36.653 | 2023-05-25T15:20:26.663 | 388818 | 388818 | [
"statistical-significance",
"chi-squared-test",
"references"
]
|
616902 | 1 | null | null | 1 | 19 | I am working on a project that is seeking to use the synthetic control method to estimate the effect of Costa Rica's military abolition on levels of democracy (0-1 interval). I'll attach my code here and plot the time series to demonstrate the initial issue I am having:
```
library(tidysynth)
synth.opto <- latin.am %>%
synthetic_control(outcome = democracy,
unit = country_name,
time = year,
i_unit = "Costa Rica",
i_time = 1949,
generate_placebos = T) %>%
generate_predictor(time_window = 1935:1949,
l.gdppc = mean(lgdppc, na.rm = T),
l.milspend = mean(lmilspend, na.rm = T),
pr = mean(prop.rights, na.rm = T),
dist = mean(distribution, na.rm = T),
ce = mean(clean.elec, na.rm = T),
pol.vi = mean(pol.v, na.rm = T),
phys.vi = mean(phys.v, na.rm = T)
) %>%
generate_predictor(time_window = 1935,
dem.1935 = mean(democracy)) %>%
generate_predictor(time_window = 1937,
dem.1937 = mean(democracy)) %>%
generate_predictor(time_window = 1939,
dem.1939 = mean(democracy)) %>%
generate_predictor(time_window = 1941,
dem.1941 = mean(democracy)) %>%
generate_predictor(time_window = 1943,
dem.1943 = mean(democracy)) %>%
generate_predictor(time_window = 1945,
dem.1945 = mean(democracy)) %>%
generate_predictor(time_window = 1947,
dem.1947 = mean(democracy)) %>%
generate_predictor(time_window = 1949,
dem.1949 = mean(democracy)) %>%
generate_weights(optimization_window = 1935:1949,
margin_ipop = .02, sigf_ipop = 7, bound_ipop = 6) %>%
generate_control()
synth.opto %>% plot_trends()
```
[](https://i.stack.imgur.com/M5lDL.png)
As you can see, the synthetic control does a decent job at predicting Costa Rica's democracy until the years 1946-1949. This isn't unexpected. Prior to Costa Rica's military abolition, a civil war in 1948 preceded it with increasing levels of political violence leading up to the war. Accordingly, I thought I had found the source of the prediction error. However, as noted in the code here, I incorporated variables that should explain this drop in democracy as covariates for the synthetic unit to be trained on (clean election index, physical violence, political violence). Despite incorporating these variables, the prediction error still exists. In fact, it barely improves.
When I inspect the weights, I notice that they overwhelmingly are sourced from two countries, Panama and Chile. The magnitude of the weights hardly changes when I included the violence predictors, which I imagine is part of the reason why these predictors did little to reduce the prediction error (Neither Chile nor Panama experience comparable levels of violence in this 4-year period). Why is this? Should the allocation of the weights not change in response to the set of covariates? Secondly, what is one to do if the source of the prediction error is known, but including covariates that should cover this error does not work? Is the pre-treatment difference between synthetic and real Costa Rica too large in this example to continue with the project?
| Tuning a Synthetic Control | CC BY-SA 4.0 | null | 2023-05-25T14:08:34.713 | 2023-05-25T14:08:34.713 | null | null | 360805 | [
"causality",
"synthetic-controls"
]
|
616903 | 2 | null | 616884 | 1 | null | The lottery aspect seems fair. It is not transparent how the draw_numbers are generated, but if they are random then it doesn't matter whether or not the final draw for the starting position seems more or less beneficial for different people. The draw_numbers being random means that anybody could have had this beneficial position.
However, the matching system does not need to be fair. These problems have to balance several aspects, and none of them are fully optimised in any algorithm (see for example [the stable marriage problem](https://en.m.wikipedia.org/wiki/Stable_marriage_problem) and, for a more in-depth treatment, this article: ["The Performance of School Assignment Mechanisms in Practice"](https://www.journals.uchicago.edu/doi/abs/10.1086/721230?journalCode=jpe)). The systems have aspects like
- stability
Whether or not there can be people that would like to change after the matching is over (people that somehow end up with each other's preference)
- strategy proof
Whether or not the applicants can fill in a different order of preferences than their true preferences in order to gain some advantage in the matching. E.g. if your second choice is a school with a lot of places free then you might not choose this (and choose a school with few places) if this can increase your chances for your first option.
- efficiency
Do most applicants reach their most preferred places? (This is a vague expression and can be quantified in different ways, but the point is whether, on average, students end up in the best places.)
- fairness/envy-free
Do students get in a place with equal probability? Is the placement of students fair.
Example of envy: Your lottery system seems almost like this. Without the rule,
>
Someone with a lower score in first choice gets preference over another with higher score but second choice.
, then there is a single tie-breaking lottery for all students. Say if the starting position is 191, then it seems like you are through because draw_number 190 has bad luck and you with draw_number 163 are in. However, it can be the case that the applicant with draw number 162 is missing out on their first, second, third, etc choice, because those schools are already full. Then, if their last choice is your first choice school, then they get in instead of you. Is it fair that somebody with a school on their last place of choice kicks out somebody with that school on the first place of their list of choices?
Example of strategy behaviour:
With the rule,
>
Someone with a lower score in first choice gets preference over another with higher score but second choice.
it means that the system creates an incentive for strategic behaviour. People might fear to give their first choice if this is a popular school with few places, because it might disadvantage them in their second choice.
| null | CC BY-SA 4.0 | null | 2023-05-25T14:11:13.147 | 2023-05-25T19:08:56.470 | 2023-05-25T19:08:56.470 | 164061 | 164061 | null |
616904 | 1 | 617529 | null | 3 | 74 | I want to assess the importance/association of many features (continuous) with a Y variable (continuous and, in most cases, skewed/normally distributed). So far I have tried running linear regression and extracted the beta coefficients for each X variable against Y. However, I am wondering if I could approach this task differently in a way that would not require linearity. Note, I have a thousand Y variables, and want to associate these with dataframe X with 1000s of columns (each Y variable independently), systematically.
An approach I've considered is turning this into a classification problem, where I split the Y variable into high (1) and low (0) classes, and run logistic regression with the complete X dataset, and use the logistic regression coefficients to determine the contribution of each feature to the identification of Y. The aim would be to trim down the dataset with "important" features which can identify Y. Alternatively, I was thinking about performing this with random forest, and identifying feature importance through this approach.
Would an approach like this make sense, or is this overkill? As a more direct method I'm simply calculating the fold difference between classes 1 and 0, but wanted to explore other methods for this, which could potentially spot feature interactions. Note at this stage I have removed redundant X features such as those with many missing values/low variance.
EDIT
The aim of this is similar to that in a machine learning setting where the features that give the most predictive power are used to boost model performance. Note however that I am not building a model for prediction use, simply to identify features/combination of features that influence Y.
I should also add that there is a great imbalance between the number of observations (X rows) and features (~3000 X cols). It varies between datasets, but the observation number is between 300-800, but can be as low as 30. Thus, I do not expect to build a model with high predictive power, but simply identify any interesting features.
I have tried this using the Y variable split mentioned above, and random forest feature importance, which does identify some features which show difference between the two Y conditions. I'm aware of the problem of overfitting, especially where I have less data.
A question I have on this is: Considering I am only using the random forest for identifying important features, and not external predictions, is overfitting as much of an issue as it is in the conventional sense?
EDIT 2
mkt made a good point on defining variable importance. Reading a suggested thread [What are variable importance rankings useful for?](https://stats.stackexchange.com/questions/202277/what-are-variable-importance-rankings-useful-for) highlighted some interesting points for me.
In my case, I wish to identify key features in relation to Y that I can identify and study in external datasets. This is not feasible with >1000 features, especially as many will have no observable relation to Y.
| Inferring feature importance on thousands of features for classification problem | CC BY-SA 4.0 | null | 2023-05-25T14:15:18.793 | 2023-06-01T11:21:49.797 | 2023-06-01T09:19:41.127 | 365358 | 365358 | [
"machine-learning",
"feature-selection",
"importance"
]
|
616905 | 1 | null | null | 1 | 24 | I am trying to implement a Gibbs sampler with three blocks, that's not actually a full conditional Gibbs sampler.
Say I partition my parameter $X$ into three: {$X_1$, $X_2$, $X_3$}.
I intend to sample, in random order, $X_1|X_2$ and $X_2|X_1$ and then sample $X_3|X_2,X_1$.
I am running into trouble initializing the algorithm when the initial state $X$ has probability almost zero given the invariant distribution $\pi$ (i.e. $\pi(X(0)) \approx0$).
If I sample $X_1|X_2$ first, the values are so off that I get an almost degenerate $p(X_2|X_1)$, so that I cannot sample from it using numerical methods.
$X_2$ is actually a matrix and a vector conditional on the inverse of that matrix. The problem I'm having is that I can only sample the zero matrix---which doesn't have an inverse---because of floating point precision.
Am I simply running into a numerical problem, or is there something more fundamentally wrong with my Markov chain? If it were irreducible I should be able to reach a point within my target distribution in finite time.
Is it cheating if I pick $X(0)$ such that $\pi(X(0))\gg0$? Would I be risking pseudo-convergence?
| Gibbs Sampler Initialization | CC BY-SA 4.0 | null | 2023-05-25T14:15:25.520 | 2023-05-25T14:15:25.520 | null | null | 387957 | [
"markov-chain-montecarlo"
]
|
616906 | 1 | null | null | 1 | 34 | I want to estimate a set of parameters $\theta = \begin{bmatrix} \theta_1 & \theta_2\end{bmatrix}^\top$, where $\theta_1 \in \mathbb{R}^m$ and $\theta_2 \in \mathbb{R}^n$ (consequently, $\theta \in \mathbb{R}^{m+n}$).
Assume I have an unbiased and consistent estimator $\hat{\theta}$ for the whole set of parameters. If I am given the true values of some of the parameters (say, $\theta_1$), is the conditioned estimator $\hat{\theta}^\mathrm{c} = (\hat{\theta}|\theta_1 = \theta_1^\mathrm{true})$ still unbiased and consistent?
Obviously for the given set of parameters we get that $\hat{\theta}^\mathrm{c}_1 = \theta_1^\mathrm{true}$. But could the bias or consistency for the parameters $\theta_2$ be hurt from the conditioning?
| Can conditioning on true information hurt unbiasedness or consistency? | CC BY-SA 4.0 | null | 2023-05-25T14:23:54.013 | 2023-05-25T15:28:42.547 | null | null | 122077 | [
"estimation",
"bias",
"consistency"
]
|
616907 | 1 | null | null | 1 | 50 | Consider $2N$ i.i.d random variables $x_1,\dots, x_{2N}$ drawn from an unknown distribution $X$. Let $f$ be a continuous function such that $f(X)$ has finite mean and variance. For $1\leq n\leq 2N$, suppose $\epsilon_n \sim \mathcal N(0, \sigma^2)$ is an i.i.d white noise and $\mathbb E[x_n\epsilon_n]=0$. Define $y_n=f(x_n) + \epsilon_n$. The first $N$ samples will be the training set and the remainder will be the test set.
Consider a model class $\mathcal M$, such that every model $m\in \mathcal M$ is unbiased and has finite variance. Our loss function is MSE. It's easy to see that the expected training loss is
$$MSE_{train}(m)=\sigma^2 + \frac{1}{N}\sum_{n=1}^N\mathbb E\left[(f(x_n)-m(x_n))^2-2m(x_n)\epsilon_n\right]$$
Intuitively, I thought that as the number of training samples grows, the term $\frac{1}{N}\sum_{n=1}^N\mathbb E\left[(f(x_n)-m(x_n))^2\right]$ will approach the model's expected MSE loss, ie.
$$\mathbb E[(f(X)-m(X))^2]=\int_I X(f(X)-m(X))^2dX$$, where $I$ is the support of $X$.
In particular, I am hypothesizing that if $N\to\infty$, then a model trained on the training data will have an equal expected fit of $f$, ie. $\mathbb E[(f(x_n)-m(x_n))^2]$ on any sample $x_n$ for $1\leq n\leq 2N$.
Note this, intuitively, can be true even if $m$ memorizes the training data because, with a large number of samples, it can be said that the unseen data is very close to the data $m$ saw before training.
Here is my attempt to prove it:
If parameters of $m\in\mathcal M$ are independent of $x_1,\dots,x_{N}$, the result is obvious. So, let us assume $m$ is conditioned on $x_1,\dots, x_N$.
Since $m$ has a finite variance, the Law of Large Numbers can be applied for $N$ large enough. I will also use an identity: for random variables $A, B$ with finite mean and variance, we have $\mathbb E[\mathbb E[A|B]]=\mathbb E\left[A\right]$.
\begin{align}
\mathbb E\left[\frac{1}{N}\sum_{n=1}^N(f(x_n)-m(x_n))^2|x_1,\dots,x_N\right] &= \frac{1}{N}\sum_{n=1}^N\mathbb E\left[(f(x_n)-m(x_n))^2|x_1,\dots,x_N\right]\\
&=\mathbb E\left[\mathbb E\left[(f(X)-m(X))^2|x_1,\dots,x_N\right]\right]\\
&= \mathbb E\left[(f(X)-m(X))^2\right]\\
&= \frac{1}{N}\sum_{n=N+1}^{2N}\mathbb E\left[(f(x_n)-m(x_n))^2\right]
\end{align}
Questions
- Is this hypothesis even true? Do I need to impose more restrictions on $f$ and $\mathcal M$ before I can say that?
- Is my above attempt at proving this hypothesis correct?
| Limit of expected empirical risk as the number of samples grow | CC BY-SA 4.0 | null | 2023-05-25T14:26:14.940 | 2023-05-25T21:17:12.940 | 2023-05-25T21:17:12.940 | 280102 | 280102 | [
"machine-learning"
]
|
616908 | 2 | null | 616887 | 1 | null | From an ML perspective, there's not really a precise definition of whether you've overfit or not. It's also not binary: every model has picked up some noise along with the signal, and is thus somewhat overfit. Being a little overfit is the cost of learning.
The delta between training and test performance is a good guide, but a nice (admittedly pathological) example of where this will give you incorrect results is Random Forests, which intentionally get very good training loss knowing that this loss won't be achievable on a test set. Nonetheless, a random forest might allow you to achieve better test loss than another model which has a smaller gap between train and test loss, and in this case the Random Forest is the better model.
Looking at the output from your training process, your validation loss has gone down for the entirety of the training process... so you've probably got the best you could have got out of that particular model setup. I guess I would categorically say you'd overfit if you had overshot the optimal point and your validation loss had started to go up. But you could have another model which you'd fit optimally (i.e. stopped training when your validation loss stopped going down), which had a bigger/smaller gap between training and validation loss, but the all-important metric is which of the two models performs better on the test set.
So I personally think the question of "have I overfit" isn't quite the right one. You should always use the model that performs best on the test set, whilst taking great care to make sure your test is valid and you've not leaked information from the training into the test set or used information you wouldn't have available to you in prod (e.g. data that theoretically exists but only comes back from the data provider in a batch every 24hrs and is thus not available at the time you want to make the prediction).
Also, seeing as you mention some of the metrics you've used, some thoughts on this: Fundamentally, the problem you're trying to solve is "should I deploy this model" so you want to understand whether it's sufficiently better than your current baseline (be that an ML model or just a business process, which could be "do nothing") so as to warrant putting into production. To that end, metrics like F1-score, or AUC aren't really what you need.
When it comes to fraud, it would be typical to have an estimate of what a false positive costs you (presumably every predicted positive leads to an extra level of verification being introduced which leads to some dropoff, and you can make some assumptions on the typical range / maybe you have some data on this), as well as what a false negative costs you (essentially the cost of having to reimburse the transaction I would imagine)
Then, at different thresholds, you can quantify how much money you're losing to the combination of fraud + dropouts and compare that to your current procedure. That will give you an estimate of how much money deploying this model could make/lose you. [Disclaimer: it's a little more complicated than this, there are secondary metrics, reputational concerns etc, this was more meant to be indicative]
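As a rough illustration of that threshold/cost calculation (shown in R for brevity; the costs, predicted probabilities and labels below are all made up, so in practice you would plug in your model's test-set predictions and your own cost estimates):
```
set.seed(1)
p_hat <- runif(1000)                 # placeholder predicted fraud probabilities
y     <- rbinom(1000, 1, 0.05)       # placeholder true labels

cost_fp <- 5                         # assumed cost of flagging a legitimate transaction
cost_fn <- 200                       # assumed cost of missing a fraudulent one

thresholds <- seq(0.05, 0.95, by = 0.05)
total_cost <- sapply(thresholds, function(t) {
  pred <- as.integer(p_hat >= t)
  sum(pred == 1 & y == 0) * cost_fp + sum(pred == 0 & y == 1) * cost_fn
})

thresholds[which.min(total_cost)]    # threshold minimising the assumed total cost
```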
| null | CC BY-SA 4.0 | null | 2023-05-25T14:32:47.487 | 2023-05-25T14:32:47.487 | null | null | 103003 | null |
616909 | 1 | null | null | 0 | 11 | Suppose that I have a single regression model predicting daily total store sales. Initially the model was trained on only a 1% sample of the data, and I applied min-max normalization to the store sales. Now I have gained access to the raw data and would like to train the model on the raw data instead. How should I transfer the knowledge from the previous model to the current model, given that the magnitude of sales now changes? Maybe it does not matter too much because I applied min-max normalization?
| Transfer a regression model from sample to raw data | CC BY-SA 4.0 | null | 2023-05-25T14:48:34.563 | 2023-05-25T14:48:34.563 | null | null | 291544 | [
"regression",
"machine-learning",
"time-series"
]
|
616910 | 1 | null | null | 1 | 31 | I have a time series of daily temperature with autocorrelation that looks like this:

Of course, temperature is heavily autocorrelated. When looking at the partial autocorrelation plot, lags are significant up to the 15th lag.

But now I want to use this data as input for a regression model of a dependent variable y (perhaps something like heating), and I am wondering how many lagged values of temperature I should add.
$\text{heating}=c+\beta_0T_{today}+\beta_1T_{yesterday}+...$
Should I add all fifteen of them? It seems a little much. Especially since I don't think the temperature of fifteen days ago will influence the amount of heating done today very much.
| How many lags to include? (Temperature data set) | CC BY-SA 4.0 | null | 2023-05-25T14:51:21.717 | 2023-05-25T17:13:48.103 | 2023-05-25T17:13:48.103 | 384768 | 384768 | [
"time-series",
"model-selection",
"lags",
"acf-pacf",
"distributed-lag"
]
|
616911 | 1 | null | null | 0 | 10 | I would like to estimate the ATT using the Synthetic Difference in Difference method, where the treatment (e.g., sales coverage for certain accounts) is switched off then on and then off again. This could be staggered, kicking in at different times for different units.
In a generalised Differences-in-Differences, one is able to work with an on/off treatment regime for units, but is this possible for Synth DiDs?
| Synthetic Differences-in-Differences for switching treatment | CC BY-SA 4.0 | null | 2023-05-25T14:53:13.113 | 2023-05-25T14:53:13.113 | null | null | 388825 | [
"difference-in-difference",
"synthetic-controls"
]
|
616912 | 1 | 617361 | null | 10 | 146 | When you filter on only viewing results with p<alpha, some of those "statistically significant" results go in the wrong direction. This is called a Type S error ("S" for sign). And the average effect size in those selected results is larger than the true effect size. Believing the magnitude of these effect sizes has been called a Type M error. More details in the reference below.
Type S errors are more common, and effect magnification is larger, when the power of a study is low.
My question is: What are the equations that compute the probability of type S error and the average magnification factor from power?
- Gelman, A. & Carlin, J. Beyond Power Calculations Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science 9, 641–651 (2014). DOI: 10.1177/1745691614551642
| The probability of making a Type S error, and the average amount of magnification (type M error) as a function of power | CC BY-SA 4.0 | null | 2023-05-25T15:06:56.360 | 2023-06-03T11:34:20.033 | 2023-05-26T15:50:49.660 | 25 | 25 | [
"hypothesis-testing",
"type-i-and-ii-errors",
"research-design",
"reproducibility"
]
|
616913 | 1 | null | null | 0 | 5 | I want to predict whether the client will renew his/her subscription based on grocery consumption patterns. Suppose an order contains only one type of grocery.
I have a DataFrame containing ratios of values for different types of groceries for each client and the total number of orders. Each ratio represents the number of groceries of a specific type divided by the total number of groceries ordered. However, the reliability of these ratios varies based on the total number of orders.
For example, if a client has only placed one order of a particular type, the ratio for that type will be 100%. However, if another client has placed 97 orders of the same type out of 100 orders in total, the ratio would be 0.97 (97%).
I am training a machine learning model using XGBoost, but I am struggling to capture the relationship between the ratios and the total number of groceries to weight reliability of ratios. It appears that the model is not effectively learning this relational information.
I would appreciate any suggestions on how to address this issue. How can I incorporate the varying reliability of the ratios into my machine learning model? Are there any techniques or approaches that can help the model learn the importance of different ratios based on their reliability?
Thank you in advance for your assistance!
| How to represent varying reliability of ratios calculations in a dataset? | CC BY-SA 4.0 | null | 2023-05-25T15:10:29.487 | 2023-05-25T15:10:29.487 | null | null | 386413 | [
"boosting",
"methodology"
]
|
616914 | 1 | null | null | 0 | 11 | Assume we have data for two years, 2021 and 2022.
When training, we use 2021 data to predict 2022 outcomes, resulting in model 1.
Now during inference, we use 2022 data to predict 2023 results.
Assuming the trend changes in 2022 and we have a few months of data in 2023, is it possible to make model 1 more accurate?
A plausible method is to retrain the model using the combined data from 2021, 2022, and the available months in 2023; however, this will not work because we need to use 2022 as the target variable (which makes 2022 unusable).
| Time series training data lag | CC BY-SA 4.0 | null | 2023-05-25T15:20:55.480 | 2023-05-25T19:19:16.693 | null | null | 51955 | [
"time-series"
]
|
616915 | 1 | null | null | 6 | 201 | I am trying to compare the lodging resistance scores of different wheat cultivars in an agronomic trial. Lodging is the phenomenon in which wheat plant can bend and lean closer to the ground as a result of strong wind and rains, for example. My score goes from 1 (wheat plants are lying flat on the ground) to 9 (wheat plants proudly standing upright). My goal is to compare the means of different varieties for this score (4 reps per variety), for which I aim to run post-hoc tests on the fitted model. I do not wish to make predictions with this model.
My data is thus bound from 1 to 9 and is continuous (but quasi integer: I can get .5 or .25 but no more decimals). Another property of my data is that the variance is quite heterogeneous, with a tendency to increase when mean scores get closer to 1. As 1 is the lower bound, variance rapidly decreases when it gets there, but this is rare in my data set.
For now, I have tried fitting a generalized linear model with a Poisson distribution. Indeed, Poisson is bounded on one side (0) and variance=mean, which fits pretty well with my data after the following transformation:
`mydata$TransfVar = abs(mydata$lodging_score-9)*100`
(The idea here is to convert the 9 into a zero by subtracting the whole data by 9, then "flip" my data with abs(), and multiply it by 100 to get integer values only)
I then fit the model as follows:
`model_lodging = glmer(data=my_data,formula = TransfVar ~ Variety + (1|Block), family = poisson())`
This seems to work pretty well, as I get the following the following Fitted~Residuals plot, which is much better than when I first tried fitting a linear model.
[](https://i.stack.imgur.com/x3XUP.png)
I also get the following summary:
```
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: poisson ( log )
Formula: TransfVar ~ Variety + (1 | Block)
Data: my_data
AIC BIC logLik deviance df.resid
2745.6 2774.3 -1357.8 2715.6 35
Scaled residuals:
Min 1Q Median 3Q Max
-17.0608 -5.0263 -0.9449 5.0668 14.3831
Random effects:
Groups Name Variance Std.Dev.
Block (Intercept) 0.01485 0.1219
Number of obs: 50, groups: Block, 4
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.74225 0.09796 38.203 < 2e-16 ***
VarietyCH - PA 2.65454 0.08036 33.034 < 2e-16 ***
VarietyChiddam d'automne a epi blanc 2.16400 0.08098 26.724 < 2e-16 ***
VarietyJ - C 1.57351 0.08427 18.673 < 2e-16 ***
VarietyJ - PA 2.39522 0.08011 29.899 < 2e-16 ***
VarietyJ - PA - RSL - C 1.23198 0.09045 13.620 < 2e-16 ***
VarietyJ - RSL 1.97698 0.08323 23.753 < 2e-16 ***
VarietyJaphet 2.55357 0.07962 32.073 < 2e-16 ***
VarietyPA - R -0.35157 0.12817 -2.743 0.00609 **
VarietyPA - RSL 1.76694 0.08298 21.293 < 2e-16 ***
VarietyPA - RSL - S - R 0.62584 0.10038 6.235 4.52e-10 ***
VarietyPrince Albert 2.56268 0.07959 32.198 < 2e-16 ***
VarietyRouge de St-Laud 0.25783 0.10211 2.525 0.01157 *
VarietyRSL - S -0.30029 0.12610 -2.381 0.01725 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
However, this feels quite unorthodox. My data is not count data, is not integer, and has an upper bound, unlike Poisson.
Another issue is that some of my varieties did not lodge at all and have all 9 as lodging scores (so no variance), which glmer does not like. From my understanding, a zero-inflated model would not apply in this case. For now my solution has been to remove these groups (hence the problem is not apparent in the above summary).
Are there other distribution families that could better match the type of data I have here, or is my approach valid for the intended use (mean comparisons between groups, not predictions)?
| What is the adequate regression model for bounded, continuous but poisson-like data? | CC BY-SA 4.0 | null | 2023-05-25T15:24:00.830 | 2023-05-26T00:49:22.980 | 2023-05-26T00:49:22.980 | -1 | 388826 | [
"generalized-linear-model",
"poisson-regression",
"bounds"
]
|
616917 | 1 | 616926 | null | 2 | 30 | I have datasets from two bivariate normal distributions, $\mathcal{N}(\mu_x, \Sigma_x)$, and $\mathcal{N}(\mu_y, \Sigma_y)$ respectively. Now we know the correlation coefficient for these two distributions can be defined as $\rho_x = \frac{\sigma_{12}}{\sigma_1 \sigma_2}$ and $\rho_ y= \frac{\sigma_{34}}{\sigma_3 \sigma_4}$, assuming
$\Sigma_x = \left(\matrix{\sigma_{1}^2&\sigma_{12}\\\\\sigma_{12}&\sigma_2^2}\right)$ and $\Sigma_y = \left(\matrix{\sigma_{3}^2&\sigma_{34}\\\\\sigma_{34}&\sigma_4^2}\right)$
Is there a standard test for testing $H_0 : \rho_x = \rho_y$ vs $H_1 : \rho_x \neq \rho_y$? I understand this is doable via likelihood ratio tests, but I am not sure whether there is an existing distribution for testing these parameters.
| Testing correlation coefficient of two bivariate gaussian | CC BY-SA 4.0 | null | 2023-05-25T15:38:18.107 | 2023-05-25T16:37:53.153 | null | null | 123199 | [
"hypothesis-testing",
"correlation",
"normal-distribution",
"bivariate"
]
|
616918 | 2 | null | 616899 | 7 | null | My suggestion is to complement the p-value with a report of the effect size and its confidence interval (and don't use the word "significant"). For your data, the relevant effect size is the relative risk.
The relative risk is 5.0 and its 95% CI ranges from 1.3 to 20.2, from a 30% increase to a 20-fold increase. That wide range expresses the uncertainty you want to convey.
There are several ways to compute (estimate) the CI. I used Koopman asymptotic score implemented in GraphPad Prism. Here is a screenshot of the results:
[](https://i.stack.imgur.com/90pOi.png)
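If you don't have Prism at hand, a quick cross-check in base R using the simple Katz log method gives a similar picture (it is a different method from the Koopman score interval above, so the limits won't match exactly):
```
a <- 10; n1 <- 225   # successes / total, group B
b <- 2;  n2 <- 225   # successes / total, group A

rr     <- (a / n1) / (b / n2)
se_log <- sqrt(1/a - 1/n1 + 1/b - 1/n2)
ci     <- exp(log(rr) + c(-1, 1) * qnorm(0.975) * se_log)

rr    # 5
ci    # roughly 1.1 to 23: wide, which is the point
```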
| null | CC BY-SA 4.0 | null | 2023-05-25T15:41:24.753 | 2023-05-25T16:48:16.187 | 2023-05-25T16:48:16.187 | 25 | 25 | null |
616919 | 1 | null | null | 1 | 26 | I have a dataset of observations taken at random time points across a wide range of time.
These are X-rays taken in a fashion that, sadly, is a bit random (clinic can get a bit manic).
I have 70 patients.
Each patient has between 2 and 8 images.
This means about 300 images.
The image capture time is random, but always between 1 and 80 months from the first image.
We have a series of morphological measurements we have extracted from each image. SC_1 is the first metric, we have about 18 to look at.
I want to determine if the measurements we are obtaining are changing significantly over time.
I expect that around half of them will show a substantial change in morphology after 6 months. But I need to see if this is true, and then identify those patients who did change significantly.
[Data structure](https://i.stack.imgur.com/A6sCk.png)
I have tried plotting the data - it is a bit of a mess, unfortunately, because there are more patients than colours to choose from, and because of the varying time frames. But some patients are definitely showing change over time, while others are not.
[](https://i.stack.imgur.com/5ZZHd.png)
I am at a loss to figure out the best way to quantify the change for each patient.
I was thinking of perhaps taking the percentage change for each patient between the first and last image, and assigning this number to each patient?
Or take the standard deviation for each patient's measurements over time?
Or is there a more sophisticated method? If I can get a pointer in the right direction, I'd happily learn a new method (e.g. some form of time series). It is just getting to the point of knowing which methods to look at....
I will have about double the dataset in a few months though....
| Identifying those samples which changed over time - but at random timepoints | CC BY-SA 4.0 | null | 2023-05-25T15:47:40.617 | 2023-05-25T16:30:40.770 | 2023-05-25T16:27:57.367 | 199063 | 232000 | [
"time-series",
"statistical-significance",
"multilevel-analysis"
]
|
616920 | 1 | null | null | 4 | 101 | Given the representations of the mean values
$$E[Y]=\int(1-F(x))\,dx$$
and
$$E[X]=\int(1-G(x))\,dx$$ where $F$ and $G$ are the distributions of $Y$ and $X$ respectively,
can I use them to find the expression (or a bound) of $E[|Y-X|]?$ For instance,
is it true that
$$ E[|Y-X|]\leq\int |G(x)-F(x)|\,dx?$$
| Representation of the expectation of absolute value of the difference $Y-X$ | CC BY-SA 4.0 | null | 2023-05-25T16:07:09.340 | 2023-05-31T19:03:14.673 | 2023-05-31T12:34:47.117 | 362671 | 277733 | [
"probability",
"expected-value",
"absolute-value"
]
|
616921 | 1 | null | null | 1 | 20 | I am working on my risk perception model using three measures: probability, affection, and severity. I have one item on a Likert scale (1-5) each for probability and affection. However, for the severity part, I have one item on the Likert scale (1-5) and 7 items of binary type (0,1). The sample size is 203. I want to use CFA for the risk perception measures because existing studies suggest the importance of the three measures for risk perception. I use the `lavaan` package in R to compute the CFA. Prior to this, I change all variables to ordered factor variables and run the analysis. I started by including only the three items with the same scale (1-5). When I run the analysis and calculate the model fit, I got srmr = 0, rmsea = 0, cfi = 1, tli = 1. It looks like a perfect model. I feel like something is wrong. Is it because the model is just identified?
Next approach: in my model, I compute the latent variable `severity` by combining all the binary variables, which are ordered factors. Then I fit that variable along with the other 3 items of the same scale in the risk measure. When I run cfa I got this warning message:
In lav_object_post_check(object) :
lavaan WARNING: some estimated ov variances are negative.
I can see the negative variance in one of the items. I am not sure what I can do about that. I checked the number of model parameters = 30. Does the warning indicate too many parameters relative to the sample size?
For second model, srmr= 0.166, rmsea= 0.086, cfi=0.817 and tli=0.767
I would request your help. How can I resolve it? Should I take a different approach like PCA?
Thank you.
| Risk perception measurement using CFA - Items with different scale | CC BY-SA 4.0 | null | 2023-05-25T16:12:38.317 | 2023-06-01T15:10:48.033 | 2023-05-25T16:21:48.070 | 56940 | 388827 | [
"latent-variable",
"confirmatory-factor"
]
|
616923 | 2 | null | 260628 | 0 | null | We are given:
\begin{equation}
Y|X=x \sim N(x, 1) \;\mathrm{and}\; X \sim N(0, 1)
\tag{1}
\end{equation}
The goal is to find
\begin{equation}
(Y-X|X=x) \sim \;?.
\tag{2}
\end{equation}
With
\begin{equation}
Z=Y-X \tag{3},
\end{equation}
we are looking for the probability density $f_{Z|X=x}(z|x)$. We perform a coordinate transform to find,
\begin{equation}
f_{Z,X}(z,x)(dx \wedge dz) = f_{Z,X}(y-x,x)\frac{\partial z}{\partial y} (dx \wedge dy) \equiv f_{Y,X}(y,x)(dx \wedge dy),
\end{equation}
where the $(dx \wedge dx)=0$ term has been dropped. From Eq. (3), $\frac{\partial z}{\partial y}=1$. Now we can write
\begin{equation}
f_{Z|X}(z|x)=\frac{f_{Z,X}(z,x)}{f_{X}(x)}=\frac{f_{Y,X}(y,x)}{f_{X}(x)}=f_{Y|X}(y|x)=f_{Y|X}(z+x|x)
\end{equation}
From Eq. (1) we have the conditional probability density
\begin{equation}
f_{Y|X}(y|x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(y-x)^2},
\end{equation} so plugging in $y=z+x$ gives
\begin{equation}
f_{Z|X}(z|x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^2}.
\end{equation}
Which means that
\begin{equation}
(Y-X|X=x) = (Z|X=x) \sim N(0,1).
\end{equation}
If you're looking for a description of the "wedge" ($\wedge$) there is a really good description [in this Cross Validated answer](https://stats.stackexchange.com/questions/36093/construction-of-dirichlet-distribution-with-gamma-distribution/154298#154298).
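A quick simulation check of the conclusion, as a sketch in R (the conditioning value $x = 0.7$ is arbitrary):
```
set.seed(1)
x <- 0.7                              # condition on an arbitrary X = x
y <- rnorm(1e5, mean = x, sd = 1)     # draws from Y | X = x ~ N(x, 1)
z <- y - x                            # Z = Y - X given X = x

c(mean(z), sd(z))                     # close to 0 and 1, consistent with N(0, 1)
```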
| null | CC BY-SA 4.0 | null | 2023-05-25T16:28:42.733 | 2023-05-25T16:28:42.733 | null | null | 157528 | null |
616924 | 2 | null | 616919 | 1 | null | One of the possible methods you might consider here is a multi-level linear model (I've added these tags to your question). In brief, your model will have a response variable of the morphological measurement (say SC_1), and there will be predictor variables for the individual and the time of the x-ray measurement. The multi-level model will allow you to run this model in such a way that it will estimate a random intercept and/or slope for each patient. This is somewhat more sophisticated than running just a regular OLS regression, but this should get you pointed in the correct direction.
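As a starting point, here is a minimal sketch with lme4; the data frame `xray` and its columns `SC_1`, `months` and `patient` are assumed names, so adapt them to your data:
```
library(lme4)

## one row per image: SC_1 = measurement, months = time since first image,
## patient = patient ID (factor)
m <- lmer(SC_1 ~ months + (1 + months | patient), data = xray)

summary(m)        # fixed effect of months = average change over time
coef(m)$patient   # per-patient intercepts and slopes, handy for spotting
                  # which patients are changing and which are stable
```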
Happy to share more or update this answer as needed.
| null | CC BY-SA 4.0 | null | 2023-05-25T16:30:40.770 | 2023-05-25T16:30:40.770 | null | null | 199063 | null |
616925 | 1 | null | null | 0 | 30 | In numerous articles I have read ([Abadie 2021](https://www.aeaweb.org/articles?id=10.1257/jel.20191450), [Samii 2016](https://nyuscholars.nyu.edu/en/publications/causal-empiricism-in-quantitative-research), basically any Abadie piece that talks about the synthetic control method), the authors cite regression's reliance on extrapolation for weights as a negative feature for estimating causal effects.
For example, Abadie 2021, in the section discussing the benefits of synthetic controls compared to regression cites:
>
Synthetic control estimators preclude extrapolation, because synthetic control weights are nonnegative and sum to one. It is easy to check that, like their synthetic control counterparts, the regression weights in $W^{reg}$ sum to one. Unlike the synthetic control weights, however, regression weights may be outside the [0, 1] interval, allowing extrapolation outside of the support of the data
In Samii 2016:
>
Where there is no overlap, one can only make comparisons with interpolated or extrapolated counterfactual potential outcomes values. King and Zeng (2006) brought the issue to political scientists’ attention, characterizing interpolations and, especially, extrapolations as “model dependent,” by which they meant that they were nonrobust to modeling choices that are often indefensible. By pointing out how common such model dependent estimates are in political science research, King and Zeng raised troubling questions about the validity of many generality claims in quantitative causal research in political science.
In sum, I think the critique is less with the method and more with the lack of transparency in how regression weights are generated (relying on extrapolation and not equally weighting observations in the data set, such that a small subset of weights can dominate the weighting of an entire sample).
However, in these pieces, what this entails is somewhat vague. I can see that regression can produce negative weights (as opposed to the synthetic control method where the lower bound of weights is 0), but I do not understand how this is a negative feature. Abadie (2021) notes that weights beyond [0,1] leads to extrapolation outside the support of the data. I am confused on what this means. How do negative weights or weights > 1 extrapolate beyond the support of the data? I am sure that the answer is fairly simple, I am just having a hard time explaining this back to myself in a way that I can understand.
| What does extrapolation mean in the context of regression weights and what are its downsides? | CC BY-SA 4.0 | null | 2023-05-25T16:31:53.270 | 2023-05-31T11:46:36.483 | 2023-05-26T13:35:41.553 | 360805 | 360805 | [
"regression",
"causality",
"synthetic-controls"
]
|
616926 | 2 | null | 616917 | 0 | null | If you wish to test if two correlations from two samples for independent pairs of variables are different from each other, the null hypothesis statistical test (NHST) appropriate for this would be the Fisher z-transform test. This test uses the normal distribution, and requires transforming the correlations via
$$z = \frac12 \ln\left(\frac{1+r}{1-r}\right) = \text{atanh}(r)$$
More can be found here: [https://en.wikipedia.org/wiki/Fisher_transformation](https://en.wikipedia.org/wiki/Fisher_transformation)
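For example, a minimal sketch in base R (the correlations `r1`, `r2` and sample sizes `n1`, `n2` below are placeholder values):
```
r1 <- 0.40; n1 <- 85
r2 <- 0.25; n2 <- 92

z1 <- atanh(r1)                            # Fisher z-transform of each correlation
z2 <- atanh(r2)
se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))    # standard error of z1 - z2
z  <- (z1 - z2) / se
p  <- 2 * pnorm(-abs(z))                   # two-sided p-value
c(z = z, p = p)
```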
Happy to update this answer further if requested.
| null | CC BY-SA 4.0 | null | 2023-05-25T16:37:53.153 | 2023-05-25T16:37:53.153 | null | null | 199063 | null |
616928 | 2 | null | 616910 | 0 | null | I will provide an explanation of how I would go about deciding the number of lag points to consider. This is exploratory in nature; it assumes that you do not have a justification (hypothesis/assumption) directing you to a specific number, and, above all, it is arbitrary. (But the final choice for any number of lag variables would be arbitrary anyway.)
I would start with a random subset of the data, and I would run a number of models increasing the number of lag points for each model. Once I have determined when the additional lag points stopped reaching significance in the model, I would check to see if the final number of lag points all reach significance as predictors in a model with a different random subset of the data. (If I were feeling extra-adventurous, I might even draw multiple samples at each step to build more evidence on how many lags to consider.)
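A rough sketch of that idea in base R (this assumes a data frame `dat` with an outcome `y` and already-constructed lag columns `lag1`, `lag2`, ...; all names here are placeholders):
```
set.seed(42)
sub <- dat[sample(nrow(dat), floor(nrow(dat) / 2)), ]  # random subset of the data

max_lag <- 10
for (k in 1:max_lag) {
  f   <- reformulate(paste0("lag", 1:k), response = "y")
  fit <- lm(f, data = sub)
  p_k <- summary(fit)$coefficients[paste0("lag", k), "Pr(>|t|)"]
  cat(k, "lags: p-value of the last lag added =", round(p_k, 4), "\n")
}
```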
Then I would run the model with the entire data set. As justification that a reasonable number of lag variables have been included, the model parameter estimates should not change that dramatically if you add or subtract one of the lags.
I'm certain there are more sophisticated methods that can be employed, and I am hoping other contributors will offer other responses explaining those protocols.
| null | CC BY-SA 4.0 | null | 2023-05-25T16:47:32.430 | 2023-05-25T16:47:32.430 | null | null | 199063 | null |
616929 | 1 | null | null | 1 | 19 | I have been fitting a Bayesian GLM using brms. The code works well, but when I loop this over several datasets and make it a bit more complex, R encounters a fatal error and crashes. This seems to be happening to other people when using brms. I am now looking into other methods to code this in R (e.g., stan_glm, bayesglm), but they don't seem to accept bounded uniform priors like brms does, which is what I need.
Does anyone have suggestions?
Thanks
code
```
library(brms)

# toy data: successes out of N trials with two predictors
N <- c(10, 12, 15, 20, 18)
success <- c(2, 4, 6, 8, 10)
x1 <- c(0, 0, 0, 0, 5)
x2 <- c(0.5, 0.5, 0.5, 0.5, 0.5)
data <- data.frame(N, success, x1, x2)

# bounded uniform priors on the two regression coefficients
prior1 <- c(set_prior("uniform(-4,0)", class = "b", coef = "x1"),
            set_prior("uniform(-4,0)", class = "b", coef = "x2"))

fit_bglm <- brm(success | trials(N) ~ -1 + x1 + x2,
                data = data,
                family = binomial(link = "log"),
                prior = prior1, chains = 8, iter = 8000,
                seed = 154511)
```
| Bounded uniform prior in R | CC BY-SA 4.0 | null | 2023-05-25T16:48:54.620 | 2023-05-25T18:44:52.397 | 2023-05-25T18:44:52.397 | 117812 | 212618 | [
"r",
"bayesian",
"prior",
"brms"
]
|
616930 | 1 | null | null | 0 | 17 | In computer graphics, you have an (MC)MC estimate $Q_i$ of the color value of the $i$th pixel and a true value $I_i$. Now you take $\epsilon_i:=Q_i-I_i$, which can be thought of as an "error image".
Now, it has been found that if the Fourier transform ([https://vincmazet.github.io/bip/filtering/fourier.html](https://vincmazet.github.io/bip/filtering/fourier.html)) of $\epsilon$ has a "blue noise profile" (almost no low-frequency content), then the "visually perceived error" is reduced.
My question is: is there any ongoing work on this in a more general or different (MC)MC context? From an abstract viewpoint, the problem is that we have a finite number of estimates $Q_1,\ldots,Q_k$ of integrals $I_1,\ldots,I_k$, and you can assume that the domains of the integrands are disjoint and that their union is the whole space.
| blue noise error distribution in (MC)MC estimation | CC BY-SA 4.0 | null | 2023-05-25T16:57:30.277 | 2023-05-26T15:01:34.893 | 2023-05-26T15:01:34.893 | 222528 | 222528 | [
"sampling",
"references",
"markov-chain-montecarlo",
"monte-carlo",
"noise"
]
|
616931 | 1 | null | null | 0 | 16 | I applied a test for attachment and 2 different tests for internet addiction to 114 high school students. All tests use a 1-5 Likert scale. My hypothesis is that the attachment and internet scales will have a negative correlation.
What are the relevant calculations that I need to perform, to include in a psychology bachelor final thesis?
I'm thinking about central tendencies, Pearson's r for correlation, a comparison between male and female subgroups, and this is where I get lost.
Would a regression test be useful? Or a standard distribution?
Could I estimate the tendencies in the population based on the sample?
I could really use some help here, as I need to submit my bachelor thesis in a few days, so thanks in advance!
| What are some relevant statistic calculations? | CC BY-SA 4.0 | null | 2023-05-25T17:14:59.983 | 2023-05-25T17:14:59.983 | null | null | 388804 | [
"hypothesis-testing",
"correlation",
"descriptive-statistics",
"psychology"
]
|
616932 | 1 | null | null | 0 | 12 | I wrote a seminar paper on the conditional β-convergence (income convergence) of the Solow model and therefore ran several cross-sectional regressions in R. Thus, I regressed the variables "GDP growth per capita", "savings rate per capita" and miscellaneous variables on the initial stock of capital per capita of a certain year. In one of the regressions, the β-coefficients for two exogenous variables were significantly negative, while one of the coefficients was more negative than the other. I erroneously argued that the more negative coefficient was "bigger" and therefore the impact of that exogenous variable on the endogenous variable was more significant than the impact of the other exogenous variable. Strictly algebraically speaking, the more negative coefficient is obviously not "greater" but more negative and thus more significant. Furthermore, I got the remark that there are additional statistical tests that need to be implemented in order to be able to assume or to conclude that the more negative coefficient is more significant. What test is required to do that, and what is the test called in R?
Many thanks!
| Statistical test for interpreting negative coefficients in multivariate cross sectional analysis | CC BY-SA 4.0 | null | 2023-05-25T17:15:56.720 | 2023-05-26T19:39:29.690 | 2023-05-26T19:39:29.690 | 388832 | 388832 | [
"r",
"regression",
"regression-coefficients",
"cross-section"
]
|
616933 | 2 | null | 616915 | 8 | null | Consider [ordinal regression](https://stats.stackexchange.com/tags/ordered-logit/info). You have data that are ordered but, from your description, it doesn't seem that the difference between scores of 1 and 2 is the same as the difference between scores of, say, 4 and 5 or between 8 and 9. Frank Harrell recommends this as an approach when you have this type of data, even when there are so many ordered outcome levels that you might think of the outcomes as continuous. Chapter 13 of his [Regression Modeling Strategies](https://hbiostat.org/rmsc/ordinal.html) provides much detail.
The R [GLMMadaptive package](https://cran.r-project.org/package=GLMMadaptive) can fit mixed models with ordinal outcomes. [This page](https://drizopoulos.github.io/GLMMadaptive/articles/Ordinal_Mixed_Models.html) shows how to proceed.
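If it helps to see the general shape of such a model, here is a minimal sketch using the `ordinal` package's `clmm()` (a cumulative-logit mixed model) rather than the GLMMadaptive workflow from the linked page; the column names `score`, `treatment`, `time`, and `subject` are placeholders:
```
library(ordinal)

d$score <- factor(d$score, ordered = TRUE)   # the outcome must be an (ordered) factor

fit <- clmm(score ~ treatment * time + (1 | subject), data = d, link = "logit")
summary(fit)
```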
| null | CC BY-SA 4.0 | null | 2023-05-25T17:36:01.823 | 2023-05-25T17:45:21.153 | 2023-05-25T17:45:21.153 | 28500 | 28500 | null |
616934 | 1 | 616953 | null | 1 | 60 | Let $X_n$ be i.i.d with common df $F$. Let $M_n = \max (X_1, \ldots, X_n)$. Suppose $P(b_n^{-1} M_n \leq x) = F^n(b_n x) \to e^{-x^{-\alpha}}$ weakly, where $x > 0$ and $\alpha > 0$.
Let $x_0 = \sup \{x \colon F(x) < 1 \}$. I want to prove that $x_0 = \infty$, and my book says that if $x_0 < \infty$, then for $x>0$, we must have $b_n x \to x_0$. However, I'm not sure why this follows. If someone could explain that step, I'd be grateful.
Thank you.
[](https://i.stack.imgur.com/VCg6B.png)
| If $F^n(b_n x) \to e^{-x^{-\alpha}}$, $b_n x \to x_0$ where $x_0 = \sup \{x \colon F(x) < 1 \}$ | CC BY-SA 4.0 | null | 2023-05-25T17:42:23.463 | 2023-05-25T22:29:25.010 | 2023-05-25T19:23:05.123 | 20519 | 260660 | [
"distributions",
"convergence",
"extreme-value"
]
|
616935 | 2 | null | 616921 | 2 | null |
- Yes, a single factor model with 3 indicators and free loadings is just identified (saturated, df = 0). Therefore, it trivially fits your data perfectly.
- A negative residual variance estimate (Heywood case/improper solution) is often a sign of model misspecification/misfit. Your fit indices for this model don't look good, indicating that your model may be incorrect/misspecified. Another common reason for a Heywood case is a sample size that is too small. However, your sample size looks pretty OK.
| null | CC BY-SA 4.0 | null | 2023-05-25T17:46:34.440 | 2023-05-25T17:46:34.440 | null | null | 388334 | null |
616936 | 1 | null | null | 3 | 25 | I am studying approach to calculate how correlated 2 nominal variables are by using Cramers V. However when I have a simple 2x2 matrix that I expect high effect size, the calculated outcome shows otherwise.
[](https://i.stack.imgur.com/WfCqc.png)
[](https://i.stack.imgur.com/q1p2r.png)
And the output of CramersV is 0.288.
From the matrix, Material B shows 10X higher fail rate compare to Material A and I expect strong association of Material type to Fail/Pass frequency (cramers V > 0.5?).
I am doing manual calculation, not using python etc. Is there anything I need to tune from the standard Cramers V calculation?
| Need help understanding why Cramers V calculated outcome not as expected | CC BY-SA 4.0 | null | 2023-05-25T17:46:48.177 | 2023-05-26T10:45:34.803 | null | null | 388835 | [
"cramers-v"
]
|
616938 | 1 | null | null | 1 | 14 | Suppose you have a random variable $X$ and black-box function $f$. Suppose you also have prior estimates $m$ and $s$ of the mean and standard deviation of $f(X)$.
How can we use this prior information to reduce the variance of a Monte Carlo estimator of the mean of $f(X)$?
Thanks ahead of time!
my attempts
My first idea was to just treat $m$ as a sample, and give it a weight $w$ based on how accurate $m$ is. We could form the estimator
$$E[f(X)] \approx \frac{1}{n}\left(wm + \sum_{i=1}^n f(X_i)\right)$$
Alternatively, we could use importance sampling by sampling $X$ from, say, a Gaussian $N(m,s^2)$.
| Reduce Variance of monte carlo estimator using guess of mean | CC BY-SA 4.0 | null | 2023-05-25T18:31:35.463 | 2023-05-25T18:31:35.463 | null | null | 276925 | [
"bayesian",
"variance",
"prior",
"estimators"
]
|
616939 | 1 | 616943 | null | 1 | 33 | I am trying to causally evaluate an impact of a specific labor policy that was implemented in three U.S. states. I wonder if constructing a synthetic/artificial control method (SCM) for those states makes sense to resemble the treatment group?
Specifically, based on my reading of econ/policy research that uses SCM, the authors often only compare one treatment group to a set of artificial/synthetic states as in the famous California Tobacco tax [paper](https://www.tandfonline.com/doi/abs/10.1198/jasa.2009.ap08746).
Therefore, I am curious if SCM can be still used in a setting where the treatment group consists of different geographic regions (e.g. three states)?
| Synthetic control method based on several treated units | CC BY-SA 4.0 | null | 2023-05-25T18:38:41.667 | 2023-05-25T20:34:43.323 | 2023-05-25T20:34:43.323 | 8013 | 358813 | [
"regression",
"time-series",
"hypothesis-testing",
"inference",
"causality"
]
|
616940 | 1 | null | null | 3 | 45 | What is the best statistical test for comparing two random forest models, where a different set of variables is made available for each model? I need a test where a power test can be used to justify a sample size. I'm considering using a paired t-test (see below).
More specific information follows.
Two random forest models are generated as follows. The data used is from a stratified random sampling from a 100x100 grid. The same sampled data is used to compute two different random forest models. One random forest model uses optimal variables (the variables used in the data generating process). The other random forest model has more variables available (optimal, sub-optimal, and irrelevant). Multiple re-samplings could be done on the same grid; multiple random forest models could be generated using the same data; and more grids can be generated from the same stochastic data generating process. At least 20 different grids will be generated.
I'm considering using a paired t-test, and doing only one sampling for each grid, and computing only one pair of random forest models for each sampling. The power test would tell me, I believe, how many grids I need to generate. The R function pwr.t.test computes the power of a paired t-test. You give pwr.t.test all but one (any one) of the following values, and it gives you the one you left out: sample size, effect size, alpha (p-value), and power. I'm considering using AUC-ROC as the metric.
Is there a better comparison methodology? I need to be able to use a power test to justify a sample size.
The data generating process. I am generating simulated virtual species distributions. Independent variables are measured at different scales. Scale: how many cells around a given cell are averaged to determine the value of an environmental variable, where the averaged value is used in determining whether a grid point is a presence or absence. Each environmental variable is one layer of the grid, and is generated using functions available in a virtual SDM modeling package. A species is present or absent based on the combined values of the environmental variables at a grid location, based on a defined suitability function.
| Best statistical test for comparing two random forest models, where each has a different set of variables available for modeling? | CC BY-SA 4.0 | null | 2023-05-25T18:49:19.830 | 2023-05-26T19:09:25.460 | 2023-05-25T22:15:21.933 | 294655 | 294655 | [
"hypothesis-testing",
"statistical-significance",
"random-forest"
]
|
616941 | 2 | null | 535054 | 1 | null | Actually, I am struggling with the same problem (data with many zeros) and stumbled on a [paper](https://doi.org/10.1016/j.jembe.2005.12.017) that suggests you do exactly that (add a dummy value of 1 to all species), and refers to this method as a "zero-adjusted Bray-Curtis coefficient". There are assumptions to be met of course, but there is a mathematical justification for doing so. I am still unsure if it is appropriate in my case, but I wanted to put the information out there for others who are considering this approach. Honestly, to me the idea of removing real data simply because it has zeros in order to get an analysis to work seems like a grave affront to proper data analysis. Zeros sometimes have important biological meaning, as in my case, and need to be retained.
| null | CC BY-SA 4.0 | null | 2023-05-25T19:07:41.393 | 2023-05-25T19:07:41.393 | null | null | 388839 | null |
616942 | 2 | null | 616914 | 0 | null | In my opinion this is very open-ended and highly model dependent.
If you used an ARMA/ARIMA based model originally to tune the coefficients, then you would need to start it from scratch.
If you used some kind of Bayesian structural time series, you may or may not need to re-train it, depending on how much data there is, because ideally Bayesian models (and other online learners) should progressively "learn to adapt over time".
If you used one of those fancy neural-network one- or zero-shot learners, then you may not technically even need to re-train. You can just give it the observed data, and it should in principle be able to project forward in time again based on the changed observations in 2022.
| null | CC BY-SA 4.0 | null | 2023-05-25T19:19:16.693 | 2023-05-25T19:19:16.693 | null | null | 117574 | null |
616943 | 2 | null | 616939 | 1 | null | Yes, it would be fine. To quote Abadie (2021) [Using Synthetic Controls:
Feasibility, Data Requirements, and Methodological Aspects](https://www.aeaweb.org/articles?id=10.1257/jel.20191450): "The synthetic control framework can easily accommodate estimation with multiple treated units by fitting separate synthetic controls for each of the treated units." Section 8 "Extensions and Related Methods" in that paper comments on this further. What is a bit of an open question is whether aggregated outcomes are of interest (i.e. are we going to average the three treatment states in the OP's example) or whether that is a "nonsensical" view. Abadie & L'Hour (2019) [A Penalized Synthetic Control Estimator for Disaggregated Data](https://www.tandfonline.com/doi/abs/10.1080/01621459.2021.1971535) focus on that question in particular.
Aside from Abadie's immediate work, there are a couple of well-cited references particularly on this matter: [Generalized Synthetic Controls](https://www.cambridge.org/core/journals/political-analysis/article/generalized-synthetic-control-method-causal-inference-with-interactive-fixed-effects-models/B63A8BD7C239DD4141C67DA10CD0E4F3) (2017) by Xu and [Examination of the Synthetic Control Method for Evaluating Health Policies with Multiple Treated Units](https://onlinelibrary.wiley.com/doi/10.1002/hec.3258) (2016) by Kreif et al. Xu (2017) in particular focused on heterogeneous causal effects, which becomes a more realistic question once we are looking at multiple individual treated units.
As there is an `r` tag here, it is worth mentioning that Xu's methodology is implemented in the R package [gsynth](https://cran.r-project.org/web/packages/gsynth/index.html), and, more broadly, R has quite a few packages for heterogeneous treatment effect estimation if one wishes to try different approaches (e.g. [scpi](https://cran.r-project.org/web/packages/scpi/index.html), [hettx](https://cran.r-project.org/web/packages/hettx/), [bartCause](https://cran.r-project.org/web/packages/bartCause/), etc.).
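For orientation, a minimal sketch of a `gsynth` call (the data frame `df` and the columns `outcome`, `treated`, `unit`, `year`, `X1`, `X2` are placeholders; argument names should be checked against `?gsynth`):
```
library(gsynth)

out <- gsynth(outcome ~ treated + X1 + X2,
              data = df,
              index = c("unit", "year"),   # unit and time identifiers
              force = "two-way",           # unit and time fixed effects
              CV = TRUE, r = c(0, 5),      # cross-validate the number of latent factors
              se = TRUE, nboots = 1000)

plot(out)   # gap plot of the estimated treatment effect over time
```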
| null | CC BY-SA 4.0 | null | 2023-05-25T19:19:49.120 | 2023-05-25T19:19:49.120 | null | null | 11852 | null |
616944 | 1 | null | null | 0 | 16 | I would like to ask a question regarding the interpretation of log-log interaction term in the Two-way fixed effect model. I have the following regression model:
$$y_{it} = \beta_0 + \beta_2\,[\ln(\text{price}_{it}) \times \ln(\text{area}_{it})] + \mu_i + \phi_t + u_{it}$$
here, $y$ is years of schooling (continuous), price is housing price (continuous), and area is house size.
If $\beta_2$ is 0.5, then how can I interpret the point estimate in the logarithm setting?
For example, 1% change in house price and 1% change in house area at the same time leads to 0.5 more years of schooling?
I am not familiar with the log*log interpretation here and would like to get some help!
Thank you so much in advance for your help!
| Interpretation of coefficient on an interaction term of log(continuous x)*log(continuous y) | CC BY-SA 4.0 | null | 2023-05-25T19:35:10.293 | 2023-05-25T19:35:10.293 | null | null | 187731 | [
"regression",
"interaction",
"interpretation",
"logarithm"
]
|
616945 | 1 | null | null | 1 | 27 | I want to examine the effect of being a grandparent on work. My data spans from 2000 to 2020. The model is
$$\text{work}_{it}\sim \text{grandparent}_{it}+\text{covariates}_{it}+\delta_t+\varepsilon_{it}$$
where `grandparent` is an endogenous dummy variable instrumented by the dummy variable `first_child_female` ($=1$ if the individual's first child is female and 0 otherwise). $\delta_t$ denotes time fixed effects and $\varepsilon_{it}$ is the error term.
Now I want to examine the time trend of the effect. Specifically, I want to examine the effect from 2000 to 2010, and from 2011 to 2020, to see how it changes. Instead of splitting the data into two halves, I'm thinking about constructing a dummy variable `post2010 = 1` if the data was observed post-2010 and 0 if observed pre-2010.
Therefore, my model becomes
$$\text{work}_{it}\sim \text{grandparent}_{it} + \text{grandparent}_{it} \times \text{post-2010}+ \text{covariates}_{it}+\delta_t+\varepsilon_{it}$$
where $\text{grandparent}_{it}$ and $\text{grandparent}_{it}\times \text{post-2010}$ are endogenous variables instrumented by `first_child_female` and `first_child_female * post2010`.
Here are my two main concerns:
- How to interpret the coefficients? I think the parameter of interest for 2000-2010 is the coefficient for $grandparent$, and the parameter of interest for 2011-2020 is the sum of the coefficients for $\text{grandparent}_{it}$ and $\text{grandparent}_{it}\times post2010$.
- Is this a valid way to examine the time trend?
Thanks for the insights!
| Is adding an interactive term to a regression analysis a valid way to examine time trend? | CC BY-SA 4.0 | null | 2023-05-25T20:12:18.567 | 2023-05-25T22:49:41.467 | 2023-05-25T21:43:10.800 | 5176 | 336679 | [
"panel-data"
]
|
616946 | 2 | null | 616936 | 1 | null | Your calculations are correct. You can easily check it in any statistical software.
To get a Cramér's V of 0.5, your table would have to look like this:
| |Fail |Pass |
|---|----|----|
|Material A |1 |999 |
|Material B |403 |597 |
That is 0.1% failure with Material A, and 40% with material B. Is this something you would have expected? (This table is just an example, and there are many other tables that could give you a Cramér's V of 0.5, you'll find a completely different example a few paragraphs below).
Your problem seems rather to be with interpreting the result. In short, you may want to stay away from Cramér's V, and simply report [relative risk](https://en.wikipedia.org/wiki/Relative_risk) instead. Indeed, I see at least two problems with the approach you describe:
- You rely too much on an arbitrary benchmark to say that V is "strong" only when $\geq0.5$.
As you found out, it may be meaningless or confusing to use someone else's benchmark without thinking about whether it would make sense in your specific situation. There are many [different benchmarks out there](https://stats.stackexchange.com/a/615033/164936) for Cramér's V, so it makes little sense to pick one of them without first asking "why this benchmark?".
It depends on the specific study we are considering, and it may be better to use your own customized benchmark, as suggested by Jacob Cohen - see Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, p.224: "The best guide here, as always, is the development of some sense of magnitude ad hoc, for a particular problem or a particular field.".
- Cramér's V might not be the appropriate effect size you should look at.
Indeed, you say "Material B shows 10X higher fail rate compare to Material A and I expect strong association of Material type to Fail/Pass frequency ". It sounds like you're interested in relative risk, that is "how much more/less likely is it for Material A to fail, compared to Material B?". If it's what you're interesting in, you could simply report the relative risk (10), hopefully along with its confidence interval. (Here, the 95% confidence interval is [6.4, 15.7]).
From the information you give, I don't see the need to involve Cramér's V here, unless you're interested in something else than the difference between Material A and B.
Indeed, two tables with the same relative risk may have very different Cramér's Vs, so Cramér's V alone won't give you a lot of information about that. Another problem is that Cramér's V won't tell you the direction of the difference (is it Material B that is more likely to fail, or Material A?).
---
For a more concrete illustration of the problem with using Cramér's V in your situation, consider the three following tables:
Table X
| |Fail |Pass |
|---|----|----|
|Material A |20 |980 |
|Material B |200 |800 |
Relative risk: 0.1. Cramér's V: 0.288
Table Y
| |Fail |Pass |
|---|----|----|
|Material A |60 |940 |
|Material B |600 |400 |
Relative risk: 0.1. Cramér's V: 0.5
Table Z
| |Fail |Pass |
|---|----|----|
|Material A |200 |800 |
|Material B |20 |980 |
Relative risk: 10. Cramér's V: 0.288
As you see, we have the same relative risk for tables X and Y, but very different Cramér's Vs. Tables X and Z have the exact same Cramér's V, but in table X it's material B that is more likely to fail, while in table Z it's material A.
So relative risk may be what you're after, much more than Cramér's V. Another effect size you may be interested in is [risk difference](https://en.wikipedia.org/wiki/Risk_difference) (e.g. if failure rate for material A is 1% and failure rate for material B is 10%, the risk difference will be -9%). It might be more appropriate than relative risk, depending on the purpose of your study.
Anyway, the thing is that you should ask yourself what it is you want to learn from your table, what you want to communicate to others, and what effect size is appropriate for that. Here, from the information you give, Cramér's V doesn't seem to be appropriate.
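If you want to verify these numbers yourself, here is a base-R sketch for table X (only standard functions are used; no continuity correction, so the chi-square statistic matches the usual Cramér's V formula):
```
tab <- matrix(c(20, 980,
                200, 800),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("Material A", "Material B"), c("Fail", "Pass")))

chi2 <- chisq.test(tab, correct = FALSE)$statistic      # Pearson chi-square
V    <- sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1)))   # Cramer's V, about 0.288

rr   <- (tab["Material A", "Fail"] / sum(tab["Material A", ])) /
        (tab["Material B", "Fail"] / sum(tab["Material B", ]))  # relative risk, about 0.1
c(CramersV = unname(V), RelativeRisk = rr)
```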
| null | CC BY-SA 4.0 | null | 2023-05-25T20:38:46.460 | 2023-05-26T10:45:34.803 | 2023-05-26T10:45:34.803 | 164936 | 164936 | null |
616947 | 1 | null | null | 0 | 6 | Does anyone happen to be familiar with this method: fixed-effects ordered logit models (feologit in Stata)? In this paper: [https://journals.sagepub.com/doi/pdf/10.1177/1536867X20930984](https://journals.sagepub.com/doi/pdf/10.1177/1536867X20930984)
In the paper, this method was applied to panel-data. Is the BUC-tau method also suitable for case-control data?
For example, two types of cases (hospitalization with complication, hospitalization without complication) are matched to one control (no hospitalization) respectively. I feel that the common odds ratio may be affected by the sampling rate of the cases and control but I am not sure.
| Is fixed-effects ordered logit models (feologit in Stata) suitable for matched case-control data? | CC BY-SA 4.0 | null | 2023-05-25T20:43:12.613 | 2023-05-25T20:43:12.613 | null | null | 298204 | [
"ordinal-data",
"matching",
"latent-variable",
"ordered-logit",
"case-control-study"
]
|
616948 | 2 | null | 616899 | 8 | null | This is a nice example of a situation where the all too conventional all-or-none description of the results as 'significant' or 'not significant' is unhelpful, as you have sensibly noticed. I agree with Harvey Motulsky's answer, but would add a few considerations.
First, note that a p-value of 0.04 is only ever fairly weak evidence against the null hypothesis. Where the sample size is small, the false positive error rate does not increase (assuming the statistical model is appropriate), but a 'significant' result will often come from a sample that severely exaggerates the true effect.
Next, note that the individual binary data points carry relatively little information, and so even though the p-value falls within the conventionally 'significant' range, the likelihood function from your data will be relatively widely spread, and so if you were to do a Bayesian analysis (maybe you should!) you would find that the posterior probability distribution would not be moved very much from your prior. (All of that is to say that a result of p=0.04 says that you have fairly weak evidence against the null.)
Finally, note that the reliability of a statistical test depends on how well the statistical model matches the actual data generating and sampling systems.
Given your thoughtful question, you might find my explanations of the differences between p-value evidence considerations and error rate accountings will clarify the issues. You will find it interesting in any case: [A reckless guide to p-values: local evidence, global errors](https://link.springer.com/chapter/10.1007/164_2019_286).
| null | CC BY-SA 4.0 | null | 2023-05-25T20:58:42.887 | 2023-05-25T20:58:42.887 | null | null | 1679 | null |
616949 | 1 | null | null | 0 | 6 | Referencing this [Q/A](https://stats.stackexchange.com/questions/564/what-is-difference-in-differences), Andy offers two models that return the DiD lift estimate.
Eq1:
$$\delta = \left(E[y \mid T=\text{treat},\, t=2] - E[y \mid T=\text{treat},\, t=1]\right) - \left(E[y \mid T=\text{control},\, t=2] - E[y \mid T=\text{control},\, t=1]\right)$$
Eq2: $y = \beta_1 + \beta_2(\text{treat}) + \beta_3(\text{time}) + \delta(\text{treat} \cdot \text{time}) + \varepsilon$
From appearances, the $\delta$ in eq1 is the same as $\delta$ in eq2, which is the lift attributable to treatment exposure under parallel trends assumption.
$E[y \mid T=\text{treat},\, t=2] = \beta_1 + \beta_2 + \beta_3 + \delta + \varepsilon$
$E[y \mid T=\text{treat},\, t=1] = \beta_1 + \beta_2 + \varepsilon$
$E[y \mid T=\text{control},\, t=2] = \beta_1 + \beta_3 + \varepsilon$
$E[y \mid T=\text{control},\, t=1] = \beta_1 + \varepsilon$
So, the following should be true: $\delta = [(\beta_1 + \beta_2 + \beta_3 + \delta + \varepsilon) - (\beta_1 + \beta_2 + \varepsilon)] - [(\beta_1 + \beta_3 + \varepsilon) - (\beta_1 + \varepsilon)]$. Doing some algebra, both sides reduce to $\delta = \delta$. Great!
Question
However, suppose that the duration of pre-start period and post-start period varied by individual in the natural experiment.
My questions are:
- Is this allowed?
- What is the appropriate modeling choice?
| DiD with differing durations of pre-start and post-start periods by individual | CC BY-SA 4.0 | null | 2023-05-25T20:58:44.153 | 2023-05-25T20:58:44.153 | null | null | 288172 | [
"generalized-did"
]
|
616950 | 1 | 616961 | null | 1 | 41 | In Heffernan and Tawn's [2004 paper](https://doi.org/10.1111/j.1467-9868.2004.02050.x), they describe a procedure to sample multivariate data, conditional on one variable ($Y_i$) being extreme. The idea is that $Y_i$ is extreme if it exceeds some arbitrary threshold $v_Y = t(v_X)$. They propose a simulation procedure to simulate more extreme values of $Y_i$ and then use a conditional dependence model which has been fitted to the data to simulate a set $Y_{|i}=\{Y_j:j\neq i\}$. The details of the conditional model are unnecessary here. My issue is that I don't understand how this extreme set $Y_i^*$ is simulated.
My understanding is that all $Y$ values are Gumbel marginal transformations of the empirical data and have $\text{Gumbel(0,1)}$ distributions with CDF $\text{exp}(-\text{exp}(-y))$. However I don't think this is what the authors use to simulate $Y_i^*$.
They write:
Step 1: simulate $Y_i$ from a Gumbel distribution conditional on its exceeding $t_i(v_{Xi})$.
This [2010 paper](https://doi.org/10.1111/j.1753-318X.2010.01081.x) from Lamb uses the same method and states:
The next step is to simulate a value for $Y_i$ from the conditional Gumbel distribution.
I don't know if I need to derive this distribution, if it comes from EVT, or if it should be clear from the paper. I'm new to EVT so suspect this has been left out because it is supposedly obvious. However it is not obvious to me...
I have tried to derive a new CDF using $$\mathbb{P}(Y < y | Y > u_Y ) = \frac{\mathbb{P}(Y<y) - \mathbb{P}(Y < u)}{\mathbb{P}(Y > u)} = \frac{F(y) - F(u)}{1 - F(u)} $$ using the fact $Y \sim \text{Gumbel}(0,1)$ but this doesn't reduce down to any recognisable distribution.
| Gumbel distribution conditional on exceeding a threshold | CC BY-SA 4.0 | null | 2023-05-25T21:19:10.133 | 2023-05-28T21:30:24.393 | 2023-05-25T21:59:51.503 | 363176 | 363176 | [
"inference",
"multivariate-analysis",
"extreme-value",
"extremal-dependence"
]
|
616951 | 1 | null | null | 1 | 3 | I have multiple cases, each having multiple independent variables measured over time. One of the variables represents known failure/success in the past, and I am trying to model a binary logistic regression to predict future failure/success (DV).
- Am I barking up the wrong tree?
- What will be the best model to move forward?
Thanks,
Joseph
| Multiple cases, each having multiple time series | CC BY-SA 4.0 | null | 2023-05-25T21:24:30.400 | 2023-05-25T21:24:30.400 | null | null | 388845 | [
"regression",
"logistic",
"binary-data"
]
|
616952 | 1 | null | null | 1 | 12 | I have a dichotomous outcome variable (ever disease) and a continuous exposure variable (chemical). I have already run the logistic regression for the ln exposure and gotten the odds ratio. However, because this exposure is fairly common, a per 1-unit increase doesn't tell you a whole lot. Therefore, I wanted to see if there was a way that I could see the change per IQR increase in my exposure variable. Thank you!
| Calculating Change in OR per increase from the 25th to 75th percentile (IQR) | CC BY-SA 4.0 | null | 2023-05-25T21:31:22.503 | 2023-05-25T21:31:22.503 | null | null | 388846 | [
"regression",
"logistic",
"odds-ratio",
"interquartile"
]
|
616953 | 2 | null | 616934 | 1 | null | The claim "we get from (2.9) that for $x > 0$, $b_nx \to x_0$" definitely requires some elaboration, for which I don't see an easy fix. However, to show the ultimate goal $x_0 = \infty$, the italicized statement can be bypassed by showing that under the assumption $x_0 < \infty$ it follows that
- $b_n \to 0$.
- $x_0 \leq 0$.
Hence the contradiction can be derived as the last statement in the screenshot.
If $x_0 < \infty$, then for all $x > x_0$, $F(x) = 1$. Then if $\limsup_n b_n > 0$, there would exist a subsequence $\{b_{n_k}\}$ such that $\lim_{k \to \infty}b_{n_k} = b > 0$, whence there would be some positive $x^*$ such that $b_{n_k}x^* > x_0$ for all $k$, which implies that $F^{n_k}(b_{n_k}x^*) \equiv 1 \neq \Phi_\alpha(x^*)$. Therefore, $(2.9)$ and $x_0 < \infty$ must imply $b_n \to 0$.
On the other hand, for fixed $x > 0$, by the inequality (cf. Theorem 3.37 in
Principles of Mathematical Analysis (3rd ed.) by Walter Rudin)
\begin{align}
\liminf_n \frac{F^{n + 1}(b_{n + 1}x)}{F^n(b_nx)} \leq \liminf_n F(b_nx) \leq
\limsup_n F(b_nx) \leq \limsup_n \frac{F^{n + 1}(b_{n + 1}x)}{F^n(b_nx)}
\end{align}
and $(2.9)$, it follows that $F(b_nx) \to 1$ as $n \to \infty$. The right-continuity of $F$ and $b_n \to 0$ then imply $F(0) = 1$. Hence $x_0$ must be less than or equal to $0$. This completes the proof of 2 and the result follows.
| null | CC BY-SA 4.0 | null | 2023-05-25T21:41:29.963 | 2023-05-25T22:29:25.010 | 2023-05-25T22:29:25.010 | 20519 | 20519 | null |
616954 | 2 | null | 616950 | 0 | null | For most known distributions of $Y$, the distribution of $Y|Y>u$ does not belong to a well known class (an important exception is the exponential distribution, where $Y| Y>u$ again has a shifted exponential distribution).
The good news: You can simulate from the conditional distribution $Y|Y>u$ even without identifying it as some known distribution. A lot of research has been done on this topic, and many algorithms have been devised, see for example the paper by Asmussen, Binswanger and Højgaard (2000) [Rare events simulation for heavy tailed distributions.](https://projecteuclid.org/journals/bernoulli/volume-6/issue-2/Rare-events-simulation-for-heavy-tailed-distributions/bj/1081788030.full)
Edit: I overlooked that you asked for a distribution with known cdf (Gumbel) where conditional simulation is particularly easy by inverse cdf. More elaborate methods are only necessary when the quantile function and cdf cannot be calculated (e.g. models that are given by an algorithm)
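For completeness, a minimal sketch of that inverse-cdf approach in R for the standard Gumbel (the threshold value below is a placeholder):
```
# simulate Y | Y > u for Y ~ Gumbel(0,1) with cdf F(y) = exp(-exp(-y))
rgumbel_cond <- function(n, u) {
  Fu <- exp(-exp(-u))                # cdf evaluated at the threshold
  p  <- runif(n, min = Fu, max = 1)  # uniform draws on (F(u), 1)
  -log(-log(p))                      # inverse cdf, so every draw exceeds u
}

set.seed(1)
y <- rgumbel_cond(1e4, u = 2)
min(y)   # > 2 by construction
```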
| null | CC BY-SA 4.0 | null | 2023-05-25T22:14:23.357 | 2023-05-28T21:30:24.393 | 2023-05-28T21:30:24.393 | 237561 | 237561 | null |
616955 | 2 | null | 606949 | 0 | null | Quantile regression is not necessarily a maximum likelihood method (though it can be, when using a working likelihood such as the asymmetric Laplace distribution).
Performing empirical risk minimization with the quantile loss is not (necessarily) a maximum likelihood estimation method.
You can simply minimize this particular risk and show for arbitrary distributions that it results in an estimate of the quantile: [https://en.m.wikipedia.org/wiki/Empirical_risk_minimization](https://en.m.wikipedia.org/wiki/Empirical_risk_minimization)
The proof for this goes by
- defining the risk as the expected loss function: $R_\tau = \int_R dy p(y) w_\tau(y,\hat{y}) |y-\hat{y}|$
- splitting each integral explicitly to resolve the absolute value $|y-\hat{y}|$
- taking the derivative with respect to $\hat{y}$ and setting it to zero
- identify the definition of the $\tau$ quantile
All this works without assuming a special probability density p, so it is definitely not a maximum likelihood estimation.
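Concretely, here is a sketch of the differentiation step under one common convention for the weights ($w_\tau = \tau$ if $y > \hat{y}$ and $w_\tau = 1-\tau$ if $y < \hat{y}$), writing $F$ for the cdf of $Y$:
$$R_\tau(\hat{y}) = (1-\tau)\int_{-\infty}^{\hat{y}} (\hat{y}-y)\,p(y)\,dy + \tau\int_{\hat{y}}^{\infty} (y-\hat{y})\,p(y)\,dy$$
$$\frac{\partial R_\tau}{\partial \hat{y}} = (1-\tau)F(\hat{y}) - \tau\,\big(1-F(\hat{y})\big) = F(\hat{y}) - \tau \overset{!}{=} 0 \quad\Longrightarrow\quad \hat{y} = F^{-1}(\tau).$$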
This is also how we can show that minimizing the expected squared error estimates the (conditional) mean and that minimizing the mean absolute error estimates the (conditional) median.
| null | CC BY-SA 4.0 | null | 2023-05-25T22:21:31.917 | 2023-05-26T09:42:52.137 | 2023-05-26T09:42:52.137 | 298651 | 298651 | null |
616956 | 2 | null | 616945 | 0 | null | Happy to elaborate more as needed, but the answer to both questions is yes.
Briefly, I will use an example with a slightly different context, but the rationale is the same.
Let $y$ be your response variable, $x$ be a predictor (assume it to be a scalar variable) and let $d$ be a dichotomous indicator for group membership (1 in the group, 0 not in the group). Then the model
$$\hat{y} = \beta_0 + \beta_1 · d + \gamma_0 · x + \gamma_1 · x·d$$
is the multiple regression model for the (multiplicative) moderation model of group membership on the relationship between $x$ and $y$.
If you are not in the group ($d=0$), then the equation relating $x$ and $y$ reduces to
$$\hat{y} = \beta_0 + \gamma_0 · x$$
and if you are in the group ($d=1$), the equation reduces to
$$\hat{y} = (\beta_0+\beta_1) + (\gamma_0+\gamma_1) · x$$
Finally, if you are doing a multiple regression, then you can use the p-value for the partial slope for the interaction term to determine if the slopes are statistically significantly different.
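For instance, a minimal sketch in R (the names `y`, `x`, `d`, and `dat` are placeholders; `d` is the 0/1 group indicator):
```
fit <- lm(y ~ x * d, data = dat)   # expands to x + d + x:d
summary(fit)                       # the x:d row tests whether the slopes differ
```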
Hope this helps.
| null | CC BY-SA 4.0 | null | 2023-05-25T22:49:41.467 | 2023-05-25T22:49:41.467 | null | null | 199063 | null |
616957 | 1 | 616964 | null | -1 | 36 | I think it is easiest if I describe my question using an example:
Let
$$
\mathbf{x} \triangleq [ x_1,x_2,x_3]
$$
and for sake of explanation let's assume that
$$
\{x_i \in [0,3] \}_{i=1}^{3}
$$
which is to say that each element in the array $\mathbf{x}$ can take one of four values: 0,1,2 or 3. All combinations are allowed including repetitions e.g. $\mathbf{x}=[1,1,1]$.
Let us further say that $\mathbf{x}$ follows the distribution $P(\mathbf{X}\mid \boldsymbol \theta)$ and a sample is drawn thus: $\mathbf{x} \sim P(\mathbf{X}\mid \boldsymbol \theta)$.
The expected value of $\mathbf{X}$ is simply
$$
\mathbb{E}[\mathbf{X}] = \sum_i \mathbf{x}_i \cdot P(\mathbf{x}_i)
$$
But (here starts the real question) suppose that I now apply an operation to $\mathbf{X}$ e.g. concatenation of a reversed copy of itself i.e.
$$
f(\mathbf{x}) = \mathbf{x} \cup \texttt{reverse}(\mathbf{x})
$$
Where if we have drawn a sample e.g. $\mathbf{x} = [1,2,3]$ then $f(\mathbf{x}) = [1,2,3,3,2,1]$.
What can we say about the expectation of $f(\mathbf{X})$? I believe I am onto the Law of the unconscious statistician ([https://en.wikipedia.org/wiki/Law_of_the_unconscious_statistician](https://en.wikipedia.org/wiki/Law_of_the_unconscious_statistician)) but I do not understand if the law holds for my above function (i.e. concatenation which is the operation of interest here).
I.e., what can we say about the expected value
$$
\mathbb{E}[f(\mathbf{X})] = \sum_i f(\mathbf{x}_i) \cdot P(f(\mathbf{x}_i))
$$
in terms of the original distribution over $\mathbf{x}$, if anything at all?
An explanation with an example would be most helpful (or even an explained proof), since I do not currently understand how this can possibly work for my function $f(\cdot)$, given that it is neither differentiable nor monotonic with an inverse.
| Calculating the expectation (discrete RV) of a modified variable using the original distribution? [Law of the unconscious statistician] | CC BY-SA 4.0 | null | 2023-05-25T23:03:36.673 | 2023-05-26T01:34:53.883 | 2023-05-25T23:26:22.967 | 37280 | 37280 | [
"expected-value",
"conditional-expectation"
]
|