Id: stringlengths 1-6
PostTypeId: stringclasses, 7 values
AcceptedAnswerId: stringlengths 1-6
ParentId: stringlengths 1-6
Score: stringlengths 1-4
ViewCount: stringlengths 1-7
Body: stringlengths 0-38.7k
Title: stringlengths 15-150
ContentLicense: stringclasses, 3 values
FavoriteCount: stringclasses, 3 values
CreationDate: stringlengths 23-23
LastActivityDate: stringlengths 23-23
LastEditDate: stringlengths 23-23
LastEditorUserId: stringlengths 1-6
OwnerUserId: stringlengths 1-6
Tags: list
617424
2
null
616925
0
null
## It seems to be about how far one can move away from the support

In the synthetic control setting, the hypothetical value that would have been observed without a treatment is approximated using a linear combination of subjects thought to be similar to the treated subject. The authors stress that the weights in the linear combination are in [0, 1] (Section 3.2 in Abadie 2021). In Section 4 of Abadie 2021, the authors explain that a regression-based estimator of the hypothetical value without a treatment can be thought of as a different way of computing a synthetic control, one that is not restricted to weights in [0, 1]. There are other differences, but that is the one the authors stress, as they seem very worried by the idea of obtaining extreme values for their synthetic control. Looking at King and Zeng (2006) (cited in Section 4 of Abadie 2021), extreme values for the synthetic control indeed seem to be the concern behind wanting to restrict the coefficients.

While I can see some intuition for this, I also feel that the technical reasons are not outlined clearly. Even in the [0, 1] weight scenario, one could clearly generate a linear combination that is not in the support of the original data. However, there is a limit to how far away from the support of the data one could go, and that seems to be the point. Without the [0, 1] restriction, one could move arbitrarily far from the support (see also Section 2.3.1 in Abadie 2021). King and Zeng (2006) make the point that extrapolating further from the support amounts to relying more strongly on the assumption underlying the model class, which one may want to avoid.

Overall my feeling is that it depends on the scenario: the linear regression method could lead to some extreme weights, but it wouldn't have to. Similarly, the synthetic control estimator depends on assumptions about the similarity of the units being reweighted to the actual treated unit. The authors seem to think that the first poses a greater danger in many cases. The examples and their subject-matter expertise make this a credible point, although I don't think it should be thought of as a fact.
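To make the support point concrete, here is a minimal numeric sketch (all numbers are hypothetical and not from Abadie 2021): a convex combination of donor outcomes can never leave the range spanned by the donors in a given period, whereas unconstrained regression weights can.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical donor pool: 5 control units observed over 10 pre-treatment periods.
donors = rng.uniform(2.0, 5.0, size=(10, 5))   # rows = periods, columns = units
treated = rng.uniform(2.0, 5.0, size=10)       # treated unit's pre-treatment path

# Unconstrained regression weights: may be negative or larger than 1.
w_ols, *_ = np.linalg.lstsq(donors, treated, rcond=None)

# Convex weights (uniform here, purely for illustration).
w_convex = np.full(5, 1 / 5)

post = rng.uniform(2.0, 5.0, size=5)           # donor outcomes in a post-treatment period
print("donor range:", post.min(), post.max())
print("convex combination:", post @ w_convex)  # always inside the donor range
print("OLS combination:", post @ w_ols)        # can fall outside the donor range
```

The convex prediction is bounded by the most extreme donor in each period, which is the sense in which the [0, 1] restriction limits extrapolation.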
null
CC BY-SA 4.0
null
2023-05-31T11:46:36.483
2023-05-31T11:46:36.483
null
null
250702
null
617425
1
null
null
1
23
I have an experiment with time series data (spike rates). A Python script calculating their autocorrelation with `statsmodels.tsa.stattools.acf` was apparently giving different answers than an implementation of equivalent logic in Python, using the same bins and 99 lags in each case. The answers had the same pattern, but different magnitudes. In an effort to understand this, I first generated a random series with three sinusoidal frequencies, computed its autocorrelation in Python, and then loaded this same series and the Python correlation into Matlab, and computed the autocorrelation in Matlab. They looked identical (the Matlab version was symmetrical and needed to be sliced, but otherwise identical). Then I stopped my own code in the debugger. I wrote out a test file with the values of a representative series of rates and its autocorrelation. I loaded these into Matlab. The Matlab version is the same shape, but different magnitude. Is this expected? If so, what are the differences between Python and Matlab that can explain it? The series from my experiment is very sparse compared to the random numbers. I'm a bit befuddled because the magnitude reduction is not consistent across conditions in this experiment, such that Matlab gives a very theoretically interesting result and Python does not. Every graph, code snippet, and link to the text files follow. Edit: I tried a few other implementations of autocorrelation in Python. The `autocorr` method on a series in Pandas gives the same answer as `acf` but using NumPy's `correlate` and normalizing by the value at 0 lag gives the same answer as Matlab, so whatever this difference is, it can be compared between Numpy and Pandas/statstools, which at least have algorithms open to inspection. Edit 2: I'm removing the code from my question because I got a close vote for off-topic because it was about programming. I only included code to try to stave off any claim that this difference might be due to a programming error. Please note that my question is about what in the implementation of these algorithms leads to this difference, and which would be the most appropriate for my data. It is not a programming question. Edit 3: On further investigation, I'm not sure the NumPy and Matlab versions are identical. Closer, but not identical. [](https://i.stack.imgur.com/zjW8y.png) [](https://i.stack.imgur.com/jQR0i.png) [](https://i.stack.imgur.com/WVMw5.png) [](https://i.stack.imgur.com/xGSCB.png) [Here](https://drive.google.com/drive/folders/1zNhyhDew_O7EfDqlfvspbZhs7nmNi483?usp=sharing) are all the text files.
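For reference, a small sketch of one plausible source of such a discrepancy (the Poisson series and the exact calls below are illustrative assumptions, not the original data or code): `statsmodels` `acf` and Pandas `autocorr` subtract the series mean before normalizing, while a raw `np.correlate` normalized by its lag-0 value does not, which matters a lot for a sparse, non-negative series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
x = rng.poisson(0.2, size=500).astype(float)     # sparse, non-negative, spike-rate-like series

# statsmodels / pandas: demean the series, then normalize by the lag-0 autocovariance
r_sm = acf(x, nlags=99, fft=False)
r_pd = [pd.Series(x).autocorr(lag=k) for k in range(100)]

# Raw autocorrelation (no demeaning), normalized by its lag-0 value; this is the
# np.correlate recipe mentioned in the question
full = np.correlate(x, x, mode="full")
r_np = full[full.size // 2:][:100] / full[full.size // 2]

print(r_sm[1], r_pd[1], r_np[1])   # the first two roughly agree; the third differs in magnitude
```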
Python's `acf` and Matlab's `xcorr` apparently give different magnitude (but same pattern) answers for some data
CC BY-SA 4.0
null
2023-05-31T11:52:35.650
2023-05-31T15:40:46.073
2023-05-31T15:40:46.073
245642
245642
[ "python", "autocorrelation", "matlab", "sparse" ]
617426
1
null
null
0
15
You'll often see the goal of a statistical estimation problem as being to fit a model such that it $\approx p_{*}(y|x)$ where $p_{*}(y|x)$ is the "true distribution of the data". My question is: what uncertainty is possessed in this "true distribution of the data"? Does it assume infinite training data...in which case it only possesses "aleatoric" uncertainty? Put another way...assuming correct model specification, if we have little data but we fit a Bayesian model such that the model uncertainty is perfectly accounted for, does that model's posterior predictive $p(y|x, \mathbf{x}, \mathbf{y})$ closely approximate $p_{*}(y|x)$?
Where does the uncertainty of the "true" $p_{*}(y|x)$ come from?
CC BY-SA 4.0
null
2023-05-31T12:23:22.303
2023-05-31T12:23:22.303
null
null
381061
[ "machine-learning", "bayesian", "assumptions", "posterior" ]
617427
2
null
588751
0
null
A fully convolutional network is independent of the number of pixels in the input if the output size is allowed to have a different number of pixels as well. This is due to the fact that the number of parameters in a convolutional layer is independent of the number of pixels in the input. However, the same convolution applied to images of different sizes will produce outputs with different sizes. This scenario typically occurs in some sort of [auto-encoder](https://en.wikipedia.org/wiki/Autoencoder) setup. After all, it is typically no problem if the size of the hidden representations is greater for large images than for small ones, e.g. for segmentation, as in the fully convolutional networks paper, or for compression tasks.

To make a prediction network truly independent of the image size, you would typically use global average pooling at some point in the network. Global average pooling reduces all pixels to a single value by computing the average pixel value. This makes it possible to add fully-connected (or convolutional) layers with fixed dimensions to obtain the desired output size. However, this has nothing to do with fully convolutional networks per se.
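A minimal sketch of the global-average-pooling point, using PyTorch as an example framework (the layer sizes are arbitrary): the same network produces a fixed-size output for any input resolution.

```python
import torch
import torch.nn as nn

# A small fully convolutional classifier: global average pooling makes the
# output shape independent of the input resolution.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # (N, 32, H, W) -> (N, 32, 1, 1), for any H and W
    nn.Flatten(),
    nn.Linear(32, 10),         # a fixed-size head is now possible
)

for size in (64, 128, 137):            # different input resolutions
    x = torch.randn(2, 3, size, size)
    print(net(x).shape)                # torch.Size([2, 10]) every time
```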
null
CC BY-SA 4.0
null
2023-05-31T12:28:01.197
2023-05-31T12:28:01.197
null
null
95000
null
617428
2
null
605756
0
null
I think that in your case, if a subject is "cured", the event of interest will never happen, which corresponds to a duration that goes to infinity. If that is the case, you just use the observed follow-up time as the duration, set the timeline to the maximum horizon you want to observe, and pass "event" (not cured) as the event_col (0/1). The model will do the rest for you. Also, you can see this problem as a competing-risks problem. In that case, you can use a model that handles multiple events (censored = 0, cured = 1, event = 2).
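The `event_col` wording matches the Python `lifelines` API; if that is the library in use, a minimal sketch of the first suggestion (with a made-up data frame and column names) could look like this, where cured or censored subjects simply carry `event = 0` at their last observed time.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical data: follow-up time, and event = 1 if the event of interest occurred
# (i.e. the subject was *not* cured), event = 0 if the subject was cured or censored.
df = pd.DataFrame({
    "duration": [5, 8, 12, 12, 20, 25, 30, 30],
    "event":    [1, 0, 1, 0, 0, 1, 0, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["duration"], event_observed=df["event"])
print(kmf.survival_function_.tail())

# For the competing-risks view, lifelines also provides AalenJohansenFitter,
# which takes an event indicator with multiple codes plus an event of interest.
```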
null
CC BY-SA 4.0
null
2023-05-31T12:46:24.390
2023-05-31T12:48:11.617
2023-05-31T12:48:11.617
389264
389264
null
617429
1
null
null
1
46
I am struggling with the following problem (casella & berger 4.30(b)): $$ \text{Suppose that} \;\;\;Y|X=x \sim normal(x,x^2) \;\; \text{and} \;\; X\sim uniform\,(0,1).\\\text{Prove that} \;\;\frac{Y}{X} \;\text{and}\; X \;\text{are independent.} $$ My attempt: $$ \text{Let} \; u= y/x \;\; \text{and}\;\;v=x. \\ \text{Then} \;\;x= v,\;y = uv, \;and\;\;J = det\left(\begin{bmatrix} 0 & 1 \\ v & u \\\end{bmatrix}\right)=-v \\ \text{Also, this transformation is 1-1 and }\\\text{maps the set} \;\mathscr X=\{(x,y): 0<x<1, \;-\infty<y<\infty \} \;\\\text{onto the set} \;\mathscr Y=\{(u,v): -\infty<u<\infty, \;0<v<1 \}.\\ \text{Therefore,} \;\;f_{U,V}(u,v) = f_{X,Y}(v,uv) *|J| = f_{Y|X}(uv|v)f_{X}(v)*|v| =\frac{v}{\sqrt{2\pi}v^2}e^{-\frac{1}{2v^2}(uv-v)^2}I_{\mathscr Y}((u,v))\\\text{and thus,} \;f_{U,V}(u,v) =\frac{v}{\sqrt{2\pi}v^2}e^{-\frac{1}{2v^2}v^2(1-u)^2}I_{\mathscr Y}((u,v)) \;\text{can not be decomposed product of function of u and function of v.}\\\text{Hence},\;\frac{Y}{X}\;\text{and}\; X \;\text{are not independent.} $$ However, $$ \;\;\frac{Y}{X}\,|\,X=x \sim normal(1,1).\\\text{So}\;\; \frac{Y}{X}\,|\,X=x \;\;\text{does not depend on}\;x. \\\text{Thus,} \;\;\frac{Y}{X} \;\text{and}\; X \;\text{are independent.} $$ I don't know why my attempt is wrong.
Prove that two random variables are independent
CC BY-SA 4.0
null
2023-05-31T12:49:06.613
2023-05-31T19:48:34.930
null
null
389258
[ "mathematical-statistics", "inference", "random-variable", "independence" ]
617430
2
null
617046
0
null
## The difference is in the fact that $X_2$ is an effect of the target

This is a really cool question that I hadn't thought enough about! A linear regression of the sort $Y\sim T+X_1+X_2$ will aim to find the coefficients that explain the most variation in the target. In your first example, controlling for $X_2$ not only opens an otherwise closed path, but it also includes an effect of the target variable $Y$. In an additive noise model (as is often assumed for theoretical proofs and in simulations), $Y$ would be given as $ Y := \beta T + N_Y$ where $N_Y$ is an independent noise component. Once you include an effect of $Y$, in this case $X_2$, the $N_Y$ component can be partially explained. For this reason, $X_2$ is given a non-zero weight. Since $X_2$ is also an indirect effect of $T$, the weight given to $T$ in the regression becomes a biased estimator for $\beta$.

Graphically speaking, you can draw an additional arrow for $N_Y$ pointing into $Y$ (you can do the same for all additive noise components). Conditioning on $X_2$ creates a dependency between $T$ and $N_Y$ (because $X_2$ is a descendant of the collider $Y$ between them), which biases the estimate of $\beta$ (see also Figure 11.5 in Pearl 2009). In your second example, none of the control variables contain any information about $Y$ other than that already contained in your treatment, so all of the adjustment sets work.

Controlling for $X_2$ therefore introduces two separate problems: it opens a confounding path between $T$ and $Y$, and it opens another path between $T$ and $N_Y$. Controlling for $X_1$ solves the first problem, but not the second, which is why the estimate is still biased.
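A small simulation of the second mechanism described above (a sketch of my own, using the answer's notation; the sample size and noise scales are arbitrary): conditioning on a descendant of $Y$ lets the regression soak up part of $N_Y$ and pulls the coefficient of $T$ away from $\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

T = rng.normal(size=n)
N_Y = rng.normal(size=n)          # independent noise on Y
Y = beta * T + N_Y                # Y := beta * T + N_Y
X2 = Y + rng.normal(size=n)       # X2 is an effect (descendant) of Y

def coef_of_T(design, target):
    # Ordinary least squares; returns the fitted coefficient on the first column.
    return np.linalg.lstsq(design, target, rcond=None)[0][0]

ones = np.ones(n)
print(coef_of_T(np.column_stack([T, ones]), Y))        # close to 2.0 (unbiased)
print(coef_of_T(np.column_stack([T, X2, ones]), Y))    # noticeably shrunk: biased
```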
null
CC BY-SA 4.0
null
2023-05-31T13:06:22.973
2023-05-31T13:06:22.973
null
null
250702
null
617431
2
null
617429
1
null
You have $$f_{U,V}(u,v) =\frac{v}{\sqrt{2\pi}v^2}e^{-\frac{1}{2v^2}(uv-v)^2}I_{\mathscr Y}((u,v))$$ though I think you should extend the square root to $\dfrac{v}{\sqrt{2\pi v^2}}e^{-\frac{1}{2v^2}(uv-v)^2}I_{\mathscr Y}(u,v)$. There are $v$s you can cancel to give $$f_{U,V}(u,v) =\frac{1}{\sqrt{2\pi }}e^{-\frac{1}{2}(u-1)^2}I_{\mathscr Y}(u,v)$$ and since $I_{\mathscr Y}(u,v)$ depends on $v$ but not on $u$, this shows independence by decomposition into the marginal densities $\frac{1}{\sqrt{2\pi }}e^{-\frac{1}{2}(u-1)^2}$ and $I_{\mathscr Y}(v)$.
null
CC BY-SA 4.0
null
2023-05-31T13:08:17.283
2023-05-31T13:08:17.283
null
null
2958
null
617432
1
null
null
1
13
From a total of $N$ words I have the following dataset, where the first column represents the ranks and the second the frequency. For example $$\begin{array}{cc} 1 & 4300 \\ 2 & 3100 \\ 3 & 2500 \\ 4 & 1900 \\ \vdots & \vdots \end{array} $$ I want to find the constant that satisfies $$cf_i =\frac{\text{const}}{i}$$ where $i$ is the rank and $cf_i$ the frequency. I have plotted the log-log plot of the data and found the fitting line through regression, but I cannot understand how to find $\text{const}$. Any ideas?
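One way to read $\text{const}$ off the fitted line, sketched below with the four example rows (and assuming the fitted slope is close to $-1$): since $\log cf_i = \log(\text{const}) - \log i$, the intercept of the log-log regression is $\log(\text{const})$.

```python
import numpy as np

ranks = np.array([1, 2, 3, 4])
freqs = np.array([4300, 3100, 2500, 1900])

# Zipf's law cf_i = const / i  =>  log(cf_i) = log(const) - log(i),
# so the log-log fit has slope -1 (ideally) and intercept log(const).
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
const = np.exp(intercept)
print(slope, const)

# Alternative: fix the slope at -1 and estimate only const,
# i.e. const = exp(mean(log(freqs) + log(ranks))).
const_fixed = np.exp(np.mean(np.log(freqs) + np.log(ranks)))
print(const_fixed)
```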
Given the rank and frequency find the constant in Zipfs law
CC BY-SA 4.0
null
2023-05-31T13:22:04.153
2023-05-31T16:49:53.687
2023-05-31T16:49:53.687
5176
389267
[ "power-law", "zipf" ]
617433
1
null
null
0
18
- A "new broom" in the modeling department has swept clean the existing 5-figure number of dictionary geo features (kept in a key/value store), replacing them with just their key (more precisely, by exact latitude and longitude of client's zip code - a pair of multi-level features that together can probably proxy for unique ID), arguing that it's "just geography" (i.e. the key of all these dictionary features is a good replacement for all their diversity). - The other argument - against dictionary features in general - was is that imprecise features (e.g. aggregated by geo keys such as client zipcode) will give less accurate models, so we don't even have to test such features that have more accurate, i.e. non-aggregated alternatives (even if dictionary features would rank higher in the model features rank, they are a priori less precise and for that reason alone should not be considered). What can possibly go wrong?
Replacing 10k+ geo features with just their key (zipcode coordinates) in a GBDT model - a sound idea?
CC BY-SA 4.0
null
2023-05-31T13:22:10.157
2023-05-31T13:29:33.700
2023-05-31T13:29:33.700
325325
325325
[ "machine-learning", "feature-selection", "boosting", "feature-engineering", "geography" ]
617434
1
null
null
1
32
I did a power analysis to calculate the sample size in GPower. Now I'd like to do the same in R. However, I am not able to figure out how... I found [ss.2way](https://rdrr.io/cran/pwr2/src/R/ss.2way.R) but that seems to require different inputs. Is there any way to calculate the sample size for a 2x3 design in R? Thanks in advance. This is the GPower input/output [](https://i.stack.imgur.com/L7vS4.png)
Power Analysis for 2-Way-Anova in R
CC BY-SA 4.0
null
2023-05-31T13:26:12.577
2023-05-31T13:28:54.750
2023-05-31T13:28:54.750
389269
389269
[ "r", "anova", "statistical-power", "gpower" ]
617435
2
null
617375
0
null
The confusion might come from the [multiple parameterizations of the Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution#Alternative_parameterizations). Note that the first hazard function can be written in the form $\lambda(t)=\lambda_0(t) \exp(\eta_i)$, where $\eta_i$ is the linear predictor for individual $i$ and $\lambda_0(t)=\sqrt t/2$. With the baseline hazard $\lambda_0(t)$ proportional to a power of $t$, as for a Weibull hazard, that's a proportional-hazards representation of a Weibull survival model. Thus the hazards both for the survival and for the censoring are based on Weibull models. In the parameterization used in the [Austin paper you cite](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3546387/), the Weibull cumulative hazard is represented as $H_0(t)=\lambda t^{\nu}$ and you simulate a Weibull-distributed survival time $T$ by sampling $u$ from a uniform distribution of survival fractions over (0,1) and calculating: $$T = \left(-\frac{\log u}{\lambda \exp (\eta)} \right)^{1/\nu} .$$ That comes from the general relationship between the survival function $S(t)$ and the cumulative hazard function, $S(t)=\exp(-H(t))$. Both examples of simulation code that you show are in that general form for simulation from a Weibull survival model, although I can't rule out some discrepancies in the translation from page 13 of the [arXiv paper](https://arxiv.org/pdf/2207.07758.pdf) to the supplementary code arising from the differing parameterizations.
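As an illustration of the inverse-transform simulation described by that formula, here is a sketch with arbitrary parameter values (not the code from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices (not from either cited paper):
lam, nu = 0.5, 1.5            # Weibull parameters, with H0(t) = lam * t**nu
beta = np.array([0.7, -0.3])
X = rng.normal(size=(1000, 2))
eta = X @ beta                # linear predictor for each individual

u = rng.uniform(size=1000)    # uniform draws of survival fractions
T = (-np.log(u) / (lam * np.exp(eta))) ** (1 / nu)   # inverts S(t) = exp(-H(t))
print(T[:5])
```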
null
CC BY-SA 4.0
null
2023-05-31T13:35:19.033
2023-05-31T13:35:19.033
null
null
28500
null
617436
1
null
null
0
16
Is it true that for a square symmetric matrix such as the covariance matrix, the singular values are equal to the eigenvalues? The eigen decomposition for covariance is the same as singular value decomposition?
Singular value and Eigen value for Square Matrix
CC BY-SA 4.0
null
2023-05-31T13:37:46.370
2023-05-31T13:50:33.683
null
null
388783
[ "eigenvalues" ]
617437
1
null
null
0
13
Let say I am sampling from a population with an unknown distribution to approximate the mean of the population. I am trying to figure out how large my sample size n has to be in order to guarantee with 95% confidence that my sample mean is within, say, 1% of the population mean. I know I can get the 95% confidence interval of the sample mean if the standard deviation $\sigma$ is known, with: $\bar{X} \pm \frac{1.96\sigma}{\sqrt{n}}$ But $\sigma$ requires knowing population parameters like $\mu$. Is there a way to estimate a confidence interval for the mean if I only know $n$?
Estimate potential distance from population mean given only sample size?
CC BY-SA 4.0
null
2023-05-31T13:42:40.207
2023-05-31T13:42:40.207
null
null
76851
[ "confidence-interval", "sample" ]
617438
2
null
617436
1
null
For a real symmetric positive semi-definite matrix like a covariance matrix, the singular values are equal to the (nonnegative) eigenvalues. The eigenvectors are also equal to the left and right singular vectors. This is because, for these types of matrices, the eigendecomposition and the SVD give equivalent decompositions. (The square-root relationship holds between the singular values of a data matrix $X$ and the eigenvalues of $X^\top X$.) Detailed discussions and derivations can be found in this extensive Mathematics Stack Exchange question: [Intuitively, what is the difference between Eigendecomposition and Singular Value Decomposition?](https://math.stackexchange.com/questions/320220/intuitively-what-is-the-difference-between-eigendecomposition-and-singular-valu)
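A quick numerical check of this claim, as a sketch with an arbitrary random covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
C = np.cov(X, rowvar=False)                       # 4x4 covariance matrix (symmetric PSD)

eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]    # eigenvalues, sorted descending
singvals = np.linalg.svd(C, compute_uv=False)     # singular values, descending
print(np.allclose(eigvals, singvals))             # True for a PSD matrix
```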
null
CC BY-SA 4.0
null
2023-05-31T13:44:38.433
2023-05-31T13:50:24.093
2023-05-31T13:50:24.093
53580
53580
null
617439
2
null
617420
0
null
@Frank Harrell I'm not a statistics major, but I'll try to present a few motivations for using IPTW from a non-expert perspective. Assuming that by "covariate adjustment" you mean multiple covariate Cox regression: - IPTW allows for adjustment of more parameters. I once attended a lecture on your modeling strategy, and multiple regression had a rule of thumb of "10 events to 1 variable." Additionally, in your comment on another question on this website, you mentioned that IPTW only requires "4 events to 1 variable." In my small-sample study, this can help me balance more covariates, whereas in multiple regression, I need to consider how many degrees of freedom I can afford. - The entropy balancing algorithm in IPTW can achieve perfect matching of covariates between two groups. At the same time, I will report the effective sample size after weighting in the study results. - I'm unsure if the effects of covariates are linear. Assuming non-linear effects would increase the cost of degrees of freedom in Cox multiple regression. - In IPTW, I only need to perform simple tests on the balanced groups. Balanced survival curves are more intuitive, and I can easily obtain effect sizes through Cox univariate regression.
null
CC BY-SA 4.0
null
2023-05-31T13:45:57.150
2023-05-31T13:45:57.150
null
null
388935
null
617440
1
null
null
0
7
I have pooled several rounds of annual cross-sectional survey data to create 5 synthetic-cohorts to assess differences in (say smoking) prevalence between cohorts at the same age. The age-range (25-60) for the cohorts does not overlap completely - the most recent cohort has rates for ages 25-38, the oldest cohort 45-60; the middle cohort has prevalence rates for all ages 25-60 and is the reference cohort. I use prevalence rate ratios (PRR) by averaging the mean prevalence over those ages in a cohort which overlap with the same ages in the reference cohort. I would now like to stratify the overall PRR by social class to see if the risk of smoking is changing by social class (5 categories) within each cohort - i.e. an increase in relative inequality. But the sample sizes are too small to split by sex by age, cohort, or class for descriptive analysis. Could I do this with a logistic or Cox model (which is appropriate?) and also include an interaction term for cohorts by social class to assess gradient in inequality change? My analysis will be done in SPSS. Thanks for advice in advance.
Best model to use for computing prevalence rate ratios in cross-sectional data with binary outcome?
CC BY-SA 4.0
null
2023-05-31T13:50:28.157
2023-05-31T13:59:16.823
2023-05-31T13:59:16.823
345611
389270
[ "logistic", "binomial-distribution", "cox-model", "prevalence", "synthetic-cohort" ]
617441
1
617444
null
2
31
In page 64 of [Bayesian Data Analysis](http://www.stat.columbia.edu/%7Egelman/book/) by Gelman et.al. they write > ... sensible vague prior density for µ and σ, assuming prior independence of location and scale parameters, is uniform on ($\mu$, $\log~\sigma$) or, equivalently, $p(\mu, \sigma^2) \propto 1/\sigma^2$. Also in page 87 of [The BUGS Book](https://www.mrc-bsu.cam.ac.uk/software/bugs/the-bugs-project-the-bugs-book/) ([pdf download](https://www.mrc-bsu.cam.ac.uk/wp-content/uploads/bugsbook_chapter5.pdf)) they discuss the equivalence of the Jeffreys prior to the uniform on the scale: > ... the Jeffreys prior is $p_J(\sigma) \propto \sigma^{-1}$, which in turn means that $p_J(\sigma^k) \propto \sigma^{-k}$ for any choice of power k. ... we note that the Jeffreys prior is equivalent to $p_J(log \sigma^k) \propto constant$. I have understood this to mean that a uniform prior on $\log \sigma^2$ should be $\propto 1/\sigma^2$. I have been unable to derive this. This is my attempt (with a nod to [this answer](https://stats.stackexchange.com/questions/7430/creating-a-uniform-prior-on-the-logarithmic-scale#415239)): \begin{align} \text{Let}~ Y =& \log \sigma^2 \\ p(Y) \propto& 1 \\ \frac{dY}{d\sigma^2} =& 2/\sigma \end{align} Then to get the distribution on the $\sigma^2$ scale: \begin{align} \text{If}~ X =& \sigma^2 \\ \text{then}~ p(X) =& p(Y) |\frac{dY}{d\sigma^2}| \\ =& 1 \times 2/\sigma \\ \propto & 1/\sigma \end{align} Where have I gone wrong please?
Derive the prior on variance scale if uniform prior placed on logarithm scale
CC BY-SA 4.0
null
2023-05-31T13:57:53.433
2023-05-31T14:30:51.373
null
null
43842
[ "bayesian", "prior" ]
617442
1
null
null
2
48
I'm a physicist trying to finally get a hold on practical statistics for particle physics and am having problem with the following -- I apologize for the lack of formality below. Suppose the number of events within a single channel is governed by a Poisson distribution $P(N,\mu)$, whose parameter for Null ($\mu_b$) and Alternative ($\mu_s$) hypotheses I obtain from external reasoning/simulations (notice that $\mu_s \ge \mu_b$). I want to find a threshold for exclusion of signal at 95% CLs (not CL), which is defined by $$ CL_s = \frac{CL(\text{alternative})}{CL(\text{null})}. $$ Since I want to rule out the alternative (and $\mu_s \ge \mu_b$), I want to define the $N_0$ threshold where $$ CL_s = \frac{P(N\le N_0\mid\mu_s)}{P(N\le N_0\mid\mu_b)}\le 0.05. $$ Lack of precision aside, I understand what is happening here. However, some texts for statistics in High Energy Physics seem to suggest using the Likelihood Ratio test statistic for CLs. In this case, I try to replace $N$ by $$ Q \equiv -2\ln \frac{P(N\mid\mu_s)}{P(N\mid\mu_b)} $$ and then define the threshold as $$ CL_s =\frac{P(Q> Q_0\mid\mu_s)}{P(Q> Q_0 \mid \mu_b)} = \frac{1-P(Q\le Q_0 \mid \mu_s)}{1-P(Q\le Q_0 \mid \mu_b)}\le 0.05. $$ In the end, however, I end up inverting $Q_0 = Q(N_0)$, to obtain a limit on $N$. Does this make sense at all? Do I gain something going over to $Q$, or is the Likelihood Ratio Test something entirely different? Edit: For context and confirmation, check the following didactic [article](https://www.ma.imperial.ac.uk/%7Edvandyk/Research/14-reviews-higgs.pdf).
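For what it's worth, the counting-experiment version of the first definition above can be computed directly; the sketch below (with made-up values for $\mu_b$ and $\mu_s$) simply scans for the largest $N_0$ at which $CL_s \le 0.05$. It does not address the likelihood-ratio form.

```python
from scipy.stats import poisson

mu_b, mu_s = 3.0, 8.0    # hypothetical background-only and signal+background expectations
alpha = 0.05

# Counting-experiment CLs: CLs(N0) = P(N <= N0 | mu_s) / P(N <= N0 | mu_b),
# which increases with N0, so scan upward for the exclusion threshold.
for n0 in range(0, 30):
    cls = poisson.cdf(n0, mu_s) / poisson.cdf(n0, mu_b)
    if cls > alpha:
        print("signal excluded at 95% CLs for observed N <=", n0 - 1)
        break
```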
Likelihood Ratio vs Modified Frequentist Approach (CLs)
CC BY-SA 4.0
null
2023-05-31T14:04:39.737
2023-06-02T16:50:20.283
2023-05-31T20:23:04.390
389271
389271
[ "hypothesis-testing", "confidence-interval", "likelihood-ratio" ]
617443
2
null
617333
1
null
You can think of it like this: a [function](https://en.wikipedia.org/wiki/Function_(mathematics)) is a mapping $f: x \to y$. We use Gaussian Processes to model random functions $f \sim \mathcal{GP}$, where the mapping is non-deterministic. GP takes some points $x$ and the realizations of the functions $f(x) = y$ to learn the random function $f$. Using the symbols you used in your question $x$ are the observed inputs, and $x_*$ are the new, unobserved ones. The intervals describe the distribution over functions (as modeled with GP). We can evaluate the GP over arbitrary inputs $x_*$ to get the draws from the mappings for those inputs. So "the points" are just the mappings that we happened to look at, but GP can be evaluated for any input $x_*$ and tell us what is the distribution of the realizations of $f$ at those points.
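As an illustration of evaluating the GP at arbitrary new inputs $x_*$, here is a minimal sketch with scikit-learn (the kernel, noise level, and toy data are arbitrary choices):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, size=8)).reshape(-1, 1)    # observed inputs
y = np.sin(x).ravel() + 0.1 * rng.normal(size=8)         # observed realizations f(x)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(x, y)

# Evaluate the posterior over *any* new inputs x_*, not just the training points:
x_star = np.linspace(0, 5, 200).reshape(-1, 1)
mean, std = gp.predict(x_star, return_std=True)          # pointwise mean and uncertainty
print(mean[:3], std[:3])
# An interval such as mean +/- 1.96*std describes the distribution of f(x_*) at each x_*.
```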
null
CC BY-SA 4.0
null
2023-05-31T14:18:49.190
2023-05-31T14:18:49.190
null
null
35989
null
617444
2
null
617441
2
null
Your error is going from $\text{Let}~ Y = \log \sigma^2$ to $\dfrac{dY}{d\sigma^2} = 2/\sigma$ You should have: $\dfrac{dY}{d\sigma^2} = 1/{\sigma^2}$ (simple derivative of a logarithm) though perhaps you tried $\dfrac{dY}{d\sigma} = 2\sigma \frac{1}{\sigma^2}= 2/{\sigma}$ (chain rule). This gives you: $\text{If}~ X = \sigma^2 \text{ then } p(X) \propto 1/{\sigma^2}$ for the improper prior. It is worth noting that $\log(\sigma^k)=k\log(\sigma)$ and this indicates why something similar happens for all powers of the standard deviation.
null
CC BY-SA 4.0
null
2023-05-31T14:22:54.700
2023-05-31T14:30:51.373
2023-05-31T14:30:51.373
2958
2958
null
617446
1
null
null
0
19
I am trying to implement open-set classification, and from my research, softmax (usually with temperature scaling) can be used to create a confidence metric. However, for a complete outlier input which is not part of any of the known classes, the temperature-scaled softmax assigns a probability of 1 to the middle class: it wants every other probability to be very small (essentially 0), but it still needs to satisfy the requirement that the probabilities sum to 1. How do I handle this situation?
Softmax gives high value for middle class when seeing outlier data
CC BY-SA 4.0
null
2023-05-31T14:36:47.433
2023-05-31T14:36:47.433
null
null
389274
[ "machine-learning", "tensorflow", "computer-vision", "artificial-intelligence", "softmax" ]
617447
1
null
null
0
12
I am working with different transformations of my response. I use two error metrics which normalize by the range of the data, in order to make comparisons between different models based on these transformations. Does anyone know a source which discusses the implications of normalizing by the range when the distribution is normal, T-shaped or skewed? Or normalization by the range in relation to distributions in general? Thanks. Below is an example of two distributions, with the right-hand one being transformed according to the first-difference transformation. [](https://i.stack.imgur.com/bkpk6.png)
Implications of normalizing by the range of data when comparing evaluation metrics for different distributions
CC BY-SA 4.0
null
2023-05-31T15:13:33.563
2023-05-31T16:46:44.573
2023-05-31T16:46:44.573
320876
320876
[ "distributions", "normalization", "error", "model-evaluation" ]
617448
1
null
null
0
17
What kind of deep learning is the generation of numerical features (Y) from objects (X) used to compute a score (f(.), differentiable) that is to be maximized directly? Basically $\text{NN}_\theta(x) = y$, $f(y) = \text{score}$, so $\frac{d\,\text{score}}{d\theta} = \frac{dy}{d\theta} \frac{d f(y)}{dy}$ can be used for backpropagation; f(.) can be seen as a form of loss function. This doesn't seem like supervised learning as no label exists for $x$. A good one could be found, but the best label score-wise isn't known and the search shouldn't be guided with potentially sub-optimal $y$s. This isn't reinforcement learning, as the score is differentiable and used for backprop; there is also no sequential decision-making. Is this a form of deep unsupervised learning? Self-supervised learning?
What kind of learning is feature generation for score maximization?
CC BY-SA 4.0
null
2023-05-31T15:25:06.880
2023-05-31T16:51:57.627
2023-05-31T16:51:57.627
389279
389279
[ "machine-learning", "generative-models" ]
617449
2
null
617419
4
null
The event* $$0 \in \left[Y-\log\left(\frac{1-\alpha_2}{\alpha_2}\right),Y-\log\left(\frac{\alpha_1}{1-\alpha_1}\right)\right]$$ is equivalent to the event $$\theta \in \left[X-\log\left(\frac{1-\alpha_2}{\alpha_2}\right),X-\log\left(\frac{\alpha_1}{1-\alpha_1}\right)\right]$$ so if you can show that the first event occurs $1-\alpha_1-\alpha_2$ of the time, then it works as well for the second event. The advantage of working with the first event is that it doesn't depend anymore on the parameter $\theta$.

---

To compute the probability of the first event you might alternatively use the equivalent event $$Y \in \left[\log\left(\frac{\alpha_1}{1-\alpha_1}\right),\log\left(\frac{1-\alpha_2}{\alpha_2}\right)\right]$$ which follows from: if $a \leq Y \leq b$ then $a-Y \leq 0 \leq b-Y$ and $Y-b \leq 0 \leq Y-a$.

Also useful to know is that the $f(x)$ in the question is the [logistic distribution](https://en.m.wikipedia.org/wiki/Logistic_distribution), which has the quantile function $\log\left(\frac{p}{1-p}\right)$ for the case with zero mean and scale one. So the event is also the same as $$Y \in \left[Q(\alpha_1),Q(1-\alpha_2)\right]$$

---

*I believe that you made a typo and should have used $$[X-\log(\frac{1-\alpha_2}{\alpha_2}),X-\log(\frac{\alpha_1}{1-\alpha_1})]$$

Demonstration with R code:

```
set.seed(1)
x = rlogis(10^3)

alpha = 0.2
beta = 0.2

### correct boundaries contain
### zero in 602/1000 cases
upper = x-log(alpha/(1-alpha))
lower = x-log((1-beta)/beta)
mean((lower<0)*(upper>0))

### wrong boundaries contain
### zero in 664/1000 cases
upper = x-log(alpha/(1+alpha))
lower = x-log((1-beta)/beta)
mean((lower<0)*(upper>0))
```
null
CC BY-SA 4.0
null
2023-05-31T15:34:29.807
2023-05-31T19:19:57.517
2023-05-31T19:19:57.517
164061
164061
null
617450
1
null
null
1
18
I want to characterize the relation of a few input parameters to a single output parameter. The problem I have is that my data is collected from several groups. The groups are defined both by the input parameters and by how the input parameters interact with the output parameter. I don't know the identity or proportion of each group in the data.

Example: I want to characterize how ear and tail lengths are related to maximal vertical jump height, and my data has, for each surveyed animal, the value of these three parameters. Unbeknownst to me, the data was collected from:

- bunnies, whose maximal jump height is proportional to the length of their ears.
- striped cats, whose maximal jump is proportional to the length of their tail.
- dotted cats, whose maximal jump is inversely proportional to the length of their tail.

Regressing the whole group together would not identify the true effects in the data. I could first cluster the data on the input parameters (e.g. with k-means) but would then miss groups that share similar input features while interacting differently with the output feature (striped and dotted cats). A non-efficient solution could be to consider different partitions of the data and give each partitioning some score integrating the cluster quality (e.g. within-cluster similarity and between-cluster dissimilarity) with the regression accuracy (e.g. sum of squares of the residuals). The problem is that there are a lot of ways to partition the data. I wonder whether there is a more systematic way to integrate the clustering and regression processes.
Regression with unlabeled data from several clusters
CC BY-SA 4.0
null
2023-05-31T15:34:50.507
2023-05-31T18:42:39.910
2023-05-31T18:42:39.910
52004
52004
[ "regression", "clustering", "unsupervised-learning" ]
617451
1
null
null
0
4
I am using R. I have a dataset that looks like this using `str()`:

```
'data.frame': 233 obs. of 3 variables:
 $ Design : Factor w/ 4 levels "Crossover","Observational",..: 2 3 3 3 4 3 3 1 3 2 ...
 $ Status : Factor w/ 3 levels "Active","Passive",..: 1 1 2 2 2 2 1 1 2 1 ...
 $ Outcome: Ord.factor w/ 3 levels "Positive"<"Neutral"<..: 2 1 1 1 1 1 3 2 1 1 ...
```

This is the code I have run so far:

```
c.1 <- c %>%
  transmute(Design = as.factor(Design),
            Status = as.factor(Status),
            Outcome = factor(Outcome, ordered = TRUE,
                             levels = c("Positive", "Neutral", "Negative"))) %>%
  dcast(Design + Status ~ Outcome, drop = FALSE)
```

to get output that looks like this:

```
          Design  Status Positive Neutral Negative
1      Crossover  Active        8      10        1
2      Crossover Passive        6       4        0
3      Crossover  Posted        0       0        0
4  Observational  Active        9       4        0
5  Observational Passive        0       0        1
6  Observational  Posted        0       0        0
7       Parallel  Active       33      32        2
8       Parallel Passive       67      37        1
9       Parallel  Posted        1       2        0
10      Pre/Post  Active        4       0        0
11      Pre/Post Passive       10       1        0
12      Pre/Post  Posted        0       0        0
```

Situation: "Design" refers to the study designs that have been reported. "Status" refers to the way a treatment was delivered (active, passive, or posted). Positive, Neutral and Negative are the outcomes of the intervention. The numbers are the counts of each outcome for a given combination of Design and Status. So, for example, the combination of Crossover (Design) + Active (Status) gave 8 positive outcomes.

My question: which statistical test can I use to check if there is a significant difference in the nature of the outcome (Positive, Neutral or Negative) for a given combination of Design and Status? I was advised to use a method that treats the outcome as "ordinal", since Positive, Neutral and Negative in our context are ordered in the same way as "High", "Medium" and "Low".
Testing difference of counts of one category across a combination of the other two categories
CC BY-SA 4.0
null
2023-05-31T15:54:46.943
2023-05-31T16:06:03.730
2023-05-31T16:06:03.730
378020
378020
[ "categorical-data", "many-categories" ]
617452
1
null
null
4
156
I want to perform a paired t-test to check if there's some effect; I have the distribution of "before" and the distribution of "after" the manipulation. Do I need to assume that the population variances of the two distributions are equal?
equal *population* variances in paired t test
CC BY-SA 4.0
null
2023-05-31T16:17:54.197
2023-06-01T07:21:36.193
2023-06-01T06:23:31.740
53690
389283
[ "hypothesis-testing", "distributions", "variance", "t-test", "paired-data" ]
617454
2
null
617452
8
null
Any assumptions you make in a paired test would have to do with the paired differences. After all, a paired test is a one-sample test in disguise. Therefore, NO, you do not need to assume equal variances of the two groups for a paired t-test.
null
CC BY-SA 4.0
null
2023-05-31T16:31:44.597
2023-06-01T07:21:36.193
2023-06-01T07:21:36.193
247274
247274
null
617455
1
null
null
0
13
As I am reading about recommender systems in Machine Learning, UV decomposition caught my eye ([click](https://stats.stackexchange.com/questions/189730/what-is-uv-decomposition) for an explanation or see below). So I have two questions: Question 1: what are the drawbacks of trying to UV-decompose a 1 by m vector into a product of a 1 by d ( $U$ ) vector and a d by m matrix ($V$)? As far as I can see, the problem arises from the fact that the system is poorly constrained. Is it so? How can I relate that fact to the rank of the original matrix? Question 2: Imagine that we have two matrices. Matrix $A$ is a user-item n by m matrix. Matrix $B$ is a 1 by m vector describing certain characteristics of items. Is it possible to overcome the problem of poor constraint by constructing the decomposition as following: $A \approx U_1V^{T}$, $B \approx U_2V^{T}$? Note that $V^{T}$ matrix is shared by both decompositions and thus we expect the constraints of matrix A to bind the solution. Please also note that the optimization function will then look (roughly) like that: $$\min J = ||A - U_1V^{T}||_2^2 + \beta ||B - U_2V^{T}||_2^2$$ Effectively, what I am trying to achieve is to inject some item-related side information from a matrix $B$ to the solution. What are the cons of doing so? Thanks! --- More on UV decomposition: > Consider movies as a case in point. Most users respond to a small number of features; they like certain genres, they may have certain famous actors or actresses that they like, and perhaps there are a few directors with a significant following. If we start with the utility matrix M, with n rows and m columns (i.e., there are n users and m items), then we might be able to find a matrix U with n rows and d columns and a matrix V with d rows and m columns, such that UV closely approximates M in those entries where M is nonblank. If so, then we have established that there are d dimensions that allow us to characterize both users and items closely. We can then use the entry in the product UV to estimate the corresponding blank entry in utility matrix M. This process is called UV-decomposition of M. [](https://i.stack.imgur.com/BfXDs.png)
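If it helps to see the proposed objective in code, here is a minimal alternating-least-squares sketch of the shared-$V$ idea (my own sketch: the function name `shared_v_als` is made up, the data are synthetic, and it ignores missing entries, regularization, and convergence checks):

```python
import numpy as np

def shared_v_als(A, B, d, beta=1.0, n_iter=50, seed=0):
    """ALS for A ~ U1 @ V.T and B ~ U2 @ V.T with a shared item-factor matrix V.

    A: (n, m) user-item matrix, B: (1, m) item side information, d: latent dimension.
    """
    rng = np.random.default_rng(seed)
    n, m = A.shape
    V = rng.normal(scale=0.1, size=(m, d))
    for _ in range(n_iter):
        # Update U1 and U2 with V fixed (ordinary least squares).
        U1 = A @ V @ np.linalg.pinv(V.T @ V)
        U2 = B @ V @ np.linalg.pinv(V.T @ V)
        # Update V with U1, U2 fixed: stack the two (beta-weighted) problems.
        U = np.vstack([U1, np.sqrt(beta) * U2])
        M = np.vstack([A, np.sqrt(beta) * B])
        V = (np.linalg.pinv(U.T @ U) @ U.T @ M).T
    return U1, U2, V

# Tiny synthetic example
rng = np.random.default_rng(1)
A = rng.poisson(2.0, size=(6, 5)).astype(float)
B = rng.normal(size=(1, 5))
U1, U2, V = shared_v_als(A, B, d=2, beta=0.5)
print(np.linalg.norm(A - U1 @ V.T), np.linalg.norm(B - U2 @ V.T))
```

In practice you would sum the reconstruction error only over the observed entries of $A$ and add regularization on $U_1$, $U_2$ and $V$.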
Is it possible to apply matrix decomposition to a vector, injecting additional information to UV decomposition?
CC BY-SA 4.0
null
2023-05-31T16:48:05.837
2023-05-31T16:48:05.837
null
null
389285
[ "machine-learning", "linear-algebra", "recommender-system", "svd", "matrix-decomposition" ]
617456
1
617551
null
2
18
Define $\pi_i$ as the probability that person $i$ will be missing from your sample, and let $Y_i = 1$ denote that a subject is missing. Say we're in a missing at random (MAR) scenario where $\pi_i$ depends on two known continuous variables $X_1$ and $X_2$: $$\text{logit}(\pi_i) = \beta_1 x_1 + \beta_2 x_2$$ Let's say that I introduce a "baseline" rate of missing values, $\beta_0$, that is independent of all covariates. For a specific example, let's say there is a 5% baseline chance that an observation is missing. In this case, $\pi_i$ is described by $$\text{logit}(\pi_i) = \beta_0 + \beta_1 x_1 + \beta_2 x_2.$$ In our specific example of 5% baseline missingness, this would be $$\text{logit}(\pi_i) = \text{logit}(0.05) + \beta_1 x_1 + \beta_2 x_2.$$ In this scenario, there are parts of $\pi_i$ that depend on known covariates, i.e. $\beta_1$ and $\beta_2$. There is also a part of $\pi_i$ that doesn't depend on any covariates: $\beta_0$. If we observe missing values through this second scenario, is this considered MAR? My guess is that it is, because the distribution of $\pi_i$ depends on $X_1$ and $X_2$ even if not all parts of $\pi_i$ do.
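For concreteness, a small simulation of this missingness mechanism (my own sketch; the coefficient values are arbitrary), with the baseline entering as $\beta_0 = \text{logit}(0.05)$:

```python
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(0)
n = 10_000
x1, x2 = rng.normal(size=n), rng.normal(size=n)

beta0 = logit(0.05)                            # "baseline" 5% missingness when x1 = x2 = 0
beta1, beta2 = 0.8, -0.5
pi = expit(beta0 + beta1 * x1 + beta2 * x2)    # P(missing | x1, x2)
missing = rng.binomial(1, pi)
print(missing.mean())   # overall missing rate, driven by both the baseline and the covariates
```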
Baseline rate of missing values. Can missing values be MAR and MCAR?
CC BY-SA 4.0
null
2023-05-31T16:49:09.203
2023-06-01T15:15:55.903
2023-06-01T15:15:55.903
45453
45453
[ "logistic", "mathematical-statistics", "missing-data" ]
617457
1
null
null
0
16
The plot below has the KM curve and the model-predicted curve overlaid for a repeated-event model, produced by the code below. I am struggling to interpret this plot; any help appreciated. Thanks.

1) Is this for the first event, or for all events (counting-process method) regardless of subject?

```
fmods = flexsurvreg(Surv(START, STOP, EVENT) ~ 1, data = data, dist = "weibull")
p = ggflexsurvplot(fmods, xlab = "Time (Months)", censor = F, title = "", conf.int = T, fun = "survival")
p = p + labs(y = "Probability")
p1 = p$plot + theme_bw() + theme(legend.position = 'none') + scale_x_continuous(breaks = seq(0,110,8))
p1
```

[](https://i.stack.imgur.com/e3xKw.png)
does ggflexsurvplot overlay KM curve with model predicted curve for the first event or all events for repeated event analysis?
CC BY-SA 4.0
null
2023-05-31T16:53:04.007
2023-05-31T21:42:51.800
2023-05-31T21:42:51.800
297005
297005
[ "survival" ]
617458
1
null
null
0
26
Suppose I have m observations of $y$ vectors of varying dimensions $y_1=(y_{11},\dots, y_{1n_1}),\dots,y_m=(y_{m1},\dots, y_{mn_m})$, where $y_i$ is of dimension $n_i\geq 300$ for $1\leq i\leq m$. Let $X_i$ be corresponding covariate matrix of $y_i$ of dimension $n_i\times p$. I will denote $D=(y_1,X_1,\dots, y_m,X_m)$ for all data. Suppose $\beta$ is a column vector of dimension $p$. Suppose $y_i|X_i,\beta,\sigma_i\sim Normal(y_i-X_i\beta,\sigma_i^2I_{n_i})$ where $\sigma_i^2$ is the variance and $I_{n_i}$ is identity matrix of size $n_i$. Suppose $(\beta,\sigma_1^2,\dots,\sigma_m^2)\sim \frac{1}{\prod_{1\leq i\leq m}\sigma_i^2}$ for non-informative prior, where RHS of $\sim$ is the proportional density. Then $p(\beta,\sigma_1^2,\dots,\sigma_m^2|D)\propto\prod_{1\leq i\leq m}\frac{1}{\sigma_i^{n_i+2}}\exp(-\frac{y_i^Ty_i}{2\sigma_i^2}-\beta^T\frac{X_i^TX_i}{2\sigma_i^2}\beta+2\frac{y_i^TX_i\beta}{2\sigma_i^2})$ From joint density, I can read off conditional density of $p(\beta|\sigma_1^2,\dots,\sigma_m^2,D)$ and $p(\sigma^2_i|\beta,\sigma^2_{-i},D)$ where $\sigma^2_{-i}=(\sigma^2_1,\dots,\sigma^2_{i-1},\sigma^2_{i+1},\dots,\sigma^2_m)$. I will denote matrix $A=\sum_{1\leq i\leq m}\frac{X_i^TX_i}{\sigma_i^2}$ and vector $v=\sum_{1\leq i\leq m}\frac{X_i^Ty_i}{\sigma_i^2}$. $p(\beta|\sigma_1^2,\dots,\sigma_m^2,D)\propto \prod_{1\leq i\leq m}\exp(-\beta^T\frac{X_i^TX_i}{2\sigma_i^2}\beta+2\frac{y_i^TX_i\beta}{2\sigma_i^2})=\exp(-\frac{1}{2}(\beta^TA\beta+2\beta^Tv))$. This gives $\beta|\sigma_1^2,\dots,\sigma_m^2,D\sim Normal(A^{-1}v,A^{-1})$. $p(\sigma_i^2|\beta,\sigma_{-i}^2,D)\propto\prod_{1\leq i\leq m}\frac{1}{\sigma_i^{n_i+2}}\exp(-\frac{(y_i-X_i\beta)^T(y_i-X_i\beta)}{2\sigma_i^2})\propto\frac{1}{(\sigma_i^{2})^{\frac{n_i}{2}+1}}\exp(-\frac{(y_i-X_i\beta)^T(y_i-X_i\beta)}{2\sigma_i^2})\sim Inv-\chi^2(\nu=n_i,s^2=\frac{(y_i-X_i\beta)^T(y_i-X_i\beta)}{n_i})$ where $Inv-\chi^2(\nu,s^2)$ is the inverse $\chi^2$ distribution with degree of freedom $\nu$ and scale $s$. $Q:$ Have I made any algebraic mistakes? When I did my Gibbs sampling on simulated data by conditional densities, it seems that I got wrong median for regression coefficients. That is why I was wondering whether I did calculation wrong. If there is no calculation mistake, then I would presume that it is implementation mistake.
Was there any mistake in my derivation in Gibbs sampling?
CC BY-SA 4.0
null
2023-05-31T16:57:08.053
2023-05-31T19:58:30.650
2023-05-31T19:58:30.650
79469
79469
[ "regression", "bayesian", "markov-chain-montecarlo", "model", "gibbs" ]
617459
1
null
null
2
36
I have a known distribution for my population, and it is very right-skewed. Let's say lognormal with mu = 0 and sigma = 3. The mean of this distribution is about 90, and the median is 1. For a given sample, I am interested in knowing the ratio of values in excess of a certain threshold (let's say 90) to the total sum of the sample. For example, if I had a sample of 3 values: 3, 10, 150, then this would be 60 / (150 + 10 + 3) = 0.368. If I sample many values and calculate this ratio, I get

```
set.seed(2021)

# large sample size
x <- rlnorm(1000000, sdlog=3)
excess <- x - 90
excess[excess < 0] = 0
sum(excess) / sum(x)
# [1] 0.8670641
```

However, with a smaller sample size (10), the average value is much lower:

```
set.seed(2021)
N <- 10      # sample size
R <- 100000  # how many times to draw samples of size N
ratios <- rep(NA, R)

for (i in 1:R){
  x <- rlnorm(N, sdlog=3)
  excess <- x - 90
  excess[excess < 0] = 0
  ratios[i] <- sum(excess) / sum(x)
}

mean(ratios)
# [1] 0.2787529
```

Is there any other situation where the expected value of an estimator depends on the sample size? Is there a mathematical way to quantify this, and to estimate the expected ratio given a known distribution and sample size?

Edit (clarifying question): Is there a method to quantify how biased the estimator of the ratio presented above is, based on the sample size?
Distribution Estimator dependent on sample size
CC BY-SA 4.0
null
2023-05-31T16:58:47.690
2023-05-31T19:26:33.600
2023-05-31T19:26:33.600
389281
389281
[ "distributions", "mathematical-statistics", "expected-value" ]
617460
2
null
617459
0
null
It seems like you want an example where the expected value of an estimator depends on the sample size. Such examples certainly exist. Consider the mean $\mu$ and an estimator $\hat\mu = \bar X + \dfrac{1}{\sqrt{n}}$, where $\bar X$ is the usual sample mean. Then $\mathbb E\left[\hat\mu\right] = \mu + \dfrac{1}{\sqrt{n}}$, so the expected value of the estimator depends on the sample size. In more generality, let $\theta$ be some parameter, let $\tilde\theta$ be an unbiased estimator of $\theta$, and let $\hat\theta = \tilde\theta + \dfrac{1}{\sqrt{n}}$ be some other estimator of $\theta$. Then $\mathbb E\left[\hat\theta\right] = \theta + \dfrac{1}{\sqrt{n}}$, so, again, the expected value of the estimator depends on the sample size.
null
CC BY-SA 4.0
null
2023-05-31T17:08:50.807
2023-05-31T19:16:24.583
2023-05-31T19:16:24.583
247274
247274
null
617461
2
null
617143
1
null
An option is to use proportional colored circles (or squares), showing simultaneously absolute numbers and ratios. If you want to show the absolute number of servers, while taking into account the "size of the country" (e.g. the number of inhabitants, the total number of computers in this country, or whatever you think is relevant), then you can create a new variable using this information as a denominator for your quantity. For example, let's take the quantity per one million residents:

|Country |Quantity |Continent |Quantity per 1M residents |
|-------|--------|---------|-------------------------|
|Ireland |5 |Europe |0.7 |
|Singapore |9 |Asia |1.7 |
|Canada |25 |North America |0.6 |
|UK |43 |Europe |0.6 |
|USA |50 |North America |0.1 |

Then, you'll use the variables "quantity" and "quantity per 1M people" in your map, with the following two features:

- circles representing the absolute value of your variable "quantity"
- a shade of a color of your choice, representing the ratio. A lighter shade will mean a value close to zero, and a darker shade a value far from zero.

Below is an example of this kind of map. Note that in this specific case, it uses two colors (blue and red) to distinguish between positive and negative ratios, but in your case you'll probably need just one color (as hopefully you don't have a negative quantity of servers):

[](https://i.stack.imgur.com/SE4oV.png) ([source](https://magrit.cnrs.fr/docs/carto/propsmbolchoro_fr.html))

Here, the "compound annual growth rate" would be our "servers per 1 million people" variable, and the "total population" would be our quantity of servers. If you're concerned about overlapping circles, they generally do not hinder the understanding of the message you want to convey. Otherwise, you can generally tweak your mapping software a bit to find a solution, like defining a smaller maximum size for the circles.
null
CC BY-SA 4.0
null
2023-05-31T17:12:21.080
2023-05-31T17:21:21.847
2023-05-31T17:21:21.847
164936
164936
null
617462
1
null
null
0
11
What does `StandardScaler()` do when used inside a pipeline, compared with calling its fit/transform methods individually? Here are two code examples for which I get a different score, from which I conclude that the standardization must be different.

Standardization with fit and transform:

```
clf = KNeighborsClassifier(n_neighbors=3)

X_Scale_train = StandardScaler().fit_transform(T1234_train)
clf.fit(X_Scale_train, np.ravel(Y_train))

scaler = StandardScaler().fit(T1234_test)
X_Scale_test = scaler.transform(T1234_test)
score = clf.score(X_Scale_test, Y_test)
```

Standardization with StandardScaler() in a pipeline:

```
clf = KNeighborsClassifier(n_neighbors=3)
clf = pipeline(StandardScaler(), clf)
clf.fit(T1234_train, np.ravel(Y_train))
score = clf.score(T1234_test, Y_test)
```
What is the difference between StandardScaler() in a pipeline and a separate StandardScaler().fit_transform, in terms of the resulting ML score?
CC BY-SA 4.0
null
2023-05-31T17:18:09.813
2023-05-31T17:31:31.767
2023-05-31T17:31:31.767
389289
389289
[ "machine-learning", "python", "scikit-learn", "standardization", "multidimensional-scaling" ]
617463
1
null
null
0
19
I am planning a study where we have a low number of observations. We know we need to control for at least two variables, but other variables also exist that we can control for. It seems to me that adding variables to an analysis is always best. By controlling for the variables that have the largest effect, we can reduce the probability of a type 1 error. However, will adding more variables always increase the probability of a type 2 error (given the same number of observations)? How can this be shown mathematically?
What is the effect of adding variables to an analysis on type 1 and type 2 error?
CC BY-SA 4.0
null
2023-05-31T17:27:11.600
2023-05-31T17:27:11.600
null
null
338681
[ "experiment-design", "type-i-and-ii-errors" ]
617464
2
null
617419
4
null
The good thing about the pivotal method is that you can actually find a distribution of the observations independent of the unknown parameter $\theta$, and then, implicitly through that distribution, construct the confidence interval for $\theta$. So the goal is to create a $(1-a_{1}-a_{2})$ confidence interval for $\theta$, that is $$\mathbb{P}(c_{1}\leq \theta \leq c_{2})=1-a_{1}-a_{2}$$ However, we have full access to the distribution of $Y$, which we already know is a function of $\theta$, since $Y=X-\theta$. So, we can start by creating the confidence interval $$\mathbb{P}(T_{1}\leq Y\leq T_{2})=1-a_{1}-a_{2}$$ To do that we simply have to calculate the probabilities $$\mathbb{P}(Y\leq T_{1})=a_{1}$$ and $$\mathbb{P}(Y\geq T_{2})=a_{2}$$ since $\mathbb{P}(Y\geq T_{2})+\mathbb{P}(Y\leq T_{1})+\mathbb{P}(T_{1}\leq Y\leq T_{2}) = 1$. We can do that with the distribution of the random variable $Y$: $$\mathbb{P}(Y\leq T_{1}) = 1 - \frac{1}{1+e^{T_{1}}} = a_{1}\Rightarrow ... \Rightarrow T_{1} = \log\left(\frac{a_{1}}{1-a_{1}}\right)$$ $$\mathbb{P}(Y\geq T_{2}) = \frac{1}{1+e^{T_{2}}} = a_{2}\Rightarrow ... \Rightarrow T_{2} = \log\left(\frac{1-a_{2}}{a_{2}}\right)$$ So, we found the confidence interval for $Y$ such that $\mathbb{P}(T_{1}\leq Y\leq T_{2})=1-a_{1}-a_{2}$; however, $Y$ is a function of $\theta$, so we can rewrite $$\mathbb{P}(T_{1}\leq X-\theta \leq T_{2})=1-a_{1}-a_{2}$$ Now, rearranging the inequality, we conclude that $$\mathbb{P}(X-T_{2}\leq \theta \leq X-T_{1})=1-a_{1}-a_{2}$$
null
CC BY-SA 4.0
null
2023-05-31T17:31:40.480
2023-05-31T17:31:40.480
null
null
208406
null
617465
1
null
null
0
29
### There are two classifiers; - Classifier_A -> [A] Classifier_A outputs a binary variable either A or nothing (implicitly not A). - Classifier_B -> [B, C, D] Classifier_B outputs any n-combination of B, C, D. All three variables are booleans. And a lack of an output implicitly implies that that output is false. Multi-class. ### Problem: Ground truth is not multi-class. The outputs are only one item from the set of `[A, B, C, D]`. And all 3 (or none of them) of the others are false. I need to evaluate the performance of the two classifiers. The immediate problem that stands out to me is that the ground truth variables are all mutually exclusive but the predicted variables are independent. How should I go about this? I have enough historical data to create a balanced dataset for all of the ground truth classes. What should my class breakdown look like? Can I evaluate the two classifiers separately? Is this even possible or is the premise of the testing flawed because of this discrepancy?
Creating a test set where predicted variables are independent, ground truth variables are mutually exclusive. Two different classifiers
CC BY-SA 4.0
null
2023-05-31T17:38:02.543
2023-06-01T11:18:32.387
2023-06-01T11:18:32.387
386952
386952
[ "classification", "categorical-data", "binary-data" ]
617467
2
null
617208
0
null
In this question, the aim is to make inference for the average monthly rate $p$ of faulty items in a production line ($p$=number of faulty items per total). Daily count data are available over a long time period (2 years). The total number of items per day is large (thousands), and the failure rate is not very small (around 4-5%). What makes the analysis difficult? - There are good reasons to doubt that daily counts can be modelled as independent. - Usually, one would model daily failure counts as binomial random variables, but that would require that there are no events that systematically affect larger batches of items, such as a manufacturing machine being faulty. What could be done? Look at the daily rates directly. They cannot be treated as i.i.d., but some analysis is possible anyway, at least if the daily total production does not vary extremely. A simplistic hands-on suggestion: 1.) Analyse daily rates as a time series: calculate the autocorrelation function, and from there, the effective sample size. Also try to see if there is some systematic or seasonal trend. If you are not familiar with time series, then maybe this [discussion of autocorrelation on CV](https://stats.stackexchange.com/questions/616173/what-does-it-mean-for-a-time-series-to-be-autocorrelated) can help to give some idea. 2.) If there is no trend, look at the distribution of the rates. Compare with a normal distribution e.g. using a normal qqplot. 3.) If modelling the daily rates as normal distributed is deemed ok, you can apply a test for the mean of a normal random variable, but use the effective sample size from 1.) instead of the true number of days, unless the time series did not have autocorrelation. If the data deviate grossly from normal, a transformation might help. Commonly used simple resampling methods (bootstrap) are not recommended if you detected dependence in step 1.
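As a sketch of steps 1 and 3, here is one common way to turn the autocorrelation of the daily rates into an effective sample size (an illustration with placeholder data; the answer does not prescribe a particular formula, and the lag-cutoff rule below is a crude heuristic):

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# Placeholder for the observed daily failure rates (roughly 2 years of data).
rng = np.random.default_rng(0)
daily_rates = 0.045 + 0.005 * rng.normal(size=730)

n = len(daily_rates)
rho = acf(daily_rates, nlags=30, fft=True)[1:]        # autocorrelations at lags 1..30
if np.any(rho < 0):
    rho = rho[:np.argmax(rho < 0)]                    # keep lags before the first negative value

n_eff = n / (1 + 2 * rho.sum())                       # effective sample size for the mean
se_mean = daily_rates.std(ddof=1) / np.sqrt(n_eff)    # use n_eff instead of n in the test
print(n_eff, daily_rates.mean(), se_mean)
```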
null
CC BY-SA 4.0
null
2023-05-31T18:03:28.643
2023-05-31T18:19:25.113
2023-05-31T18:19:25.113
237561
237561
null
617468
1
null
null
0
14
I am working on a multilevel analysis aiming to investigate factors that impact student GPA. The data comes from 16 different schools. To include school effects, we are using a mixed-effects model (with random intercepts). However, I would like to understand the effect of school tuition on student GPA, and I am confused about whether this should be included as a fixed effect or a random effect, since it is information about the higher-level cluster (school). The `lme4` models I have constructed so far are either:

```
gpa ~ student_study_hours + student_income + student_physical_activity + (1|school / school_tuition)
```

or

```
gpa ~ student_study_hours + student_income + student_physical_activity * school_tuition + (1|school)
```
How to structure higher level effect (between clusters) in mixed-effect models
CC BY-SA 4.0
null
2023-05-31T11:52:28.717
2023-06-01T13:12:49.440
2023-05-31T21:02:12.420
11887
284325
[ "r", "regression", "mixed-model", "lme4-nlme", "multilevel-analysis" ]
617469
1
null
null
0
6
In a role-playing board game, there are various kinds of dice. Dice may have 4, 6, 8, 10 or 12 sides. On any throw, we may toss 1-5 dice of the same kind. Dice are fair. Given a sample of 100 throws, I need to determine the most likely number of faces and dice per throw. I suppose I am searching for a goodness-of-fit metric, comparing the sample to the various candidate populations. My question: what statistical test or metric is most appropriate?

Attempted: after weeding out impossible choices, I sorted the remaining choices according to the difference between the standard deviations, and then performed a stable sort on the closeness of the sample mean to the expected value. The results were poor: correct answers in about half of all attempts.

Considering: a chi-squared test (highest p-value wins), but I don't think this is right. Thanks!
Goodness of fit test, variable dice faces and count
CC BY-SA 4.0
null
2023-05-31T18:05:50.913
2023-05-31T18:05:50.913
null
null
389290
[ "chi-squared-test", "goodness-of-fit", "discrete-data" ]
617470
2
null
617442
1
null
If I'm understanding you correctly, $P(N\mid \mu)$ means the value at $N$ of the Poisson probability mass function with expectation $\mu.$ To find this, you need the value of $N.$ If you observe the value of $N$ you can find $Q.$ The quantity you called $\mathrm{CL}_\mathrm{s}$ can be used if you know $N\le N_0$ but you don't know $N.$ If you use only the fact that you know that $N\le N_0,$ rather than using all the relevant information that you have (i.e. the value of $N$) then the test will have less power. The power of a statistical test is the probability of rejecting the null hypothesis, as a function of the unobservable parameters of interest. With the less powerful test, you are less likely to reject the null hypothesis when it is false. I think the way I've seen it written is $$ Q = -\ln\frac{P(N\mid\mu_b)}{P(N\mid\mu_s)}, $$ i.e. the alternative hypothesis corresponds to the denominator, so that when $Q$ is too big you reject the null hypothesis. It is too big when it is more than a critical value $c$, so chosen that $$ \Pr(Q>c\mid\text{null hypothesis}) \le \alpha $$ where $\alpha$ is the level of the test. The level is the highest probability of a Type I error that you are willing to allow. (Type I error${}={}$false positive, i.e. erroneously rejecting the null hypothesis.) The choice of $\alpha$ is a subjective economic decision. (In this case "subjective" would mean any objectivity in its choice is located elsewhere than in the theory of statistics.) The test using this logarithm is of course exactly equivalent to the test using the ratio $$ \frac{P(N\mid\mu_b)}{P(N\mid\mu_s)}. $$ The reason for taking the logarithm is $(1)$ in some commonplace problems $Q$ has a very familiar probability distribution, and $(2)$ often this transforms the probability distribution of the test statistic to one that asymptotically has a chi-square distribution. "Asymptotically" means a limit is taken as the sample size increases. If the sample consists of $n$ independent observations $N_1,\ldots,N_n,$ each having a Poisson distribution, and you can base your test on $N=N_1+\cdots+N_n,$ which also has a Poisson distribution, and $n$ is the sample size. If your Poisson-distributed random variable $N$ is the number of raindrops the fell on a hectare of land, during a particular second of very light rain, then expanding that hectare to a larger plot of ground would be what increasing the sample size consists of. In the case of the sum $N=N_1+\cdots+N_n,$ the conditional probability distribution of $(N_1,\ldots,N_n)$ given the value of $N$ does not depend on $\mu,$ so the sum, $N,$ gives you all information that is relevant, as long as you know that the Poisson distribution is the right model. If you were sure of the independence of $N_1,\ldots,N_n,$ then knowing the whole tuple $(N_1,\ldots,N_n),$ rather than knowing only $N,$ would be relevant to judging whether the data are consistent with the Poisson distribution being the right model.
null
CC BY-SA 4.0
null
2023-05-31T18:23:25.060
2023-06-02T16:50:20.283
2023-06-02T16:50:20.283
5176
5176
null
617471
1
null
null
0
10
I think I understand what average precision is: the area under the precision-recall curve. The curve is constructed by calculating the precision and recall metrics at each threshold. There are a few methods for how you actually calculate/approximate the area, but that is not the focus of my question. For me it was clear how AP is calculated for classification, because you only have one threshold to control, which is the probability value above which a given object belongs to the class. It's easy to understand from [this article](https://sanchom.wordpress.com/tag/average-precision/). But what about detection? I read some articles and I know they take the bounding boxes into account with Intersection Over Union (IOU). But now you have 2 thresholds to control, the IOU threshold and the probability threshold. In some of the articles I read, they only classify the detections as TP, FP and FN considering the IOU of the boxes, and then use the probability threshold to construct the precision-recall curve. This seems wrong because the classification part also affects the TP, FP and FN counts. How do I take both into account? So my question in short is how to calculate average precision for detection. But if my above assumptions regarding AP in general are correct, the question simplifies to how the TP, FP and FN counts get calculated in this case (compared to classification only).
Average precision in classification vs in object detection
CC BY-SA 4.0
null
2023-05-31T18:52:24.097
2023-05-31T18:52:24.097
null
null
389064
[ "neural-networks", "classification", "model-evaluation", "object-detection", "average-precision" ]
617472
1
null
null
0
14
I have difficulty understanding what we commonly call a sampler, especially how to produce a covariance matrix between parameters during an MCMC run. In MCMC, I know that we start from "guess" values and then iterate by choosing a random value and computing the Chi2 from the experimental data. If the new Chi2 is lower than the previous one, then we accept the value. So, 3 questions: 1) Can we generate multiple parameters at once with a multivariate Gaussian to compute the new Chi2, or should we generate only one single random value (the one that has the largest standard deviation) and compute the new Chi2? 2) How can I produce a covariance matrix from which we can generate one or multiple values at the same time? How is this covariance matrix produced? I guess this covariance matrix evolves during the run (I mean it does not remain constant). 3) Can we qualify the Monte Carlo as a "sampler"? Sorry for the confusion; I am trying to grasp the important key points of MCMC and of what we call a sampler. UPDATE: I understand better thanks to this explanation and formula of Chi2 with a covariance matrix: For a "simple" chi-squared test statistic $\chi^2=\sum_i\left(x_i-\mu_i\right)^2 / \sigma^2$, it's clear that the domain is positive since both the numerator and denominator of every term in the sum over bins $i$ are positive. But for the more general form with correlations between bins expressed via the covariance matrix $\Sigma=\left\langle\left(x_i-\mu_i\right)\left(x_j-\mu_j\right)\right\rangle=\left\langle x_i x_j\right\rangle-\mu_i \mu_j$, this positivity is less clear: $$ \chi^2=\sum_{i j}\left(x_i-\mu_i\right)\left(\Sigma^{-1}\right)_{i j}\left(x_j-\mu_j\right) $$
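Not a full answer, but as an illustration of the kind of joint (multivariate Gaussian) proposal being asked about, here is a rough R sketch of a random-walk Metropolis sampler whose proposal covariance is periodically re-estimated from the chain so far. `chi2()` stands for a user-supplied misfit function; this is a simplified illustration rather than what any particular package does. Note that a full Metropolis step also accepts some moves that increase the chi-squared (with probability $\exp(-\Delta\chi^2/2)$), and that adapting the proposal during the run needs care in practice:

```r
# Simplified illustration only (not what any particular package does): a random-walk
# Metropolis sampler that proposes ALL parameters jointly from a multivariate Gaussian
# and periodically re-estimates the proposal covariance from the chain so far.
# chi2(theta) is assumed to be a user-supplied function returning the chi-squared misfit.
library(MASS)   # for mvrnorm()

run_mcmc <- function(chi2, theta0, n_iter = 5000) {
  d <- length(theta0)
  S <- diag(0.01, d)                       # initial proposal covariance (a guess)
  chain <- matrix(NA_real_, n_iter, d)
  theta <- theta0
  chi_cur <- chi2(theta)
  for (i in seq_len(n_iter)) {
    prop <- mvrnorm(1, theta, S)           # joint multivariate Gaussian proposal
    chi_prop <- chi2(prop)
    # Metropolis rule: accept with probability min(1, exp(-(chi_prop - chi_cur)/2)),
    # i.e. always accept a lower chi2, sometimes accept a higher one.
    if (log(runif(1)) < -(chi_prop - chi_cur) / 2) {
      theta <- prop
      chi_cur <- chi_prop
    }
    chain[i, ] <- theta
    if (i %% 500 == 0 && i > 500)          # re-estimate the proposal covariance
      S <- 2.38^2 / d * cov(chain[seq_len(i), ]) + diag(1e-8, d)
  }
  chain
}
```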
Subtleties of the MCMC method and, more generally, of covariance matrices and samplers
CC BY-SA 4.0
null
2023-05-30T18:08:50.073
2023-06-02T16:09:42.110
2023-06-02T16:09:42.110
11887
389017
[ "normal-distribution", "markov-chain-montecarlo", "covariance-matrix" ]
617473
1
null
null
0
12
I have been using GAMMs to analyse time series data. I have included a smoothing term (hour of day by season), and I can't seem to find the results for the winter season. I have the proper information (edf, Ref.df, F, and p-value) for all my smoothed terms and each season except for winter. I am using the summary function and calling on the gam results within the model (instead of the lme summary output). Attached is a snapshot of the summary output. Hopefully I am just missing something simple. ![summary output](https://i.stack.imgur.com/JIrt8.png)
How to interpret smoothing effects in the summary output of a generalised additive mixed effect model GAMM
CC BY-SA 4.0
null
2023-05-31T18:07:15.737
2023-06-02T16:10:35.350
2023-06-02T16:10:35.350
11887
null
[ "r", "modeling" ]
617474
2
null
343146
0
null
From the `sklearn` [documentation](https://github.com/scikit-learn/scikit-learn/blob/1495f6924/sklearn/metrics/classification.py#L500): $$ \kappa = (p_o - p_e) / (1 - p_e) $$ > where $p_o$ is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and $p_e$ is the expected agreement when both annotators assign labels randomly. $p_e$ is estimated using a per-annotator empirical prior over the class labels. Let's unpack the documentation. > $p_o$ is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio) The observed agreement ratio is the classification accuracy. I say this makes sense, as the classification accuracy is the number of predicted labels that agree with the true labels divided by the total number of attempts. > $p_e$ is the expected agreement when both annotators assign labels randomly This means that $p_e$ is the expected classification accuracy when predicted labels are randomly assigned and true labels are randomly assigned. > $p_e$ is estimated using a per-annotator empirical prior over the class labels. This means that the distribution of labels comes from the true distribution of labels. In other words, the random labels come from sampling from the true labels with the class ratios respected. The reference given for how exactly this is calculated is Artstein and Poesio (2008), with the derivation completed on page $8$, which appears to be the same as the calculation given on [Wikipedia](https://en.wikipedia.org/wiki/Cohen%27s_kappa). Let $N$ be the total number of classification attempts; let there be $K$ categories; let $n_{k1}$ be the number of times label $k$ appears in the predictions; and let $n_{k2}$ be the number of times label $k$ is a true label. Then: $$p_e = \dfrac{1}{N^2}\overset{K}{\underset{k=1}{\sum}}n_{k1}n_{k2}$$ With these definitions for $p_0$ and $p_e$, we arrive at the `sklearn` calculation: $$ \kappa = (p_o - p_e) / (1 - p_e) $$ > and is it a good idea for my usecase Cohen's $\kappa$ is a function of the classification accuracy, so if you are interested in the classification accuracy, Cohen's $\kappa$ might be a statistic that gives a context for that accuracy. In particular, Cohen's $\kappa$ can be seen as a comparison between the classification accuracy of your model and the classification accuracy that comes from randomly assigning labels. An advantage of transforming the accuracy this way is that it exposes performance worse than random. For instance, a common complaint about classification accuracy is that it can be high for an imbalanced problem yet not indicate good performance, such as getting $95\%$ accuracy when $99\%$ of the observations belong to one category. While the $95\%$ accuracy looks high, running such a situation through the Cohen's $\kappa$ calculation is likely to expose such performance as being worse than it would be for random guessing. If this sounds appealing, then Cohen's $\kappa$ might be a good measure of performance for you. A drawback of Cohen's $\kappa$ is that it requires you to bin continuous model outputs, such as those given by logistic regressions and neural networks. While [this](https://stats.stackexchange.com/q/603663/247274) does not discuss Cohen's $\kappa$ in particular, all of the criticisms apply. REFERENCES Artstein, Ron, and Massimo Poesio. "Inter-coder agreement for computational linguistics." Computational linguistics 34.4 (2008): 555-596.
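For concreteness, here is a small R sketch of the calculation just described, using made-up labels; it is not the sklearn implementation, just the same arithmetic:

```r
# Made-up labels, same arithmetic as above (not the sklearn implementation itself).
truth <- factor(c("a", "a", "b", "b", "b", "a", "b", "a", "a", "b"))
pred  <- factor(c("a", "b", "b", "b", "a", "a", "b", "a", "b", "b"),
                levels = levels(truth))

N   <- length(truth)
p_o <- mean(pred == truth)          # observed agreement = classification accuracy
n1  <- table(pred)                  # label counts for the predictions
n2  <- table(truth)                 # label counts for the true labels
p_e <- sum(n1 * n2) / N^2           # expected agreement under random assignment
(p_o - p_e) / (1 - p_e)             # Cohen's kappa
```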
null
CC BY-SA 4.0
null
2023-05-31T19:14:02.527
2023-05-31T22:40:29.353
2023-05-31T22:40:29.353
247274
247274
null
617475
2
null
617338
0
null
In the simpler case of independent data points, a simple two-sample t-test here would give too-low p-values because you choose the change point to create the pair of datasets with the largest possible t-statistic. Suppose we generate a time series of 26 $N(0, 1)$ observations, with no change points. If we split the data in two in the middle, the p-value will be $U(0, 1)$. But if we do binary segmentation then the p-value is likely to be lower, even though there are no change points in the data. I've simulated this happening 1,000 times in the code below ``` N = 1000 n = 26 set.seed(1) # p_vals_half gives the p-values of t-tests from splitting the data into two equal parts p_vals_half = numeric(N) # p_vals_bin gives the p-values after choosing the split with binary segmentation and then computing the p-value for the resulting t-test p_vals_bin = numeric(N) for (i in 1:N) { data = rnorm(n) p_vals_half[i] = t.test(x = data[1:(n/2)], y = data[(n/2 + 1):n], alternative = "two.sided")$p.value # choose the change point that maximises the difference between the two data sets (and so minimises the p-value of the t-test) pvals_all = numeric(n - 3) for (j in 2:(n - 2)) { pvals_all[j - 1] = t.test(x = data[1:j], y = data[(j+1):n], alternative = "two.sided")$p.value } p_vals_bin[i] = min(pvals_all) } par(mfrow = c(1, 2)) hist(p_vals_half, xlab = "P-Value", main = "Histogram of p-values from splitting in half") hist(p_vals_bin, xlab = "P-Value", main = "Histogram of p-values from binary segmentation") ``` As you can see from the output, applying a straightforward t-test to the split chosen by binary segmentation generates a lot of spuriously low p-values. So even in the independent-data case, you would need to take the fact that you are choosing the change point into account when you calculate the p-value. [](https://i.stack.imgur.com/P3ewI.png) I'd need more information about a model for the correlation structure of the data that you have to give a specific answer, but [this paper](https://arxiv.org/pdf/1612.01520.pdf), Change point detection in autoregressive models with no moment assumptions by Akashi, Dette, Liu (2016), may be useful. Usefully, it cites a lot of papers on the topic of change detection in autoregressive models.
null
CC BY-SA 4.0
null
2023-05-31T19:16:11.380
2023-05-31T19:16:11.380
null
null
78857
null
617476
2
null
575314
0
null
> When evaluating a machine learning (or other statistical model) against multiple evaluation metrics, is there a standardized way to choose the "best" model? NO It depends on what you value from your predictions. In your example, if you value a high $F_1$ score over a high dice score, you might be inclined to go with the model with the highest $F_1$ score. If you value a high dice score over a high $F_1$ score, you might be inclined to go with the model with the highest dice score. If you value both, you might be inclined to go with a model that never achieves the highest of either but always performs well, as opposed to the top-performing model in terms of $F_1$ that has a terrible dice score (or the top-performing model in terms of dice score that has a terrible $F_1$ score). If you want to make this quantitative, you can use an equation of multiple measures of model performance that accounts for how you value each measure of performance. Economics refers to such an equation as a [utility function](https://en.wikipedia.org/wiki/Utility), and different users can have different utility functions. In many regards, the $F_1$ score can be seen as a utility function of the precision and recall, so you are already used to working with this notion of combining measures of performance in a way that gives one real number as an output. This is not even a matter of using proper scoring rules. Brier score and log loss can disagree about which model is best. If this happens, it becomes a matter of what you value for your problem. Our [Stephan Kolassa](https://stats.stackexchange.com/users/1352/stephan-kolassa) has argued in favor of the log loss in answers/comments on here, while our [Dikran Marsupial](https://stats.stackexchange.com/users/887/dikran-marsupial) has written about why Brier score might be preferred.
null
CC BY-SA 4.0
null
2023-05-31T19:29:44.840
2023-05-31T20:04:34.557
2023-05-31T20:04:34.557
247274
247274
null
617477
2
null
501835
1
null
I've found that some of my students are helped by thinking of the p-value as a percentile. They are familiar with the concept of being in the top 10% of a class by GPA, or "among the 1%" in terms of wealth. So for your example, a p-value of 0.04 means "Our observed value of the test statistic $T$ was among the top 4% of possible values of $T$ under $H_0$ that are least like $H_0$ and most like $H_A$." In other words, "Our observed test statistic was among the top 5% most un-$H_0$-like values, but not among the top 1%."
null
CC BY-SA 4.0
null
2023-05-31T19:31:24.873
2023-05-31T19:31:24.873
null
null
17414
null
617478
2
null
405872
1
null
(This seems to be a near-duplicate of a question I [answered](https://stats.stackexchange.com/a/577858/247274) a year ago.) $R^2$ is often defined as a comparison of the sum of squared residuals for the model of interest vs the sum of squared residuals for a model that only has an intercept. With this in mind, I would proceed as follows. - Fit models using the leave-one-out cross-validation strategy. - However, for each data set with a point left out, fit two models: your model of interest and an intercept-only model. - For each observation that is left out, your two models will give predicted values for the observation that is left out. Take the difference between that prediction and the true value of the observation that was left out, and square that value. - Across all of the observations, you get one of these squared residuals for each model. Add up the squared residuals from your model of interest; denote this as $L_1$. Now add up the squared residuals from your intercept-only models; denote this as $L_0$. - Calculate an $R^2$-style statistic as $1 - \dfrac{L_1}{L_0}$, which is closely analogous to the usual $R^2$. Proceeding this way allows you to do an $R^2$-style calculation with only one observation in each test set while also having an interpretation as the PRESS statistic for your model of interest compared to the PRESS statistic for an intercept-only model. If your model of interest cannot outperform (in terms of PRESS) an intercept-only model, which will be indicated by $1 - \dfrac{L_1}{L_0}< 0$, then your model has some issues.
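A rough R sketch of this procedure (the data set and formula are arbitrary placeholders) might look like:

```r
# Rough sketch of the procedure above; lm() on mtcars is just a stand-in example.
data <- mtcars
L1 <- L0 <- 0
for (i in seq_len(nrow(data))) {
  train <- data[-i, ]
  test  <- data[i, , drop = FALSE]
  fit1 <- lm(mpg ~ wt + hp, data = train)   # model of interest
  fit0 <- lm(mpg ~ 1,       data = train)   # intercept-only model
  L1 <- L1 + (test$mpg - predict(fit1, test))^2
  L0 <- L0 + (test$mpg - predict(fit0, test))^2
}
unname(1 - L1 / L0)                          # out-of-sample R^2-style statistic
```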
null
CC BY-SA 4.0
null
2023-05-31T19:40:57.790
2023-05-31T19:40:57.790
null
null
247274
null
617479
1
null
null
1
3
In my team we are doing a study on a group of patients undergoing abdominal surgery, and we evaluate the correlation between frailty amongst patients and complications. Background knowledge: We use a frailty score (Clinical Frailty Score, CFS) where 1 is non-frail and 9 is severely frail. Patients are categorized into three groups (CFS 1-3, 4-6 and 7-9). We score complications based upon a complication score taking into account the severity of the complication (Clavien-Dindo Score, CD1-5, where 1 is a minor complication and 5 is death). We want to examine data regarding the distribution of complications by severity in the 3 different groups of frail patients. One important point: 1 patient can suffer from many complications. So we have 3 groups that can suffer from (multiple) events of different severity. As an example, a dummy table: [](https://i.stack.imgur.com/5IR3f.png) This would mean that 80 complications graded CD3 were observed within the group CFS 1-3 and 60 complications graded CD2 were observed in the group CFS 7-9. Questions: Q1) How would you report the data in the table? Thoughts: if we report complications as a percentage of the number of patients in each group, we get percentages >100, which is weird. If we report complications as a percentage of the total number of complications in the group, we do not take into account the different sizes of the groups. I argue that we should report event rates (incidence rates) but would love to hear some feedback. Q2) If we would like to evaluate the data statistically, what kind of analysis would you prefer, taking into account the multiple events for a single patient? We would like to statistically show that there are differences between the groups (which we can intuitively see, but we are uncertain as to what kind of analysis one should use here). Feel free to ask if anything is unclear. I've tried to be as thorough as I could think of. Kindly, Thomas, Copenhagen.
Multiple groups with multiple events of different character
CC BY-SA 4.0
null
2023-05-31T19:48:17.930
2023-05-31T19:48:17.930
null
null
388632
[ "recurrent-events" ]
617480
2
null
617429
0
null
Conditioned on $X$ having value $x$, the distribution of $Y$ is $N(x,x^2)$. The conditional distribution of $Z = \dfrac YX$ given that $X=x$ is the same as the distribution of $\dfrac Yx$ which, as you have discovered, is an $N(1,1)$ distribution. Thus, $$f_{Z \mid X=x}(\alpha \mid X=x) = \frac{\exp\left(-\frac{(\alpha-1)^2}{2}\right)}{\sqrt{2\pi}}$$ for all choices of $x$ and so $\frac{\exp\left(-\frac{(\alpha-1)^2}{2}\right)}{\sqrt{2\pi}}$ is also the unconditional density $f_Z(\alpha)$ of $Z$. Note that $$f_{Z, X}(\alpha,x) = \frac{\exp\left(-\frac{(\alpha-1)^2}{2}\right)}{\sqrt{2\pi}}\cdot f_X(x) = f_Z(\alpha)\cdot f_X(x)$$ which shows that $Z = \dfrac YX$ and $X$ are independent random variables.
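A quick simulation check of this conclusion; the marginal distribution of $X$ is not specified above, so $X \sim \text{Exp}(1)$ is used purely for illustration:

```r
# Quick simulation check; the marginal distribution of X is not given, so
# X ~ Exp(1) is used purely for illustration.
set.seed(1)
x <- rexp(1e5)
y <- rnorm(1e5, mean = x, sd = abs(x))   # Y | X = x  ~  N(x, x^2)
z <- y / x
c(mean(z), var(z))                       # both should be close to 1
cor(z, x)                                # should be close to 0
```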
null
CC BY-SA 4.0
null
2023-05-31T19:48:34.930
2023-05-31T19:48:34.930
null
null
6633
null
617481
1
null
null
1
15
What is the relation between the vector X used to create a Gaussian process prior, the X used to 'train' the GP, i.e. giving it some observations (X, y), and the X* used to make predictions of y* values?
Gaussian Process prior, posterior, and predictive x vectors?
CC BY-SA 4.0
null
2023-05-31T19:52:59.963
2023-05-31T19:52:59.963
null
null
389294
[ "bayesian", "normal-distribution", "gaussian-process" ]
617482
2
null
503081
0
null
An important consideration is that your models are not giving categories. They are giving values on a continuum that are binned according to a threshold to give discrete categories (above the threshold is one category, below the threshold is the other). Moving this threshold around is what yields ROC curves. A similar idea to ROC curves can be applied to precision and recall instead of specificity and recall. For a given model output, as you change the threshold, the predicted categories change. With these changing predicted categories come changing precision and recall. These precision and recall values can be plotted to give the precision-recall curve. Thus, every model you have created gives not just one precision-recall pair but a bunch of them. It might be that a threshold other than the software default would give you the desired combination of precision and recall, and you might benefit from looking at these precision-recall curves.
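As a minimal illustration of how sweeping the threshold traces out such a curve, here is a short R sketch with simulated scores and labels (no particular package is assumed):

```r
# Simulated scores and labels; sweep the threshold to trace out the PR curve.
set.seed(1)
score <- runif(1000)                       # continuous model output
label <- rbinom(1000, 1, score)            # true classes
thresholds <- sort(unique(score))
pr <- t(sapply(thresholds, function(t) {
  pred <- as.integer(score >= t)
  tp <- sum(pred == 1 & label == 1)
  c(precision = tp / max(sum(pred == 1), 1),
    recall    = tp / sum(label == 1))
}))
plot(pr[, "recall"], pr[, "precision"], type = "l",
     xlab = "Recall", ylab = "Precision")
```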
null
CC BY-SA 4.0
null
2023-05-31T20:02:06.007
2023-05-31T20:02:06.007
null
null
247274
null
617483
2
null
616677
1
null
In the `emmeans` call, you can specify only predictor (independent) variables. Seems like you want `Persona` there. The dependent variable is understood from the model. You mention that `Prominence` is a moderator, and if you think that it is influenced by `Persona`, consider adding `cov.reduce = Prominence~Persona` to the call. This will cause it to use a different mean prominence for each persona, rather than the same overall mean prominence.
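For concreteness, assuming the fitted model object is called `model` (a placeholder name), the call might look like:

```r
# Hypothetical sketch; "model" stands for your fitted model object.
library(emmeans)
emmeans(model, ~ Persona, cov.reduce = Prominence ~ Persona)
```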
null
CC BY-SA 4.0
null
2023-05-31T20:21:10.417
2023-05-31T20:21:10.417
null
null
52554
null
617484
1
null
null
-1
32
I am trying to better understand the importance of "matching" in medical studies. For example, suppose I have a dataset that has different covariates (e.g. height, weight, sex, employment, place of residence, smoking history, etc.) for a large group of people, and a response variable if a person has asthma or not (let's say that significantly more people do not have asthma in this dataset). ``` id height weight sex employment place_of_residence smoking_history asthma 1 1 164.3952 55.06302 Female Employed Urban Current No 2 2 167.6982 54.40067 Female Unemployed Rural Current No 3 3 185.5871 69.73030 Female Unemployed Rural Current Yes 4 4 170.7051 68.01737 Male Unemployed Rural Former No 5 5 171.2929 31.75986 Male Unemployed Rural Never No 6 6 187.1506 85.60860 Male Employed Rural Current No ``` Suppose this dataset was part of a "Case-Control Observational Study" - data was recorded for consenting medical patients from a hospital in a city : no instructions were given to the patients on how to live their lives. In this example, suppose I am interested in using a Logistic Regression model to estimate the impact of different covariates on asthma. The first thing that comes to mind is to simply fit a (unconditional) Logistic Regression on this dataset and obtain estimates for the Odds Ratio. However, I am beginning to learn that the this approach might not be ideal - this approach might add biases to the results by overestimating or underestimating the effect of different covariates on the Odds Ratios. I attended a presentation where the presenter showed us that if each asthma patient is "matched" with a similar non-asthma patient (or "n" similar non-asthma patients) - and then if a Conditional Logistic Regression model is used - this approach would be more "rigorous" compared to the earlier approach. While I can loosely understand this logic and argument for Matching (e.g. comparing apples to oranges vs apples to apples) - from a mathematical perspective, I am trying to understand why this is the case. The first thing that comes to mind relates to Odds Ratio vs Relative Risk: I have heard that Relative Risk favors situations where the asthma to non-asthma ratio in the available data is similar to the ratio of asthma to non-asthma in population - when this is not the case, Odds Ratio is preferred. I was wondering if a similar logic can be used to justify the advantages of Conditional Logistic Regression and Matching in the example I outlined. Intuitively, can we demonstrate using some mathematical approach that Conditional Logistic Regression and Matching in Case-Control Observational studies will produce more favorable results (e.g. less biased estimates) compared to Unconditional Logistic Regression and No Matching? Thanks!
Understanding the Need for "Matching" in Medical Studies
CC BY-SA 4.0
null
2023-05-31T20:25:39.387
2023-05-31T20:25:39.387
null
null
77179
[ "regression" ]
617485
1
null
null
0
16
Let's say I have $n$ samples which are vectors of length $p$. I know that the $p \times p$ sample covariance matrix is singular if $n \leq p$. Is there another estimator for the covariance that results in a non-singular matrix when $n \leq p$? My goal is to estimate covariance from many datasets and then quickly sample from a multivariate Gaussian using the estimated covariance. I know this can be done using SVD even if the matrix is singular, but $p$ in my problem is very large and I am estimating thousands of covariance matrices (which would require computing the SVD of thousands of matrices with large $p$). This problem is tractable with Cholesky, but that requires a PSD covariance matrix.
Is there an alternate estimator for a sample covariance matrix when n < p such that the estimator is not singular
CC BY-SA 4.0
null
2023-05-31T20:35:27.580
2023-05-31T20:35:27.580
null
null
261708
[ "covariance", "estimators", "multivariate-normal-distribution", "svd", "singular-matrix" ]
617486
2
null
615790
0
null
Is there a clear and precise explanation of why minimising the variance of the weights in SIS with respect to a proposal ensures that the samples generated from the empirical distribution induced by the normalised weights will be closer to the posterior/target distribution? I tend to think of this problem in terms of the effective sample size (ESS). Quoting from Monte Carlo Strategies in Scientific Computing, Liu (2008) ([pdf](https://github.com/szcf-weiya/MonteCarlo/blob/master/References/Monte-Carlo-Strategies-in-Scientific-Computing-2008.pdf)) pp. 34-36 > Importance sampling suggests estimating $\mu = E_\pi\{h(x)\}$ by first generating independent samples $x^{(1)}, \dots , x^{(m)}$ from an easy-to-sample trial distribution, $g( \thinspace )$, and then correcting the bias by incorporating the importance weight $w^{(j)} \propto \pi(x^{(j)})/g(x^{(j)})$ ... A useful "rule of thumb" is to use the effective sample size (ESS) to measure how different the trial distribution is from the target distribution. Suppose $m$ independent samples are generated from $g(x)$; then, the ESS of this method is defined as $$ESS(m) = \frac{m}{1 + \textrm{var}_g(w(x))}$$ ... This can be interpreted as that the $m$ weighted samples is worth of $m / \{1 + \textrm{var}_g [w(x)]\}$ i.i.d. samples drawn from the target distribution. There's much more detail in the book. Also is there a reference which proves something about the distribution (mass function) induced by SIS in relation to the target/posterior density? Like random variables sampled from SIS with increasing particle counts converge in distribution to random variables sampled from the target? If you're referring to the SIS algorithm on p. 11 of Doucet and Johanson's tutorial, with no resampling, then the distribution of the particles will just be sampled from $q_n(x_{1:n})$, regardless of the target/posterior distribution. SIS with a massive number of particles might still be useful in estimating an expectation in cases where the error decreases with the number of particles. Once resampling is introduced to make SMC then you get the results given in Section 3.6 of the tutorial > Comparing (26) to (37), we see that the SMC variance expression has replaced the importance distribution $q_n(x_{1:n})$ in the SIS variance with the importance distributions $\pi_{k−1} (x_{1:k−1}) q_k (x_k| x_{1:k−1})$ obtained after the resampling step at time $k-1$. The authors give two citations for the results about the density in section 3.6.
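As a small illustration of the quoted rule of thumb, here is an R sketch with an arbitrary target ($N(0,1)$) and trial ($N(0,2^2)$) distribution; since both densities are normalized here, $E_g[w]=1$ and the two ESS expressions below should roughly agree:

```r
# Illustration of the quoted ESS rule of thumb: target N(0,1), trial g = N(0, 2^2).
set.seed(1)
m <- 1e4
x <- rnorm(m, 0, 2)                 # draws from the trial distribution g
w <- dnorm(x) / dnorm(x, 0, 2)      # importance ratios pi(x)/g(x)
m / (1 + var(w))                    # ESS as in the quoted formula
sum(w)^2 / sum(w^2)                 # the commonly used sample version
```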
null
CC BY-SA 4.0
null
2023-05-31T20:45:03.887
2023-05-31T20:45:03.887
null
null
78857
null
617487
2
null
616613
0
null
I think you have a nested fixed-effects structure, where `group` is nested in `sub_type`. Did not `emmeans` auto-detect this? You can make this structure explicit by omitting any term where `group` does not interact with `sub_type`: ``` mod2 <- lm(value ~ (sub_type + sub_type:group)*study_day*gender, data = dd) ``` `emmeans()` should detect this nesting and automatically bypass the cases where `group` and `sub_type` are mismatches. I also recommend looking at the model summary and seeing if you can omit a bunch of other interactions from the model.
null
CC BY-SA 4.0
null
2023-05-31T20:48:24.410
2023-05-31T20:48:24.410
null
null
52554
null
617488
1
null
null
1
26
I am trying to understand how the values of the irf plots are estimated I read following page: [https://www.statsmodels.org/stable/vector_ar.html](https://www.statsmodels.org/stable/vector_ar.html) But I don't understand how the values of the impulse response are estimated. I have a model that I fit with order of 3. ``` result = model.fit(3) ``` coefficients and residuals: ``` result.summary() ``` irf values: ``` result.irf(3).irfs ``` At first, I thought the values of the coefficients are equal to the values of the irfs, but that only holds for the first lagged value 'L1'. After that, the values of the irfs start to differ. An example: Imagine there are three variables in the system: `GDP, Sales, Export`, with order `3` Coefficient for GDP -> Sales Results for equation Sales ``` L1.GDP = 1.37 L2.GDP = 1.63 L3.GDP = -0.69 ``` Impulse response for GDP -> Sales GDP -> Sales ``` irf(t1): 1.37 irf(t2): 1.31 irf(t3): -0.762 ``` How do I obtain the values at `t2` and `t3`? From the docs: > Impulse responses are of interest in econometric studies: they are the estimated responses to a unit impulse in one of the variables. They are computed in practice using the MA representation of the VAR(p) process: [](https://i.stack.imgur.com/CR5Yk.png)
impulse response values VAR statsmodels
CC BY-SA 4.0
null
2023-05-31T21:02:11.623
2023-06-01T06:21:49.397
2023-06-01T06:21:49.397
53690
246234
[ "python", "vector-autoregression", "statsmodels", "impulse-response" ]
617489
2
null
526583
1
null
Here is a drawing of a two-layer neural network. [](https://i.stack.imgur.com/AkvZD.png) The blue, red, purple, and grey lines represent network weights, and the black line is a bias. Assume the pink output neuron has a sigmoid activation function, so it gives a predicted probability. Then the equation is: $$ p = \text{sigmoid}\left( b + \omega_{blue}x_{blue} + \omega_{red}x_{red} + \omega_{purple}x_{purple} + \omega_{grey}x_{grey} \right)\\ \iff\\ \log\left( \dfrac{ p }{ 1 - p } \right) = b + \omega_{blue}x_{blue} + \omega_{red}x_{red} + \omega_{purple}x_{purple} + \omega_{grey}x_{grey} $$ The second equation is that of a logistic regression (save for some statistical technicalities). Since there is no limit to how many input neurons there could be, and the sigmoid activation function could have been the inverse of any of the link functions from generalized linear models, the Keras claim is not as strong as it could be. Indeed, generalized linear models (including OLS linear regression) can be expressed as two-layer neural networks.
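A quick numerical check of this equivalence in R (the data set and formula are arbitrary): fit a logistic regression, then reproduce its fitted probabilities by hand as a "two-layer network" with a sigmoid output:

```r
# The data set and formula are arbitrary; the point is only the equivalence.
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
b <- coef(fit)
# "Two-layer network" computed by hand: sigmoid(bias + weighted inputs)
p_net <- plogis(b[1] + b[2] * mtcars$wt + b[3] * mtcars$hp)
all.equal(unname(p_net), unname(fitted(fit)))   # TRUE: identical predicted probabilities
```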
null
CC BY-SA 4.0
null
2023-05-31T21:21:33.053
2023-05-31T21:21:33.053
null
null
247274
null
617490
1
617499
null
0
26
I have the following result from a hierarchical model.[](https://i.stack.imgur.com/c3dz2.png) I know how to write the equation for a multiple regression model. Is it possible to write a similar mathematical equation using the coefficients from this hierarchical regression model?
How to write the results of an hierarchical regression into an equation?
CC BY-SA 4.0
null
2023-05-31T21:36:06.777
2023-06-02T02:36:29.187
null
null
250576
[ "regression", "mixed-model", "lme4-nlme" ]
617493
1
null
null
0
39
I was always taught to use $p\times(1-p)\times n$ for binomial variance. In a textbook for actuarial problems, I have: ``` probability of death benefit A .01 200,000 B .05 100,000 ``` Using $Var(A) = .01\times 200000^2 - (.01\times 200000)^2 = 396000000$ I get the same answer with the binomial formula when I square the $n$: $.01\times .99\times 200000^2 = 396000000$ Using $Var(B) = .05\times 100000^2 - (.05\times 100,000)^2 = 475000000$ I get the same answer with the binomial formula when I square the $n$: $.05\times .95\times 100,000^2 = 475000000$ --- But, I don't understand why I should square the $200000$ and $100000$. Isn't the formula for binomial variance: $p\times (1-p)\times n$
When to use $p\times (1-p)\times n^2$ for variance?
CC BY-SA 4.0
null
2023-05-31T23:32:07.170
2023-06-01T07:44:26.710
2023-06-01T00:25:28.083
44269
114193
[ "variance" ]
617495
2
null
617493
1
null
You are right about the variance of a binomial random variable. In your example, the number of deaths would be modelled as a binomial variable. In the example of your textbook, the quantity of interest is, however, not the number of deaths, but apparently the benefit paid for one particular person in one year. This is either 0 or 100000 / 200000, depending on whether the incident occurs. Consider the random variable $X$ that is $X=0$ if the person does not get the benefit, or $X=1$ otherwise (which is bad for that person...). $X$ has a binomial distribution with $n=1$, thus $\mathrm{Var}(X)=1\cdot p\cdot(1-p)$. The benefit itself, $B = b\cdot X$, has variance $\mathrm{Var}(B) = b^2 \mathrm{Var}(X) = b^2\, p(1-p)$.
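A quick simulation check of this for case A ($b = 200000$, $p = 0.01$):

```r
# Simulation check for case A (b = 200000, p = 0.01):
set.seed(1)
B <- 200000 * rbinom(1e6, size = 1, prob = 0.01)
var(B)    # close to 0.01 * 0.99 * 200000^2 = 396,000,000
```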
null
CC BY-SA 4.0
null
2023-05-31T23:56:09.783
2023-05-31T23:56:09.783
null
null
237561
null
617496
1
null
null
0
19
I am looking to build a multi-state model. Some packages in `R`, for instance, are for panel or intermittently observed data (the `msm` package), which I believe would be interval-censored data, and others can be used to fit models where transition times are known (packages such as `mstate` and `flexsurv`). My question is whether I have interval-censored data or not. I will be creating the 'states' based on weekly observations over a month. For simplicity, if a variable is 1 to 3 -> low, 7-10 -> high, else mid, for each weekly state. Could I model these transitions as exact, given I am defining the states, or would this be interval censored? Specifically, would my setup be best classified as scenario 1, 2, or 3 below? TIA. From the `msm` package documentation [msm](https://cran.r-project.org/web/packages/msm/msm.pdf): 1 An observation of the process at an arbitrary time (a "snapshot" of the process, or "panel-observed" data). The states are unknown between observation times. 2 An exact transition time, with the state at the previous observation retained until the current observation. An observation may represent a transition to a different state or a repeated observation of the same state (e.g. at the end of follow-up). Note that if all transition times are known, more flexible models could be fitted with packages other than msm - see the note under exacttimes. Note also that if the previous state was censored using censor, for example known only to be state 1 or state 2, then obstype 2 means that either state 1 is retained or state 2 is retained until the current observation - this does not allow for a change of state in the middle of the observation interval. 3 An exact transition time, but the state at the instant before entering this state is unknown. A common example is death times in studies of chronic diseases.
Defining states in Multi-State model
CC BY-SA 4.0
null
2023-06-01T00:27:00.737
2023-06-01T15:00:59.523
2023-06-01T01:35:21.743
281323
281323
[ "r", "survival", "censoring", "interval-censoring", "competing-risks" ]
617497
1
null
null
0
13
I did ANN classification using SMOTE random oversampling in Python, but I found strange loss and accuracy plots. This is my code: ``` #With SMOTE sm = SMOTE(random_state=42) Train_X2_Smote, Train_Y2_Smote = sm.fit_resample(Train_X2_Tfidf, Train_Y2) #TRIAL 4 def reset_seeds(): np.random.seed(0) python_random.seed(0) tf.random.set_seed(0) reset_seeds() model4 = Sequential() model4.add(Dense(10, input_dim= Train_X2_Smote.shape[1], activation='sigmoid')) model4.add(Dense(1, activation='sigmoid')) opt = Adam (learning_rate=0.001) model4.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) model4.summary() es = EarlyStopping(monitor="val_loss",mode='min',patience=10) history4 = model4.fit(Train_X2_Smote, Train_Y2_Smote, epochs=1000, verbose=1, validation_split=0.2, batch_size=32, callbacks =[es]) ``` The resulting loss and accuracy graph: [](https://i.stack.imgur.com/32WOQ.png) Is there anything I have to fix in my code?
ANN uses python smote random oversampling
CC BY-SA 4.0
null
2023-06-01T00:50:05.163
2023-06-01T00:50:05.163
null
null
375024
[ "neural-networks", "python", "data-visualization", "oversampling", "smote" ]
617498
1
null
null
0
7
Can someone explain to me the difference between these approaches? If you want, I can provide the results, but since they are quite extensive, I could attach them on demand. I'm working with the Theory of Planned Behavior; let's say - a1, a2, a3, a4 are for construct A - s1, s2, s3, s4 are for construct S - p1, p2, p3, p4 are for construct P Then my model has two regressions: Int ~ A + S + P Beh ~ P + Int So far, so good... I can run this and perform some evaluations, but then I was wondering about using PCA to reduce my a1, a2, ... s1 ... p4 and extract 3 factors. Let's call these A_x, S_x and P_x. Then I could run a SEM Int ~ A_x + S_x + P_x Beh ~ P_x + Int --> for some reason this model fits better than the one before. Finally, I run 2 regressions (with low explanatory power, 20%-40%) Int ~ A_x + S_x + P_x Beh ~ P_x + Int Besides validation of the TPB, I'm interested in extracting coefficients so that I can use them in a simulation (agent-based model) to recreate the behavior. If someone has been working with these issues or has faced similar questions, any pointer will be very, very useful!
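Not an answer, but for concreteness, here is a hedged lavaan sketch of how the two regressions could be written as a single SEM with A, S and P as latent variables measured by the items; the construct and item names are taken from the question, `mydata` is a placeholder, and this is only an illustration, not a recommendation:

```r
# Illustrative only; construct/item names are from the question, "mydata" is a placeholder.
library(lavaan)
model <- '
  # measurement model
  A =~ a1 + a2 + a3 + a4
  S =~ s1 + s2 + s3 + s4
  P =~ p1 + p2 + p3 + p4
  # structural model (the two regressions)
  Int ~ A + S + P
  Beh ~ P + Int
'
fit <- sem(model, data = mydata)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```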
Difference between SEM and OLS+PCA
CC BY-SA 4.0
null
2023-06-01T01:10:40.007
2023-06-01T01:10:40.007
null
null
376081
[ "least-squares", "structural-equation-modeling", "lavaan" ]
617499
2
null
617490
2
null
If LaTeX output is OK, try `equatiomatic::extract_eq(mdl)`. For example, on this model: ``` library(lme4) (fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)) equatiomatic::extract_eq(fm1) ``` I get the LaTeX: $$ \begin{aligned} \operatorname{Reaction}_{i} &\sim N \left(\alpha_{j[i]} + \beta_{1j[i]}(\operatorname{Days}), \sigma^2 \right) \\ \left( \begin{array}{c} \begin{aligned} &\alpha_{j} \\ &\beta_{1j} \end{aligned} \end{array} \right) &\sim N \left( \left( \begin{array}{c} \begin{aligned} &\mu_{\alpha_{j}} \\ &\mu_{\beta_{1j}} \end{aligned} \end{array} \right) , \left( \begin{array}{cc} \sigma^2_{\alpha_{j}} & \rho_{\alpha_{j}\beta_{1j}} \\ \rho_{\beta_{1j}\alpha_{j}} & \sigma^2_{\beta_{1j}} \end{array} \right) \right) \text{, for Subject j = 1,} \dots \text{,J} \end{aligned} $$ This particular model says that - Reaction for observation $i$ (from subject $j$) is normally distributed with mean $\alpha_{j[i]} + \beta_{1j[i]}(\operatorname{Days})$ and variance $\sigma^2$ - $(\alpha_j, \beta_{1j})$, for subject $j$, is drawn from a multivariate normal distribution (2-dimensional in this case) with mean vector and covariance matrix specified as above, i.e. for each subject $j$, the intercept and slope are "modified" by a subject-specific effect.
null
CC BY-SA 4.0
null
2023-06-01T01:24:04.603
2023-06-02T02:36:29.187
2023-06-02T02:36:29.187
369002
369002
null
617500
1
null
null
0
23
As in the title, I'm curious: Is including only the consecutive observations of an ID in longitudinal data a prerequisite for estimating an ID-level fixed effect? For instance, I have longitudinal data with a firm_id-year structure. There is one value of firm_id, say, 'Corp_Umbrella,' with values of `sales` for the years 1998, 1999, 2001, 2002, 2003, 2004, 2005, 2010, 2011, 2012. To run an estimation with a firm fixed effect, do I need to keep only consecutive observations of 'Corp_Umbrella', i.e., only observations from 2001, 2002, 2003, 2004, 2005 (the longest consecutive run)? Or can I have all of the observations, as long as there is more than 1 observation for each 'firm_id'?
Are consecutive observations a prerequisite for fixed-effects estimation in panel data?
CC BY-SA 4.0
null
2023-06-01T01:25:52.863
2023-06-01T02:58:01.550
2023-06-01T02:58:01.550
362671
130153
[ "regression", "panel-data", "fixed-effects-model" ]
617501
1
null
null
2
61
I am running a statistical test to determine if females are more influenced by the framing effect. I designed a survey with three overall questions, each with a "positive frame" and a "negative frame". Each participant would be randomly chosen to answer either the positive or negative frame, and each frame would have a 1-6 Likert scale where participants would rate whether or not they would choose one of two options presented in the questions' scenarios (i.e. rate 1 for definitely choosing drug A and rate 6 for definitely choosing drug B). I already compared the means of the positive frame and negative frame for each gender using a two-sample t-test (see table below), and I was wondering how I would go about analyzing whether or not females are more influenced by the framing effect. Essentially, I'm asking for a test/procedure that could provide a p-value or other measure that can identify if the difference between the frames for each question, for each gender, is statistically significant. (For example, how would I determine if 0.592 and 0.724 in the table are statistically different?) [](https://i.stack.imgur.com/ebQWd.png)
Compare and find p-value between two t-tests
CC BY-SA 4.0
null
2023-06-01T01:30:05.373
2023-06-03T02:57:21.553
2023-06-03T02:57:21.553
389304
389304
[ "hypothesis-testing", "t-test", "multiple-comparisons", "difference-in-difference", "group-differences" ]
617502
1
null
null
-1
24
We'd like to test if two rates (number of occurrences / number of days) are statistically different for a paper. However, for one of the rates (let's say Rate A), we have uncertainty around the exposure (number of days). We have several different estimates for the exposure of Rate A. But the exposure variability is not statistically modeled. We'd like to test the statistical difference between Rate A and the other rate, ideally where the confidence interval for Rate A can capture the uncertainty we have around the exposure estimates. We have a few ideas that would be great to get some thoughts on: - Don't account for uncertainty in Rate A's exposure estimates. Choose one exposure estimate and then use Poisson distribution - Estimate an upper and lower bound for Rate A (without using a proper statistical method) and say this is our confidence interval. I don't know if this is kosher, especially if we'd like to publish. But could this be a reasonable approach to make non-statistical statements about the difference between two rates for a paper? (I'm not a statistician, so apologies for any improper terminology)
Confidence interval for a rate where there is uncertainty around the exposure (number of days)
CC BY-SA 4.0
null
2023-06-01T02:04:55.017
2023-06-01T04:36:04.813
2023-06-01T02:54:38.023
362671
389305
[ "statistical-significance", "confidence-interval", "uncertainty" ]
617503
1
null
null
0
18
I have been using GAMMs to analyze time series data and I have included a smoothing term (hour of day by season) and I can't seem to find the results for the winter season. I have the proper information (edf, Ref.df, F, and p-value) for all my smoothed terms and each season except for winter. I am using the summary function and calling on the gam results within the model (instead of the lme summary output).
Why am I missing the result of a smoothing effect in a GAMM while interpreting results from summary command
CC BY-SA 4.0
null
2023-06-01T02:38:53.557
2023-06-01T14:27:41.707
2023-06-01T14:27:41.707
389307
389307
[ "r", "modeling", "mgcv" ]
617504
1
null
null
2
14
Suppose that I observe a bi-variate joint distribution over two random variables, $(X_1,X_2)$. I want to represent this joint distribution as arising from a function $F$ applied to i.i.d. uniform random variables, that is, I want to find $F:\mathbb [0,1]^2\to\mathbb R^2$ such that when $U_1,U_2$ are i.i.d. $Uniform(0,1)$ random variables, we have $$\begin{pmatrix}X_1\\ X_2\end{pmatrix} = F(U_1,U_2)$$ I also know that $F$ satisfies the following monotonicity property: if $F_2(U_1,U_2) = F_2(U_1',U_2')$, and $U_1 > U_1'$, then $X_1 > X_1'$ and vice versa, if $F_1(U_1,U_2) = F_1(U_1',U_2')$ and $U_2 > U_2'$, then $X_2 > X_2'$. Do these conditions suffice to ensure uniqueness of $F$? If not, what further conditions might I need? Is there any keyword I can search for to find a literature discussing questions like this one?
Uniqueness of a Latent Representation Under Monotonicity Condition?
CC BY-SA 4.0
null
2023-06-01T03:26:35.630
2023-06-01T03:26:35.630
null
null
188356
[ "uniform-distribution", "copula", "latent-variable", "identifiability" ]
617505
1
null
null
1
9
When I do causal mediation analysis with the R function mediation::mediate(), I need to print out the standard error of the indirect effect estimate. However, I suppose there is no such output? I have two questions: - Is there any way to get the SE based on this function's output? - The output gives me the CI, and I tried to calculate the SE based on 'estimate + 1.96*SE = CI upper limit' and 'estimate - 1.96*SE = CI lower limit'. However, the estimate is not equal to '(CI upper limit + CI lower limit)/2', so I don't think this is an appropriate way to calculate the SE. Could you please give me any suggestions for this case? Thank you very much!
The CI and standard error of indirect effect in Mediation analysis
CC BY-SA 4.0
null
2023-06-01T04:12:28.883
2023-06-01T04:43:52.847
2023-06-01T04:43:52.847
386760
386760
[ "confidence-interval", "bootstrap", "standard-error", "mediation" ]
617506
1
null
null
0
5
$U \colon (0,\infty) \to (0,\infty)$ is called a $\rho$-varying function if $\frac{U(xt)}{U(t)} \to x^{\rho}$ as $t \to \infty$. Here, we assume that $U$ is a $\rho$-varying function with $\rho>-1$. Furthermore, we assume $U$ is locally integrable and that it's integrable on any interval of the form $(0,b), b < \infty$. In the proof I'm reading, we first prove that $\int_0^\infty U(t) dt = \infty$. Next, the proof says for $x>0$, $N<\infty$, $\lim_{t \to \infty} \frac{\int_0^t U(sx)ds }{\int_{N}^t U(sx)ds} = 1$ as $U(sx)$ is a $\rho$-varying function of $s$. However, I do not understand why $\lim_{t \to \infty} \frac{\int_0^t U(sx)ds }{\int_{N}^t U(sx)ds} = 1$ holds. If someone could explain, I'd be grateful. The proof is from Extreme Values, Regular Variation and Point Processes by Resnick. [](https://i.stack.imgur.com/puVEg.png) [](https://i.stack.imgur.com/sLRKF.png) I had an attempt, but I don't think it's the correct way, as I don't believe I can apply L'Hôpital's rule like this: Not sure if it's the right answer, but: letting $F_1(t) = \int_0^t U(sx)ds$, $F_2(t) = \int_N^t U(sx)ds$ for $N<\infty$, as $U$ is locally integrable and integrable on any finite open interval containing $0$, we may simply apply L'Hôpital's rule to get $\lim_{t \to \infty} \frac{F_1(t)}{F_2(t)} = \lim_{t \to \infty} \frac{U(tx)}{U(tx)} = 1$. This proof doesn't use that $U(sx)$ is $\rho$-varying, which is worrying.
For $\rho$-varying function with $\rho>-1$, $\lim_{t \to \infty} \frac{\int_0^t U(sx)ds }{\int_{N}^t U(sx)ds} = 1$
CC BY-SA 4.0
null
2023-06-01T04:15:56.797
2023-06-01T04:15:56.797
null
null
260660
[ "extreme-value", "measure-theory" ]
617507
2
null
617270
2
null
$\mathbf X$ is a random $n$-vector that models the observation $\mathbf x$ in $\mathbb R^n$. The decision rule is specified by partitioning $\mathbb R^n$ into disjoint subsets $R_1, R_2, \cdots$: if the observation $\mathbf X$ is an element of $R_j$, then we decide or declare that $\mathsf H_j$ is the true hypothesis: $\mathsf H_j$ might or might not be the true hypothesis and thus our decision can be in error, or be correct. $L_{k,j}$ is the cost of deciding/declaring that $\mathsf H_j$ is the true hypothesis when in fact, $\mathsf H_k$ is the true hypothesis. Now, let $f_{\mathbf X\mid \mathsf H_k}(\mathbf x\mid \mathsf H_k)$ denote the conditional density of $\mathbf X$ given that $\mathsf H_k$ is the true hypothesis. Then, given that $\mathsf H_k$ is the true hypothesis, \begin{align} P(\mathbf X \in R_j\mid \mathsf H_k ) &= P(\text{declare that }\mathsf H_j~\text{is true when in fact }\mathsf H_k~\text{is true})\\ &= \int_{R_j} f_{\mathbf X\mid \mathsf H_k}(\mathbf x\mid \mathsf H_k)\, \mathrm d\mathbf x. \end{align} Now, declaring that $\mathsf H_j$ is true given that $\mathsf H_k$ is in fact true costs $L_{k,j}$, and so the average cost of making a decision when $\mathsf H_k$ is the true hypothesis is $\displaystyle \sum_j L_{k,j}\int_{R_j} f_{\mathbf X\mid \mathsf H_k}(\mathbf x\mid \mathsf H_k)\, \mathrm d\mathbf x$. Since the probability that $\mathsf H_k$ is the true hypothesis is $P(\mathsf H_k)$, we incur this cost with probability $P(\mathsf H_k)$, and thus the average loss is \begin{align} E[L] &= \sum_k P(\mathsf H_k) \sum_j L_{k,j}\int_{R_j} f_{\mathbf X\mid \mathsf H_k}(\mathbf x\mid \mathsf H_k)\, \mathrm d\mathbf x\\ &= \sum_k \sum_j L_{k,j}\int_{R_j} f_{\mathbf X\mid \mathsf H_k}(\mathbf x\mid \mathsf H_k)\cdot P(\mathsf H_k)\, \mathrm d\mathbf x. \end{align} Bishop writes $p(\mathbf x, C_k)$ for what I have written as $f_{\mathbf X\mid \mathsf H_k}(\mathbf x\mid \mathsf H_k)\cdot P(\mathsf H_k)$ which is analogous to writing $P(A\mid B)\cdot P(B)$ as $P(A,B)$ instead of the more formal $P(A\cap B)$. Turning to the OP's question, suppose that we are given that $\mathbf X \in R_j$, and so the decision (or declaration or classification) is that $\mathsf H_j$ is the true hypothesis. If in fact $\mathsf H_k$ is the true hypothesis, then we incur a cost of $L_{k,j}$. So, what is the average cost of deciding that $\mathsf H_j$ is the true hypothesis? Well, we are given that $\mathbf X \in R_j$ and so the average cost of deciding that $\mathsf H_j$ is the true hypothesis is $$\text{average cost of deciding }\mathsf H_j~\text{is true} = \sum_k L_{k,j}\cdot P(\mathsf H_k\mid \mathbf X \in R_j). $$ Bishop expresses this as $\displaystyle \sum_k L_{k,j}\cdot P(\mathsf C_k\mid \mathbf x)$ which does not mention the condition that the observation $\mathbf X$ must belong to $R_j$. In my opinion, this lacuna just serves to confuse the issues.
null
CC BY-SA 4.0
null
2023-06-01T04:15:59.127
2023-06-01T20:52:27.123
2023-06-01T20:52:27.123
6633
6633
null
617508
2
null
617502
0
null
I'm not sure there's a "best way" for this. All your results will have to be taken with a big pinch of salt, given that you are basically guessing the exposure time, and you don't know if your estimates are biased. Some options are: - Use your own knowledge of how the exposure times were guessed, to choose the best one and go with that. This is the simplest option, but does not account for any uncertainty in the exposure. - Use inverse variance weights in regression. For each point, find the variance of the exposure estimates, and use the inverse of the variance as a "weight" for each observation. This assumes that on average, any estimate of exposure rate is unbiased. A bit more complicated. - Run the regression multiple times, once for each estimation method for exposure time. Critically examine the results (not just p-values; check for sense against your expert knowledge). If they are consistent, and that's some support for your hypothesis. - Apply your method of estimating exposure time, to Rate B (for which you do have exact exposure times). Use that to quantify how "good" your exposure estimates are via regression or some other form of analysis. Then, based upon those findings, pick one of the above methods to estimate the relationship between exposure and occurrences. Note on implications of guessing exposure times: - If they are randomly centered around zero, it's just adding noise to the estimate. So no bias, but you're less likely to pick up an effect. - If exposure is over-estimated, then your estimate of Rate A will be too low. - If exposure is under-estimated, then your estimate of Rate A will be too high.
null
CC BY-SA 4.0
null
2023-06-01T04:36:04.813
2023-06-01T04:36:04.813
null
null
369002
null
617509
1
null
null
0
12
Let W be a discrete random variable with CDF $$ F_{W}(w)= 1-\left(\frac{1}{2}\right)^{\lfloor w \rfloor}\ \text{if}\ w>0\ (0\ \text{otherwise}), $$ where $\lfloor w \rfloor$ is the largest integer less than or equal to $w$. How can I get the pmf of $Y=W^2$?
How to get the pmf of Y=W^2 when the CDF is given
CC BY-SA 4.0
null
2023-06-01T05:11:53.567
2023-06-01T05:11:53.567
null
null
389309
[ "density-function", "cumulative-distribution-function" ]
617510
1
null
null
0
6
A stated disadvantage of SGD is that it scales the gradient the same way in all directions, and Adam fixes this. How can that be? What would an example look like if depicted in a graph?
SGD's disadvantage is that it scales the gradient the same way in all directions, and Adam fixes this. How can that be?
CC BY-SA 4.0
null
2023-06-01T05:26:31.600
2023-06-01T05:26:31.600
null
null
375024
[ "optimization", "gradient-descent", "gradient", "stochastic-gradient-descent", "adam" ]
617511
1
null
null
2
56
We know that if $X \sim N_p(\mu,\Sigma)$ then $(X-\mu)^T \Sigma^{-1}(X-\mu) \sim \chi^2_p$, does the converse hold? Is it possible for a non-multivariate Gaussian random variable to satisfy $(X-E(X))^T (cov(X))^{-1}(X-E(X)) \sim \chi^2_p$?
Does $(X-E(X))^T (cov(X))^{-1}(X-E(X)) \sim \chi^2_p$ imply normality?
CC BY-SA 4.0
null
2023-06-01T05:36:29.340
2023-06-01T13:30:10.293
null
null
68301
[ "distributions" ]
617512
2
null
617511
6
null
A super simple counter example: Let $X \sim \mathcal{N}(0, 1)$, but let $Y = |X|$. Well, what's the distribution of $Y^2$?
null
CC BY-SA 4.0
null
2023-06-01T05:45:39.373
2023-06-01T13:30:10.293
2023-06-01T13:30:10.293
8013
8013
null
617513
1
null
null
0
12
I am learning the weighted majority algorithm in "Foundations of Machine Learning" by Mohri. But I cannot understand the conclusion from the book and other references. It has the statement > No deterministic algorithm can achieve a regret $R_T = o(T)$ over all sequences. How can we prove this? The book provides a scenario with two experts. But I think the statement is a general claim, which cannot be proved by the special case. Meanwhile, in some references, [course 1](https://people.seas.harvard.edu/%7Eyaron/AM221-S16/lecture_notes/AM221_lecture11.pdf) and [course 2](http://www.cs.cmu.edu/%7E15850/notes/lec15.pdf), they say > Assume the number of mistakes the best expert makes is L ≤ T /2. Then, in the worst case, no deterministic algorithm can make fewer than 2L mistakes. It is confusing why the authors also prove the general statement by using a special case. Is there any way to prove these two statements in the general case?
Regret for deterministic algorithm
CC BY-SA 4.0
null
2023-06-01T05:57:43.050
2023-06-01T05:57:43.050
null
null
157934
[ "machine-learning", "online-algorithms" ]
617514
1
null
null
0
16
64,810 women were screened for cervical cancer with a pap-smear test, suppose 132 of the 177 women diagnosed with cancer using colonoscopy and another 983 women tested positive through the screening program Construct a 2x2 table and answer the following: b. What is the prevalence of disease in this population? c. Calculate Sensitivity and specificity? d. Interpret Sensitivity and specificity e. Calculate the: Predictive value positive Predictive value negative False positive rate False negative rate
how to construct a 2x2 table: 64,810 women screened for cervical cancer,132/177 diagnosed with cancer by colonoscopy,983 pos with screening program?
CC BY-SA 4.0
null
2023-06-01T06:43:03.690
2023-06-01T06:43:03.690
null
null
389313
[ "biostatistics" ]
617515
2
null
617493
0
null
The squared terms probably do not refer to the parameter $n$ in a binomial distribution. Instead, your computation* and the formula with a square relate to a scaled [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution) with support $x \in \lbrace 0, w \rbrace$. This has variance $w^2\cdot p(1-p)$. The square term originates from a scaling, which increases the variance by the square of that scaling. See '[Variance: Addition and multiplication by a constant](https://en.wikipedia.org/wiki/Variance#Addition_and_multiplication_by_a_constant)'. --- *The computation is using the rule $\text{Var}(X)=E[X^2]-E[X]^2$
null
CC BY-SA 4.0
null
2023-06-01T06:53:56.110
2023-06-01T07:44:26.710
2023-06-01T07:44:26.710
164061
164061
null
617516
1
617518
null
0
21
I understand the analytic proof that lasso regularisation tends to shrink coefficients to zero. However, from a practical standpoint, most of those methods are combined with gradient optimisation (like SGD). For this reason, the gradient of the penalty term w.r.t. each parameter is $\lambda\texttt{sign}(w_i)$, where $\lambda$ is the regularisation coefficient. Combined with the learning rate, this term becomes $\alpha\lambda\texttt{sign}(w_i)$, where $\alpha$ is a nonzero learning rate. So it can be seen that, unless the value of the parameter $w_i$ is already zero, the contribution of the regularisation term to the update is either $\alpha\lambda$ or $-\alpha\lambda$. With this said, how is it possible that many packages (like `glmnet`) produce coefficients that are strictly zero? Shouldn't the value of the coefficient "jump" around 0, with a magnitude not larger than $\alpha\lambda$? For example, let the value of the parameter $w_p$ be $0.1\alpha\lambda$ - then after a single optimisation step, the value of the parameter will change to $-0.9\alpha\lambda$, if only regularisation is concerned. For this reason I have concluded that the value actually becoming zero is a result of an interaction of the regularisation and the normal MSE optimisation, but I can't grasp the intuition of this interaction. Could you please provide a simple intuition of how this interaction makes it possible for the coefficient to be 0 from a computational standpoint?
From a computational perspective, how does the lasso regression shrink coefficients to 0?
CC BY-SA 4.0
null
2023-06-01T07:01:07.523
2023-06-01T07:53:59.090
2023-06-01T07:03:34.313
389315
389315
[ "regression", "lasso", "regularization" ]
617518
2
null
617516
2
null
The coefficients start at zero. That is, the algorithm starts by applying a sufficiently large penalty that all the coefficient estimates are exactly zero. As the penalty is progressively decreased, coefficients start moving away from zero, one at a time. The problem you point out is one reason that starting from a high penalty and computing the whole regularisation path is about as fast as computing $\hat\beta$ for a single value of the penalty. (For linear regression lasso it's even more straightforward, since the path followed by the coefficients as the penalty decreases is piecewise linear and doesn't have to be approximated by gradient descent -- the LARS algorithm does it exactly.)
null
CC BY-SA 4.0
null
2023-06-01T07:53:59.090
2023-06-01T07:53:59.090
null
null
249135
null
617519
1
null
null
0
5
How do I keep `normalmixEM` from printing the number of iterations it required? I am using it in a call inside a bootstrap, and the resulting dynamic report in RMarkdown becomes a beast because it prints the required number of iterations for each bootstrap resample fit, as in the 3-line sample below: '## number of iterations= 23 '## number of iterations= 16 '## number of iterations= 17 This behaviour seems to be caused by line 128 in the code of the function `normalmixEM`, which reads `cat("number of iterations=", iter, "\n")`, and I don't seem to be able to silence it. `normalmixEM` includes an argument `verb`, but that is already `FALSE` by default. I tried the RMarkdown chunk options `message=FALSE, warning=FALSE`, but that does not work either.
Silencing output of `normalmixEM` from R package `mixtools`
CC BY-SA 4.0
null
2023-06-01T08:14:36.590
2023-06-01T08:19:35.143
2023-06-01T08:19:35.143
110833
180421
[ "r" ]
617520
2
null
617511
2
null
There are various artificial solutions to this: - Let $Y_1$ be any zero-mean variable that is lighter-tailed than Normal and has variance at most 1. Take $Q\sim \chi^2_2$ correlated with $Y_1^2$ so that $Q-Y_1^2$ is always non-negative, and then take $Y_2=\sqrt{Q-Y_1^2}$ with a random $\pm$ sign (so that its mean is zero). - Let the $Y_i$ be independent $\sqrt{\chi^2_{q_i}}$ variables with random signs (so that their means are zero) and with $\sum_i q_i=p$. As an extreme case, take $Y_1\sim \sqrt{\chi^2_p}$ with a random sign and the other $Y_i=0$. [If you follow Wikipedia in saying that $\chi^2$ distributions must have integer df, then pretend I said $\Gamma(q_i/2,\ \mathrm{scale}=2)$ rather than $\chi^2_{q_i}$.]
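A quick simulation of the second construction, only checking the properties stated here (each $Y_i$ has mean zero, and $\sum_i Y_i^2 \sim \chi^2_p$ when $\sum_i q_i = p$); the particular $q_i$ values are chosen purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
q = [1.5, 2.5, 3.0]          # shape parameters with sum q_i = p = 7
p = sum(q)
n = 100_000

signs = rng.choice([-1.0, 1.0], size=(n, len(q)))
Y = signs * np.sqrt(rng.chisquare(df=q, size=(n, len(q))))

print(Y.mean(axis=0))                      # each ~0 by symmetry of the sign
S = (Y ** 2).sum(axis=1)                   # should be chi^2 with p df
print(stats.kstest(S, 'chi2', args=(p,)))  # large p-value expected
```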
null
CC BY-SA 4.0
null
2023-06-01T08:18:29.793
2023-06-01T08:18:29.793
null
null
249135
null
617521
1
null
null
0
24
An often cited advantage of Structural Equation Modeling (SEM) is that it is able to account for measurement error in the observed indicator variables, therefore allowing for consistent estimates in the presence of error-in-variables (in contrast to standard linear regression). It is not clear to me, however, what types of measurement error are accounted for (i.e., what the specific assumptions concerning this measurement error are). - What if we are not dealing with random measurement error (pure noise, unrelated to both Y and X), but some kind of systematic measurement error (e.g., self-report measures of dietary intake are systematically lower than the actual intake for those at the higher end of the spectrum)? - Do these two types of measurement error correspond to classical and non-classical measurement error? - What are the implications for parameter retrieval? I understood that measurement error usually leads to attenuation bias (coefficient biased towards zero), but in other books I read that the effect can also be overestimated. Under what circumstances would it possibly be overestimated instead of underestimated? Multiple predictors, relaxing assumptions,...? Any explanations or referrals to clarifying textbook chapters would be greatly appreciated.
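As a side note on the attenuation point raised above, a small simulation of classical (purely random) measurement error in a single predictor reproduces the bias towards zero; this is only the textbook errors-in-variables case with one predictor, not a full SEM, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(size=n)
y = 0.5 * x_true + rng.normal(scale=0.5, size=n)

x_obs = x_true + rng.normal(scale=1.0, size=n)   # classical (random) measurement error

# OLS slope of y on the error-laden predictor
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
print(slope)   # ~0.25 = 0.5 * var(x)/(var(x)+var(error)), i.e. attenuated
```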
Types of measurement error and their implications in SEM
CC BY-SA 4.0
null
2023-06-01T08:22:09.370
2023-06-01T09:56:13.223
null
null
321797
[ "structural-equation-modeling", "measurement-error" ]
617522
1
null
null
0
15
I have a dataset where `treatment` and `subject` both play a role in how the data behave. I am fitting a linear model in which I model feature abundance as a function of both covariates. My aim is to perform comparisons at the `treatment` level while removing the effect of `subject` on the data. However, in some cases I have `subjects` that are present in only some of the treatments, i.e. their coefficients are not estimable because they are collinear with `treatment`, and thus one treatment group is confounded with `subject`. My question is therefore whether I can still trust the estimates of the remaining `treatment` coefficients. I do not get any warnings, but conceptually I am not sure it is reasonable to trust the `treatment` coefficients, bearing in mind that for some of them the model may not have been able to decouple the `subject` effect.
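A toy illustration of the aliasing described here (entirely hypothetical data): when one treatment group contains only subjects that never appear elsewhere, the treatment dummy is a linear combination of those subject dummies, so the design matrix is rank-deficient and those coefficients cannot all be estimated.

```python
import numpy as np
import pandas as pd

# Hypothetical layout: subjects s1, s2 appear under treatments A and B,
# but treatment C consists solely of subjects s3 and s4.
df = pd.DataFrame({
    "treatment": ["A", "A", "B", "B", "C", "C"],
    "subject":   ["s1", "s2", "s1", "s2", "s3", "s4"],
})

X = pd.get_dummies(df[["treatment", "subject"]], drop_first=True)
X.insert(0, "intercept", 1.0)

print(X.shape[1], np.linalg.matrix_rank(X.to_numpy(dtype=float)))
# More columns than the matrix rank -> some coefficients are aliased
# (here the treatment_C column equals subject_s3 + subject_s4).
```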
Linear models: can we trust estimated coefficients when some are not estimable?
CC BY-SA 4.0
null
2023-06-01T08:42:29.573
2023-06-01T08:42:29.573
null
null
59647
[ "multiple-regression", "linear-model" ]
617523
1
null
null
-2
17
[](https://i.stack.imgur.com/gGt4q.jpg) [enter image description here](https://i.stack.imgur.com/iUIld.jpg) [](https://i.stack.imgur.com/yal3y.jpg) [](https://i.stack.imgur.com/Zz20l.jpg) [](https://i.stack.imgur.com/8qRrS.jpg) I need an interpretation of the regression output shown in the linked images.
Can you interpret my graph?
CC BY-SA 4.0
null
2023-06-01T08:46:49.907
2023-06-01T08:46:49.907
null
null
389320
[ "regression", "interpretation", "linear" ]
617525
1
null
null
1
28
I have a conceptual question about what to do with main effects. Assume I have randomly assigned participants to two equal groups (say CBT and Control; N1 = 25, N2 = 25) and collected depression levels at three time points (pre, post and follow-up). At the pre level, an independent-samples t-test showed that the groups did not differ significantly (p > 0.05). Using a two-way repeated-measures ANOVA (one between-subjects, one within-subjects factor), I found a significant main effect of time (p < 0.05), a non-significant main effect of group (p > 0.05), and a significant interaction (p < 0.05). The obvious next step, to my mind, is to examine EMM post-hoc comparisons to see at which time points the groups differ, so I performed Bonferroni-adjusted comparison tests for the interaction. Assume my results look like this: pre: control 61 (SD 18), treatment 60 (SD 19); post: control 59 (SD 17), treatment 45 (SD 18); follow-up: control 56 (SD 19), treatment 40 (SD 16). I found significant between-group differences at post and follow-up. I also found that the treatment condition decreased significantly from pre to post while the control group did not; the same holds from pre to follow-up, but not from post to follow-up. Here is my conceptual question: a colleague warned me not to look at pairwise comparisons unless both main effects are significant, arguing that treatment versus control across time cannot be interpreted without a main effect of condition. My own logic tells me I interpreted the results correctly, but I cannot find a source to defend this approach for a two-way repeated-measures ANOVA. Some colleagues also suggested that, if I want the assessment of the group effect not to be affected by the pre-level scores, I should use ANCOVA instead, since it is often considered the better choice for pre-post designs with groups formed by random assignment; they cite the book ANOVA: Repeated Measures by Girden, E. (1992). In short, how should I interpret this non-significant group main effect? Is it really a critical step for deciding whether to interpret the interaction, given my design and results? And should I instead run an ANCOVA adjusting for the pre-level scores rather than the RM-ANOVA?
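For what it's worth, the ANCOVA alternative mentioned above can be expressed as a regression of the post score on group plus the pre score; a minimal sketch with statsmodels, where the file and column names ('pre', 'post', 'group') are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant with columns
# 'pre', 'post' (depression scores) and 'group' (CBT vs Control).
df = pd.read_csv("depression_scores.csv")   # hypothetical file name

ancova = smf.ols("post ~ C(group) + pre", data=df).fit()
print(ancova.summary())   # the C(group) coefficient is the pre-adjusted group effect
```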
Which is the correct way to deal with a non-significant main effect of group: stick with the interaction effect in RM-ANOVA, or perform ANCOVA?
CC BY-SA 4.0
null
2023-06-01T09:09:44.397
2023-06-01T20:03:54.300
2023-06-01T20:03:54.300
389177
389177
[ "anova", "repeated-measures", "ancova", "random-allocation", "main-effects" ]
617526
2
null
616808
0
null
## The causal effect can be identified with the right methodology An instrumental variable (IV) can be used to estimate the causal effect even under hidden confounding. However, one has to use a suitable estimation procedure and how to best estimate effects in an IV setting is a research question of its own. The Wikipedia article on [Instrumental Variable Estimation](https://en.wikipedia.org/wiki/Instrumental_variables_estimation) has a good summary and cites many of the standard works in the literature. Probably the simplest and most common approach would be what's called "Two stage least squares" (2SLS). The idea is to first regress $X$ on the instrument $Z$ to obtain an unconfounded estimate $\hat{X}$. One can then regress $Y$ on $\hat{X}$ to obtain the causal effect. The Wikipedia entry on the subject also contains a short proof of the computation of the 2SLS estimator.
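A bare-bones numerical sketch of the 2SLS idea with simulated data (note that running the two stages manually like this gives the right point estimate but not valid standard errors; dedicated IV routines handle that):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # hidden confounder
x = 1.0 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect of x is 2

# Naive OLS of y on x is biased upwards by the confounder
ols = np.polyfit(x, y, 1)[0]

# Stage 1: regress x on z, keep fitted values x_hat
x_hat = np.polyval(np.polyfit(z, x, 1), z)
# Stage 2: regress y on x_hat -> (approximately) the causal effect
tsls = np.polyfit(x_hat, y, 1)[0]

print(round(ols, 2), round(tsls, 2))   # e.g. ~3.0 vs ~2.0
```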
null
CC BY-SA 4.0
null
2023-06-01T09:22:28.343
2023-06-01T09:22:28.343
null
null
250702
null
617527
2
null
358766
1
null
Just a small edit to Kevin's answer: I think there is a small typo, since the derivative of the expression $\frac{p}{1-p}$ written above reaches a stationary point at $x=\frac{0.0265}{2 \cdot 0.000462}$. So $57.36$ should be divided by 2, giving roughly $28.7$.
null
CC BY-SA 4.0
null
2023-06-01T09:30:44.667
2023-06-01T09:30:44.667
null
null
389325
null
617528
1
null
null
0
7
How can I convert the 17 joint points (from the Human3.6M dataset) into pose parameters in the SMPL model?
How to convert the 17 joint points in the Human3.6M dataset into pose parameters for the 24 joints in the SMPL model?
CC BY-SA 4.0
null
2023-06-01T09:33:24.410
2023-06-01T09:33:24.410
null
null
389326
[ "machine-learning", "forecasting" ]
617529
2
null
616904
2
null
If you had a large number of data points, I'd strongly recommend simply fitting a random forest while keeping your response continuous. Random forests can deal with possible nonlinearities and are structurally quite robust to overfitting. There's no need to dichotomise your continuous variable - it throws away information that is likely useful - so just keep it continuous. Importance can be defined in different ways (more on this in a bit), but the 'permutation importance' commonly used for random forests is conceptually appealing. But with 300-800 data points and >3000 predictors, fitting any flexible model is optimistic. You may be better off fitting a less-flexible model such as a linear regression, though I suppose you could include quadratic terms if linearity is too strong an assumption. An important part of this fitting would be to use regularisation, such as LASSO. Note however that the retained parameters [aren't necessarily more important](https://stats.stackexchange.com/a/367176/121522), in part because [importance isn't a single well-defined concept](https://stats.stackexchange.com/a/202853/121522). The [relaimpo R package](https://cran.r-project.org/web/packages/relaimpo/index.html) implements a variety of different importance metrics and they can provide [fairly different rankings](https://stats.stackexchange.com/questions/155246/which-variable-relative-importance-method-to-use). You could try both modelling options and see how well they work in your case. For importance rankings, I recommend looking into the different metrics and thinking carefully about what way of quantifying importance would be most useful for your specific application.
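A compact sketch of the two routes suggested above, using scikit-learn; the arrays here are placeholders for your predictor matrix and continuous response, and the sizes are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LassoCV

# Placeholder data: X is (n_samples, n_predictors), y is a continuous response
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 500))
y = 2.0 * X[:, 0] + rng.normal(size=300)

# Option 1: random forest + permutation importance
# (ideally computed on held-out data rather than the training set)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print("RF top predictors:", np.argsort(imp.importances_mean)[::-1][:10])

# Option 2: cross-validated lasso; many coefficients end up exactly zero
lasso = LassoCV(cv=5).fit(X, y)
print("lasso keeps", int(np.sum(lasso.coef_ != 0)), "predictors")
```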
null
CC BY-SA 4.0
null
2023-06-01T09:38:47.173
2023-06-01T11:21:49.797
2023-06-01T11:21:49.797
121522
121522
null
617530
1
null
null
1
24
I am working with a dataset of 110,000 rows. Each row contains only categorical data, most of which is nominal. Each row represents an event that has several parameters (again, nominal) and an outcome. The question I am trying to answer is which combination of parameters gives the best result; in this case, the result is the number of times a combination of parameters leads to a positive or negative outcome. Because there is no numerical value to work with, I am not sure what to do. I have already calculated the average for each single parameter, included the standard deviation, and run a chi-squared test to see which values of a given parameter are likely to differ from the rest (p < 0.05). Now I mainly want to know which combinations of parameters (2 or more) are more likely to have a positive or negative outcome. I am also planning to use machine learning (decision trees/random forests and neural networks) to give some more insight, but these techniques do not provide an objectively true basis to fall back on.
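One simple starting point, before any modelling, is to tabulate the outcome rate for each parameter combination directly; a pandas sketch in which the file and column names ('param_a', 'param_b', 'outcome') are hypothetical:

```python
import pandas as pd

# Hypothetical columns: 'param_a' and 'param_b' are nominal parameters,
# 'outcome' is 1 for positive and 0 for negative events.
df = pd.read_csv("events.csv")   # placeholder file name

combo = (df.groupby(["param_a", "param_b"])["outcome"]
           .agg(positive_rate="mean", n="size")
           .query("n >= 50")                     # ignore very rare combinations
           .sort_values("positive_rate", ascending=False))
print(combo.head(10))
```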
Way to find the best performing combination of categorical parameters
CC BY-SA 4.0
null
2023-06-01T09:55:47.027
2023-06-01T09:55:47.027
null
null
388715
[ "machine-learning", "categorical-data" ]
617531
2
null
617521
0
null
Which types of measurement "error" (systematic vs. unsystematic) you can account for, and whether the different sources of "error" can be separated from one another, depends on your measurement design and model(s). For example, to separate random error in self-report measures of depression, you need at least two measures (indicators) of depression so that the true-score (reliable) variance can be separated from random measurement-error variance. If you also wanted to isolate "error" (or "bias") due to self-reports, you would additionally need to include multiple indicators of another method (e.g., spouse reports in addition to self reports). If you wanted to isolate/separate indicator- (or item-)specific effects, you would need to use a repeated-measures (longitudinal) design in which the same multiple indicators (e.g., items or tests) are measured multiple times. Statistical mediation analysis is an example where the effects of measurement error on parameter estimation can be complex. This is described, for example, in the following article: Fritz, M. S., Kenny, D. A., & MacKinnon, D. P. (2016). The combined effects of measurement error and omitting confounders in the single-mediator model. Multivariate Behavioral Research, 51(5), 681-697. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5166584/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5166584/)
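To make the "at least two indicators" point concrete, here is a toy simulation in which two parallel self-report indicators share a common true score; their correlation recovers the proportion of reliable (true-score) variance, which is exactly what a latent-variable model exploits. The numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_score = rng.normal(size=n)                   # latent depression level
x1 = true_score + rng.normal(scale=1.0, size=n)   # self-report indicator 1
x2 = true_score + rng.normal(scale=1.0, size=n)   # self-report indicator 2

# For parallel indicators, corr(x1, x2) = var(true) / (var(true) + var(error))
print(np.corrcoef(x1, x2)[0, 1])   # ~0.5 with these variances
```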
null
CC BY-SA 4.0
null
2023-06-01T09:56:13.223
2023-06-01T09:56:13.223
null
null
388334
null