Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
614811
1
null
null
0
16
A characterization of the multivariate Gaussian distribution with a fixed mean and covariance matrix is that it is the unique probability distribution that maximizes differential entropy. That is, the Gaussian PDF is the unique solution to the following maximization problem: $$ \text{argmax} \{-\mathbb{E}_f[\log(f(x))] : \mathbb{E}_f[x_i] = \mu_i,\; \mathbb{E}_f[x_i x_j] = \sigma_{ij}\}. $$ When is there a solution to this problem if we instead specify the moments of degree at most $k$ of the probability distribution? That is, when is there a solution to the problem $$ \text{argmax} \{-\mathbb{E}_f[\log(f(x))] : \forall \alpha,\; \mathbb{E}_f[x^{\alpha}] = \mu_{\alpha}\}. $$ Here, $\alpha = (\alpha_1, \dots, \alpha_n)$ ranges over those values where $\sum_{i=1}^n \alpha_i \le k$. For example, it is obviously necessary that there exists a probability distribution with these given moments for there to be a solution (I've heard the question of whether or not such a probability distribution exists is referred to as a moment problem in some contexts). Is this sufficient? Has this particular family of probability distributions been referred to in the literature before?
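(Editorial note, added for context and not part of the original question.) When a maximizer does exist, the standard Lagrange-multiplier argument implies it belongs to the exponential family with polynomial sufficient statistics of degree at most $k$, $$ f(x) \propto \exp\Big(\sum_{0<|\alpha|\le k} \lambda_{\alpha} x^{\alpha}\Big), $$ so existence amounts to asking whether multipliers $\lambda_{\alpha}$ can be chosen so that this density is normalizable and reproduces the prescribed moments. Mere existence of a distribution with the given moments is known not to be sufficient: for example, in one dimension with $k = 3$, a normalizable density of this form cannot carry a cubic term in the exponent, and for most prescribed third moments the entropy supremum is approached but not attained.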
Are there maximum entropy distributions with fixed moments of a certain order?
CC BY-SA 4.0
null
2023-05-03T19:30:52.787
2023-05-03T19:30:52.787
null
null
387154
[ "moments", "exponential-family", "maximum-entropy" ]
614812
1
614867
null
2
55
I'm still learning about mixed effects models, so bear with me here. I'm interested in modelling a binary response using a generalized additive mixed effects model with "year" as a covariate and random effect, but I've seen far smarter people than I argue for and against it (a covariate can't be both). For [example](https://stats.stackexchange.com/questions/173159/can-a-variable-be-both-random-and-fixed-effect-at-the-same-time-in-a-mixed-effec): > Absolutely. In fact, the vast majority of the time, you absolutely should include a fixed effect. The reason for this is that random effects are restrained to ∑γ=0 , or always centered around 0. Thus, the random effect is the individual's estimated deviation from the group average for that individual. By leaving out the fixed effect, you would imply that the average effect of time must be 0. I've also heard from colleagues: > "In short, no, a variable can't be both fixed and random. In Frequentist statistics, fixed effects are assumed to represent an actual "true" effect of the variable on the target mean, while random effects are assumed to have an effect on the mean that's randomly drawn from some distribution of possible values." I'd like to know if there's a way around this problem. Please let me know if I'm misinterpreting these points of view. My experimental design: Fish are collected and their stomachs examined to see if they ate something, or not (0/1), at the same 12 sites, every 9 months, every year. Many sites = a zone, and some zones have many more sites than others. My repeated measure is the length of the fish as another covariate. If I'm interested in differences between years, differences between zones, and their interaction, but I also want to capture similarity between observations taken in the same site, zone, and year, is converting year to a continuous variable in the random effects and a factor in the fixed effects one possible solution? Where every zone has its own trend through time (s(fZone, CYR, bs='re')) and the fixed effects show where those differences are (fZone*fCYR)? Or, if it must be one or the other, can fixed effects also capture correlation structures without random effects? - fCYR = factor calendar year - fZone = factor zone - CYR = continuous year
Can random slopes also be included as fixed effects?
CC BY-SA 4.0
null
2023-05-03T19:31:58.560
2023-05-04T10:01:34.550
null
null
337106
[ "mixed-model", "mgcv" ]
614813
1
null
null
2
55
I am working on a problem that gives me a joint pdf: $$f_{x,y}(x,y) = 6xy, 0<x<1, 0<y<\sqrt{x} $$ I am asked to find $P(X < 0.5)$ with three decimal places. My approach was to integrate: $\int_{0}^{\sqrt{x}} 6xy\ dy = 3x^2 = f_{x}(x)$ to get the marginal pdf of $x$. Then, I integrated again: $\int_{0}^{0.5} 3x^2 dx$ to get $P(X < 0.5)$. What I got was 0.125, but apparently the answer is 0.625. Am I missing something small or is the answer key just wrong?
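A quick symbolic check of the computation above (an editorial sketch in Python/SymPy, not part of the original post):
```
import sympy as sp

x, y = sp.symbols("x y", positive=True)
f = 6 * x * y  # joint density on 0 < x < 1, 0 < y < sqrt(x)

f_x = sp.integrate(f, (y, 0, sp.sqrt(x)))        # marginal of X: 3*x**2
total = sp.integrate(f_x, (x, 0, 1))             # sanity check: integrates to 1
p = sp.integrate(f_x, (x, 0, sp.Rational(1, 2))) # P(X < 0.5)

print(f_x, total, p)  # 3*x**2  1  1/8
```
This reproduces the hand calculation of $1/8 = 0.125$.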
Is there a simple error in the answer key, or am I using the wrong approach to get $P(X<0.5)$?
CC BY-SA 4.0
null
2023-05-03T19:34:26.877
2023-05-06T18:10:32.810
2023-05-06T18:10:32.810
20519
387153
[ "probability", "joint-distribution", "marginal-distribution" ]
614814
1
null
null
0
5
I've learned that ANOVA is simply a t-test but can compare more than 2 groups. However, I recently saw someone including an interaction term in an ANOVA test. What exactly is this doing? Do I interpret the ANOVA with Interaction effects the same way as a regular ANOVA? (simply compare the means of the groups, where one of those groups is the interaction term)?
ANOVA with interactions
CC BY-SA 4.0
null
2023-05-03T20:03:49.510
2023-05-03T20:03:49.510
null
null
355204
[ "anova" ]
614815
1
null
null
0
12
We're working with Wooldridge's Econometrics without matrix algebra. My professor introduces a simple static time series model: $$y_t = \beta_0 +\beta_1x_t+u_t$$ In the presence of serial correlation, we adjust the variance formula for the autocorrelation: $$Var(\sum_{t=1}^nw_tu_t | X) = \sum_{t=1}^nVar(w_tu_t|X) + \sum_{t=1}^n\sum_{s\neq t}^nCov(w_tu_t, w_su_s | X)$$ I'm good up to here, but how do we adjust this formula and get 'autocorrelation adjusted' variances in the case of a multivariate model with more than one parameter (where the formula for $\beta_i$ may be a bit more involved (Regression Anatomy))?
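As a practical complement to the algebra (an editorial sketch with simulated data, not part of the original question), HAC/Newey-West variances for a regression with several regressors are usually obtained from software rather than derived by hand; for instance, in Python with statsmodels:
```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)

# AR(1) errors to induce serial correlation
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x1 - 1.0 * x2 + u

X = sm.add_constant(np.column_stack([x1, x2]))
ols = sm.OLS(y, X).fit()                                         # usual (iid) standard errors
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West (HAC)

print(ols.bse)  # tends to understate uncertainty under autocorrelation
print(hac.bse)  # autocorrelation-adjusted standard errors
```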
HAC Robust Errors - Simple Static Time Series Regression
CC BY-SA 4.0
null
2023-05-03T20:07:14.807
2023-05-03T20:07:14.807
null
null
386405
[ "time-series", "multivariate-regression", "neweywest", "hac" ]
614816
2
null
614751
2
null
The R package dendextend implements some of the methods you are looking for; in particular, it implements the Baker's gamma statistic. You can find the reference to it in the dendextend paper: [https://academic.oup.com/bioinformatics/article/31/22/3718/240978](https://academic.oup.com/bioinformatics/article/31/22/3718/240978) If you get to implement it in Python, please mention it here so that I and others can find it in the future.
null
CC BY-SA 4.0
null
2023-05-03T20:25:15.737
2023-05-03T20:25:15.737
null
null
253
null
614817
2
null
596807
1
null
Expanding on [@dx2-66's answer](https://stats.stackexchange.com/a/596826/378211), here is a complete code example that also draws the point where the threshold lies:
```
import numpy as np
from sklearn.metrics import PrecisionRecallDisplay, precision_recall_curve, average_precision_score

# ...
y_true = ...
y_pred = ...
pos_label = 1      # replace with your positive label
name = "My Model"  # replace with your desired model name

precision, recall, thresholds = precision_recall_curve(y_true, y_pred, pos_label=pos_label)
f1_scores = 2 * recall * precision / (recall + precision)
best_th_ix = np.nanargmax(f1_scores)
best_thresh = thresholds[best_th_ix]
average_precision = average_precision_score(y_true, y_pred, pos_label=pos_label)

display = PrecisionRecallDisplay(
    precision=precision,
    recall=recall,
    average_precision=average_precision,
    estimator_name=name,
    pos_label=pos_label)
display.plot(name=name)
display.ax_.set_title("Test Data")
display.ax_.plot(recall[best_th_ix], precision[best_th_ix], "ro",
                 label=f"f1max (th = {best_thresh:.2f})")
display.ax_.legend()
```
null
CC BY-SA 4.0
null
2023-05-03T20:36:47.020
2023-05-07T16:59:43.347
2023-05-07T16:59:43.347
378211
378211
null
614818
1
614835
null
2
54
> Let $\{X_n\}$ be a sequence of independent random variables such that $\mathbb P(X_n=\pm 1)=\frac 14$, $\mathbb P(X_n=\pm n)=\frac 1{4n^2}$ and $\mathbb P(X_n=0)=\frac 12 - \frac 1{2n^2}$ for all $n\ge 1$. Define the triangular array $\{X_{nj}:1\le j\le n\}_{n\ge 1}$ by setting $X_{nj}=\frac{X_j}{\sqrt n}$. Check whether the above triangular array satisfies the Lindeberg condition. I have calculated $$s_n:=\sum_{j=1}^n \frac{X_j}{\sqrt n}$$ and $$\sigma_{nj}^2:=\mathbb E[X_{nj}^2]=\frac 1n$$ and hence $$S_n^2:=\sum_{j=1}^n \sigma_{nj}^2 = 1$$ So, I need to prove $$\sum_{i=1}^n E[X_i^2\mathbf{1}_{|X_i|>\epsilon}]\to 0\;\; \forall \epsilon>0$$ to check the Lindeberg condition. which is clearly false as the expression is a sum. I must have made some mistake somewhere which I can't figure out. Please help me.
Checking the Lindeberg Condition
CC BY-SA 4.0
null
2023-05-03T20:39:37.250
2023-05-04T14:06:17.397
2023-05-03T22:47:38.727
319298
319298
[ "probability", "distributions", "normal-distribution", "random-variable", "convergence" ]
614820
2
null
614794
1
null
For estimating the ATT, overlap means that the control group "encloses" the treated group; that is, the support of the treated group is in the support of the control group. The relaxedness of this requirement is that the control group doesn't need to be entirely in the support of the treated group; there can be control units that are vastly different from any treated units; they simply will be down-weighted or unmatched. There is no universal way to assess overlap. For individual variables (including the propensity score), you can ensure that the range of the variable in the treated group is within the range of the variable in the control group. Ideally this "range" is not just the largest and smallest point, which may be outliers, but rather the range of the part of the distribution that contains enough data to make useful inferences without interpolating. One way to do this is to create a histogram or kernel density plot of the variable in each treatment group and see that they overlap. The distributions don't have to be identical; that comes after matching or weighting. But there shouldn't be significant parts of the treated distribution that are outside the support of the control distribution. There are many tools you can use to make such a plot, but the R package `cobalt` (of which I am the author) makes it very easy using the `bal.plot()` function. See the [documentation page](https://ngreifer.github.io/cobalt/reference/bal.plot.html) for examples.
null
CC BY-SA 4.0
null
2023-05-03T20:47:58.603
2023-05-03T20:47:58.603
null
null
116195
null
614821
1
null
null
5
174
Let there be a repeatable real world experiment with two outcomes denoted by $0,1$ for convenience (tossing a coin, for example). Let $X_i$ be the random variable that models the $i$th repetition of the experiment. It is an assumption of our model of that real world phenomenon that $X_1,X_2,X_3,...$ are independent identically distributed to $B(1,p)$. I noticed that in all confidence intervals for $p$ I have encountered so far, one basically throws away all information in the sample and only keeps track of the total number of 1(s) (total number of heads in the case of a coin). I came up with the following confidence interval. First let me make clear my definition of a confidence interval in case we are working with a sample of size $n$. Definition: A $1-\alpha$ confidence interval for the parameter $p$ above are random variables $L,U$ that are functions of our sample $X_1,X_2,...,X_n$ such that $P(L<p<U)\geq 1-\alpha$. It is almost the same as the definition in my book. Using this definition, I proceed to design my own confidence interval. Suppose our sample size is even, of size $n=2k$. Set $\overline{X}$ to be the average of $X_1,X_2,X_3,...,X_{2k}$, and set $\overline{Y}$ to be the average of $X_2,X_4,X_6,...,X_{2k}$. By Chebyshev's inequality, we have the inequalities below: $$P(\overline{X}-\frac{1}{2\sqrt{k\alpha}}<p<\overline{X}+\frac{1}{2\sqrt{k\alpha}})\geq 1-\frac{\alpha}{2}$$ $$P(\overline{Y}-\frac{1}{\sqrt{2k\alpha}}<p<\overline{Y}+\frac{1}{\sqrt{2k\alpha}})\geq 1-\frac{\alpha}{2}$$ By the inclusion-exclusion principle, we get: $$P(\overline{Y}-\frac{1}{\sqrt{2k\alpha}}<p<\overline{X}+\frac{1}{2\sqrt{k\alpha}})\geq 1-\alpha.$$ Thus, we get a $(1-\alpha)$ confidence interval which is $]\overline{Y}-\frac{1}{\sqrt{2k\alpha}},\overline{X}+\frac{1}{2\sqrt{k\alpha}}[$. Of course one could even consider more interesting statistics (more interesting than $\overline{Y}$) from the sample, like for example the number of occurrences of the string $1,0,0,0,1$ in the data collected. Question: Now suppose we use the above confidence interval with confidence 99% for the case of tossing a coin $2\times 10^{30}$ times and it happens that we get the sample realization $0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,.....$; then applying the confidence interval above that I designed will give approximately something like $0.999...<p<0.50001$. I am not sure how to interpret the result of this confidence interval in this case. Should the interpretation be that our model that $X_1,X_2,...$ are independent identically distributed is not appropriate? More generally, does it happen in the literature of statistics that the realization of $L$ happens to be greater than the realization of $U$ for some really critical sample outcomes? --- Edit: [](https://i.stack.imgur.com/twiCj.png) I added a picture of the definition of a confidence interval and a clarifying paragraph about it. Question: Let a sample be drawn and suppose the realization of the random variables $L,U$ of the $1-\alpha$ confidence interval turns out to be $l,u$. What happens if $l$ happens to be strictly greater than $u$? How does the statistician interpret the result?
--- Edit: One of the answers asked me to clarify my use of the inclusion-exclusion principle: $$P(\overline{X}-\frac{1}{2\sqrt{k\alpha}}<p<\overline{X}+\frac{1}{2\sqrt{k\alpha}})\geq 1-\frac{\alpha}{2}$$ $$P(\overline{Y}-\frac{1}{\sqrt{2k\alpha}}<p<\overline{Y}+\frac{1}{\sqrt{2k\alpha}})\geq 1-\frac{\alpha}{2}$$ Denote the event $\{\overline{X}-\frac{1}{2\sqrt{k\alpha}}<p<\overline{X}+\frac{1}{2\sqrt{k\alpha}}\}$ by $A$, denote the event $\{\overline{Y}-\frac{1}{\sqrt{2k\alpha}}<p<\overline{Y}+\frac{1}{\sqrt{2k\alpha}}\}$ by $B$, and denote the event $\{\overline{Y}-\frac{1}{\sqrt{2k\alpha}}<p<\overline{X}+\frac{1}{2\sqrt{k\alpha}}\}$ by $C$. The result follows from noting that $A\cap B\subseteq C$, hence: $$P(C)\geq P(A\cap B)=P(A)+P(B)-P(A\cup B)\geq 1-\frac{\alpha}{2}+1-\frac{\alpha}{2}-1=1-\alpha$$
Paradoxical Questions about confidence intervals
CC BY-SA 4.0
null
2023-05-03T21:09:55.233
2023-05-05T14:03:18.423
2023-05-05T13:11:08.457
29653
29653
[ "confidence-interval", "paradox" ]
614822
1
614825
null
3
65
> Let $\{X_n\}\xrightarrow{d}X$ and for some $p>0$, we have $$\sup_{n\ge 1} \mathbb E[|X_n|^p]<\infty$$ Show that for any $r\in (0,p)$, we have a. $\mathbb E[|X|^r]<\infty$ b. $\mathbb E[|X_n|^r]\to \mathbb E[|X|^r]$ as $n\to \infty$ [Note: You must not use (b) to prove (a)] I am pretty sure we need to use Skorohod Representation Theorem. Maybe, we also need to use the fact that $$\{X_n\}\xrightarrow{d}X \iff \mathbb E[f(X_n)]\to \mathbb E[f(X)]\;\;\forall f\in \mathcal C_B(\mathbb R)$$ but I can't figure out how to do that. The actual question had $\mathbb E[|X|^r]<\infty$ instead of $\mathbb E[|X_n|^r]<\infty$ which was wrongly written in the first question. So, now I have doubts in part (a) as well. The $1\le r\le p$ case can be tackled using some theorems done in class, but I can't find any argument for the $0<r\le 1$ case.
$\mathbb E[|X_n|^r]<\infty$ and $\mathbb E[|X_n|^r]\to \mathbb E[|X|^r]$ as $n\to \infty$
CC BY-SA 4.0
null
2023-05-03T21:26:53.573
2023-05-04T02:54:28.437
2023-05-04T00:39:03.613
319298
319298
[ "probability", "distributions", "random-variable", "expected-value", "convergence" ]
614823
1
null
null
0
6
I am attempting to classify people as pregnant or non-pregnant. The business case requires that someone be considered in the model if they meet certain criteria (morning sickness, missed ovulation, etc.). Additionally, the business case requires that someone be classified as pregnant if that individual has a terminating event (delivery, miscarriage, etc.). If a member has a terminating event they are classified as pregnant. If a member does not have a terminating event they are classified as non-pregnant. This leads to varying time periods during which classifiers for pregnancy can be observed (classifiers being diagnosis codes, procedure codes, prescriptions). These variables are used to train a model for identifying pregnancy. All members that meet criteria are scooped into the model if thought to be pregnant. This leads to variable lengths of time for which a member is considered within the model -- some exit the model more quickly than others (miscarriage for instance). Members time out if they do not have a pregnancy terminating event within 40 weeks of being considered and are therefore classified as non-pregnant. The question is: does the variable time period for which a member is considered present a statistical problem in classifying people as pregnant or non-pregnant? In this case we are using a random forest, logistic regression, and boosting algorithm for prediction.
Differing Time Periods for Application of Predictors in Classification Model
CC BY-SA 4.0
null
2023-05-03T22:12:57.910
2023-05-03T22:12:57.910
null
null
138931
[ "classification", "predictive-models", "methodology" ]
614824
1
614848
null
1
39
I have a data set that is generated by a Bernoulli distribution, say $X \in \{ 0,1 \}$, where $x=1$ with probability $p$. I don't have any information about the parameter $p$ of this distribution and I am gathering samples to estimate it. My goal is to find the best estimate of $p$ from a sample of $N$ measurements. I want to prove that the best estimate after $N$ measurements is $f_p = \sum_{i=1}^{N} \frac{x_i}{N}$. It would also be nice to know how many samples I should have in order to estimate $p$ within an error $x$ at a confidence level $C$. Any references that could guide me?
Best estimator for a binomial distribution
CC BY-SA 4.0
null
2023-05-03T22:27:00.347
2023-05-04T17:24:03.347
null
null
387162
[ "estimation", "inference", "unbiased-estimator", "bernoulli-distribution" ]
614825
2
null
614822
2
null
#### Part (a) I assume you already know how to prove $\sup_n E[|X_n|^r] < \infty$. By continuous mapping theorem, $X_n \overset{d}{\to} X$ implies $|X_n|^r \overset{d}{\to} |X|^r$. It then follows by Skorohod's representation theorem that there exist $\{Y_n\}$ and $Y$ such that $Y_n \overset{d}{=} |X_n|^r$, $Y \overset{d}{=} |X|^r$, and $Y_n \to Y$ with probability $1$. Therefore, by Fatou's lemma, \begin{align} E[|X|^r] = E[Y] = E[\liminf_n Y_n] \leq \liminf_n E[Y_n] = \liminf_n E[|X_n|^r] \leq \sup_n E[|X_n|^r] < \infty. \end{align} #### Part (b) By continuous mapping theorem, $X_n \overset{d}{\to} X$ implies $|X_n|^r \overset{d}{\to} |X|^r$. In view of Theorem 25.12$^\dagger$ (the proof to this theorem indeed uses Skorohod's theorem) in Probability and Measure by Patrick Billingsley, to show $E[|X_n|^r] \to E[|X|^r]$, it suffices to prove $\{|X_n|^r\}$ is [uniformly integrable](https://en.wikipedia.org/wiki/Uniform_integrability#Probability_definition). Indeed, suppose by condition $\sup_n E[|X_n|^p] = M < \infty$, then for any $\alpha > 0$, we have \begin{align} & E[|X_n|^rI_{[|X_n|^r \geq \alpha]}] \\ =& E\left[|X_n|^p\frac{1}{|X_n|^{p - r}}I_{[|X_n| \geq \alpha^{1/r}]}\right] \\ \leq & \frac{1}{\alpha^{(p - r)/r}}E[|X_n|^p] \leq \frac{1}{\alpha^{(p - r)/r}}M \to 0 \end{align} as $\alpha \to \infty$. This completes the proof. --- $\dagger$ > Theorem 25.12. If $X_n \overset{d}{\to} X$ and the $X_n$ are uniformly integrable, then $X$ is integrable and \begin{align} E[X_n] \to E[X]. \end{align}
null
CC BY-SA 4.0
null
2023-05-03T23:04:01.477
2023-05-04T02:54:28.437
2023-05-04T02:54:28.437
319298
20519
null
614826
2
null
593621
1
null
The main reason is to 1) prevent overfitting and 2) ease interpretation. To understand 1), compare two cases; in one case we only use pre-period averages and in another case we use every individual time period. Both cases have very good pre-treatment fit of the dependent variable of interest between the synthetic control and our treated unit. We will be much more trusting in our synthetic control that uses averages because then we have essentially shown that there is co-movement in the outcome of interest across the control units and treated unit, without directly targeting this co-movement to fit on. If we include all time periods, we are ex ante forcing the model to choose a synthetic control that fits all of the pre-period well, i.e., we are at risk of overfitting. That doesn't mean that fitting all time periods always overfits, but it could, and thus isn't very credible. For 2), note that subject context is extremely important when doing synthetic controls. With enough varied controls we can get a perfect pre-treatment fit but that synthetic control would be garbage. The only time we can use synthetic controls is if we have a case where we don't have a perfect control but rather have a pool of similar non-treated units to compare to. For a credible synthetic control exercise we need to show that the weights/controls make sense logically. The clarity of the exercise is stronger when we have fewer measures. Synthetic controls are not magic; we need to really believe in the validity of the synthetic control we create. There is absolutely no mathematical reason why you could not include every period instead of taking the average! We can treat each time period as a completely separate piece of data (i.e., then the canned code would take a mean over a singleton). For this reason I disagree with Marti's answer.
null
CC BY-SA 4.0
null
2023-05-03T23:17:13.900
2023-05-10T04:43:22.227
2023-05-10T04:43:22.227
242885
242885
null
614827
1
null
null
1
16
I want to conduct a simulation study on double machine learning in a partially linear regression setup as described in Chernozhukov's paper on double machine learning. I want to use the lasso and want the number of covariates to be larger than the sample size. Can you please give a scenario? I can't find any example related to this.
Simulation case for Double machine learning in Partial linear model with lasso where number of covariates is large
CC BY-SA 4.0
null
2023-05-03T23:19:01.220
2023-05-03T23:19:01.220
null
null
387166
[ "machine-learning", "simulation", "lasso", "high-dimensional" ]
614828
2
null
614821
2
null
Your confidence interval (CI) in the example obviously suffers from the construction issue that observations $X_2, X_4, X_6,\ldots$ are given much more weight through $\bar Y$ than $X_1, X_3,\ldots$. Now $\bar Y=1$ will mean that the confidence interval concentrates on the upper half of probabilities, as the fact that the mean of $X_1, X_3,\ldots$ is zero is (mostly) ignored. Even though you defined a technically valid confidence interval, it isn't a good one as optimal use of the information in the data can be made by having the CI dependent on the sufficient statistic $\bar X$ only; the involvement of $\bar Y$ just adds some "noise". Looking at the data, you get an empty confidence interval. Obviously nothing in the definition of CIs stops this from happening, even if the model assumptions are fulfilled, so it doesn't necessarily indicate that model assumptions are violated. The answer of @Flounderer presents a CI that can be empty without giving any information about the data, including whether model assumptions are fulfilled, so in general an empty CI will not indicate that assumptions are not fulfilled. However... > Should the interpretation be that our model that $X_1, X_2,\ldots$ independent identically distributed is not appropriate? All relevant characteristics of a CI are derived under the model assumptions, including the possibility of returning an empty interval. The standard theory of CIs doesn't indicate what happens if model assumptions are not fulfilled, therefore there is no reason in general to infer that model assumptions are violated in case an empty interval is returned. However, looking at the specific definition, one can say something using the correspondence between CIs and tests. For a given CI (including the one in question), one can construct a test by rejecting any $H_0$ (i.e., here, Bernoulli probability $p_0$) that is not covered by the CI. We can also define a (quite conservative) test rejecting any $p_0$ just in case the CI is empty. In fact this test is a test of the i.i.d. Bernoulli(p)-model with arbitrary $p$, because it will reject with probability smaller than $\alpha$ (in fact I suspect the effective level is even $\le\frac{\alpha}{2}$ but I don't take the time to check or prove this) whenever the model holds with whatever $p$. Now this also holds for the analogous test constructed from @Flounderer's trivial CI, but this CI in fact doesn't give any information about any violation of assumptions, as it has the same characteristics regardless of the underlying model (including if assumptions are violated in any way). Your CI however is different. In fact it can be shown that there is a class of models under which the rejection probability, i.e., the power, is larger than $\alpha$. The test will reject with large probability if it is likely that $\bar Y$ is clearly larger than $\bar X$. This happens for example (and most prominently) if the data are not identically distributed, but (in order to make things easy) independently, so that there is a probability $p_1$ for success in $X_1,X_2,X_3,\ldots$ and a probability $p_2$ for success in $X_2,X_4,X_6,\ldots$, and $p_2>p_1$ with a large enough difference, which will depend on $\alpha$ and $k$ (I won't figure this out precisely but it shouldn't be difficult to do it; chances are it's $p_2>p_1+\epsilon$ with $\epsilon\searrow 0$ for $k\to\infty$). So you are right, in principle; the event that the CI is empty can be interpreted as an unbiased test of the i.i.d. 
model against a model in which $\bar Y$ can be expected to be systematically larger than $\bar X$, and the easiest way to define such a model is above. Note that this relies on precise analysis of the characteristics of the specific CI under both the nominal model and a model for which this test is likely to reject, which under rejection can then be interpreted as a better fitting model as evidenced by the data. The CI of @Flounderer shows that in general this is not always possible. Obviously in the given case, in the first place one should suspect that model assumptions are violated from looking at the data, not from the result of your CI, but anyway, one can legitimately say that the test defined by "CI empty" tests the i.i.d. Bernoulli-model against the non-i.i.d. model defined above, and therefore an empty CI could reject the i.i.d. Bernoulli in favour of the non-i.i.d. model above (not a general non-i.i.d. model though, as if for example $p_1>p_2$ you will see an empty interval even less often than under i.i.d.).
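A small simulation of the construction above (an editorial sketch; parameter values are arbitrary) makes this concrete: under the i.i.d. Bernoulli model the interval is essentially never empty, while under a model in which the even-indexed observations have a much higher success probability it is almost always empty.
```
import numpy as np

rng = np.random.default_rng(1)
alpha, k, reps = 0.05, 5000, 2000
lo_margin = 1 / np.sqrt(2 * k * alpha)    # half-width attached to Y-bar
hi_margin = 1 / (2 * np.sqrt(k * alpha))  # half-width attached to X-bar

def empty_rate(p_odd, p_even):
    empties = 0
    for _ in range(reps):
        x_odd = rng.binomial(1, p_odd, size=k)    # X1, X3, ...
        x_even = rng.binomial(1, p_even, size=k)  # X2, X4, ...
        xbar = (x_odd.sum() + x_even.sum()) / (2 * k)
        ybar = x_even.mean()
        empties += (ybar - lo_margin) > (xbar + hi_margin)
    return empties / reps

print(empty_rate(0.5, 0.5))  # i.i.d. case: essentially 0
print(empty_rate(0.1, 0.9))  # even-indexed trials succeed far more often: close to 1
```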
null
CC BY-SA 4.0
null
2023-05-03T23:24:20.757
2023-05-05T11:12:30.313
2023-05-05T11:12:30.313
247165
247165
null
614829
1
null
null
1
15
Network meta-analysis is mostly applied within the medical field. What about applying it to the business or marketing field? Would that be possible?
What is the difference between a Network Meta-Analysis compared to a standard meta-analysis, MASEM and Meta-Regression?
CC BY-SA 4.0
null
2023-05-03T23:28:37.943
2023-05-03T23:28:37.943
null
null
387168
[ "meta-analysis" ]
614830
1
null
null
1
8
Consider the following item:

3.1 Resources and expertise availability
- 4 – High availability of resources and expertise
- 3 – Moderate availability of resources and expertise
- 2 – Low availability of resources and expertise
- 1 – No availability of resources and expertise

Respondents have to select one option. Numeric values have been assigned to each option. Can I then use this score data and perform arithmetic operations on it, i.e. finding the mean, calculating weighted scores, etc.? Thank you for your help.
Whether asking questions using rating scores constitutes a likert scale or interval data
CC BY-SA 4.0
null
2023-05-03T23:44:45.507
2023-05-03T23:44:45.507
null
null
387169
[ "likert" ]
614831
1
null
null
0
30
I was reading [this paper](https://arxiv.org/abs/2104.02911) and the book "Introduction to Quantum State Estimation" by Yong Siah Teo, and I am facing some issues trying to understand how the definitions of the cost function given in these two references match each other and how the likelihood function arises from this definition. The paper reads: [](https://i.stack.imgur.com/48eSh.png) And the book says: [](https://i.stack.imgur.com/CfmDk.png) Although they seem the same, there are some differences, and I cannot understand how to connect one with the other. I mean, equation $(2)$ seems to be the same as equation 1.2.1 from the book, except for the summation symbol used when defining the average cost function. This summation symbol is defined later in the book as: [](https://i.stack.imgur.com/mBQtw.png) My issues are simply that: - I cannot see how to connect these two definitions. This summation symbol, which defines "a conditional average over all possible measurement data", does not seem to enter in any way into the definition of the paper, which seems more plausible since minimizing a cost function is just taking the average "over all possible true configurations", which is the integral the author of the paper stated. - Why does the likelihood function appear in the average? - How can these two definitions be connected, considering this summation (average) together with the likelihood?
Definition of cost function and likelihood: how one appears from the other
CC BY-SA 4.0
null
2023-05-03T23:44:45.677
2023-05-03T23:44:45.677
null
null
326306
[ "maximum-likelihood", "estimation", "expected-value", "likelihood", "loss-functions" ]
614832
1
null
null
2
35
I would like to model the Value-at-Risk of U.S. sector indices and the U.S. Broad Dollar Index using the variance-covariance method. To achieve this, I model the conditional means and variances of the returns using ARMA-GARCH models. Here is the issue: I first need to determine whether ARMA orders are necessary to eliminate whatever autocorrelation may be present in the returns series. Convention indicates that Ljung-Box tests are in order; however, having researched the topic more, I came across the automatic Portmanteau test for serial correlation as seen [here](https://www.sciencedirect.com/science/article/abs/pii/S0304407609000773), which supposedly addresses some of the shortcomings of the Ljung-Box test, namely the issues of the selection of a superficial lag order, low power, and lack of robustness to heteroskedasticity. The equivalent R package is `vrtest`, and more specifically, the `Auto.Q` command. My anxieties with this statistic lie in the fact that I get vastly different results using it as opposed to the Ljung-Box test through the `Box.test` command. This is true almost across the board with my sector indices, but for instance, the CRSP Real Estate Index yields the following results:
```
> Auto.Q(ts_realestate_is, lags=20)
$Stat
[1] 1.159207

$Pvalue
[1] 0.28163

> Box.test(ts_realestate_is, lag = 10, type = "Ljung")

	Box-Ljung test

data:  ts_realestate_is
X-squared = 187.59, df = 10, p-value < 2.2e-16
```
If you follow the results of the Ljung-Box test, then you would come to the conclusion that there is likely serious autocorrelation you need to address with possibly some ARMA model before applying a GARCH-type model, but the Automatic Portmanteau test indicates that the returns series is likely some sort of white noise, or at least that there might not be a reason to apply an ARMA model to address serial correlation. This naturally raises a question: should I trust the automatic Portmanteau test over the Ljung-Box test despite these large disparities? I would greatly appreciate any help on this matter. If it is of relevance, after detecting autocorrelation through the Automatic test, I used `auto.arfima` to determine optimal ARMA orders by minimizing AIC, and then tested the standardised residuals again with the Automatic test to ensure that no autocorrelation remains. When done exclusively through the Ljung-Box test, I almost always found that none of the optimal models of `auto.arfima` had white noise for the residuals as evaluated by a 10-lag Ljung-Box test. This was another reason why I wanted to use another method; it did not make sense for there to be significant remaining autocorrelation for the residuals of such returns series after fitting ARMA models. Yes, I do understand that ARMA-GARCH orders should ideally be determined in parallel, but I do not currently have a good way of doing so. This is why I am opting to address autocorrelation separately. However, from what I understand from the authors of the test, it should also work for the residuals of some ARMA-GARCH model as well.
Validity of Automatic Portmanteau test for serial correlation vs Ljung-Box Test
CC BY-SA 4.0
null
2023-05-03T23:50:28.310
2023-05-04T08:49:03.653
2023-05-04T08:49:03.653
53690
387167
[ "arima", "model-selection", "residuals", "autocorrelation", "diagnostic" ]
614833
2
null
614073
0
null
The question asks both about standardizing to the 50th percentile and about estimating means, and I think those questions deserve opposite answers. **Means.** I would not be comfortable estimating mean rents from data at 40th and 50th percentiles. There are too many possible changes at the extremes of the distribution which can affect the mean and standard deviation without showing up in the middle percentiles. For example, imagine two datasets of rents, one with and one without summer vacation rentals. These datasets might have similar 40th and 50th percentiles, but the one with summer vacation rentals would have a higher mean, and the 40th and 50th percentile data can't tell you which dataset you have. **Medians.** I'd be more comfortable standardizing to the 50th percentile, using difference-in-differences. For example, consider a model where each state at each time has a lognormal distribution of rents $LN(\mu_{s,t},\sigma_t)$, with auto-correlation across time and positive correlation between states. In this model the dispersion of rents varies by time but is constant across states. Let $L$ be the states with 40th-percentile data in 2005, and $M$ be the states with 50th-percentile data in 2005. Let $\mu_{L,t}$, $\mu_{M,t}$ be the average $\mu$'s for those groups of states at some time. Then we can estimate \begin{align} A&:=\mu_{L,2004}-0.25\sigma_{2004}\simeq\text{mean-log of 2004 data for L's}\\ B&:=\mu_{M,2004}-0.25\sigma_{2004}\simeq\text{mean-log of 2004 data for M's}\\ C&:=\mu_{L,2005}-0.25\sigma_{2005}\simeq\text{mean-log of 2005 data for L's}\\ D&:=\mu_{M,2005}\phantom{-0.25\sigma_{2005}}\simeq\text{mean-log of 2005 data for M's} \end{align} The difference in differences is $$(A-B)-(C-D)=\mu_{L,2004}-\mu_{M,2004}-\mu_{L,2005}+\mu_{M,2005}+0.25\sigma_{2005}$$ If the states in the two groups changed similarly between 2004 and 2005, then $$(A-B)-(C-D)\simeq 0.25\sigma_{2005}$$ which gives the factor for standardizing 2005 figures from 50th to 40th percentiles or vice versa, relying only on the model being accurate in the middle percentiles.
null
CC BY-SA 4.0
null
2023-05-04T00:28:59.290
2023-05-04T02:10:54.127
2023-05-04T02:10:54.127
225256
225256
null
614834
1
null
null
1
31
I am trying to solve Exercise 4.8 in Cowpertwait-Metcalfe: Introductory Time Series with R. (For self study, not for coursework.) We are given a time series model \begin{align*} x_t & = x_{t - 1} + b_{t - 1} + w_t\\ b_{t - 1} & = 0.167 (x_{t - 1} - x_{t - 2}) + 0.833 b_{t - 2} \end{align*} where $w_t$ is white noise. We are asked to do some algebra to this system of equations, and derive $$ (1 - 0.167 \mathbf{B} + 0.167 \mathbf{B}^2) (1 - \mathbf{B})x_t = w_t $$ where $\mathbf{B}$ is the backward shift (i.e., $\mathbf{B} x_t = x_{t - 1}$). No matter how I try, I just can't get the desired equation. Here is my attempt: Put $\alpha = 0.167$, $\beta = 0.833$, $y_t = (1 - \mathbf{B}) x_t = x_t - x_{t - 1}$. Then we can rewrite the system as \begin{align*} y_t & = b_{t - 1} + w_t\\ b_t & = \alpha y_t + \beta b_{t - 1} \end{align*} or equivalently $\begin{pmatrix} 1 & 0 \\ -\alpha & 1 \end{pmatrix} \begin{pmatrix} y_t\\ b_t \end{pmatrix} = \begin{pmatrix} b_{t - 1} + w_t\\ \beta b_{t - 1} \end{pmatrix}$. Inverting $\begin{pmatrix} 1 & 0 \\ -\alpha & 1 \end{pmatrix} $ gives me $$\begin{pmatrix} y_t\\ b_t \end{pmatrix} = \begin{pmatrix} b_{t - 1} + w_t\\ (\alpha + \beta) b_{t - 1} + \alpha w_t \end{pmatrix}$$ Still, this does not get me any closer to showing $(1 - \alpha \mathbf{B} + \alpha \mathbf{B}^2) y_t = w_t$. Could someone point me in the right direction? Thank you.
Cowpertwait-Metcalfe Introductory Time Series Exercise 4.8 - Derive ARIMA Model
CC BY-SA 4.0
null
2023-05-04T01:12:07.187
2023-05-04T01:12:07.187
null
null
387163
[ "time-series", "arima" ]
614835
2
null
614818
2
null
The general Lindeberg condition is \begin{align} \lim_{n \to \infty}\sum_{j = 1}^n \frac{1}{s_n^2}E[X_{nj}^2I_{[|X_{nj}| \geq \epsilon s_n]}] = 0. \tag{1} \end{align} In your case, as you correctly demonstrated, $s_n^2 = 1$. But $X_{nj} = \frac{X_j}{\color{red}{\sqrt{n}}}$ instead of $X_j$. Therefore, $(1)$ should become \begin{align} \lim_{n \to \infty}\sum_{j = 1}^n \frac{1}{n}E[X_{j}^2I_{[|X_{j}| \geq \epsilon\sqrt{n}]}] = 0. \tag{2} \end{align} For fixed $\epsilon > 0$ and sufficiently large $n$, we have \begin{align} n^{-1}\sum_{j = 1}^n E[X_{j}^2I_{[|X_{j}| \geq \epsilon\sqrt{n}]}] = n^{-1}\sum_{j = \lceil\epsilon\sqrt{n}\rceil}^n j^2 \times \frac{1}{2j^2} = \frac{n - \lceil\epsilon\sqrt{n}\rceil + 1}{2n} \to \frac{1}{2} \end{align} as $n \to \infty$. Hence the Lindeberg condition does not hold for this triangular array.
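The limit of $1/2$ can also be checked numerically from the pmf (an editorial sketch, evaluating the left-hand side of $(2)$ exactly rather than by simulation):
```
import numpy as np

def lindeberg_sum(n, eps=0.5):
    # (1/n) * sum_j E[X_j^2 * 1{|X_j| >= eps*sqrt(n)}], computed from the pmf:
    # |X_j| equals 1 with probability 1/2 and equals j with probability 1/(2*j**2).
    t = eps * np.sqrt(n)
    j = np.arange(1, n + 1)
    terms = 0.5 * (1 >= t) + 0.5 * (j >= t)
    return terms.sum() / n

for n in (10**2, 10**4, 10**6):
    print(n, lindeberg_sum(n))  # approaches 1/2, not 0
```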
null
CC BY-SA 4.0
null
2023-05-04T01:44:43.977
2023-05-04T14:06:17.397
2023-05-04T14:06:17.397
20519
20519
null
614836
1
614838
null
2
79
> Show that if $X$ and $Y$ are independent random variables with $X+Y\stackrel{d}{=}X$, then $\mathbb P(Y=0)=1$. Can the independence condition be dropped? I could solve the first part using characteristic functions, but I am stuck on the second part where I need to give a counterexample. I think we need to use symmetric random variables, but I can't construct anything concrete.
If $X$ and $Y$ are independent random variables with $X+Y\stackrel{d}{=}X$, then show that $\mathbb P(Y=0)=1$
CC BY-SA 4.0
null
2023-05-04T02:42:00.887
2023-05-04T04:06:30.750
2023-05-04T03:19:34.013
20519
319298
[ "probability", "distributions", "random-variable", "characteristic-function" ]
614837
1
null
null
0
9
I'm looking for where best to start on the following problem. I have a graph of N nodes and each node has a weight. I need to group all nodes into X groups such that the sum of weights in each group is approximately even and each group is a connected component of the original graph. After some simple searching I came across the Lukes partitioning algorithm implemented in Python. [https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.lukes.lukes_partitioning.html#networkx.algorithms.community.lukes.lukes_partitioning](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.lukes.lukes_partitioning.html#networkx.algorithms.community.lukes.lukes_partitioning) This is approximately what I am looking for, but performance feels slow for the size of the graph I am working with. Do you have any potentially better ideas that worked for you?
Balanced Graph Communities With Node Weights
CC BY-SA 4.0
null
2023-05-04T02:54:42.693
2023-05-04T02:54:42.693
null
null
387172
[ "mathematical-statistics", "graph-theory" ]
614838
2
null
614836
5
null
If you drop the independence condition, consider the following. $X$ takes $0$ and $1$ with equal probability. $Y$ takes $\pm1$ with equal probability. $X=0$ iff $Y=1$. $X=1$ iff $Y=-1$. Then $X+Y$ takes $0$ and $1$ with equal probability, which is exactly the distribution of $X$.
null
CC BY-SA 4.0
null
2023-05-04T02:54:51.147
2023-05-04T02:54:51.147
null
null
247274
null
614839
2
null
614836
2
null
One counterexample: Let $X \sim N(0, 1)$, and let $Z$ be a discrete random variable that is independent of $X$ and satisfies $P[Z = 0] = P[Z = -2] = 1/2$. Define $Y = ZX$. Then $X + Y = (1 + Z)X \sim N(0, 1)$ (why?). Hence $X + Y \overset{d}{=} X$. But clearly $P[Y = 0] < 1$: in fact, $P[Y = 0] = P[[Z = 0] \cup [X = 0]] \leq P[Z = 0] + P[X = 0] = 1/2$. --- For the first part, here is a hint of an easier proof that does not call for characteristic functions: compute $E[X + Y]$ and $\operatorname{Var}(X + Y)$ respectively.
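A short simulation of this counterexample (an editorial sketch, not part of the original answer):
```
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10**6
x = rng.standard_normal(n)
z = rng.choice([0.0, -2.0], size=n)  # independent of x, each value with probability 1/2
y = z * x
s = x + y                            # equals (1 + z) * x, and 1 + z is +/-1

print(stats.kstest(s, "norm").pvalue)  # large p-value: X + Y looks like N(0, 1)
print(np.mean(y == 0))                 # about 0.5, so P(Y = 0) < 1
```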
null
CC BY-SA 4.0
null
2023-05-04T03:33:13.683
2023-05-04T03:33:13.683
null
null
20519
null
614840
2
null
614723
4
null
#### You can look at the a priori inferential properties of estimators (which treats both the data and parameters as random), but this is weaker than standard analysis If you look at a statistical problem from a perspective where both the data and the model parameters are treated as random, you are essentially looking at the a priori properties of estimators. This is an exercise that can be done fruitfully, and it falls within the general class of analysis of the Bayesian properties of estimators. However, performing analysis of this kind is typically weaker than looking at the classical properties of estimators. To see what this type of analysis looks like, suppose we consider some estimation/inferential method, which as you point out, is built on the basis of its properties conditional on the model parameters but unconditional on the data. For example, an exact confidence interval for a model parameter $\theta \in \Theta$ (based on a data vector $\mathbf{x}$) would have the following property (which is essentially the defining property of an exact confidence interval): $$\mathbb{P}(\theta \in \text{CI}( \mathbf{X}, \alpha) | \theta ) = 1-\alpha \quad \quad \quad \text{for all } \theta \in \Theta.$$ Now, if we take any prior distribution $\pi$ for the model parameter then the above property implies the weaker property: $$\mathbb{P}(\theta \in \text{CI}( \mathbf{X}, \alpha)) = \int \limits_\Theta \mathbb{P}(\theta \in \text{CI}( \mathbf{X}, \alpha) | \theta ) \cdot \pi(\theta) \ d\theta = 1-\alpha.$$ As you can see, because the coverage property for a CI holds under all specific parameter values $\theta$ (which is how we analyse estimators/inference methods in classical analysis), this implies that it must also hold (marginally) for any prior distribution over the possible values of $\theta$. Note that the latter is a weaker property than the underlying property defining the exact confidence interval, but it is interesting to note. This tells us that an exact confidence interval formed by classical methods is such that a priori we expect it to have the correct coverage. This is what it looks like to analyse the properties of a statistical estimator treating both the data and the parameter as random. I note your overarching question about whether it would be possible to form a new hybrid approach to estimation/inference by combining classical methods and Bayesian methods. That might be possible in principle, but because the above a priori analysis is weaker than the standard classical approach to looking at estimation, it is unlikely that this would assist you to formulate a better method than existing approaches.
null
CC BY-SA 4.0
null
2023-05-04T03:41:58.523
2023-05-04T03:56:31.307
2023-05-04T03:56:31.307
173082
173082
null
614841
2
null
614836
2
null
Suppose that $X$ has any symmetric distribution with mean $\mu$. Taking $Y \equiv 2(\mu-X)$ (which is obviously not independent of $X$ unless the latter is a constant) then gives: $$X + Y = X + 2(\mu-X) = 2\mu-X= \mu-(X-\mu) \overset{d}{=} X.$$
null
CC BY-SA 4.0
null
2023-05-04T04:06:30.750
2023-05-04T04:06:30.750
null
null
173082
null
614842
1
null
null
8
1464
In my understanding, the ROC curve plots the True positive rate and the False positive rate. [](https://i.stack.imgur.com/WMVsa.png) However, I've also read in other places that the ROC curve helps determine where the threshold for classifying something as "1" should be. E.g., let's say if the probability that an object is a "dog" is greater than 50% or 0.5, the classifier would classify it as 1 = Dog, and <0.5 = 0 (Not Dog). So where on this ROC curve does it show that threshold, if it only plots the True positive rate and False positive rate? Is the threshold simply where the "elbow" is? And do we pull the x axis' number or the y axis' number? E.g., if we use the x axis' number (false positive rate), the green classifier's threshold should be 0.3 (aka anything above a 30% probability will be classified as 1 = Dog)?
Where in the ROC curve does it tell you what the threshold is?
CC BY-SA 4.0
null
2023-05-04T04:11:11.957
2023-05-10T19:18:36.767
null
null
361781
[ "roc", "auc" ]
614843
2
null
614842
11
null
It doesn't. The choice of threshold depends on what you care about. If you weight the TPR and the FPR equally (i.e., you want a high TPR at a low FPR), the threshold is roughly where the elbow is. If you care more about TPR than FPR, or the other way around, it's something else. If you care about optimizing some other metric, it may be something else again.
null
CC BY-SA 4.0
null
2023-05-04T04:29:42.550
2023-05-04T04:29:42.550
null
null
35989
null
614844
2
null
614842
7
null
There is no universal truth to that question. There is always a tradeoff made with any threshold, and the ROC curve visualizes all possible thresholds for you to pick the best one. Is it better to err on one side or better to err on the other? That will heavily depend on the topic at hand. A screening test for a disease with the implication of not sending your kid to school for a week is less critical with a false positive than a diagnostic test with the implication of removing a limb or an organ. If that is not good enough, you may want to look into Youden's J or the Youden Index and the threshold that maximizes it.
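A minimal sketch of picking the threshold that maximizes Youden's J (an editorial addition with simulated scores, not part of the original answer):
```
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)               # placeholder labels
y_score = y_true + rng.normal(scale=1.0, size=500)  # placeholder classifier scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                         # Youden's J = TPR - FPR
print(f"Youden-optimal threshold: {thresholds[best]:.3f} "
      f"(TPR = {tpr[best]:.2f}, FPR = {fpr[best]:.2f})")
```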
null
CC BY-SA 4.0
null
2023-05-04T04:34:38.453
2023-05-04T04:38:26.147
2023-05-04T04:38:26.147
117812
117812
null
614845
1
null
null
1
29
I'm reading on control chart rules and there is a rule which says: "7 or more consecutive points on one side" is considered "out of control". However, does it take into account the number of data points? For example, if I have 100 data points (i.e., the in-process control test is run 100 times), the probability of having <=6 consecutive points on one side is much higher than if I have 5000 data points (the test is run 5000 times). In a sense, I'm thinking it's similar to coin tosses. The more times you toss, the more likely you are to get a streak of k heads in a row. And for a large enough number of tosses you are 99% guaranteed to get 7 in a row. Or am I missing something? I expected the rules to be like: not more than k consecutive points on one side, where k is calculated by some formula which includes the number of data points. Thank you for your insight.
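The intuition in the question can be checked with a small simulation (an editorial sketch, not part of the original post): for an in-control process, the chance of ever seeing 7 or more consecutive points on the same side of the center line grows quickly with the number of points plotted.
```
import numpy as np

rng = np.random.default_rng(0)

def prob_run_of_7(n_points, reps=2000):
    # Each point falls above (1) or below (0) the center line with probability 1/2.
    side = rng.integers(0, 2, size=(reps, n_points))
    same = side[:, 1:] == side[:, :-1]  # True where a point equals its predecessor
    # A run of >= 7 equal values corresponds to 6 consecutive True entries in `same`.
    windows = np.lib.stride_tricks.sliding_window_view(same, 6, axis=1)
    return windows.all(axis=2).any(axis=1).mean()

for n in (100, 1000, 5000):
    print(n, prob_run_of_7(n))  # the false-alarm probability rises with chart length
```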
Does the rule "7 or more consecutive points on one side" in Statistical Process Control take into account the number of data points?
CC BY-SA 4.0
null
2023-05-04T05:00:05.067
2023-05-04T05:00:05.067
null
null
384895
[ "mathematical-statistics", "quality-control" ]
614846
1
null
null
1
7
In [Bishop's PRML](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf) on page 259 he discusses a L2 regularizer for each layer of a 2-layer neural network, given by $$ \begin{equation} \frac{\lambda_1}{2}\sum_{w\in W_1}w^2 + \frac{\lambda_2}{2}\sum_{w\in W_2}w^2 \end{equation} $$ He then explains that this corresponds to the prior distribution $$ \begin{equation} p( w| \alpha_1, \alpha_2) \propto \exp{\left( -\frac{\alpha_1}{2} \sum_{w \in (1)}w^2 - \frac{\alpha_2}{2} \sum_{w \in (2)}w^2 \right)} \end{equation} $$ which is "improper because the bias parameters are unconstrained." I'm having trouble understanding why this distribution is improper, and why he considers the biases to be unconstrained?
Unconstrained Biases and Neural Network Regularization
CC BY-SA 4.0
null
2023-05-04T05:35:25.803
2023-05-04T05:35:25.803
null
null
387175
[ "neural-networks", "regularization", "improper-prior" ]
614847
2
null
614723
3
null
As J. Delaney's comment says, the Bayesian approach already allows both the data and the parameters to be random. I think the confusion arises because "the parameters are fixed and the data is random" is not true under the frequentist approach, and "the parameters are random and the data is fixed" is not true under the Bayesian approach either. (See the answers to [this question](https://stats.stackexchange.com/questions/491436/what-does-parameters-are-fixed-and-data-vary-in-frequentists-term-and-parame) for more details.) What is going on? In both cases you choose a family of models, for example a $N(\mu, \sigma^2)$, which could have generated your data $X$. In the Bayesian case, you treat $\mu$ and $\sigma^2$ as random variables and calculate their conditional distribution given your observed data $X$. In order to do this, you must choose a prior distribution for $\mu$ and $\sigma^2$. Sometimes you don't want to do this. In the Frequentist case, you are not allowed to treat $\mu$ and $\sigma^2$ as random variables. Instead, you seek to make statements which are valid no matter what the true values of $\mu$ and $\sigma^2$ happen to be. These statements are constructed by considering what kind of data might have been generated by different values of $\mu$ and $\sigma^2$. But whatever result you get is still conditional on your observed data $X$. It's just that it's not called a conditional distribution in the frequentist case. For example, suppose your frequentist confidence interval for $\mu$ is $[2, 3]$. Then if you had collected a different data set $X'$ on a different day, you would probably end up with a different confidence interval. Similarly, say your Bayesian credible interval for $\mu$ is $[2, 3]$. If you had collected a different data set $X'$ on a different day, you would probably end up with a different credible interval as well.
null
CC BY-SA 4.0
null
2023-05-04T05:46:14.197
2023-05-04T05:46:14.197
null
null
13818
null
614848
2
null
614824
0
null
> My goal is to find the best estimation for 'p' in a sample of N measures. You haven't defined "best", but I'm willing to bet that one of the properties of the Maximum Likelihood Estimators might fit whatever description you're thinking of. When the likelihood is well behaved, as it would be here, estimates are consistent, efficient, and asymptotically normal. In some cases, they can also be unbiased. > I want to prove that the best estimation after N measures is This is a very simple proof and you can google something like "Maximum Likelihood Estimate of Binomial Distribution" or search this website for something similar. > Also would be nice to know how many sample i should have in order to estimate p with an error x over a confidence interval C There are a lot of confidence intervals for the binomial proportion. I will use the simplest one for now, the Wald Interval, in the hopes you can extend the methodology to a better interval of your choosing. The radius of the interval (what you call $x$ in your comment) is $$ x = z_{1-\alpha/2}\sqrt{\dfrac{p(1-p)}{n}} $$ When using a 95% CI, $z_{1-\alpha/2} \approx 1.96 \approx 2$ for economy of thought. We can very easily solve for $n$ $$ n = \dfrac{4p(1-p)}{x^2} $$ We can further simplify this. Note that the variance of the binomial is bounded by 0.25, and that the variance appears in our expression. This results in $$ n = \dfrac{4}{x^2}p(1-p) \leq \dfrac{4}{x^2}0.25 = \dfrac{1}{x^2} $$ So, in order to guarantee that the resulting confidence interval (regardless of the value of $p$) has a radius smaller than $x$, you need $1/x^2$ samples. Depending on the value of $p$, you may actually get away with far fewer, since the bound is tightest at $p = 0.5$.
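A small helper implementing the sample-size calculation above (an editorial sketch; it also illustrates that extreme $p$ calls for fewer observations, not more):
```
import math
from scipy.stats import norm

def wald_sample_size(margin, conf=0.95, p=0.5):
    # Smallest n such that the Wald half-width z * sqrt(p(1-p)/n) is <= margin.
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(wald_sample_size(0.05))         # 385 with z = 1.96; the rough 1/x^2 bound gives 400
print(wald_sample_size(0.05, p=0.1))  # far fewer when p is away from 0.5
```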
null
CC BY-SA 4.0
null
2023-05-04T06:17:56.867
2023-05-04T17:24:03.347
2023-05-04T17:24:03.347
111259
111259
null
614849
2
null
614842
21
null
Each (FPR, TPR) point on a ROC curve is associated with a threshold. However, the thresholds are not typically drawn on the curve itself. It is possible to reveal them, either by adding extra annotation to the curve, or by coloring the curves. Here are some examples generated in R with pROC and ROCR, respectively: ![Plot of thresholds on a ROC curve annotated with pROC](https://i.stack.imgur.com/CIxwM.png)![Plot of thresholds on a ROC curve colorized with ROCR](https://i.stack.imgur.com/uTaxE.png) Here is R code to generate these plots:
```
set.seed(42)
truth <- rbinom(30, 1, 0.5)
predictor <- rnorm(30) + truth + 1

library(pROC)
plot(roc(truth, predictor), print.thres="local")

library(ROCR)
pred <- prediction(predictor, truth)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, colorize=TRUE)
```
null
CC BY-SA 4.0
null
2023-05-04T06:28:15.190
2023-05-05T07:07:09.880
2023-05-05T07:07:09.880
36682
36682
null
614850
1
null
null
0
73
I need to compare the significance of Gender between the two groups. Here is the table:
```
df <- data.frame(No = c(1:15),
                 Group = sample(1:2, 15, replace=T),
                 Gender = sample(c("F","M"), 15, replace=T))

Ge_M <- df %>% filter(Gender == "M") %>% group_by(Group) %>% summarise(Value = n()) %>% pull(Value)
Ge_F <- df %>% filter(Gender == "F") %>% group_by(Group) %>% summarise(Value = n()) %>% pull(Value)

t.test(Ge_M, Ge_F)
```
Output:
```
	Welch Two Sample t-test

data:  Ge_M and Ge_F
t = -0.82199, df = 1.0555, p-value = 0.5561
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -36.65746  31.65746
sample estimates:
mean of x mean of y 
      2.5       5.0 
```
Question: Am I using it in the right way? I want to know if there is a significant difference in gender between the two groups, but the output does not seem to be what I want. What's the right method to do the test? Thank you!
Test for significant differences on gender between two groups in R
CC BY-SA 4.0
null
2023-05-04T05:47:33.380
2023-05-04T08:28:21.097
2023-05-04T08:28:21.097
56940
387184
[ "r", "hypothesis-testing", "statistical-significance", "t-test" ]
614851
1
614855
null
0
68
I am working on an attrition dataset which has a large number of categorical parameters. Each categorical parameter has a high cardinality, so one-hot encoding them is out of the question. I was looking for models which can handle categorical data with high cardinality and came across CatBoost and LightGBM. CatBoost is working as expected. However, in the case of LightGBM, I'm unable to use my categorical features. The following lines were picked up from the official documentation of LightGBM and I am struggling to understand them: "LightGBM can use categorical features as input directly. It doesn't need to convert to one-hot encoding, and is much faster than one-hot encoding (about 8x speed-up). Note: You should convert your categorical features to int type before you construct Dataset." How do I convert nominal data to int?! If I follow the documentation, I get the following error: `ValueError: DataFrame.dtypes for data must be int, float or bool. Did not expect the data types in the following fields: Business, Segment Desc, Family Desc, Class Desc, Job Desc, Site Tag, City Desc, Employee Group, Gender, Marital Status, Award Desc, Shift Schedule`
How to use categorical features in lightGBM?
CC BY-SA 4.0
null
2023-05-04T07:05:18.330
2023-05-04T08:14:07.503
2023-05-04T08:14:07.503
86652
387180
[ "classification", "boosting", "cart", "categorical-encoding", "lightgbm" ]
614852
1
null
null
0
15
Suppose I have a model (Model_A) that can predict the net weights of products from an arbitrary input X. `Weight = Model_A(X)` Model_A has a mean absolute error of `a`. --- I have a set of ground truth weights for a bunch of packages, each containing many individual products. My objective is to use `Model_A` to predict the weights of the products in those packages, estimate how much additional weight was added during packaging, and ultimately estimate what percentage of the weight of each package is packaging material. How would the mean absolute error of `Model_A` influence my final results, in quantifiable terms? Of course there should be some "summed" error, but how exactly is the composite error computed?
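One way to make the error propagation concrete is a simulation under explicit assumptions (an editorial sketch; it assumes roughly unbiased, independent per-product errors, which is the optimistic case, since any bias or correlation across products pushes the package-level error toward $m \cdot a$):
```
import numpy as np

rng = np.random.default_rng(0)
a = 5.0    # assumed per-product MAE of Model_A
m = 20     # assumed number of products per package
reps = 100_000

# A zero-mean normal error with this sigma has mean absolute error equal to a.
sigma = a * np.sqrt(np.pi / 2)
errors = rng.normal(0.0, sigma, size=(reps, m))

package_error = errors.sum(axis=1)     # error of the summed predicted package weight
print(np.mean(np.abs(package_error)))  # roughly a * sqrt(m) under these assumptions
print(a * np.sqrt(m), a * m)           # sqrt(m)-scaling vs. the worst case m * a
```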
How would the MAE of a set of predictions affect further predictions made with that set of predictions
CC BY-SA 4.0
null
2023-05-04T07:14:30.980
2023-05-04T07:14:30.980
null
null
386952
[ "modeling", "mean-absolute-deviation" ]
614853
1
null
null
1
28
Given that $X\sim\text{Bernoulli}(\nu)$, for some $\nu\in(0,1)$, and $Y\sim N(0,1)$ are independent random variables. What is the entropy $H(CX+Y)$, where $C$ is some fixed constant? I am a bit confused about how to derive this since we are adding discrete and continuous random variables together. Thanks.
What is the entropy $H(CX+Y)$ when $X$ and $Y$ are independent?
CC BY-SA 4.0
null
2023-05-04T07:26:41.030
2023-05-04T09:55:55.063
2023-05-04T09:55:55.063
217249
217249
[ "probability", "mathematical-statistics", "information-theory" ]
614854
2
null
614850
3
null
Answer to the current version It is not clear to me what you are trying to do here since the new `df` misses the response (see below). But one critique may be that you may want to check for the homoscedasticity assumption first, using say `var.test` or any other test for equality of variances. And if the homoscedasticity assumption cannot be rejected, then you can use a t-test with `var.equal = TRUE`. Answer to the original version of the post The problem here is with the use of `df` which is an `R` function (`df` computes the density of an F distribution, run `?df` to check it). Using another name, say `dd`, fixes the issue. Note that in the case of two groups, you can also use the $t$-test. ``` dd <- read.table("data.txt", header = F,row.names = 1) names(dd) <- c("ID", "group", "value") str(dd) > oneway.test(value ~ group, data = dd, var.equal = TRUE) One-way analysis of means data: value and group F = 2.6127, num df = 1, denom df = 18, p-value = 0.1234 # or the equivalent version via t-test > with(dd, t.test(value~group, var.equal = TRUE)) Two Sample t-test data: value by group t = -1.6164, df = 18, p-value = 0.1234 alternative hypothesis: true difference in means between group 1 and group 2 is not equal to 0 95 percent confidence interval: -8.580329 1.118425 sample estimates: mean in group 1 mean in group 2 2.385714 6.116667 ``` Remark Furthermore, as also highlighted in the comments by [Roland](https://stats.stackexchange.com/users/11849/roland), `value` under group 1 has many zeros, i.e. ``` > with(dd, table(value[group == 1])) 0 2 4.3 6.6 8.3 12.2 9 1 1 1 1 1 ``` This casts doubts on the normality of such a variable and thus the p-value of the t-test (or ANOVA) may not be correct.
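For the variance check mentioned at the top, a minimal sketch (using the same `dd` data frame as above) could be:

```
# F test for equality of variances between the two groups;
# if it is not rejected, t.test(..., var.equal = TRUE) is reasonable
var.test(value ~ group, data = dd)
```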
null
CC BY-SA 4.0
null
2023-05-04T07:47:52.523
2023-05-04T08:09:43.707
2023-05-04T08:09:43.707
56940
56940
null
614855
2
null
614851
0
null
You can assign a integer number for every category (plus to be safe a category for "other" = anything new, where you maybe want to group rare categories). There's already a lot of other answers out there on how one could also represent categorical features in terms of [dimensionality reduction](https://stats.stackexchange.com/questions/584742/does-dimension-reduction-in-more-than-2-or-3-dimension-make-sense/584745#584745) and [ideas like target encoding/random effects/embeddings/](https://stats.stackexchange.com/questions/597580/machine-learning-binary-classifcation-models-with-categorical-variables-how-to/598132#598132). I'd especially consider ideas like target encoding and training a neural network with an embedding layer and then taking the embeddings for the categorical features as a feature for LightGBM. If you have a cold-start problem (i.e. some categories will initially have no data), then [Bayesian target encoding](https://arxiv.org/abs/2006.01317) can be helpful, e.g. if you are predicting a logit-proportion, then having e.g. a Beta(0.5, 0.5) prior for each category and providing LightGBM not just with the mean or median of the distribution, but also inter-quartile range, or the 90th and 10th percentile can tell the model about the uncertainty about the new category (we e.g. used that idea for [predicting drug approvals](https://doi.org/10.1016/j.patter.2021.100312), where there's constantly new drug classes etc.). I can only, again, recommend to look at what people tend to do in Kaggle competitions, where LightGBM is widely used and high-cardinality categorical data (e.g. users, products, shops, locations is common). Besides browsing the forums for solutions to competitions, there's [Kaggle competition GM Thakur's book](https://github.com/abhishekkrthakur/approachingalmost), [the Kaggle book](https://github.com/PacktPublishing/The-Kaggle-Book), the [book](https://github.com/fastai/fastbook/blob/master/09_tabular.ipynb) on the [fast.ai course](https://www.fast.ai/) and an excellent [How to Win a Data Science Competition: Learn from Top Kagglers](https://www.coursera.org/learn/competitive-data-science/home/welcome) course on coursera.org (as of May 2023 inaccessible, if you have not already enrolled, due to the association of the course with Moscow university).
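As a rough Python illustration of the Bayesian target-encoding idea above (a sketch, assuming a binary target; the Beta(0.5, 0.5) prior and the chosen quantiles are just examples):

```
from scipy import stats

def beta_target_encode(successes, trials, a0=0.5, b0=0.5):
    """Posterior summary of the per-category success rate under a Beta prior."""
    post = stats.beta(a0 + successes, b0 + trials - successes)
    return {"mean": post.mean(),
            "q10": post.ppf(0.10), "q50": post.ppf(0.50), "q90": post.ppf(0.90)}

# A brand-new category (no data yet) simply falls back to the prior,
# and the wide quantile spread signals the uncertainty to the model.
print(beta_target_encode(successes=0, trials=0))
print(beta_target_encode(successes=40, trials=100))
```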
null
CC BY-SA 4.0
null
2023-05-04T07:59:26.570
2023-05-04T07:59:26.570
null
null
86652
null
614857
1
null
null
0
8
I have 2 data frames: an old training data set and a new training data set. Both have the same features, in the same order, used for training the model, along with the ground truth column. Something like so:

```
f1 f2 f3 ... Ground_Truth
v1 v2 v3  .  actual_label1
v1 v2 v3  .  actual_label2
v1 v2 v3  .  actual_label3
 .
 .
```

The problem is a binary classification task. I want to know if I can detect data drift based on the label of the data points. For the 1s class, I want to see their drift calculations, and the same thing for the 0s class. I thought of just filtering the data to the class I want, but I'm not so sure about this. Any suggestions, please? Note: I have very imbalanced data; the majority class has far more observations than the minority class.
is it possible to detect data drift based on the ground truth labels classes?
CC BY-SA 4.0
null
2023-05-04T08:43:34.997
2023-05-04T08:43:34.997
null
null
363384
[ "classification", "unbalanced-classes", "data-drift" ]
614858
2
null
614832
1
null
Some points: - The results of Auto.Q and Box.test may differ not only due to the peculiarities of the test statistics, but also because of the different lag lengths that are used. For a direct comparison between the test results, specify the same lag length in Box.test as you get in Auto.Q. - Ljung-Box test might not be appropriate for diagnostic testing of ARMA models; see "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey". I have no idea whether the same applies to the Automatic Portmanteau test as well. - When choosing an ARMA model by AIC, you aim for a model that is best at prediction. That need not be the model that has white noise residuals. The additional complexity of the model needed to make the residuals white noise may not be justified from the forecasting perspective. This is about the bias-variance trade-off. AIC-based trade-off will be different than obtained by seeking white noise residuals.
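Regarding the first point, a minimal R sketch (the lag length, the residual object and the ARMA orders `p`, `q` are placeholders):

```
# use the same lag length that Auto.Q selected, so the two tests are comparable;
# fitdf adjusts the degrees of freedom for the fitted ARMA(p, q) parameters
Box.test(residuals(fit), lag = 10, type = "Ljung-Box", fitdf = p + q)
```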
null
CC BY-SA 4.0
null
2023-05-04T08:47:10.113
2023-05-04T08:47:10.113
null
null
53690
null
614859
2
null
614807
1
null
It looks like you are correct. They make the same mistake on the formula for a 3D-CNN. These appear to be misquoted/mis-copied from their cited reference, Ji et al.'s [3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images](https://www.mdpi.com/2072-4292/10/1/75), who show the 2D convolutional operation as: $$y_{cd}=\sigma\left(\sum_{n=0}^N\sum_{i=0}^M\sum_{j=0}^M w_{ij,n}x_{(c+i)(d+j),n}+b\right)$$ and the 3D convolutional operation as: $$y_{cde}=\sigma\left(\sum_{n=0}\sum_{k=0}^N\sum_{i=0}^M\sum_{j=0}^M w_{kij,n}x_{(c+i)(d+j)(e+k),n}+b\right)$$
null
CC BY-SA 4.0
null
2023-05-04T08:49:37.080
2023-05-04T08:49:37.080
null
null
354273
null
614860
1
null
null
1
45
I want to correct for alpha error accumulation across my 10 crossed linear mixed models. The random parameters are specified as follows: random intercepts for subject and stimulus are estimated, plus a random slope within subject for factor1, factor2, ..., factor10, respectively. The R code looks like this:

```
m1 <- lmer(outcome1 ~ factor1 + (factor1|subject) + (1|stimulus), data=dat)
m2 <- lmer(outcome2 ~ factor2 + (factor2|subject) + (1|stimulus), data=dat)
# ...
m9 <- lmer(outcome9 ~ factor9 + (factor9|subject) + (1|stimulus), data=dat)
m10 <- lmer(outcome10 ~ factor10 + (factor10|subject) + (1|stimulus), data=dat)
```

Is a simple Bonferroni correction appropriate for this issue?
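Would an adjustment along these lines be what is meant? A minimal sketch, assuming the 10 p-values have already been extracted from the models (e.g. via lmerTest):

```
pvals <- c(p1, p2, p3, p4, p5, p6, p7, p8, p9, p10)  # placeholders for the 10 p-values
p.adjust(pvals, method = "bonferroni")
p.adjust(pvals, method = "holm")  # never less powerful than Bonferroni
```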
How to correct for multiple testing in linear mixed models?
CC BY-SA 4.0
null
2023-05-04T09:05:50.240
2023-05-04T12:12:43.427
2023-05-04T12:12:43.427
386762
386762
[ "r", "mixed-model", "lme4-nlme", "bonferroni", "type-i-and-ii-errors" ]
614861
1
615003
null
1
25
I would like to know if there is a statistical model to analyze my problem: I want to test if the location of the tumor is related to the level of a particular biomarker - A patient may have multiple tumours (tumor sites can be correlated but let's ignore this point to start). - It is not a competitive model, since the appearance of a tumor does not prevent the appearance of another in another location (but of course death prevents the appearance of a tumor). - These are survival data: we have the time from diagnosis until the appearance of the tumour. More details: These tumors are the relapse of a primary tumor located in the breast. All patients included have a primary breast tumour. Some of them will develop secondary tumors in other locations. There are about ten possible locations. The question that arises: is the level of the biomarker related to the location? We already know that a high biomarker level decreases the risk of developing secondary tumors I have no idea if there is a model to analyze such data! And if a model exists, an implemented one on R would be ideal :-) Thanks for any suggestion
A survival model with multiple concomitant survival events
CC BY-SA 4.0
null
2023-05-04T09:18:26.450
2023-05-05T13:33:35.653
2023-05-05T08:07:30.017
226087
226087
[ "r", "survival", "cox-model", "competing-risks" ]
614862
1
null
null
0
10
Here is a graph of the 29 patients in my study, with three follow-up periods. Not all patients completed all three follow-ups. I want to test the overall trend to see if it is significantly different from 0 and whether it is increasing or decreasing. I'm wondering whether the best test to do is the Mann-Kendall test or whether it's the slope that I should test. [](https://i.stack.imgur.com/uUcz5.png)
Should I test the trend with the Mann-Kendall test or by testing the slope?
CC BY-SA 4.0
null
2023-05-04T09:25:43.247
2023-05-04T09:25:43.247
null
null
269691
[ "statistical-significance", "trend" ]
614863
1
null
null
0
10
I have about 200 test items that contribute to 8 sections of a test, called Brain_Test. The 8 sections are converted to standardised scores. The standardised scores are individually interpreted, or added to provide an overall test score and interpretation. I am interested in a small part of a section. Let's call it Section A, Part 2.5. I have conceptualised this part differently, used its scores to view the data in different ways, through creating new variables, for example reverse scoring them. E.g. where a case can score 5 x1 point items, I have given a different weighting to the items of 1,2,3,4,5, a total of 15, and not 5. I am interested in whether these new variables can be used to help predict a proportion of the overall test score. I believe this small part taps a common skill which underlies the overall Brain_Test score and could be useful clinically. However, it would appear I am guilty of the part-whole problem AKA circularity. I have mulled over the problem and sought help but remain befuddled. I have chosen not to split the data into two separate groups, and do not want to go down a complex analysis route. To avoid the apparent circularity I removed Section A completely from the overall Brain_Test score (No_A_Brain_Test), and then computed a [stepwise] regression using my new variables. They are significant at a <.001 level and each have an adj R2 value of between .18 (singlularly) and .41 (combined). Of course I can no longer convert Section A to a scaled score, nor is the overall Brain_Test score interpretable as a standardised score, there is no way to compute it as a whole when you just have the predictor values. However, in using the predictors in the model, I can plug in a case's scores and get an estimate of their overall Brain_Test score. I now intend converting the new No_A_Brain_Test score into z-scores, which will provide standardised values for this sample. I would need to also provide ranges via confidence intervals. Does anyone know how to do that on SPSS? I guess I would need to calculate upper and lower limits. Really interested to know if anyone sees fault with my statistical methodology? Maybe I have gone very wrong somewhere. Appreciate any help.
Feedback on my part-whole analysis, regression
CC BY-SA 4.0
null
2023-05-04T09:26:48.137
2023-05-04T09:26:48.137
null
null
233166
[ "regression", "multiple-regression" ]
614866
1
null
null
1
34
So I am doing a project where the response variable is the number of days until a product is rebought. Explanatory variables are the weight of the product, serving size, etc. So basically I am predicting the number of days within which customers will likely rebuy, given the product weight and serving size. All products are of the same type; for example, all products are from the bath bombs category. I fitted all the regression models that I could think of, and Zero-Inflated Negative Binomial fit the best (and a very good fit at that), because there was an excess of zero data points. So I came to the conclusion that the number of days is a count variable, so it makes sense. I showed my work to a subject matter expert in statistics, and he stopped me midway to ask why I fit a regression model. Why not survival analysis? I thought the 'number of something' is a count variable, and the model was also a great fit. Is this a case of 'It does not make sense statistically, but if it works, it works'? Are there zero-inflated survival analysis techniques? Or are there cases where the number of days to an event acts like a count variable?
Is Repurchase period in days a count variable?
CC BY-SA 4.0
null
2023-05-04T09:58:06.477
2023-05-14T19:17:41.163
2023-05-14T19:17:41.163
919
384673
[ "survival", "poisson-regression", "negative-binomial-distribution", "zero-inflation" ]
614867
2
null
614812
2
null
I think the way you are thinking about this is conflating different kinds random effects into a single entity. It wouldn't make sense to fit a model like ``` y ~ fZone + (1 | fZone) ``` where we have a parametric fixed effect for zone and a random effect for zone. What one is trying to do with either of these terms is to account for the mean of Y for each zone. The random intercept achieves this with some shrinkage towards the overall model intercept/constant term. But as shown, the pseudo-code model would be including the Zone means twice. However, in the context of your time covariate for `year`, you would want to do something like ``` y ~ cyr + (1 + cyr | fZone) ``` for example, where the `cyr` term is referred to as the "population" level effect of `cyr` and represents that average change in `y` for a unit change in `cyr` (having accounted for variation in the effects of `cyr` on each zone). The `(1 + cyr | fZone)` term models a separate mean value of `y` for each zone, plus a separate "random slope" of year for each zone, which captures the variation in the change in `y` for a unit change in `cyr` over the different zones (each zone gets it's own slope/trend). But you shouldn't think that factor terms can't be both fixed and random. If you had more levels of variation in the data, it could very much make sense to have, say, an overall treatment effect, plus different treatment effects for each level of a grouping variable. ``` y ~ treatment + (1 + treatment | field) ``` where `treatment` is a factor and the experiment was conducted over $f$ `field`s, with replication at the `field` level. The fixed `treatment` effect would capture the average effect of the treatment over the whole data set, but the random effect of `treatment` would allow the effects of the treatment levels to vary between the fields. For example, the effect of a fertilizer treatment over the control would be to increase yields (`y`) in all samples, but that effect could very much vary between fields because of natural (or human-induced) variation in the organic content of the soils; all else equal, we might expect less of an effect of an application of nitrogen fertilizer to a soil that is already rich in N content, compared with the same application of N fertilizer in a soil that is deplete in N. The main take away here is to think about how your covariates vary in the data and at what levels in the data hierarchy the covariates may have effects. It is too simplistic to distill "random effects" down to the sorts of blanket statement your colleague provided. In your specific example of a model (GAM), the term `s(fZone, CYR, bs = "re")` doesn't include the different group means; one would typically do `s(fZone, bs = "re") + s(fZone, CYR, bs = "re")` to get the equivalent of `(1 | fZone) + (0 + CYR | fZone)` (because mgcv isn't fitting correlated random effects). I personally don't think it makes sense to include both Year as a continuous (linear) trend and as a parametric factor effect as you intend - you are basically including the trend part twice. The factor based time effect is essentially a non-linear trend, but that could easily model the linear trend you are setting up in the random effect smooth; at that point your model is likely getting unidentifiable. 
In general you could model the trend(s) as: - simply a factor time fixed effect - a random effect of the time factor - a linear trend via a continuous time variable - a smooth non-linear trend via a spline of the continuous time variable - a MRF representation of an AR(1) You could have combinations of these; a low-EDF smooth of time plus a random intercept of factor time, for example. All of these could be nested within zones. Which approach you take is highly dependent upon the data you to hand (how many years, how many zones, etc) and the questions you wish to ask.
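To make the last paragraph concrete, here is a minimal mgcv sketch (the response and length covariate names are assumptions based on the question; this shows one of the options above, not a recommendation):

```
library(mgcv)

# binary "ate something" response; a smooth effect of fish length, a smooth
# population-level trend in year, plus zone-specific random intercepts and
# zone-specific random slopes (linear trends) in year
m <- gam(ate ~ s(Length) + s(CYR, k = 5) +
           s(fZone, bs = "re") + s(fZone, CYR, bs = "re"),
         data = dat, family = binomial, method = "REML")
summary(m)
```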
null
CC BY-SA 4.0
null
2023-05-04T10:01:34.550
2023-05-04T10:01:34.550
null
null
1390
null
614868
1
614886
null
1
49
In an intervention study, I have the following information: the % of students passing an exam pre- and post-intervention, and the sample sizes for the control group and the intervention group at pre- and post-intervention. How can I work out the effect size?
How to calculate effect size based on percentage and sample size?
CC BY-SA 4.0
null
2023-05-04T10:07:57.240
2023-05-08T19:15:17.933
null
null
376453
[ "sample-size", "effect-size", "percentage" ]
614869
2
null
614866
2
null
What do you do if you are still waiting for some customers? You know their time to repurchase is a number larger than the number of days you've waited so far, but not how much larger. "Censoring"/survival analysis gives you a way of dealing with that by using that information under some assumed underlying distribution. The assumed underlying distribution can be many things, including various continuous ones (you might treat a reported day as an interval-censored time that falls within that day), discrete distributions like the one you used, and even semi-parametric ones. There are extensions for things like zero inflation or "cure fractions" (in this case, people that never re-purchase). If you instead subset to only those customers where you've already observed a repurchase, you are severely biasing your estimates (and at the time you are making a prediction, you don't really know that the customer will repurchase).
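A minimal sketch of that idea in R (the variable and data-frame names are assumptions): customers who have not yet repurchased enter as right-censored observations rather than being dropped.

```
library(survival)

# status = 1 if a repurchase was observed, 0 if the customer is still waiting
# days   = days until repurchase, or days waited so far for censored customers
fit <- survreg(Surv(days, status) ~ weight + serving_size,
               data = purchases, dist = "weibull")
summary(fit)
```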
null
CC BY-SA 4.0
null
2023-05-04T10:12:26.897
2023-05-04T10:12:26.897
null
null
86652
null
614870
2
null
460674
1
null
No, it's not possible to calculate the accuracy solely based on Precision and Recall. Building on the previous answers, even if you know the sample size $N$, you'd still need more information. Given that:
- $N = TP+TN+FP+FN \implies TN = N-(TP+FP+FN)$
- Precision is defined as $P = \frac{TP}{TP+FP}$
- Recall is defined as $R = \frac{TP}{TP+FN}$
- Accuracy is defined as $Acc = \frac{TN+TP}{N}$

We have: $Acc = \frac{TN+TP}{N} = \frac{N-(TP+FP+FN)+TP}{N}=\frac{N-FP-FN}{N}$

In order to continue, we would need to have $FP$ and $FN$, or simply $TP$. We can obtain $FP = \frac{TP \cdot (1-P)}{P}$ from the Precision definition, and $FN = \frac{TP \cdot (1-R)}{R}$ from the Recall definition. But without knowing the value of $TP$, we cannot reach a solution.
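A small numeric check of this conclusion: the two confusion matrices below have identical precision and recall (both 0.5) and the same $N = 100$, yet different accuracy.

```
accuracy <- function(tp, fp, fn, tn) (tp + tn) / (tp + fp + fn + tn)

accuracy(tp = 10, fp = 10, fn = 10, tn = 70)  # P = R = 0.5, accuracy = 0.80
accuracy(tp = 25, fp = 25, fn = 25, tn = 25)  # P = R = 0.5, accuracy = 0.50
```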
null
CC BY-SA 4.0
null
2023-05-04T10:33:49.490
2023-05-04T10:33:49.490
null
null
297163
null
614871
1
614883
null
2
44
The Pareto Type II distribution, also known as the Lomax distribution, has the following density, $$f(x|\alpha,\lambda)=\frac{\alpha\lambda^{\alpha}}{(\lambda+x)^{\alpha+1}}, \qquad x>0,\ \alpha>1,\ \lambda>0$$ with $\lambda$ known. I'm trying to find an approximate confidence interval for $\alpha$. For context this is an old exam question so students would have access to statistical tables, so I'm guessing that the confidence interval will involve either Gaussian or Student T distributed random variable. Work so far: So assume we observe a sample $x_1,\dots,x_n$ from the aforementioned distribution. I found the MLE for $\alpha$ to be $$\hat\alpha=\frac{n}{\sum_{i=1}^n\log\big(\frac{x_i}{\lambda}+1\big)}$$ The following result holds for the MLE, $$\sqrt{\mathbb{E}[\mathcal{I}(\hat\alpha)]}(\hat\alpha-\alpha)\sim \mathcal{N}(0,1)$$ Now $\sqrt{\mathbb{E}[\mathcal{I}(\hat\alpha)]}=\frac{\sqrt{n}}{\hat \alpha}$ so the CI should look something like this, $$\hat\alpha \pm q\frac{\hat\alpha}{\sqrt{n}}$$ where $q$ is a quantile from the standard Gaussian distribution that corresponds to a specific significance level. Is this correct?
An approximate confidence interval for the $\alpha$ parameter of a Pareto Type II distribution when $\lambda$ is known
CC BY-SA 4.0
null
2023-05-04T10:49:46.643
2023-05-04T12:45:03.333
2023-05-04T12:28:36.463
250675
250675
[ "self-study", "confidence-interval", "t-distribution", "approximation", "pareto-distribution" ]
614872
1
null
null
0
15
I have a model for which the lower bound takes the following form $ \log p(x) \geq \mathbb{E}_{recog}[\log \frac{gen}{recog}] + \mathcal{c} $ where recog and gen denote the factorization in the recognition and generative models, respectively, and c is $ \mathcal{c} \propto \log(2), $ i.e., a constant term. Given that the optimization is over the family of variational densities, (I guess that) the constant term can be dropped. However, I cannot find any previous paper, or any other reference, where the lower bound has a constant term. At least not in the field of variational autoencoders. Does anyone know if it is usual to remove constant terms in the lower bound? ideally with a reference.
Constant terms in variational inference and lower bounds
CC BY-SA 4.0
null
2023-05-04T11:08:32.050
2023-05-04T11:08:32.050
null
null
147368
[ "autoencoders", "variational-bayes", "variational-inference" ]
614873
1
null
null
2
17
I want to study the uncertainty propagation through a nonlinear function $Y = f(X)$. I am assuming that $X$ is normally distributed and I am using the moment method approximating $f(X)$ by its first (or second) order Taylor approximation. By using this method I would obtain the output mean and standard deviation $\mu_Y$ and $\sigma_Y$. My ultimate goal is to compute some confidence bounds. I know that the success of this study depends on the truncation order of my moment method and the level of uncertainty at the input $X$. Therefore, I am doing a Monte Carlo simulation to compare the output uncertainties. By doing this, I realize that the output $Y$ isn't normally distributed. So my actual question is: if I know that the output $Y$ won't be normally distributed, is it okay to use $\mu_Y$ and $\sigma_Y$ (from the moment method) to calculate the parameters of a certain (non-normal) distribution and, therefore, compute the confidence bounds. For instance, say that the output distribution follows a [Nakagami distribution](https://en.wikipedia.org/wiki/Nakagami_distribution). Can I calculate the parameters $m$ and $\Omega$ by using the $\mu_Y$ and $\sigma_Y$ that I obtained from the moment method? Thank you in advance! :D
Can the output distribution be non-normal when using the moment method?
CC BY-SA 4.0
null
2023-05-04T11:29:33.653
2023-05-04T11:29:33.653
null
null
387201
[ "normal-distribution", "uncertainty", "error-propagation" ]
614874
1
614893
null
1
28
I was wondering if any have advice on what the correct steps/interpretation are with regards to the following questions on Cox regression. As a side note, I am using STATA MP 15.1 to analyse the data and I am quite new to survival analyses. - Are my covariates assumed to be time-variant? I have a dataset that has quarterly info on the diversity composition of a team (gender, age & nationality). The values can be different throughout time whenever a new person joins or leaves the team. However, it is not a fixed change over time such as with a drug who's effect lessens over time for example. - I recently changed the format of my dataset, I believe this formatting is better in case I want to account for time variance. Is this correct? Below the before and after. I also have an observation per quarter, but it might be better to only have an observation when a change occurred in the diversity variables? [](https://i.stack.imgur.com/yoWl5.png) - My predictor variables are continuous, and using Schoenfeld residuals, the PH assumption seems to be violated. What are potential next steps I can take? Thank you in advance for your help! Best regards, Laura
Cox regression: are my covariates time-variant? What to do about the PH assumption violation?
CC BY-SA 4.0
null
2023-05-04T11:39:19.800
2023-05-04T13:49:01.043
null
null
387207
[ "survival", "stata", "time-varying-covariate", "proportional-hazards" ]
614875
1
null
null
-1
48
The formula for the Z-score combination is $$ Z_w = \frac{\sum_{i=1}^k w_i Z_i}{\sqrt{\sum_{i=1}^k w_i^2}} $$ The Euclidean norm of the weights was used as the divisor. Why isn't it the absolute-value norm ($\sum_i |w_i|$)? Though it is called Stouffer's method, it was introduced by Liptak, T. (1958). In [the paper](http://real-j.mtak.hu/509/1/MATKUTINT_03.pdf#page=187), I could not find any discussion of why the Euclidean norm was used. It first appeared as formula 2.38 in the paper.
Why was the Euclidean norm used in the Z-score combination?
CC BY-SA 4.0
null
2023-05-04T11:40:08.117
2023-05-05T04:54:41.377
2023-05-04T13:32:22.737
169706
169706
[ "meta-analysis", "z-score" ]
614876
1
null
null
1
24
I have a question with respect to running multiple linear regressions for the entire sample and different subsamples: I have a dataset that includes a dependent variable y and several explanatory variables x1, ..., xN and z. I am interested particularly in the binary explanatory variable z. If I split the entire dataset according to this variable z, I obtain significantly different values of the mean of y for the subsamples conditioning on z = 0 and z = 1. Now, I would like to run 3 regressions of y on all of x1, x2, ..., xN, and (potentially) z. The first regression is for the entire sample, the second regression for z = 0, and the third regression for z = 1. Should I include z as regressor in the first regression if I would like to compare all 3 regressions or should I omit it to have the same regressors for all three regressions? Thank you!
Regression with sample split
CC BY-SA 4.0
null
2023-05-04T11:41:41.993
2023-05-04T12:55:40.853
null
null
383321
[ "regression", "least-squares" ]
614877
1
null
null
0
22
I'm working with the `survreg()` function of the R `survival` package, and I understand that the default scale parameter for the Weibull distribution generated by this function is on the log-linear scale. In the code snippet at the bottom of this post, you can see I transform the scale parameter to the original scale of the Weibull distribution with `exp(coef(wFit))`. I also need to extract the variance-covariance matrix for this Weibull fit, and my understanding is the variance-covariance matrix would also need be transformed to the original scale of the Weibull distribution. From my research I have found different methods for transforming the variance-covariance matrix and I am confused as to whether these methods are correct, and which I should apply for my needs. Basically, I am trying to forecast survival probabilities into future periods. For testing purposes, I truncate a survival curve (as you can see in post [How to generate multiple forecast simulation paths for survival analysis?](https://stats.stackexchange.com/questions/614198/how-to-generate-multiple-forecast-simulation-paths-for-survival-analysis) where the `lung1` object is a hypothetical truncation to 500 periods of the `lung` dataset) and then plot a curve (or curves) for future periods via the fit of the partial curve. Conservative estimates are better; better to understate survival probabilities than it is to overstate. Please, any guidance on the below attempts, listed as Method A, B, and C? Are any of them correct or advisable, given my objectives described above? Using the `lung` dataset from `survival` as the basis for experimentation. Pointing me in the direction of any digestible reference materials will also help! Another method that I have seen is simulation, involving simulating many datasets based on the estimated coefficients and scale parameters, transforming them to the original scale of the Weibull distribution, and then calculating the variance-covariance matrix of the transformed parameters. I'm not ready to try simulation until I better understand the simpler methods! Code: ``` library(survival) wFit <- survreg(Surv(time,status)~1, dist="w", data=lung) scale <- exp(coef(wFit)) shape <- 1/wFit$scale ### Method A ### transform_mat1 <- vcov(wFit) * (scale^2) transform_mat1 ### Method B ### # Outline the transformation matrix transform_mat2 <- matrix(c(1, 0, 0, -1/scale^2), nrow = 2) # Apply transformation to the new transform_mat matrix cov_est_transformed <- transform_mat2 %*% vcov(wFit) %*% t(transform_mat2) cov_est_transformed ### Method C ### # Extract the log-linear scale parameter estimates from the model log_estimates <- coef(wFit) log_scale <- log_estimates # the second parameter is log(scale) # Compute the gradient of the log-scale parameter estimate with respect to the original-scale parameters d_log_scale_d_scale <- 1 / exp(log_scale) # partial derivative of log(scale) w.r.t scale ```
Options for transforming the variance-covariance matrix generated by the survreg() function to the original scale of the Weibull distribution?
CC BY-SA 4.0
null
2023-05-04T11:45:26.913
2023-05-19T08:56:27.207
null
null
378347
[ "r", "survival", "covariance-matrix", "weibull-distribution", "parameterization" ]
614878
1
null
null
1
15
I'm looking for an association between 2 variables before and after manipulation. I ran 2 separate crosstabs and I'm able to see for which cells there's an association. Additionally, I want to see whether the association change from before (part1) to after (part2) manipulation is significant. I'm attaching a screenshot that shows a part of the crosstabs. For example, in part1 the percentage for female HeHc bottom_center is 70.1% and in part2 it is 67.4%. Can I just say there's a decrease of frequency after manipulation for females, or do I need to do an additional analysis, if so, which one? Thank you in advance! [](https://i.stack.imgur.com/tNdrj.png)
Comparing the percentage changes of two Crosstabs (Chi-square test of independence/association)
CC BY-SA 4.0
null
2023-05-04T11:47:31.757
2023-05-04T11:47:31.757
null
null
380361
[ "chi-squared-test", "independence", "association-measure" ]
614879
1
null
null
0
11
This code create the linear bondary between two classes. ``` % Töm clear all close all clc x1 = randn(50, 2); x2 = 10 + randn(50, 2); x = [x1; x2]; y = [linspace(1, 1, 50)'; linspace(-1, -1, 50)']; % Träna en linjär SVM med quadprog C = 10; H = (y*y').*(x*x'); H = H + 10*eye(size(H)); f = -ones(size(y)); Aeq = y'; beq = 0; lb = zeros(size(y)); ub = C*ones(size(y)); %alpha = qp([], H, f, Aeq, beq, lb, ub) %alpha = qp([], H, f, [], [], [], [], [], [eye(size(H)); -eye(size(H)); Aeq; -Aeq], [ub; -lb; beq; -beq]) [alpha, solution] = quadprog(H, f, [eye(size(H)); -eye(size(H)); Aeq; -Aeq], [ub; -lb; beq; -beq]); % Beräkna vikterna för linjär SVM w = sum(repmat(alpha.*y,1,size(x,2)).*x,1); % Hitta stödvektorer tol = 1e-4; sv_idx = find(alpha>tol); b = mean(y(sv_idx)-x(sv_idx,:)*w'); % Plotta träningsdata och linjär SVM figure; scatter(x(y==-1,1),x(y==-1,2),'r'); hold on scatter(x(y==1,1),x(y==1,2),'g'); h = ezplot(@(x2,x3) w(1)*x2+w(2)*x3+b); set(h,'Color','k','LineWidth',2); ylim([min(x(:, 1)) max(x(:, 1))]); xlim([min(x(:, 2)) max(x(:, 2))]); legend('Röda','Gröna','SVM-gräns', 'location', 'northwest'); grid on ``` [](https://i.stack.imgur.com/JYdMW.png) This creates a nonlinear boundary between two classes. ``` % close close all clear all % Skapa träningsdata x1 = randn(50, 2); x2 = 10 + randn(50, 2); x = [x1; x2]; y = [linspace(1, 1, 50)'; linspace(-1, -1, 50)']; % Träna en icke-linjär SVM med quadprog och RBF-kärna C = 10; sigma = 1; n = size(x,1); K = zeros(n,n); for i = 1:n for j = 1:n K(i,j) = exp(-norm(x(i,:)-x(j,:))^2/(2*sigma^2)); end end H = (y*y').*K; H = H + 10*eye(size(H)); f = -ones(size(y)); Aeq = y'; beq = 0; lb = zeros(size(y)); ub = C*ones(size(y)); %alpha = qp([], H, f, Aeq, beq, lb, ub); [alpha, solution] = quadprog(H, f, [eye(size(H)); -eye(size(H)); Aeq; -Aeq], [ub; -lb; beq; -beq]); % Beräkna vikterna för icke-linjär SVM w = zeros(1,size(x,2)); for i = 1:n w = w + alpha(i)*y(i)*x(i,:); end w % Hitta stödvektorer tol = 1e-4; sv_idx = find(alpha>tol); b = mean(y(sv_idx)-K(sv_idx,:)*alpha); b % Vektor för hyperplanet - Alternativ w = (alpha .* y)' * x b = mean(y - K*(alpha .* y)) % Plotta träningsdata och icke-linjär SVM figure; hold on; scatter(x(y==-1,1),x(y==-1,2),'r'); scatter(x(y==1,1),x(y==1,2),'g'); [x1_grid,x2_grid] = meshgrid(linspace(min(x(:,1))-10,max(x(:,1))+10,100),linspace(min(x(:,2))-10,max(x(:,2))+10,100)); X_grid = [x1_grid(:),x2_grid(:)]; K_grid = zeros(size(X_grid,1),n); for i = 1:size(X_grid,1) for j = 1:n K_grid(i,j) = exp(-norm(X_grid(i,:)-x(j,:))^2/(2*sigma^2)); end end y_grid = K_grid*alpha; y_grid = reshape(y_grid,size(x1_grid)); contour(x1_grid,x2_grid,y_grid); grid on ``` [](https://i.stack.imgur.com/aCaIC.png) This also create a nonlinear boundary between two classes. 
``` % Töm clear all close all clc % Träningsdata x1 = randn(50, 2); x2 = 10 + randn(50, 2); X = [x1; x2]; y = [linspace(1, 1, 50)'; linspace(-1, -1, 50)']; % Polynomisk kärna d = d; K = (1 + X*X').^d; % Kvadratiskt optimeringsproblem H = (y*y').*K; H = H + 10*eye(size(H)); f = -ones(size(y)); A = []; b = []; Aeq = y'; beq = 0; lb = zeros(size(y)); ub = ones(size(y)) * Inf; %alpha = quadprog(H, f, A, b, Aeq, beq, lb, ub); [alpha, solution] = quadprog(H, f, [eye(size(H)); -eye(size(H)); Aeq; -Aeq], [ub; -lb; beq; -beq]); % Vektor för hyperplanet w = (alpha .* y)' * X; b = mean(y - K*(alpha .* y)); % Plotta datapunkterna och hyperplanet x1 = linspace(min(X(:,1)), max(X(:,1)), 100); x2 = linspace(min(X(:,2)), max(X(:,2)), 100); [X1, X2] = meshgrid(x1, x2); XX = [X1(:) X2(:)]; KK = (1 + XX*X').^d; decision_values = KK*(alpha .* y) - b; decision_values = reshape(decision_values, size(X1)); contour(x1, x2, decision_values, [0 0], 'LineWidth', 2) hold on scatter(X(:,1), X(:,2), 50, y, 'filled') colormap('cool') colorbar xlabel('X_1') ylabel('X_2') title(sprintf('Polynomial kernel, degree %d', d)) ``` [](https://i.stack.imgur.com/A0AQ0.png) What they all have in common is that $w$ is a $n$ long array, in this case 2 and $b$ is a scalar. Question: If I only want to use my weights $w$ and bias $b$ to classify the data, can I create for example a nonlinear curve from only $w$ and $b$ that are going to represent the boundary? In this case I'm not using $w$ when I'm using a kernel. The array $w$ is only used when I have a linear kernel. But I want to create a nonlinear boundary in form of an equation. Is that possible?
Will Support Vector Machine always return linear weights and bias?
CC BY-SA 4.0
null
2023-05-04T12:01:50.223
2023-05-04T12:01:50.223
null
null
275488
[ "classification", "svm", "kernel-trick" ]
614880
1
null
null
1
19
I have a sample of stroke patients and I want to find the effect of education on their performance on a neuropsychological tool. The total sample is n=19. I have divided the sample into 3 groups according to their years of education, so I have 3 groups (1st group: n=5, 2nd: n=5, 3rd: n=9). Shall I do a Mann-Whitney U test or a Welch t-test? Or is it not meaningful to apply any test to such a small sample?
Which test to use for small and unequal samples
CC BY-SA 4.0
null
2023-05-04T12:01:51.030
2023-05-04T12:01:51.030
null
null
387208
[ "sample", "wilcoxon-mann-whitney-test" ]
614882
2
null
614795
0
null
If the covariance matrix is diagonal, I think the parameters can be found in the following way: $$\boldsymbol{w}=diag(\boldsymbol{\Sigma})$$ $$\boldsymbol{k}=[1,...,1]^T$$ $$\boldsymbol{\lambda}=(\boldsymbol{\mu} \odot \boldsymbol{\mu}) ⊘ diag(\boldsymbol{\Sigma})$$ $$m=0$$ $$s=0$$ where "$\odot$" and "⊘" represent the elementwise product and division, respectively, and the "diag" function takes the diagonal entries of a matrix as a vector.
null
CC BY-SA 4.0
null
2023-05-04T12:25:26.977
2023-05-04T13:19:56.863
2023-05-04T13:19:56.863
387145
387145
null
614883
2
null
614871
1
null
Your MLE is correct. The asymptotic result we want to use here is that the MLE, $\hat\alpha$, converges in distribution to $\mathrm{N}(\alpha, \mathcal{I}(\alpha)^{-1})$ as $n \rightarrow \infty$. The Fisher information is $\mathcal{I}(\alpha)=\frac{n}{\alpha^2}$. At this point, we can approximate $\mathcal{I}(\alpha)$ by $\mathcal{I}(\hat\alpha)$ to obtain the CI you give in your answer, but notice that the quantity $$ \sqrt{\mathcal{I}(\alpha)}(\hat\alpha-\alpha) = \frac{\sqrt{n}}{\alpha}(\hat\alpha-\alpha)=\sqrt{n}\left(\frac{\hat\alpha}{\alpha}-1\right) $$ is approximately standard normal for large $n$. It follows that we can construct a CI of the form $$ \left[ \left(1+\frac{q}{\sqrt{n}} \right)^{-1} \hat{\alpha}, \left(1-\frac{q}{\sqrt{n}} \right)^{-1} \hat{\alpha} \right] $$ where $q$ is a quantile from the standard normal.
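As a sanity check, a quick Monte Carlo sketch in R of the coverage of this interval (the parameter values are arbitrary):

```
set.seed(1)
alpha <- 3; lambda <- 2; n <- 200; q <- qnorm(0.975)

covered <- replicate(5000, {
  u <- runif(n)
  x <- lambda * ((1 - u)^(-1/alpha) - 1)    # inverse-CDF draw from the Lomax
  a_hat <- n / sum(log(x / lambda + 1))     # MLE from the question
  lo <- a_hat / (1 + q / sqrt(n))
  hi <- a_hat / (1 - q / sqrt(n))
  lo <= alpha && alpha <= hi
})
mean(covered)   # should be close to 0.95
```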
null
CC BY-SA 4.0
null
2023-05-04T12:45:03.333
2023-05-04T12:45:03.333
null
null
238285
null
614884
1
null
null
1
31
I need some help understanding how to write a variance equation. I have the general variance equation written as $$ \sigma_{t}^2=\alpha_0+\alpha_1u_{t-1}^2+\beta_1\sigma_{t-1}^2. $$ I am wondering how to make this specific to a particular series, for example if I used returns on the S&P 500 index. In this study I entered the following into EViews: `SPr c SPr(-1)`
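My tentative reading is that this specifies an AR(1) mean equation (constant plus one lag of the return) with GARCH(1,1) errors, i.e. something like $$ SPr_t = c + \phi\, SPr_{t-1} + u_t, \qquad u_t \mid \mathcal{F}_{t-1} \sim N(0,\sigma_t^2), $$ $$ \sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2 + \beta_1 \sigma_{t-1}^2, $$ where $u_t$ is the residual of that particular mean equation. Is that the right way to write the model-specific equations?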
GARCH(1,1) variance equation
CC BY-SA 4.0
null
2023-05-04T12:46:20.563
2023-05-05T13:23:52.837
2023-05-05T13:23:02.647
53690
387212
[ "econometrics", "garch" ]
614885
2
null
614876
1
null
It sounds as though you might be exploring a question of moderation. That is to say, does the relationship between a dependent variable and the predictors change for different groups? As such, the two models you want to explore are the model that predicts the dependent variable from all your predictors, $x_1$ ... $x_n$, and your dichotomous grouping variable $z$, and then the model that also includes the interactions (products) of all your predictors with the dichotomous variable: $z\cdot x_1$ ... $z\cdot x_n$. The significance of the coefficients for these product terms will indicate whether or not there is a different relationship between your two groups. Happy to clarify more if needed.
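A minimal R sketch of those two models (variable names are assumed):

```
m0 <- lm(y ~ x1 + x2 + x3 + z, data = dat)     # main effects only
m1 <- lm(y ~ (x1 + x2 + x3) * z, data = dat)   # adds all predictor-by-z interactions
anova(m0, m1)  # joint test of whether the slopes differ between z = 0 and z = 1
```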
null
CC BY-SA 4.0
null
2023-05-04T12:55:40.853
2023-05-04T12:55:40.853
null
null
199063
null
614886
2
null
614868
3
null
This problem amounts to a $2\times 2$ contingency table (actually 2 such tables if you look at pre- and post-intervention). The variable for the columns could be the pass/fail counts, and the variable for the rows could be the treatment condition (control & intervention). Assuming appropriate conditions have been met, this is a chi-square analysis, and the conventional effect size here is the phi coefficient: $$\phi = \sqrt{\frac{\chi^2}{N}}$$ (Curiously, this also happens to be the correlation if you code the two variables as ones and zeros.) The convention with this effect size is to classify with the cut-offs of 0.1, 0.3, and 0.5, for small, moderate, and large, respectively. --- Update #1: Based on the information provided in the comments, this would better be described as a $4\times 2$ contingency table. The first variable is the treatment condition (dichotomous), and the second variable is the pre/post status description. You would have 4 conditions for this variable: achieved at pre and failed to achieve at post, failed to achieve at both pre and post, achieved at both pre and post, and failed to achieve at pre and achieved at post (ordered by how strongly they substantiate a treatment effect). In this case, you would use Cramér's $V$ as the effect size measure: $$V = \sqrt{\frac{\chi^2}{N \cdot \text{min}(R-1,C-1)}}$$ where $R$ is the number of rows and $C$ is the number of columns in the table. This effect size is compared to the same cut-offs as the $\phi$ coefficient.
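A small R sketch with made-up counts, showing how the $\chi^2$ statistic and these effect sizes could be computed:

```
tab <- matrix(c(30, 20, 18, 32), nrow = 2,
              dimnames = list(group = c("control", "intervention"),
                              exam = c("fail", "pass")))
chi <- chisq.test(tab, correct = FALSE)

phi <- sqrt(unname(chi$statistic) / sum(tab))                          # for the 2x2 table
V   <- sqrt(unname(chi$statistic) / (sum(tab) * (min(dim(tab)) - 1)))  # Cramer's V
c(phi = phi, CramersV = V)
```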
null
CC BY-SA 4.0
null
2023-05-04T13:02:13.837
2023-05-08T19:15:17.933
2023-05-08T19:15:17.933
199063
199063
null
614887
2
null
614875
0
null
Suppose we have exactly two $z$ values which are equal and we set the weights equal to one. Then the value of Stouffers method reduces to $\frac{2z}{\sqrt2}$ which is larger than $z$ and hence leads to a smaller $p$-value as we would expect. Now do the same for the $L_1$ norm and you get $\frac{2z}{2}$ which equals $z$ and so having two studies giving the same result does not lead to a smaller $p$-value.
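A tiny numeric illustration of this point, with two identical $z$-values of 1.96 and unit weights:

```
z <- c(1.96, 1.96); w <- c(1, 1)
sum(w * z) / sqrt(sum(w^2))  # 2.77: the evidence accumulates, smaller combined p-value
sum(w * z) / sum(abs(w))     # 1.96: identical to a single study, p-value unchanged
```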
null
CC BY-SA 4.0
null
2023-05-04T13:14:59.497
2023-05-04T13:14:59.497
null
null
101426
null
614888
1
null
null
0
15
Given the IV estimator for $\beta_1$ (From $y = \beta_0 +\beta_1x + u$): $$\beta_1 = \frac{Cov(z, y)}{Cov(z, x)}$$ And it's sample analog: $$\hat{\beta_1} = \frac{\hat{Cov}(z, y)}{\hat{Cov}(z, x)}$$ Which simplifies to: $$\hat{\beta_1} = \frac{\sum_{i=1}^n(z_i-\bar{z})(\beta_0 +\beta_1x_i + u)}{\sum_{i=1}^n(z_i - \bar{z})(x_i - \bar{x})} \rightarrow \beta_1 + \frac{\sum_{i=1}^n(z_i - \bar{z})u_i}{\sum_{i=1}^n(z_i - \bar{z})(x_i - \bar{x})} $$ Surely this estimator is unbiased because $Cov(z, u)=0$ by the assumption of exogeneity when dealing with instruments? Even if the above represents the sample covariance, because the sample covariance is an unbiased estimator of the population covariance, then surely $\hat{\beta_1}$ is still unbiased? Aside from that, why is it necessary to take the conditional expectation on both $X$ and $Z$?. $$E[\hat{\beta_1}] = E[E[\hat{\beta_1}|X, Z]] \rightarrow \frac{\sum_{i=1}^n(z_i - \bar{z})E[u_i|X, Z]}{\sum_{i=1}^n(z_i - \bar{z})(x_i - \bar{x})}$$ I can see why this expression isn't equal to zero (Because of the implication of the conditional on $X$), but why do we have to take that expectation instead of simply $E[\hat{\beta_1}] = E[E[\hat{\beta_1}|X]] $ or $E[E[\hat{\beta_1}|Z]]$? Is it necessary to condition on all of the data within the model?
Bias of Instrumental Variables Estimator
CC BY-SA 4.0
null
2023-05-04T13:15:33.357
2023-05-04T13:15:33.357
null
null
386405
[ "regression", "conditional-probability", "conditional-expectation", "instrumental-variables" ]
614889
1
null
null
0
14
I have some survival data and need to fit survival models to it and extrapolate. I plotted the log cumulative hazard and the empirical hazard plot (using muhaz). There appears to be a change in the hazard around 400 days, but the two curves appear to be parallel. I'm considering my options as: fitting standard proportional hazards models (exponential, Weibull, Gompertz), accounting for the turning point in the hazard with accelerated failure time models (log-logistic, log-normal, generalised Gamma), and also using a spline with 1 knot around 400 days. I was also considering a piecewise approach with two exponentials, as it looks as if the hazards are constant before and after the turning point. I have a few questions around my approaches.

- Does what I suggested, given the plots below, look reasonable?
- The log cumulative hazard plot can determine whether hazards are proportional and whether a joint model vs two separate models for each arm is appropriate. What plot exists for deciding whether to model AFT models jointly or independently?
- Is it possible to extrapolate a spline model?

Any help would be really appreciated, thanks [](https://i.stack.imgur.com/SYphE.png) [](https://i.stack.imgur.com/Hoqia.png)
Options on modelling a hazard function with a turning point
CC BY-SA 4.0
null
2023-05-04T13:21:07.430
2023-05-04T13:21:07.430
null
null
211127
[ "survival", "hazard", "proportional-hazards", "accelerated-failure-time" ]
614890
1
null
null
0
8
Find the inverse Fourier transform of: $F_1 = -i\omega e^{-|2\omega|}$ I understand using the formula book to find that the IFT of the $ e^{-|2\omega|}$ is equal to $\frac{4}{\pi} \frac{1}{(4+t^2)}$ but the full answer is equal to $\frac{4}{\pi} \frac{t}{(4+t^2)^2}$ and I don't understand how to get the final answer.
Inverse Fourier transforms
CC BY-SA 4.0
null
2023-05-04T13:24:56.230
2023-05-04T13:25:18.113
2023-05-04T13:25:18.113
387214
387214
[ "fourier-transform" ]
614891
1
null
null
0
58
I have two stationary time series $X(t)$ and $Y(t)$ that I know are correlated to each other, but they also each have autocorrelations. Each has a credible interval band at every time $t$ defined by their respective means $\mu_x(t)$ and $\mu_y(t)$ and standard deviations $\sigma_x(t)$ and $\sigma_y(t)$ (I could assume normality if I have to but do not for the time being). My goal is to describe the time series $Z(t)$ defined as the following weighted sum: $Z(t):=aX(t)+bY(t)$ where $a\in \mathbb{R}$, $b\in \mathbb{R}$. In other words, $X(t)-Y(t)$ is the result of $a=1,b=-1$, and the simple mean $\frac{X(t)+Y(t)}{2}$ is the result of $a=0.5,b=0.5$, etc. The mean (i.e. the point estimate at time $t$) is very straightforward because it is a mean of means: $\mu_z(t)=a\mu_x(t)+b\mu_y(t)$ In order to find $\sigma_z(t)$ I need the square root of the following identity: $\mathbb{V}(aX(t)+bY(t))=a^2\sigma_x(t)^2+b^2\sigma_y(t)^2 + 2ab\,\rho_{x,y}\sigma_x(t)\sigma_y(t)$ However, I know the autocorrelations in $X(t)$ and $Y(t)$ can inflate the estimate of the needed Pearson correlation $\rho_{x,y}$ (spurious correlation and all that) despite my time series being stationary. I want to emphasize that I am not interested (at this point) in merely finding the important lags by prewhitening $X(t)$ and $Y(t)$ in a cross-correlation analysis. I also do not believe that transfer functions are appropriate here because it would be a stretch to assume that $X(t)$ influences $Y(t)$ but $Y(t)$ does not influence $X(t)$. I need a reasonable estimate $\hat{\rho}_{x,y}$ at lag 0 simply to define the uncertainty of my time series $Z(t)$. My questions are:

- Provided my time series are long enough and I was OK with a brute-force solution, can I simply take the correlation of the thinned time series (like one would do with MCMC), such that I remove elements that are so close as to be within the range of the autocorrelations?
- Hyndman's fable package in R handles a similar situation where quantile forecasts are combined while compensating for correlations in the errors. I looked at the package implementation and it seems it takes the correlation of the de-autocorrelated residuals in the above formula. This would be great if I could simply fit separate ARMAs (or ARIMAs for non-stationary series) to both $X(t)$ and $Y(t)$ and take the correlation of the residuals! However, in Hyndman's situation the ARMA models are fit to the same time series, and the correlation of the residual terms of the two different models is applied to the forecast values to eventually produce quantiles. Furthermore, my Applied Time Series notes from grad school explicitly say that calculating cross-correlations on pre-whitened time series can identify correlations significantly different from zero, but the pre-whitening interferes with the magnitude of the linear association. So which is it? Can I fit ARMA models and calculate the correlation of the residuals for $\hat{\rho}_{x,y}$? Does it make a difference if I find the ARMA for $X(t)$, filter $Y(t)$ with those coefficients, and take the correlation of the residuals (this is typically how prewhitening is applied, from what I can tell)?

Practical context for my questions

This setup would be extremely useful if I had a time series of the reproductive number $R_t$ (average # of people each person infects in a viral outbreak) for the country and also for the locality. They are obviously correlated. 
I might like to know if the difference between the locality's $R_t$ and the national $R_t$ is different than 0 in a statistically significant way because this identifies patterns in spatial heterogeneity. I also might like to take the average of $R_t$'s from different viral variants weighted by the percentage of each variant in the population.
How to calculate the variance of the weighted sum of two stationary time series with autocorrelation?
CC BY-SA 4.0
null
2023-05-04T13:38:53.660
2023-05-04T13:38:53.660
null
null
298004
[ "time-series", "probability", "correlation", "autocorrelation", "variance-decomposition" ]
614892
1
null
null
0
12
I'm working on a problem statement which involves ranking some short-lived items in an order such that the items expected to sell the most in the next n days are ranked on top - basically ranking based on expected sales. I've labelled historical data for such items. The data includes information about the items themselves - item metadata, sales trends for the item, user affinity towards the item, etc. along with the ideal ranks for different items on any given day. The ideal ranks are obtained by converting sales data in the next n days and sorting them to get ranks. My question is are there any supervised ranking algorithms that expect day-wise and listwise candidate metadata and rank as input? I've come across adarank but from what I understand that is more tailored for information retrieval systems than for ranking problems. One approach I can think of is fitting a regression algorithm to the sales data directly and then computing ranks at inference time by sorting based on the predicted output.
Supervised ranking algorithms
CC BY-SA 4.0
null
2023-05-04T13:46:40.700
2023-05-04T13:46:40.700
null
null
298393
[ "regression", "machine-learning", "time-series", "ranking", "recommender-system" ]
614893
2
null
614874
0
null
First, if the values of covariates are changing over time then you have time-varying covariates. Predictable things like age might be handled by simply including the value at study start, but things like "diversity composition" need to be handled time period by time period. Be warned, however, that the modeling effectively works on individual event times and will only use the covariate values in place for all at-risk cases at each specific time. History of prior values is not considered unless you construct such historical variables. In your case without such historical variables it would be the current "diversity composition" that is associated with risk of failure, not the diversity history. Second, your reformatting of the data set puts it into the "person-period" format used for discrete-time survival analysis, which seems like it might be better for your application than a continuous-time Cox model. That then becomes a binomial regression, with right-censoring taken into account by the omission of cases at times after they are no longer under observation. Using a complementary log-log link instead of a logit link for the binomial regression provides a "grouped" proportional hazards model. See [this page](https://stats.stackexchange.com/q/429266/28500). For a Cox model you could use the alternate format that you suggest, only starting a new data row when the covariate values change, but a Cox model can work with the person-period format. The only disadvantage of the person-period format in your case is that it might increase the size of the data set, but that's probably not a big issue here. Third, there are multiple ways to handle violations of proportional hazard assumptions, with details depending on the specific situation. Chapter 6 of [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842) is a useful resource on that, as are [many pages on this site](https://stats.stackexchange.com/search?q=violat*+proportional+hazard*). I'd first make sure that the continuous predictors have appropriate functional forms in the model. If you incorrectly assume a linear association between log-hazard and a predictor, then that incorrect assumption could end up expressing itself as an apparent violation of proportional hazards. Model the continuous predictors flexibly, for example with regression splines.
null
CC BY-SA 4.0
null
2023-05-04T13:49:01.043
2023-05-04T13:49:01.043
null
null
28500
null
614894
1
null
null
0
8
I developed an ABM to evaluate land use change and I tested 4 different scenarios; each scenario contains some stochasticity, so I ran each scenario 40 times. The 4 scenarios are made of 2 times 2 contrasting scenarios: 2 climate scenarios (1 and 2) and 2 economic scenarios (A and B). We therefore have the following combinations: 1A - 1B - 2A - 2B. Among other analyses, I'd like to test if the two climatic scenarios lead to different outputs. The first option I see is to consider each observation from each run as a repeated measure. I could therefore perform a two-way ANOVA with repeated measures. The second option I see is to perform a three-way ANOVA where the "run" is a random factor and therefore we would have only one observation (n=1) for each combination of factors. This is an open question for statistical advice.
Statistical analyses of agent-based models outputs from several scenarios
CC BY-SA 4.0
null
2023-05-04T13:59:19.157
2023-05-04T13:59:19.157
null
null
92091
[ "anova", "repeated-measures" ]
614895
1
null
null
0
13
My question is, can I use linear regression for panel data if one variable is constant for some period? Particularly, I have daily stock prices, and the other variable, a score for that company, is the same for the whole year. Thus I have around 252 different observations of the stock price, and for all of them there is a single corresponding score. The same repeats for the other years. I have looked and did not find such examples. I am new to this, so I would really appreciate it if you could give me some advice.
Can I use linear regression for panel data if one variable is constant for some period?
CC BY-SA 4.0
null
2023-05-04T09:00:10.040
2023-05-04T14:02:34.257
null
null
null
[ "r", "regression", "panel-data" ]
614896
1
null
null
0
30
I recently came across [this study](https://www.mdpi.com/2073-8994/14/7/1422) describing the benefits of the author's relaxed adaptive LASSO regression. The author describes a simple algorithm (which appears to be glmnet, effectively), as well as a newer algorithm which is slightly optimized for speed. It occurs to me that you could possibly do something similar in glmnet like this: ``` # Given some X.train and y.train # Generate Penalties set.seed(999) cv.ridge <- cv.glmnet(X.train, y.train, type.measure = "mae", alpha = 0, # This ensures ridgeiness #parallel = T, standardize = T) penalty <- 1/abs(as.numeric(coef(cv.ridge, s = cv.ridge$lambda.min))[-1]) # Adaptive LASSO Regression w/ Ridge Weights model.meta <- cv.glmnet(X.train, y.train, type.measure = "mae", alpha = 1, # This ensures lassoiness #parallel = T, standardize = T, penalty.factor = penalty, nfolds = 3, relax = T ) ``` However, I just want to verify that this is legit since I can't find any other examples of a relaxed adaptive lasso. Is anyone aware of any issue with using this configuration? Thanks in advance!
Relaxed Adaptive Lasso
CC BY-SA 4.0
null
2023-05-04T14:04:24.617
2023-05-04T19:57:31.597
2023-05-04T19:57:31.597
53690
383976
[ "lasso", "glmnet" ]
614897
1
null
null
3
135
You have a basket of $n$ assets whose returns are multivariate normal with zero mean. The correlation between any pair of assets is 1/2. What is the probability that $k$ of the assets will have a positive return? I think the answer should be $\frac{n!}{k!(n-k)!} \frac{1}{2^n}$, regardless of what the correlation is. Is this correct? Simulation:

```
import numpy as np

def simu(n, rho):
    mean = [0] * n
    cov = np.ones((n, n))*rho + (1-rho)*np.eye(n)  # equicorrelated covariance matrix
    N = 5000
    x = np.random.multivariate_normal(mean, cov, N).T
    y = [sum([x[i][j] > 0 for i in range(n)]) for j in range(N)]
    print(np.histogram(y))

simu(10, 0.5)

Output:
(array([457, 470, 481, 465, 426, 447, 447, 444, 487, 876])
```

Using simulation, for $\rho=1$, I get the probability to be 0.5 for $k=0$ and $k=n$ and 0 for other $k$. How to prove this?
Probability that $k$ variables of multivariate Gaussian are positive
CC BY-SA 4.0
null
2023-05-04T14:05:03.677
2023-05-04T20:15:23.690
2023-05-04T15:36:43.157
10907
10907
[ "probability", "normal-distribution" ]
614898
2
null
614669
1
null
If you included the interactions in your model to start with, you should do your post-modeling calculations with all of the coefficients in your model, even if some aren't individually "statistically significant." In particular, the apparent "significance" (in terms of difference from 0) of lower-level coefficients of predictors involved in interactions (like `Family_B`) can depend on how the interacting predictors (like `SES`) are coded. Your general approach seems to be a good start, but it doesn't provide confidence intervals for the specific scenarios you are examining. For confidence intervals you need to use not only the variances of the individual coefficient estimates but also the covariances among them. Regression models typically contain the needed variance-covariance matrix but only directly report the variances of the individual coefficients. You then can use the formula for the [variance of a weighted sum](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables) to get the corresponding variances (and thus standard errors and confidence intervals) for the sums of coefficients like those you are calculating. You should do that work in the original log-odds scales of the coefficients, followed by exponentiation at the end to report results in terms of odds ratios. Instead of doing this by hand yourself, it's less error-prone to employ well-vetted post-modeling tools. In R one good choice is the [emmeans package](https://cran.r-project.org/package=emmeans). In response to comment and edited question The list of pairwise comparisons helps to document which differences between scenarios are statistically significant. If both 95% confidence limits are on the same side of 0 (either both positive or both negative) then that suggests a significant difference. It's not clear whether this display takes into account the multiple comparisons, however, so check the documentation. The `emmeans` package can do a Tukey correction for all pairwise multiple comparisons. If you want overall estimates of the statistical significance of predictors and interactions, then you need to do joint tests on all the corresponding model coefficients. The default of the `Anova()` function (note the capital "A") in the R [car package](https://cran.r-project.org/package=car) is a good choice. That will indicate overall significance of SES, family type, and their interaction across all their levels at once. Even if the overall interaction isn't "significant" on that basis, however, the individual comparisons shown in the table (based on the interaction terms) are still OK provided that the multiple-comparison issue is addressed appropriately.
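As a sketch of what that workflow could look like in R (the model object name `fit` and the factor names `Family` and `SES` are placeholders, not taken from your model):
```
library(car)
library(emmeans)

# Joint (overall) tests for SES, family type and their interaction
Anova(fit)

# Estimated marginal means on the log-odds scale for each Family x SES cell
emm <- emmeans(fit, ~ Family * SES)

# All pairwise comparisons with a Tukey multiplicity adjustment,
# back-transformed to odds ratios with confidence intervals
prs <- pairs(emm, adjust = "tukey")
confint(prs, type = "response")
```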
null
CC BY-SA 4.0
null
2023-05-04T14:15:47.497
2023-05-06T12:25:58.067
2023-05-06T12:25:58.067
28500
28500
null
614899
1
null
null
0
14
I have data which reports the relative performance (as a percentage) of several candidate objects with respect to a standard object. In other words, the performance of the standard object is always reported as 100%, while that of the candidate objects can be below or above 100%, i.e. the performance could be 200% or 10%, but never negative. Apart from the dependent variable, i.e. the relative performance %, there are several other independent variables (continuous and discrete), none of which are percentages or relative percentages. Which statistical modelling techniques are best suited to model the relative performance of the candidate objects? I have gone through some other questions related to modelling percentages via linear regression, logit transformation, logistic regression or beta regression. I could transform my data by subtracting 100 from the relative performance % to arrive at a transformed dependent variable that takes values on either side of zero (bounded below by -100, since the performance is never negative), and then perhaps use multiple linear regression (sketched below)? Are there any other methods I could try?
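For concreteness, the transformation idea I mean would look roughly like this in R (`dat`, `perf`, `x1` and `x2` are placeholder names, not my real variables):
```
# Centre the relative performance at the 100% baseline
dat$perf_centered <- dat$perf - 100

# Multiple linear regression on the transformed outcome
fit <- lm(perf_centered ~ x1 + x2, data = dat)
summary(fit)
```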
Modeling relative percentage figures
CC BY-SA 4.0
null
2023-05-04T14:18:51.423
2023-05-04T15:31:26.027
2023-05-04T15:31:26.027
160416
160416
[ "modeling", "percentage" ]
614900
2
null
308397
0
null
There seems to be something obvious not discussed above. If the treatment effect is constant over individuals, $Y_i^1-Y_i^0=c$ for every $i$, then the ATT and ATE as defined above should be equal. This can be seen as there being no "effect modifiers". My intuition is that balancing effect modifiers between groups would also be sufficient for equality of the ATT and ATE: the distribution of treatment effects would then be independent of treatment. Perhaps someone has already formalized this?
null
CC BY-SA 4.0
null
2023-05-04T14:22:54.270
2023-05-04T14:22:54.270
null
null
131845
null
614901
1
null
null
0
29
I'm just exploring Generalized Linear Models for the first time, and trying to see if I can correctly fit the simplest model I can think of. So I'm generating random values from a fixed Negative Binomial distribution and then fitting them with a GLM.
```
# Imports assumed (they were not shown in my original snippet);
# NegativeBinomialFamily is statsmodels' NegativeBinomial family
import numpy as np
import pandas as pd
from scipy.stats import nbinom
from statsmodels.api import GLM
from statsmodels.genmod.families import NegativeBinomial as NegativeBinomialFamily

np.random.seed(5555555)
weeks = np.arange(np.datetime64("2020-01-01"), np.datetime64("2022-12-01"), np.timedelta64(1, 'W'))
counts = pd.Series(nbinom.rvs(10, .5, size=weeks.shape[0]))
counts.index = weeks
obs, last = counts[0:-1], counts[-1:]

# Design matrix: the datetime64 index values (as integers) plus an intercept column
X = np.array(obs.index.values.tolist())
X = np.stack((X, np.ones(X.shape[0])), axis=1)
X.shape

y = obs
mdl = GLM(y, X, family=NegativeBinomialFamily())
results = mdl.fit()
results.summary()
```
I would think that the slope for my one independent variable of 'date' should just be 0. But this code always gives me a very small positive value:
```
Generalized Linear Model Regression Results
Dep. Variable:    y                  No. Observations:    152
Model:            GLM                Df Residuals:        151
Model Family:     NegativeBinomial   Df Model:            0
Link Function:    Log                Scale:               1.0000
Method:           IRLS               Log-Likelihood:      -504.15
Date:             Thu, 04 May 2023   Deviance:            28.334
Time:             10:51:39           Pearson chi2:        24.8
No. Iterations:   4                  Pseudo R-squ. (CS):  -0.001740
Covariance Type:  nonrobust

          coef        std err     z        P>|z|    [0.025      0.975]
x1        1.396e-18   5.25e-20    26.594   0.000    1.29e-18    1.5e-18
const     8.594e-37   3.23e-38    26.594   0.000    7.96e-37    9.23e-37
```
What's going on here? It looks like the model is very confident that it has an itty bitty positive slope. Am I setting up the model wrong? Am I interpreting the output wrong?
Why does this Generalized Linear Model think random Negative Binomial values have a slope?
CC BY-SA 4.0
null
2023-05-04T14:25:29.677
2023-05-04T15:09:35.683
2023-05-04T15:09:35.683
44269
313098
[ "python", "generalized-linear-model", "negative-binomial-distribution" ]
614902
1
null
null
1
54
I am working on a VARX model and want to include an MA term, but I have not found any R package to build a VARMAX model. The MTS package can be used to fit a VARMA model, but I want to include exogenous variables while fitting the model. Is there any R package available for VARMAX modelling? If not, is there any alternative approach?
VARMAX model in R | Fit VARMA model including exogenous variables
CC BY-SA 4.0
null
2023-05-04T14:25:38.260
2023-05-08T15:23:39.383
2023-05-08T15:23:39.383
387216
387216
[ "r", "time-series", "references", "vector-autoregression", "moving-average" ]
614903
1
null
null
0
15
Say that I have a vector of success rates, each of which is a ratio bounded on [0, 1]. I know how to get a normal approximation to these data; however, I'm interested in the MLE fit of a Beta(a, b) distribution that best describes this distribution of rates. I'm well aware of the Beta-Binomial Bayesian model. However, I do not have the observed outcomes {0, 1}, just the aggregated rate per individual. What is the MLE formula, or the best way to infer the (a, b) parameters of the Beta distribution that best fits my distribution of rates? A sketch of the kind of fit I mean is below.
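For reference, the kind of numerical MLE I have in mind could be set up like this in R (a sketch only; `rates` is my vector of rates and is assumed to lie strictly inside (0, 1)):
```
library(fitdistrplus)

# Numerical maximum-likelihood fit of a Beta(a, b) to the observed rates
fit <- fitdist(rates, "beta", method = "mle")
fit$estimate   # shape1 = a, shape2 = b
```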
MLE fit for beta distribution given vector of rates?
CC BY-SA 4.0
null
2023-05-04T14:29:31.847
2023-05-04T14:29:31.847
null
null
288172
[ "maximum-likelihood", "beta-distribution" ]
614904
1
null
null
0
24
I am trying to measure the relationship between the price of a futures contract and the current price of the underlying asset on the day the futures contract is purchased. For this I have a dataset with daily data on both, amounting to about 8000 observations, of which 3500 have values for both variables. The variable for the price of the underlying asset, however, has a skewness of 5.3 and a kurtosis of 44 because of some extreme values (e.g. p50 is 32, p99 is 185 and the max is 462). When log-transforming it, the skewness and kurtosis change to -0.5 and 7.12. And when running a log-log regression without the constant, the coefficient is 0.99 (0.98 R-sq, t = 574). Is this a valid result to argue that the current price is strongly connected to the price of the futures contracts? In that case, why do the non-transformed variables have Pearson and Spearman coefficients of 0.5?
Can I use log-log OLS with no constant to measure the relationship between two variables in a non-normal distribution?
CC BY-SA 4.0
null
2023-05-04T14:40:52.280
2023-05-04T14:40:52.280
null
null
387218
[ "regression", "logistic", "correlation", "least-squares", "nonparametric" ]
614905
1
null
null
0
15
I am conducting a research study on sex differences in the physical limitations of patients with ankylosing spondylitis (AS), using the Bath Ankylosing Spondylitis Metrology Index (BASMI). The [BASMI](https://www.mdapp.co/basmi-score-bath-ankylosing-spondylitis-metrology-index-calculator-616/) is a composite score that consists of five components, including cervical rotation (measured in degrees), tragus-to-wall distance, intermalleolar distance, modified Schober's test, and lumbar side flexion. Each component is scored on a different scale, with most of them being distances measured in centimeters, but each having a distinct range. For example, intermalleolar distance can be up to 120 cm, whereas tragus-to-wall distance is usually between 5-15 cm. To account for the differences in scale, I am considering two approaches for my analysis: (1) standardizing each component by converting them to z-scores and performing a multivariate linear mixed model analysis, and (2) performing separate analyses for each component (i.e, a univariate analysis). My main research question is focused on analyzing sex differences in the physical limitations of the patients, and how these differences vary across the individual BASMI components over time after treatment. However, I am concerned about how to interpret the regression coefficients if I choose to standardize the components, as well as how to explain the implications of this approach in my research report or paper. On the other hand, performing separate analyses for each component may result in a loss of statistical power due to smaller sample sizes for each component, and may not account for the correlations between the components. I would appreciate any advice or guidance on the advantages and disadvantages of these approaches, and how to choose the most appropriate approach for my specific research question and data.
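For the standardization step in approach (1), I have in mind something like the following in R (a sketch; `basmi_long`, `component` and `value` are placeholder names for a long-format version of my data):
```
# Convert each BASMI component to a z-score within that component
basmi_long$z <- ave(basmi_long$value, basmi_long$component,
                    FUN = function(x) (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE))
```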
Navigating Differences in Scale: A multivariate approach with standardization or separate component analysis?
CC BY-SA 4.0
null
2023-05-04T14:54:13.307
2023-05-04T14:54:13.307
null
null
335198
[ "regression", "mixed-model", "interaction", "multivariate-analysis", "standardization" ]
614906
2
null
614884
0
null
The conditional variance equation of a GARCH(1,1) model is
$$
\sigma_{t}^2=\omega+\alpha_1u_{t-1}^2+\beta_1\sigma_{t-1}^2.
$$
(I use $\omega$ in place of $\alpha_1$ as I think it is more standard.)
When you fit the model, you obtain
- the parameter estimates $\hat\omega$, $\hat\alpha_1$ and $\hat\beta_1$,
- residuals $\hat u_1,\dots,\hat u_T$,
- fitted conditional variances $\hat\sigma_1^2,\dots,\hat\sigma_T^2$.

The parameter estimates, the residuals and the fitted conditional variances are all concrete numbers that satisfy the conditional variance equation at each time point.
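For instance, with the rugarch package in R (just one possible implementation, not necessarily the software you are using, and with `x` standing in for your return series), these quantities are extracted as follows:
```
library(rugarch)

spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model     = list(armaOrder = c(0, 0), include.mean = FALSE))
fit <- ugarchfit(spec, data = x)

coef(fit)        # parameter estimates: omega, alpha1, beta1
residuals(fit)   # fitted residuals u_t
sigma(fit)^2     # fitted conditional variances sigma_t^2
```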
null
CC BY-SA 4.0
null
2023-05-04T15:22:22.720
2023-05-05T13:23:52.837
2023-05-05T13:23:52.837
53690
53690
null
614907
1
null
null
0
22
I am dealing with a binary classification problem (class 0/1) with class imbalance. Given the vector of predictions, I would like to compute:
- F1-Score for class 0
- F1-Score for class 1
- Weighted Average F1-Score
- Inverse Weighted Average F1-Score

The last one is a custom metric that I think could be useful: it works exactly like the Weighted Average F1-Score (a weighted average of all per-class F1-Scores), but instead of using weights that are proportional to the support (the number of actual occurrences of a class), it uses weights that are inversely proportional (reciprocal) to the support. The idea is to give more importance to the F1-Score of the minority class, as I am interested in predicting occurrences of this class well. First question: is it a good idea to use this custom metric? Is there any downside? Why can't I find anything about it on the Internet? Second question: how do I implement it in Python? This is what I tried so far:
```
from sklearn.metrics import f1_score
import numpy as np

y_true = [0, 0, 1, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 0, 0, 1, 1]

f1_class0 = f1_score(y_true, y_pred, pos_label=0)
f1_class1 = f1_score(y_true, y_pred, pos_label=1)
f1_weighted = f1_score(y_true, y_pred, average = 'weighted')

class_counts = np.bincount(y_true)
class_weights = class_counts / len(y_true)
inverse_class_weights = 1 - class_weights
inverse_sample_weights = np.array([inverse_class_weights[label] for label in y_true])
f1_inverse = f1_score(y_true, y_pred, average = 'weighted', sample_weight = inverse_sample_weights)

print("F1-Score for class 0:", f1_class0)
print("F1-Score for class 1:", f1_class1)
print("Weighted Average F1-Score:", f1_weighted)
print("Inverse Weighted Average F1-Score:", f1_inverse)
```
output:
```
F1-Score for class 0: 0.6666666666666666
F1-Score for class 1: 0.8
Weighted Average F1-Score: 0.7666666666666667
Inverse Weighted Average F1-Score: 0.8285714285714286
```
The first three metrics are computed correctly, but the custom metric is not: I would expect a value of `(0.6666666666666666 * 0.75) + (0.8 * 0.25) = 0.7`, since the support proportion is 0.25 for class 0 and 0.75 for class 1 (so the "inverse support proportions" are 0.75 and 0.25 respectively), while I don't understand where the value `0.8285714285714286` comes from. Can someone please help me to understand what is going on? Did I make some mistakes? And above all, why did no one ever develop this metric?
Inverse Weighted Average F1-Score
CC BY-SA 4.0
null
2023-05-04T15:22:27.133
2023-05-04T15:22:27.133
null
null
377079
[ "machine-learning", "python", "scikit-learn", "weighted-mean", "f1" ]
614908
1
null
null
0
16
I am conducting SNA research and have collected the first wave of data. For some reasons, I recently need to analyse this cross-sectional data, so I am looking for software for analysing cross-sectional network data. However, I am only familiar with the RSiena package for longitudinal network analysis. Can the RSiena package be used to analyse cross-sectional data (in which the number of waves is only 1)?
R Siena for cross-sectional network analysis
CC BY-SA 4.0
null
2023-05-04T15:33:14.863
2023-05-04T16:36:58.620
2023-05-04T16:36:58.620
384747
384747
[ "r", "machine-learning", "graph-theory", "networks", "social-network" ]
614912
2
null
614897
3
null
Your conjecture of a binomial distribution would only apply when $\rho=0$. For positive $\rho$ it is intuitive to expect a more dispersed distribution, though still symmetric about the mean $\frac n2$. Your simulation is useful, though `np.histogram` is the wrong summary function to use. Here is mine in R, with $n=10$ and $\rho=\frac12$ with one million simulations: ``` library(mvtnorm) set.seed(2023) positivemultivariatenormal <- function(dimen, rho, cases){ sims <- rmvnorm(cases, sigma=diag(dimen)*(1-rho)+rep(rho,dimen^2)) table(rowSums(sims > 0)) / cases } positivemultivariatenormal(dimen=10, rho=1/2, cases=10^6) # 0 1 2 3 4 5 6 7 # 0.090877 0.090430 0.091604 0.090452 0.090728 0.091824 0.090943 0.090504 # 8 9 10 # 0.090822 0.090757 0.091059 ``` This suggests that with $\rho=\frac12$ you get a uniform distribution with probabilities of $\frac1{n+1}$, which with $n=10$ is about $0.0909$. Trying with the same $\rho=\frac12$ and different $n$ seems to confirm this.
null
CC BY-SA 4.0
null
2023-05-04T16:42:31.730
2023-05-04T17:24:24.000
2023-05-04T17:24:24.000
2958
2958
null
614913
2
null
460626
0
null
Yes. If the BC is quite small (it measures the overlap between the two distributions), the TVD will be high, meaning that the distributions are quite different. Conversely, when the BC is maximized, i.e. there is high overlap, the TVD is minimized and the distributions tend to be similar.
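As a tiny numerical illustration of this inverse relationship (discrete case, with made-up probability vectors `p` and `q`):
```
# Two made-up distributions with little overlap
p <- c(0.90, 0.05, 0.05)
q <- c(0.05, 0.05, 0.90)

sum(sqrt(p * q))        # Bhattacharyya coefficient, about 0.47 (low overlap)
0.5 * sum(abs(p - q))   # total variation distance = 0.85 (very different)

# Identical distributions: maximal overlap, zero distance
sum(sqrt(p * p))        # BC = 1
0.5 * sum(abs(p - p))   # TVD = 0
```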
null
CC BY-SA 4.0
null
2023-05-04T16:46:07.473
2023-05-04T16:46:07.473
null
null
387225
null
614914
1
null
null
1
70
How do you explain a box plot with categorical variables on the x-axis? For example, given the two box plots below, how do you interpret the relative comparison of the categories within each plot? Sample data:
```
import pandas as pd

# Create a Pandas dataframe
df = pd.DataFrame({
    'demand': ['low', 'medium', 'high', 'extreme'],
    'amenities_rating': [3, 2, 3, 4],
    'education_rating': [1, 2, 2, 4],
    'dem_ratings': [3, 4, 5, 5]
})
```
`plot a` [](https://i.stack.imgur.com/135D8m.png) `plot b` [](https://i.stack.imgur.com/OHIiOm.png) Values on the `y-axis` are ratings 1-5, with 5 being the best.
Interpreting box plots with categorical variables
CC BY-SA 4.0
null
2023-05-04T16:48:18.580
2023-05-05T09:41:56.747
2023-05-05T09:40:21.890
22047
115982
[ "distributions", "categorical-data", "boxplot" ]
614915
2
null
614914
1
null
The box plot simply illustrates that, as demand gets more extreme, the box in the plot gets bigger; that is, the range between the first and third quartiles (the values contained within the box, with the median between them) becomes larger. The principle remains the same even if the categorical variable were on the y-axis: the box plots would simply be horizontal instead of vertical. Note that the smallest and largest values in the series lie outside the box and are indicated by the horizontal whisker lines at either end (with any outliers drawn as individual points beyond them). For this box plot, we can see that the middle line for the "extreme" box is higher than those for low, medium and high, indicating that the median rating is higher for extreme demand. Looking at plot b, we can see that the dots (which represent the outliers in the dataset) are more numerous and their range is greater, indicating that a wider spread of outliers exists for the extreme demand category.
null
CC BY-SA 4.0
null
2023-05-04T17:01:54.630
2023-05-05T09:41:56.747
2023-05-05T09:41:56.747
22047
137066
null
614916
1
null
null
1
40
Among the binary strings of length $n$, what is the distribution of the lengths of the homogeneous runs? E.g., for $n=4$ the possible strings and run lengths are $$0000: 4;0001: 1,3;0010: 1^2,2;0011:2^2; \\0100: 1^2,2;0101: 1^4;0110: 1^2,2;0111:1,3; \\1000: 1,3;1001: 1^2,2;1010: 1^4;1011:1^2,2; \\1100: 2^2;1101: 1^2,2;1110: 1,3;1111:4 $$ giving the histogram $$1:24,\\ 2:10,\\ 3:4,\\4: 2.$$ I am also interested in the numbers of runs: $$0000: 1;0001: 2;0010: 3;0011:2; \\0100: 3;0101: 4;0110: 3;0111:2; \\1000: 2;1001: 3;1010: 4;1011:3; \\1100: 2;1101: 3;1110: 2;1111:1 $$ giving the histogram $$1:2,\\ 2:6,\\ 3:6,\\4: 2.$$ Are there closed-form formulas? --- Update: By shameless brute-force computation I observe that the distribution of the numbers of runs is twice the binomial coefficients (there are $2\binom{n-1}{r-1}$ strings with exactly $r$ runs), and the distribution of the run lengths is twice the sequence [https://oeis.org/A045623](https://oeis.org/A045623) evaluated at $n-k$, giving for $k<n$ the formula $$2\sum_{i=0}^{n-k-1}(i+2)\binom{n-k-1}i=(n-k+3)\,2^{\,n-k-1}$$ (and $2$ for $k=n$).
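For reference, the brute-force check for small $n$ can be done along these lines in R (a sketch reproducing the $n=4$ tallies above):
```
n <- 4
# All 2^n binary strings, one per row
strings <- as.matrix(expand.grid(rep(list(0:1), n)))

# Lengths of the homogeneous runs of every string (via run-length encoding)
run_lengths <- apply(strings, 1, function(s) rle(s)$lengths)

table(unlist(run_lengths))            # run-length histogram: 1:24, 2:10, 3:4, 4:2
table(sapply(run_lengths, length))    # number-of-runs histogram: 1:2, 2:6, 3:6, 4:2
```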
Run lengths in a binary string
CC BY-SA 4.0
null
2023-05-04T17:14:46.750
2023-05-05T09:43:12.390
2023-05-04T20:21:41.203
37306
37306
[ "distributions", "combinatorics", "string-count" ]
614917
2
null
614842
5
null
Since you mentioned python, here is an example. It's pretty trivial to plot the threshold along with the ROC using sklearn.metrics.roc_curve. ``` from sklearn import metrics import numpy as np import matplotlib.pyplot as plt fpr, tpr, th = metrics.roc_curve(true, score) fig, ax = plt.subplots() fig.set_size_inches(10,10) ax.plot(fpr,tpr,label='ROC') ##Plotting the threshold. The value of the first index can be above one so I exclude that point when plotting. ax.plot(fpr[1:],th[1:],label='Threshold') ax.plot(fpr,fpr,c='k') ax.xaxis.set_ticks(np.arange(0, 1.05, .1)) ax.yaxis.set_ticks(np.arange(0, 1.05, .1)) plt.grid(True) auc = np.round(metrics.roc_auc_score(true, score),3) plt.title(f' AUC: {auc}',fontsize=25) plt.xlabel('FPR',fontsize=20) plt.ylabel('TPR',fontsize=20) plt.legend(fontsize=20) ``` With the threshold plotted along with the ROC you can see that as the threshold approaches 1 (on the left side of the figure) TPR and FPR both approach 0, and as the threshold approaches 0 the TPR and FPR both approach 1. [](https://i.stack.imgur.com/rTVzf.png)
null
CC BY-SA 4.0
null
2023-05-04T17:39:46.917
2023-05-04T17:39:46.917
null
null
264350
null
614918
1
null
null
0
32
I am performing Spearman correlations on activation levels in brain regions of animals exposed to different treatments, to see which brain regions are correlated within each treatment. Since each treatment was analyzed separately with Spearman correlations, would I also apply the Benjamini-Hochberg procedure to each treatment separately? Or would all the p-values from the treatments be grouped together? Thank you!
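In R terms, the two options I am weighing amount to something like this (a sketch; `pvals` is a hypothetical data frame with a p-value column `p` and a `treatment` column):
```
# (a) Benjamini-Hochberg adjustment within each treatment separately
p_bh_within <- ave(pvals$p, pvals$treatment,
                   FUN = function(p) p.adjust(p, method = "BH"))

# (b) Benjamini-Hochberg adjustment on all p-values pooled together
p_bh_pooled <- p.adjust(pvals$p, method = "BH")
```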
Benjamini-Hochberg on multiple groups
CC BY-SA 4.0
null
2023-05-04T17:40:45.427
2023-05-04T17:40:45.427
null
null
387228
[ "multiple-comparisons", "biostatistics" ]