Dataset columns (each row below lists one value per line, in this order): Id (string, 1-6 chars); PostTypeId (string, 7 classes); AcceptedAnswerId (string, 1-6 chars, nullable); ParentId (string, 1-6 chars, nullable); Score (string, 1-4 chars); ViewCount (string, 1-7 chars, nullable); Body (string, 0-38.7k chars); Title (string, 15-150 chars, nullable); ContentLicense (string, 3 classes); FavoriteCount (string, 3 classes); CreationDate (string, 23 chars); LastActivityDate (string, 23 chars); LastEditDate (string, 23 chars, nullable); LastEditorUserId (string, 1-6 chars, nullable); OwnerUserId (string, 1-6 chars, nullable); Tags (list, nullable)
616249
2
null
454647
3
null
Lots of questions here, lots of upvotes, but no answers yet. So, I'll offer a partial response. At a high level, I view GMRFs as driven by structural modeling. The focus is on "neighborhood" interactions. Only enumerated relationships are parameterized, for instance: - two variable covariance - Dempster, A. P. (1972). Covariance selection. Biometrics, 157-175. (read beyond the misleading abstract; the method shown iteratively adds complexity to precision matrices) - four neighbor and eight neighbor lattice models - see early 2000s computer vision literature. - broader neighborhoods defined by filters - Zhu, Song Chun, Yingnian Wu, and David Mumford. "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling." International Journal of Computer Vision 27 (1998): 107-126. - my paper where "neighborhoods" are defined by stock market portfolios - Keane, Kevin R. "Portfolio Variance Constraints." ICPRAM. 2019. The important thing to emphasize is that in Dempster; Zhu, Wu and Mumford; and my paper, construction of the precision matrix is iterative and additive, where additional constraints (Lagrange multipliers) are imposed subsequent to a simpler model being rejected. The simplest multivariate Gaussian structure is a diagonal precision matrix (all variables are independent). Note that all multivariate Gaussian distributions may be translated, rotated and scaled to an independent standard normal multivariate distribution (matrix operations similar to SVD result in the identity matrix for a precision matrix). To maintain application domain context (original coordinates), the Gaussian graphical model literature in general does not entertain rotation and scaling to a computationally simpler representation. In contrast, GPs are data driven. All variable interactions are parameterized. While typically regularized (or built with a prior), the model is a full multivariate Gaussian model eventually driven by observed data as additional observations become available. My personal preference is to start from a full model, as it seems a high hurdle to impose structural constraints on the precision matrix of a phenomenon you are learning about through data acquisition. If you know the truth, you don't need a model; and if you don't know the truth, don't impose restrictive structure on your exploration. This rant is discussed further in my paper with Corso - Keane, Kevin R., and Jason J. Corso. "The Wrong Tool for Inference." (2018); section (2.1) discusses a graphical model "fail" when arguably much structural information was known. The full model outperformed the graphical model in this instance. Lastly, I found this [presentation](https://www.robots.ox.ac.uk/%7Eseminars/seminars/Extra/2012_30_08_MichaelOsborne.pdf) a useful overview of fitting Gaussian processes to data. There is a discussion of "doing classification with GP".
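To illustrate the "translated, rotated and scaled to an independent standard normal" remark numerically, here is a minimal sketch (numpy, with an arbitrary made-up covariance matrix): whitening a zero-mean Gaussian turns its precision matrix into the identity.
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
Sigma = A @ A.T + np.eye(3)                 # an arbitrary covariance matrix

# whitening transform W = Sigma^(-1/2), a rotation plus scaling
eigval, eigvec = np.linalg.eigh(Sigma)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

Sigma_white = W @ Sigma @ W.T               # covariance after whitening
print(np.round(np.linalg.inv(Sigma_white), 6))   # precision matrix is now the identity
```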
null
CC BY-SA 4.0
null
2023-05-18T15:56:13.970
2023-05-19T14:06:39.547
2023-05-19T14:06:39.547
43149
43149
null
616250
2
null
260164
0
null
I choose to disagree with the answer given by @Azim. Empirical research has shown that ROC is insensitive to class imbalance. This has been extensively discussed by Tom Fawcett; see Section 4.2 of his paper [An introduction to ROC analysis](https://www.sciencedirect.com/science/article/pii/S016786550500303X?casa_token=fabRutQTDEEAAAAA:RJdI_j4P5gdlOQZcPoZ_vYDboOwE4sXbL7ujIqtJcP9PE8QXnEODdAfCHXh7HPwpYTTlD2mHEH8I) > 4.2. Class skew ROC curves have an attractive property: they are insensitive to changes in class distribution. If the proportion of positive to negative instances changes in a test set, the ROC curves will not change. To see why this is so, consider the confusion matrix in Fig. 1. Note that the class distribution – the proportion of positive to negative instances – is the relationship of the left (+) column to the right (-) column. Any performance metric that uses values from both columns will be inherently sensitive to class skews. Metrics such as accuracy, precision, lift and F score use values from both columns of the confusion matrix. As a class distribution changes these measures will change as well, even if the fundamental classifier performance does not. ROC graphs are based upon tp rate and fp rate, in which each dimension is a strict columnar ratio, so do not depend on class distributions.
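As a quick numerical illustration of the quoted point, here is a sketch with numpy/scikit-learn and made-up scores (not from Fawcett's paper): replicating the negative class, which changes the class ratio, leaves the ROC AUC untouched, while a column-mixing metric such as precision shifts.
```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 1_000)   # scores the classifier assigns to positives
neg = rng.normal(0.0, 1.0, 1_000)   # scores the classifier assigns to negatives

def auc(pos_scores, neg_scores):
    y_true = np.r_[np.ones(len(pos_scores), dtype=int), np.zeros(len(neg_scores), dtype=int)]
    y_score = np.r_[pos_scores, neg_scores]
    return roc_auc_score(y_true, y_score)

# balanced 1:1 vs skewed 1:10 (same negatives, replicated 10 times) -> identical AUC
print(auc(pos, neg), auc(pos, np.tile(neg, 10)))

# precision at a fixed threshold uses both columns of the confusion matrix, so it changes
thr = 0.5
tp, fp = (pos > thr).sum(), (neg > thr).sum()
print(tp / (tp + fp), tp / (tp + 10 * fp))
```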
null
CC BY-SA 4.0
null
2023-05-18T15:56:44.067
2023-05-18T16:02:59.087
2023-05-18T16:02:59.087
261548
261548
null
616251
1
null
null
0
40
I have two random matrices stacked one on top of the other: $ \begin{bmatrix}\boldsymbol{B_1} \\ \boldsymbol{B_2} \end{bmatrix}$, and they are both of dimension $k \times N$. I have that: $ vec\begin{bmatrix}\boldsymbol{B_1} \\ \boldsymbol{B_2} \end{bmatrix} \sim \mathcal{N}\left(\begin{bmatrix}\boldsymbol{\mu_1} \\ \boldsymbol{\mu_2}\end{bmatrix}, \boldsymbol{\Sigma} \otimes \boldsymbol{S} \right)$, where $\boldsymbol{\Sigma}$ is an $N \times N$ matrix and $\boldsymbol{S}$ is $2k \times 2k$. I want to find the distribution of $vec(\boldsymbol{B_2}) \mid vec(\boldsymbol{B_1})$; how can I proceed? It should be normal, but how do I get the mean and the variance?
Find the conditional distribution from joint normal distribution with vec operators
CC BY-SA 4.0
null
2023-05-18T15:58:19.200
2023-05-28T11:34:18.670
2023-05-28T11:34:18.670
269632
269632
[ "joint-distribution", "vector-autoregression", "multivariate-normal-distribution" ]
616252
2
null
616234
2
null
$\newcommand{\bt}{\boldsymbol t}$ Without attempting to reach the generality of whuber's answer, below is a proof for $n = 4$ by directly differentiating the MGF $M(\bt)$ of $(X_1, X_2, X_3, X_4) \sim N_4(0, \Sigma)$. For this four-dimensional zero-mean Gaussian random vector we have \begin{align} M(\bt) = M(t_1, t_2, t_3, t_4) = E[\exp(\bt'X)] = \exp(\bt'\Sigma\bt/2), \end{align} and $E[X_1X_2X_3X_4]$ is given by \begin{align} E[X_1X_2X_3X_4] = \left.\frac{\partial^4 M(\bt)}{\partial t_4\partial t_3\partial t_2\partial t_1}\right|_{\bt = 0}. \tag{1} \end{align} To evaluate the order-4 partial derivative in $(1)$, we need to apply the following identities repeatedly (using the symmetry of $\Sigma$): \begin{align} & \frac{\partial (\bt'\Sigma\bt/2)}{\partial t_i} = e_i'\Sigma\bt, \; i = 1, 2, 3, 4. \tag{2} \\ & \frac{\partial e_i'\Sigma\bt}{\partial t_j} = e_i'\Sigma e_j, \; i, j \in \{1, 2, 3, 4\}. \tag{3} \\ \end{align} In $(2)$, $e_i$ is the $4$-long column vector with $i$-th entry $1$ and all the other entries $0$. The rest of the calculation is based on $(2), (3)$ and the chain rule: \begin{align} & M_1(\bt) := \frac{\partial M(\bt)}{\partial t_1} = M(\bt)e_1'\Sigma\bt, \tag{4} \\ & M_2(\bt) := \frac{\partial^2 M(\bt)}{\partial t_2\partial t_1} = \frac{\partial M_1(\bt)}{\partial t_2} = M(\bt)e_2'\Sigma\bt\cdot e_1'\Sigma\bt + M(\bt)e_1'\Sigma e_2. \tag{5} \end{align} Note that $(5)$ evaluated at $\bt = 0$ gives the second joint moments $E[X_iX_j] = \left.\frac{\partial^2 M(\bt)}{\partial t_j\partial t_i}\right|_{\bt = 0} = e_i'\Sigma e_j$. Continue: \begin{align} & M_3(\bt) := \frac{\partial^3 M(\bt)}{\partial t_3\partial t_2\partial t_1} = \frac{\partial M_2(\bt)}{\partial t_3} \\ =&\; M(\bt)e_3'\Sigma\bt\cdot e_2'\Sigma\bt\cdot e_1'\Sigma\bt + M(\bt)e_2'\Sigma e_3\cdot e_1'\Sigma\bt + M(\bt)e_2'\Sigma\bt\cdot e_1'\Sigma e_3 \\ &+ M(\bt)e_3'\Sigma\bt \cdot e_1'\Sigma e_2. \tag{6} \end{align} Finally, we can get $(1)$ from $(6)$: note that the partial derivative with respect to $t_4$ of the first term on the right-hand side of $(6)$ is $0$ when evaluated at $\bt = 0$, because each resulting term retains at least one factor of the type $e_i'\Sigma\bt$. It thus follows that \begin{align} & E[X_1X_2X_3X_4] = \left.\frac{\partial^4 M(\bt)}{\partial t_4\partial t_3\partial t_2\partial t_1}\right|_{\bt = 0} = \left.\frac{\partial M_3(\bt)}{\partial t_4}\right|_{\bt = 0} \\ =&\; e_2'\Sigma e_3 \cdot e_1'\Sigma e_4 + e_2'\Sigma e_4 \cdot e_1'\Sigma e_3 + e_3'\Sigma e_4 \cdot e_1'\Sigma e_2 \\ =&\; E[X_2X_3]E[X_1X_4] + E[X_2X_4]E[X_1X_3] + E[X_3X_4]E[X_1X_2]. \end{align} This completes the proof.
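A quick Monte Carlo sanity check of the final identity (a numpy sketch with an arbitrary covariance matrix, not part of the proof):
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + np.eye(4)                     # arbitrary 4x4 covariance matrix

X = rng.multivariate_normal(np.zeros(4), Sigma, size=2_000_000)
mc = np.prod(X, axis=1).mean()                  # Monte Carlo estimate of E[X1 X2 X3 X4]

wick = (Sigma[1, 2] * Sigma[0, 3]               # E[X2 X3] E[X1 X4]
        + Sigma[1, 3] * Sigma[0, 2]             # E[X2 X4] E[X1 X3]
        + Sigma[2, 3] * Sigma[0, 1])            # E[X3 X4] E[X1 X2]

print(mc, wick)   # agree up to Monte Carlo error
```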
null
CC BY-SA 4.0
null
2023-05-18T16:04:12.057
2023-05-19T12:19:44.110
2023-05-19T12:19:44.110
20519
20519
null
616253
2
null
616234
1
null
Let $x \sim N(0, \Sigma)$ and let $\Sigma = LL^T$ be the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) of $\Sigma$, that is, $L$ is lower triangular. Then, we can write $x = L z$ where $z \sim N(0, I_4)$. Now write \begin{align*} \prod_{i=1}^4 x_i = \prod_{i=1}^4 (Lz)_i = \prod_{i=1}^4 \sum_{j=1}^i L_{ij} z_j. \end{align*} Next, expand the product of sums into a sum of products (that is, distribute the product over the summations). Then take the expectation of both sides and use linearity of expectation together with the fact that the $z_j$ are zero-mean and independent, so many terms in the expansion will be zero.
null
CC BY-SA 4.0
null
2023-05-18T16:18:26.733
2023-05-18T16:18:26.733
null
null
21770
null
616254
1
null
null
1
45
Let $\alpha, \beta$ be independent $d$-state ($d > 2$) Markov chains with the same transition probability matrix $\pi$, and let $P(\alpha_0 =1)=P(\alpha_0 =d)= \frac{1}{2}$, $P(\beta_0 =2)=P(\beta_0 =d-1)= \frac{1}{2}$. Find ALL transition matrices $\pi$ such that $\alpha$, $\beta$ may be successfully coupled. My attempt: I understand that successful coupling means that, starting at some point $\tau$, the processes are glued ($P(\alpha_t \neq \beta_t) = 0$ for $t \geq \tau$). Am I right that here this means the chains just have to intersect at some point? Then it suffices that there exists $N$ such that the probability of being in the same state is positive after $N$ steps, which can be described as follows: $a_0^T (\pi^T)^N \pi^N b_0 > 0$. But I can't figure out the answer directly in terms of matrix elements, which is required. I understand that $\pi^N > 0$ is enough, but this is too strong a condition.
Markov chains with the same transition probability matrix may be successfully coupled
CC BY-SA 4.0
null
2023-05-18T16:38:01.900
2023-05-18T21:39:00.777
null
null
388283
[ "probability", "stochastic-processes", "markov-process" ]
616255
1
616265
null
0
19
I am looking at the proof given in this [paper](https://arxiv.org/pdf/1511.01844.pdf) constituting lines (3) to (6) but am stuck on the first line. The setup is as follows: Consider $X \in \mathcal{X} \subset \mathbb{Z}$ with a discrete probability distribution $P(x)$, uniform noise $U \in [0,1]$ and noisy data $Y=X+U$ with density $p(y)$. Suppose we model $Y$ with density $q(y)$. Then we have (first line of proof): $$\int p(y)\log q(y)\,dy = \sum_{x} P(x) \int_{0}^{1} \log q(x+u)\,du$$ I am truly stuck with this equality; I'm sure there is some rule of probability at work here, but I really just don't see what it is (other authors citing this proof assert that it is "easy", so I'm assuming there is just some rule I'm missing).
Average Log Likelihood of Sum of Discrete Random Variable plus Continuous Uniform Noise
CC BY-SA 4.0
null
2023-05-18T16:46:47.183
2023-05-18T18:43:12.723
null
null
332763
[ "probability", "distributions" ]
616256
2
null
616239
3
null
The standard deviation is a summary metric which can be calculated for any set of numbers, but that doesn't mean that it captures something "useful" about that set of numbers in all cases. What you're seeing here is that you have a set of numbers with little variation for the most part, plus a few outlier samples which are the main contributors to the standard deviation. Summary metrics that are highly dependent on a small set of samples tend to not be very useful, since they say more about that small set of samples than they do about the population as a whole, rather defeating the purpose of the summary. On its face, this doesn't seem like a terribly useful way of categorizing YoY changes, since 90% of the values get categorized the same. The use of "1 SD from the mean" is convenient for normally distributed populations as it's a fixed threshold that provides "reasonably sized" groups on either side of the threshold. But for an arbitrary distribution, 1 SD may or may not have any particular useful meaning at all. Some distributions are entirely contained within 1 SD of the mean, in which case trying to threshold values 1 SD from the mean is pointless. You see a similar effect in other outlier-dependent summary statistics like the mean. For example, the mean net worth in the US is nearly \$1M, but this is dragged way up by a small number of billionaires, so many would view it as a misleading summary statistic. The median net worth is barely \$100k, which most would view as a better summary of the population. Whenever using summary statistics, it's worthwhile to make sure that it's summarizing a useful aspect of the population as a whole.
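A tiny illustration of the outlier effect with made-up numbers (not real net-worth data): a population of mostly modest values plus a handful of huge ones has a mean and standard deviation dominated by the outliers, and almost nothing lies more than 1 SD above the mean.
```python
import numpy as np

rng = np.random.default_rng(0)
# mostly modest values, plus 10 extreme outliers
population = np.concatenate([rng.lognormal(mean=11.5, sigma=1.0, size=9_990),
                             rng.uniform(1e9, 5e10, size=10)])

print(f"mean   {population.mean():,.0f}")
print(f"median {np.median(population):,.0f}")
print(f"share more than 1 SD above the mean: "
      f"{(population > population.mean() + population.std()).mean():.4%}")
```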
null
CC BY-SA 4.0
null
2023-05-18T17:01:42.937
2023-05-18T17:01:42.937
null
null
76825
null
616257
2
null
616254
0
null
We can describe the probabilities for the initial state as the vectors $$\mathbf{x}_0 = [0.5,0,\dots,0,0.5]$$ and $$\mathbf{y}_0 = [0,0.5,\dots,0.5,0]$$ If the process can be coupled, then these probabilities must approach each other. ### Case $\exists t : \mathbf{x}_t = \mathbf{y}_t$ The probabilities for the states after some time $t$ must be equal, $\mathbf{x}_t = \mathbf{y}_t$, such that $$ \mathbf{P}^t \mathbf{x}_0 =\mathbf{P}^t \mathbf{y}_0 $$ And also $$\mathbf{P}\cdot \left(\mathbf{P}^{t-1} (\mathbf{x}_0-\mathbf{y}_0)\right) =\mathbf{0}$$ So a necessary condition is that the transition matrix has a zero eigenvalue. ### Case $\lim_{t\to \infty} |\mathbf{x}_t - \mathbf{y}_t| = 0$ The 'successful coupling' can mean that the distance between the two chains approaches zero (if you wait long enough then the coupling occurs almost surely). Example: for the transition matrix $$\mathbf{P} = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\\end{bmatrix}$$ We can have the joint process $\mathbf{u}_t,\mathbf{v}_t$ that transitions according to $$\begin{array}{c|ccccccccc} &1,1 & 2,2 & 3,3 & 1,2 & 1, 3& 2,1& 2,3 & 3,1 &3,2\\ \hline 1,1 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} &\frac{1}{3} & \frac{1}{3} &0&0&0&0\\ 2,2& \frac{1}{3} & \frac{1}{3} & \frac{1}{3}&0&0&\frac{1}{3}&\frac{1}{3}&0&0\\ 3,3& \frac{1}{3} & \frac{1}{3} & \frac{1}{3}&0&0&0&0&\frac{1}{3}&\frac{1}{3}\\ 1,2&0&0&0&\frac{1}{3}& \frac{1}{3}&0&0&0&0\\ 1,3&0&0&0&\frac{1}{3}& \frac{1}{3}&0&0&0&0\\ 2,1&0&0&0&0&0&\frac{1}{3}&\frac{1}{3}&0&0\\ 2,3&0&0&0&0&0&\frac{1}{3}&\frac{1}{3} &0&0\\ 3,1&0&0&0&0&0&0&0&\frac{1}{3}&\frac{1}{3}\\ 3,2&0&0&0&0&0&0&0&\frac{1}{3}&\frac{1}{3}\\ \end{array}$$ This will eventually end up in the states (1,1), (2,2) and (3,3) where $U_t=V_t$, while the marginal distributions of the processes are the same as those of $X_t$ and $Y_t$ if we start with $U_0 = X_0$ and $V_0 = Y_0$. However, this is not the same as $$P[U_t \neq V_t] = 0;$$ instead we have $$P[U_t \neq V_t] = 0.75 \left(\frac{2}{3}\right)^t,$$ since $P[U_0 \neq V_0] = 0.75$ and at every step there is a $2/3$ probability that they remain different. So, with this interpretation of successful coupling, it occurs when either both $\alpha$ and $\beta$ have the same stationary state, or have different stationary states with equal probability, or end up in similar cyclic states at the same time. It is not easy to express in a simple way all the matrices that fulfill such a condition.
null
CC BY-SA 4.0
null
2023-05-18T17:02:09.933
2023-05-18T21:39:00.777
2023-05-18T21:39:00.777
164061
164061
null
616258
1
null
null
0
18
I would like to find the standard error of the median survival time and do some approximation to estimate the density function around the median survival time according to equation (5) of this: [http://www.med.mcgill.ca/epidemiology/hanley/c609/material/KaplanMeierEstimator.pdf](http://www.med.mcgill.ca/epidemiology/hanley/c609/material/KaplanMeierEstimator.pdf) Is it possible to do it in R? If so, can you give me some example r code for this? Thank you for your time!
standard error of the median survival time using r
CC BY-SA 4.0
null
2023-05-18T17:03:41.610
2023-05-18T17:03:41.610
null
null
111521
[ "r", "survival", "standard-error", "median", "approximation" ]
616259
1
null
null
1
14
I've seen attempts to answer this particular question before, but I'm either not understanding something (often a priori correct), or I'm missing a better way. Essentially, what I want to do is draw n random samples from (say) a normal distribution, with specific mu and sigma, where there is a serial autocorrelation out to some max lag. The two approaches I've seen used are either to (i) use mvrnorm in MASS, or (ii) filter the distribution. The former appeals because you can explicitly specify the var-covar matrix, but the examples I've seen blow up when you want long time-series. For example, from [How to generate series of pseudorandom autocorrelated numbers](https://stats.stackexchange.com/questions/183406/how-to-generate-series-of-pseudorandom-autocorrelated-numbers) ``` tmp.r <- matrix(0.2, n, n) tmp.r <- tmp.r^abs(row(tmp.r)-col(tmp.r)) tmp.r[1:5, 1:5] [,1] [,2] [,3] [,4] [,5] [1,] 1.0000 0.200 0.04 0.008 0.0016 [2,] 0.2000 1.000 0.20 0.040 0.0080 [3,] 0.0400 0.200 1.00 0.200 0.0400 [4,] 0.0080 0.040 0.20 1.000 0.2000 [5,] 0.0016 0.008 0.04 0.200 1.0000 library(MASS) x <- mvrnorm(1, rep(0,n), tmp.r) acf(x, plot=FALSE, lag.max=5) Autocorrelations of series β€˜x’, by lag 0 1 2 3 4 5 1.000 0.246 0.065 0.056 0.032 -0.013 ``` Does the trick, except (1) it assumes N(0,1), which I don't typically want, and (2) the bigger issue, for n large (say, you want 10,000 autocorrelated random numbers), this will blow your machine up in creating tmp.r in second line of the code (as a 10K x 10K matrix would be expected to do). So, perhaps using a filter approach. Something like ``` x <- filter(rnorm(10000), filter=rep(1,3), circular=TRUE) ``` Doesn't require specifying a huge VC matrix, but...I haven't been able to figure out how to specify the autocorrelation I want (no doubt because I don't entirely understand how filtering works in this context). In essence, 10000 random numbers, with a given mean and variance, and specific lagged correlation structure is what I'm after. Pointers to the obvious? Thanks in advance...
Autocorrelated pseudo-random data in R
CC BY-SA 4.0
null
2023-05-18T17:17:13.277
2023-05-18T17:17:13.277
null
null
212243
[ "r", "time-series", "autocorrelation" ]
616260
1
null
null
0
48
I have 3 dataframes with the same structure (each dataframe includes a different type of tweet). Here are the columns of the dataframes: `id`, `tweet_time`, `cascade_lifetime`, `cascade_size`. I would like to compare the `cascade_lifetime` and `cascade_size` of different types of tweets. To do this, I plotted the `empirical CCDF` of the cascade_size and cascade_lifetime for different types of tweets: ``` X1 = df_type1 #type1 tweets X2 = df_type2 #type2 tweets X3 = df_type3 #type3 tweets sns.ecdfplot(data = X1, x= 'cascade_lifetime', complementary=True, label = "type 1", color = 'y') sns.ecdfplot(data = X2, x= 'cascade_lifetime', complementary=True, label = "type 2", color = 'b') sns.ecdfplot(data = X3, x= 'cascade_lifetime', complementary=True, label = "type 3", color = 'r') plt.legend() plt.xscale('log') plt.yscale('log') plt.ylabel('Empirical CCDF') ``` and here are the plots: [](https://i.stack.imgur.com/lx59Pm.png) [](https://i.stack.imgur.com/2e0hQm.png) ### My Questions: - Is CCDF the right method to compare data with different sample sizes? My sample sizes are very different: for example, there are 1700 type 2 tweets and 8600 type 3 tweets, while I only have 100 type 1 tweets. How should I deal with it? - I am not sure how to interpret the plots. Here is my understanding: Most retweet cascades (any type) have a lifetime of less than ~2.5 hours (10^4 s / 3600 = 2.77 h). In general, more of the type 1 tweets (yellow line) have a longer lifetime (i.e., lifetime of more than a day) compared to other types (10^5 s / 3600 = 27.7 h). Are my interpretations correct? What else can we get from these plots? NOTES: the distributions of `cascade_size` and `cascade_lifetime` in all dataframes are `not normal`.
Compare CCDF of datasets with different sample size
CC BY-SA 4.0
null
2023-05-18T17:20:55.367
2023-05-30T14:00:05.570
2023-05-26T01:43:43.813
90768
90768
[ "distributions", "data-visualization", "sample-size", "cumulative-distribution-function", "empirical-cumulative-distr-fn" ]
616261
1
null
null
0
14
I have fit a time-series count HGAM with environmental covariates as parametric terms and year as a site-varying smooth term. I am wondering if I need to use an alpha threshold of 0.01 to interpret significant p values for the parametric terms, or is a 0.05 threshold appropriate? Thank you for any help
Must an alpha=0.01 be used to interpret significance of parametric terms in a GAM?
CC BY-SA 4.0
null
2023-05-18T17:25:00.990
2023-05-18T17:25:00.990
null
null
286723
[ "statistical-significance", "generalized-additive-model", "mgcv", "parametric" ]
616262
2
null
616234
2
null
Isserlis' theorem equates various multivariate moments of a zero-mean multivariate Normal distribution. However, most of this theorem is a universal result, applicable to all multivariate distributions (with finite relevant moments), which takes on a particularly simple form for zero-mean distributions and an even simpler form for zero-mean Normal distributions. These simplifications arise from the very definitions of "zero-mean" (section 2 below) and "Multivariate Normal," (section 3 below) because those definitions essentially specify that particular cumulants are zero. --- ### 1. Generalities There is a fully general relationship between moments and cumulants for any $n$-variate distribution of variables $(X_1,X_2,X_3,\ldots,X_n)$ (whose relevant moments are defined and finite). Given such a distribution and for any subset of the indexes $B\subset\{1,2,3,\ldots,n\},$ $B = \{i_1,i_2,\ldots, i_p\},$ let $\kappa(B)$ be the coefficient of $i^pt_{i_1}t_{i_2}\cdots t_{i_p}$ (where $i^2 = -1$) in the power series expansion of $$K(t_1,t_2,\ldots, t_n) = \log \left(E\left[\exp\left(it_1X_1+it_2X_2+\cdots +it_nX_n\right)\right]\right).$$ The argument of the log is the characteristic function of the distribution and the left hand side is called the cumulant generating function. [Then](https://en.wikipedia.org/wiki/Cumulant#Joint_cumulants) $$E[X_1X_2X_3\cdots X_n] = \sum_{\pi}\prod_{B\in\pi}\kappa(B)\tag{*}$$ where $\pi$ ranges over all [partitions](https://en.wikipedia.org/wiki/Partition_of_a_set) of $\{1,2,\ldots, n\}.$ There are, for example, 15 partitions of $\{1,2,3,4\}.$ Four of them appear in the formulas below. The other 11 partitions all include singleton subsets, such as $\{\{1\},\ \{2,3,4\}\},$ $\{\{1\},\ \{2\},\ \{3,4\}\},$ and $\{\{1\},\ \{2\},\ \{3\},\ \{4\}\}.$ This is a purely formal combinatorial result having essentially nothing to do with distributions or multivariate Normal variables. It's a matter of manipulating formal power series for the exponential and the logarithm. In particular, $E[X_i] = \kappa(\{i\})$ are the means and, when $n=2,$ $(*)$ becomes $$E[X_iX_j] = \kappa(\{i,j\}) + \kappa(\{i\})\kappa(\{j\}) = \kappa(\{i,j\}) + E[X_i]E[X_j]$$ Upon solving for the cumulant this reveals the $\kappa(\{i,j\})$ to be the covariances. --- ### 2. Zero-mean distributions Let's now specialize to a zero-mean distribution. This greatly simplifies $(*),$ because any term for a partition containing a singleton will be a multiple of the expectation $\kappa(\{i\})=E[X_i]=0$ and thereby drop out; and the covariances likewise reduce to the expected products, $\operatorname{Cov}(X_i,X_j) = E[X_iX_j] = \kappa(\{i,j\}).$ With $n=4$ the equation $(*)$ reduces to $$\begin{aligned} &E[X_1X_2X_3X_4]\\ &= \kappa(\{1,2,3,4\}) + \kappa(\{1,2\})\kappa(\{3,4\}) + \kappa(\{1,3\})\kappa(\{2,4\}) + \kappa(\{1,4\})\kappa(\{2,3\})\\ &= \kappa(\{1,2,3,4\}) + E[X_1X_2]E[X_3X_4] + E[X_1X_3]E[X_2X_4] + E[X_1X_4]E[X_2X_3]. \end{aligned}$$ --- ### 3. Multivariate Normal distributions (with zero means) Finally, the multivariate Normal distribution is defined by having all higher-order cumulants (with three or more $t_{i}$ involved) equal to zero. That is, [its cumulant generating function is merely quadratic.](https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Equivalent_definitions) Consequently $\kappa(\{1,2,3,4\})=0$ and the $n=4$ case (as well as all other cases) of Isserlis' theorem is immediate, QED.
null
CC BY-SA 4.0
null
2023-05-18T17:34:23.217
2023-05-18T17:49:03.230
2023-05-18T17:49:03.230
919
919
null
616263
1
null
null
1
42
I have a situation where I compute a posterior probability using the vanilla Bayes' Theorem: $P(A|B) \propto P(B|A)P(A)$ As a concrete example, $P(B|A)$ is a Normal distribution establishing the likelihood that a measured value plus random noise is consistent with a known expectation value (and variance), and $P(A)$ is a uniform prior that the measured value is a false positive. In this situation, large outliers will have a posterior probability of zero. Suppose I'd like to say "I can't really be so sure that the outlier is due to noise that I understand (the Normal distribution), so I want to hedge my bets and always leave the posterior with a tiny, minimum probability. Naively, I can just clip/clamp the posterior to lie between $[\epsilon,1-\epsilon]$, but this won't pass the sniff test of people (like you) who clearly know much more than I do about probability, and will tell me "this is prior information so include it as a prior!" I just don't know enough about this field to see how to do it on the right-hand side of the equation. I'd love some guidance.
How to properly put lower/upper limits on a Bayesian posterior
CC BY-SA 4.0
null
2023-05-18T17:57:57.390
2023-05-19T18:18:13.993
2023-05-19T18:18:13.993
216166
216166
[ "bayesian" ]
616265
2
null
616255
1
null
Intuitively, this should make sense: to compute the expected value of $\log q(Y)$, you can look at the expected value of $\log q (x+U)$ for each $x\in \mathcal X$ and average the result by the "mass" of each $x$. The mathematical equivalent of this intuitive explanation is provided by the [law of total expectation](https://en.wikipedia.org/wiki/Law_of_total_expectation) (LTE), according to which we have $$\int\log q(y) p(y)\ dy=:\mathbb E[\log q(Y)] \stackrel{(LTE)}{=} \mathbb E_X \big[\mathbb E_U[\log q(X + U)\mid X]\big] $$ Now, since $U\sim\text{Uniform}([0,1])$, we can compute for all $x\in\mathcal X$: $$\mathbb E_U[\log q(X + U)\mid X=x] = \int_0^1 \log q(x+u)\ du$$ from which it follows that $$\begin{align*} \mathbb E[\log q(Y)] &= \mathbb E_X \big[\mathbb E_U[\log q(X + U)\mid X]\big] \\ &= \sum_{x\in \mathcal X} P(x)\mathbb E_U[\log q(X + U)\mid X=x]\\ &= \sum_{x\in \mathcal X} P(x) \int_0^1 \log q(x+u)\ du \end{align*} $$ As desired.
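For anyone who wants to see this numerically, here is a small sketch (scipy, with an arbitrary discrete $P$ and an arbitrary model density $q$); both sides of the first display agree up to quadrature error.
```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

P = {0: 0.2, 1: 0.5, 2: 0.3}                   # arbitrary discrete distribution on {0, 1, 2}
q = lambda y: norm.pdf(y, loc=1.5, scale=1.0)  # arbitrary model density q(y)

# Left side: integral of p(y) log q(y); p(y) = P(x) on [x, x+1) because U ~ Uniform[0, 1]
lhs = sum(quad(lambda y: P[x] * np.log(q(y)), x, x + 1)[0] for x in P)

# Right side: sum over x of P(x) * integral_0^1 log q(x + u) du
rhs = sum(P[x] * quad(lambda u: np.log(q(x + u)), 0, 1)[0] for x in P)

print(lhs, rhs)   # equal up to numerical integration error
```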
null
CC BY-SA 4.0
null
2023-05-18T18:43:12.723
2023-05-18T18:43:12.723
null
null
305654
null
616266
1
null
null
0
21
I would be grateful for a cross-check on my understanding of the criteria for valid ANOVA analysis. The orthodoxy I've always used is Hartley's Fmax test; if that fails, ANOVA is simply a no-go. The question is in two parts: - Could the D'Agostino-Pearson test for normality alone be used to justify an ANOVA analysis? If homogeneity of variance is still a concern, what safeguards would be required to ensure that it is sufficient for a valid ANOVA analysis? - Secondly, I never considered the D'Agostino-Pearson test a compelling basis to perform parametrics per se; in particular, I'd look at a Q-Q plot. Is that view correct?
ANOVA: testing homogeneity of variance via the D'Agostino-Pearson test?
CC BY-SA 4.0
null
2023-05-18T18:45:43.777
2023-05-19T18:09:44.793
null
null
227740
[ "anova", "f-statistic", "skew-normal-distribution" ]
616267
1
null
null
1
74
Background: I did an experiment in which 30 people watched 3 movie clips ("A", "B", and "C") in a random order. For each movie clip and participant I measured one continuous neural variable named SNR (1 value for each participant and for each movie clip) and one continuous behavioral variable named RT (1 value for each participant and for each movie clip). I am interested in knowing whether there is a linear relationship between SNR and RT for each movie. Also, I am interested in knowing the strength of this relationship to compare across movies. In other words, I want to know whether the greater the SNR the greater the RT for each movie clip, and whether this is enhanced in some movies compared to others. Here is some dummy data that follows the same structure that I have: ``` res_table <- tibble( ID = rep(seq(1,30,1),3), # participant identifier MOVIES = rep(c("A","B","C"),each=30), SNR = sample(35:65, 30*3, replace = TRUE), RT = sample(50:100, 30*3, replace = TRUE) ) res_table$ID <- factor(res_table$ID, labels = c("1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24","25","26","27","28","29","30")) res_table$MOVIES <- factor(res_table$MOVIES, levels = c("A","B","C")) ``` Problem: Although I initially performed Pearson correlations between SNR and RT for each movie, someone told me that to explore these relationships it would be best to "show a scatter plot of the association between SNR and RT, including the linear model and the confidence range for the regression". I fitted a mixed model as follows: ``` model.01 <- lmer(SNR~RT*MOVIES + (1|ID), data = res_table) ``` But from here on I am lost on what I should do. How can I know whether the relationship between SNR and RT at each movie level is significant or not?
Mixed effects model to explore the linear relationship between two variables for each level of the repeated-measures in Rstudio using lme()
CC BY-SA 4.0
null
2023-05-18T18:46:22.633
2023-05-19T08:19:59.123
2023-05-18T19:35:30.200
79696
385617
[ "r", "regression", "mixed-model", "lme4-nlme", "repeated-measures" ]
616268
2
null
616263
1
null
The uniform distribution is often applied to situations where $w \in [l,u]$ represents a segment on the real line, with no preferred value for $w$ within this segment. However, on the one hand, this assumes that we are ignorant of any value of $w$, while, on the other hand, we seem to know precisely its boundary values. This is a contradiction in our state of knowledge. Therefore, one should be careful while using it.
null
CC BY-SA 4.0
null
2023-05-18T19:00:54.420
2023-05-18T19:00:54.420
null
null
382413
null
616269
1
null
null
1
32
I found this plot in a paper. I have found similar plots in other papers from the same team. It is basically the comparison of MODIS C6 and C5 products with AERONET. I wanted to understand what that 'Frequency' color coding means and how was that calculated. I also have a data frame with AOD and AERONET data, but even if I am creating bins using AOD or AERONET column or even Bias (= AOD - AERONET) and counting the number of points within each of them and then color coding it, the result is not the same. Most points are at the bottom and hence they should have higher frequency, but that is not what is happening with me. [](https://i.stack.imgur.com/7QIgJ.png) I used AOD (on the y-axis) to create the bins. As can be seen, my clusters are like bands, theirs is like a circle. I want to understand how they are doing it and if possible, some suggestions for functions in R on how to achieve it. I don't know if I was able to express the issue correctly, so please let me know. # ----- Code to create clusters. ``` df <- df %>% mutate(cluster = cut(AOD, breaks = 10, labels = FALSE, include.lowest = TRUE)) df_cluster_freq <- df %>% group_by(Sensor, cluster) %>% count() %>% ungroup() %>% rename(cluster_frequency = n) df <- df %>% left_join(df_cluster_freq, by = c("Sensor", "cluster")) ``` # ---- Code for plotting. ``` ggplot(data = df) + geom_point(aes(x = AERONET, y = AOD, color = cluster_frequency), size = 1.2) + geom_smooth(aes(x = AERONET, y = upperEE), method = "lm", se = FALSE, linetype="dashed", color = 'red', size = 1.1) + geom_smooth(aes(x = AERONET, y = lowerEE), method = "lm", se = FALSE, linetype="dashed", color = 'red', size = 1.1) + geom_smooth(aes(x = AERONET, y = AOD), method = "lm", se = FALSE, color = 'black') + theme_bw() + common_legend + #scale_color_distiller(palette = 'Spectral') + scale_color_viridis_c(option = "turbo") + theme(plot.margin = unit(c(0.8,0.8,0.8,0.8), "cm")) + xlab("AERONET@550nm") + ylab("AOD@550nm") ``` [](https://i.stack.imgur.com/wEURZ.png)
I want to know how this graph is being created?
CC BY-SA 4.0
null
2023-05-18T19:08:13.387
2023-05-18T19:43:35.963
2023-05-18T19:43:35.963
388287
388287
[ "r", "frequency", "scatterplot" ]
616272
1
616294
null
4
143
I am trying to implement sampling from GEV distribution without using external libraries (except numpy) and this is what I came up with: ``` import numpy as np def gev_rvs(c=-0.583, loc=68.44, scale=66.05, size=1000): u = np.random.uniform(size=size) sample = loc + scale / c * (np.power(-np.log(u), -c) - 1) return sample ``` I am using the equation for the quantile function for GEV as in wikipedia. However when I compare it with the results from ``` from scipy.stats import genextreme as gev sample = gev.rvs(c=-0.583, loc=68.44, scale=66.05, size=1000) ``` they give much different distributions of samples. What could be the reason? Update: The corrected version of Ben's equation seems to be the same as the one in wikipedia, however they are both far from the one provided by scipy: [](https://i.stack.imgur.com/AAbLx.png) Attaching below the code used for histogram in case I made a mistake somewhere: ``` def wikipedia_gev_rvs(c, loc, scale, size): u = np.random.uniform(size=size) sample = loc + scale / c * (np.power(-np.log(u), -c) - 1) return sample def ben_gev_rvs(c, loc, scale, size): u = np.random.uniform(size=size) sample = loc + scale / c * (np.power(np.abs(np.log(u)), -c) - 1) return sample def scipy_gev_rvs(c, loc, scale, size): sample = scipy.stats.genextreme.rvs(c, loc, scale, size=size) return sample sample_w = wikipedia_gev_rvs(-0.583, 68.44, 66.05, size=1000) sample_b = ben_gev_rvs(-0.583, 68.44, 66.05, size=1000) sample_s = scipy_gev_rvs(-0.583, 68.44, 66.05, size=1000) plt.hist(sample_w, bins=range(-200, 1500, 20), alpha=0.5, label='wikipedia') plt.hist(sample_b, bins=range(-200, 1500, 20), alpha=0.5, label='ben') plt.hist(sample_s, bins=range(-200, 1500, 20), alpha=0.5, label='scipy') plt.legend() plt.show() ```
Implementing sampling from GEV distribution from scratch
CC BY-SA 4.0
null
2023-05-18T19:56:59.427
2023-05-19T06:42:43.197
2023-05-19T06:42:43.197
372864
372864
[ "distributions", "python", "sampling" ]
616273
2
null
615718
2
null
You need to multiply your answer by the number of ways in which you can make the four committees, i.e. the number of ways in which the sizes 1, 4, 4, 2 can be assigned to A, B, C, D. This can be done in 12 ways (= the number of 4-digit numbers that you can make with the digits 1, 4, 4, 2 = 4!/2!). So the final answer should be 34650 * 12 = 415800.
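A quick check of this arithmetic in Python, assuming (as I read the question) that the 34650 is the multinomial coefficient for splitting 11 people into groups of sizes 1, 4, 4 and 2:
```python
from math import factorial
from itertools import permutations

# assumed: 34650 = ways to split 11 people into groups of sizes 1, 4, 4 and 2
ways_to_fill = factorial(11) // (factorial(1) * factorial(4) * factorial(4) * factorial(2))

# distinct ways to pair the sizes (1, 4, 4, 2) with the labelled committees A, B, C, D
size_orders = len(set(permutations((1, 4, 4, 2))))   # 4!/2! = 12

print(ways_to_fill, size_orders, ways_to_fill * size_orders)   # 34650 12 415800
```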
null
CC BY-SA 4.0
null
2023-05-18T19:57:49.633
2023-05-18T19:57:49.633
null
null
369552
null
616274
2
null
268934
1
null
Usually you split your data into a training set and a test set when you want to evaluate the performance of your model on data it was not fitted to. In order to do this, you need to know a priori which cluster every observation belongs to, which means that you know exactly how many clusters there are in your data (equivalently, the number of clusters isn't stochastic). Consequently, this means that you have data from, say, $k$ known populations. As you can see, this isn't a clustering problem but a classification problem.
null
CC BY-SA 4.0
null
2023-05-18T20:22:06.973
2023-05-18T20:22:06.973
null
null
369381
null
616275
2
null
329652
0
null
> knowing the sign is very very useful To me, this says it all. If you have reason to believe that knowing just the sign is enough to contribute to your trading strategy, then go ahead and predict the sign. There are all of the usual caveats about overfitting and the difficulty of making financial predictions, but if you are careful in developing your model and do produce one that you verify is reliable in predicting useful information, that seems like it would be regarded as an accomplishment! I have my doubts about how useful it is to know just the sign, however, despite this appearing to be a topic in the quantitative finance literature. - If you just buy when the prediction is a gain and sell when the prediction is a loss, you have no idea how large those are and whether they will be devoured by trading fees (perhaps taxes, too). You could get it right every time yet lose money (or at least underperform a benchmark). - Even setting aside trading fees and taxes, you can get almost every sign prediction right yet still lose money if the few times you are wrong are big misses. For instance, if you predict right the first four days of the week and make $1$ each day and then predict wrong on Friday and lose $7$, you are in the red, despite what seems like a decent accuracy of $80\%$. These two concerns are related to the fact that the cost of decisions is an important consideration, and your classifier does not give that. [This answer](https://stats.stackexchange.com/a/312124/247274) discusses this in more detail (granted, in a different context). My answer [here](https://stats.stackexchange.com/a/615428/247274) gets into some other issues I see with binning the outcome variable as is proposed in this question.
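To make the second bullet point concrete, a toy numerical illustration (made-up daily P&L, treating a profitable day as a correct sign call):
```python
import numpy as np

daily_pnl = np.array([1, 1, 1, 1, -7])     # four small wins, one large loss
sign_accuracy = (daily_pnl > 0).mean()     # fraction of days the sign call paid off
print(sign_accuracy, daily_pnl.sum())      # 0.8 accuracy, yet a net loss of 3
```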
null
CC BY-SA 4.0
null
2023-05-18T20:32:41.780
2023-05-18T20:38:24.120
2023-05-18T20:38:24.120
247274
247274
null
616276
2
null
615114
1
null
The first question you have to ask yourself when you are facing a problem like this is "What exactly am I trying to estimate?", and be precise. This will usually start to guide you in the right direction. These are two (of several) possible estimands that may be of interest in an experiment like this: - The causal effect of seeing a push notification on the outcome of interest. (Similar to Intention to Treat Effect [ITT]) - The causal effect of clicking a push notification on the outcome of interest among people who would click on it if they were to see it. (Similar to Complier Average Causal Effect [CACE]) Typically, you would want to estimate the values of these estimands using techniques based on the randomized assignment of an intervention to a treatment group (as in a clinical trial). The challenge with your experiment is that the original treatment (sending the push notification) was assigned to everyone, and thus there is no control group for this particular intervention. Therefore you need to rely on methods for causal inference in observational studies because the only possible "treatment" for which you can construct a control group is a person seeing the push notification. I think this is more reasonable than having a treatment and control group based on whether or not someone clicked the push notification. Thus, under this framework, your treatment group is comprised of everyone who sees the push notification, and the control group is comprised of everyone who does not. However, there are multiple complications: First, obviously, the assignment to seeing or not seeing the push notification is not random, and is likely dependent on characteristics of the individual and their environment. In this scenario, many people will rely on methods associated with the propensity score to establish treatment and control groups that are "similar" in their probability of having been assigned the intervention of interest (seeing the push notification in this case). It is important to understand the assumptions underlying the propensity score in order to justify its use. For example, you need to assume: - Everyone in your sample had a non-zero probability of seeing the push notification - The set of variables you have that describe the individual and their environment is sufficient to create independence between the potential outcomes and whether or not they saw the notification. (See strongly ignorable treatment assignment in section 5.1) Second, you have to decide which of the two estimands, written above as 1. and 2., you want to estimate. The first simply describes the effect of seeing the push notification on the outcome of interest, regardless of click status. The second requires you to consider who in the control group would have clicked on the push notification were they to have seen the push notification. This is an important distinction, and the consideration is necessary because it doesn't make sense to estimate the effect of clicking on the notification among people who would never click on it, regardless of whether they saw it or not. This is often called the "complier average causal effect" (See [here](https://en.wikipedia.org/wiki/Local_average_treatment_effect#:%7E:text=In%20econometrics%20and%20related%20empirical,assigned%20to%20their%20sample%20group.)). There is a lot of work that has gone into the appropriate methods for estimating each of these two estimands; however, it is important to understand which one is more suitable to answer your question of interest.
That is something you must decide based on your goals.
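If it helps, here is a minimal sketch of the propensity-score / inverse-probability-weighting idea described above, in Python with simulated data (the covariates, coefficients and data-generating process are made up purely for illustration; they are not from the original experiment):
```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# made-up covariates describing the user / environment
X = pd.DataFrame({"age": rng.normal(35, 10, n), "sessions": rng.poisson(5, n)})

# whether the user saw the notification depends on those covariates (confounding) ...
p_see = 1 / (1 + np.exp(-(0.02 * X["age"] + 0.10 * X["sessions"] - 1.5)))
saw = rng.binomial(1, p_see)

# ... and the outcome depends on both; the true effect of seeing is 0.5 here
y = 0.5 * saw + 0.01 * X["age"] + rng.normal(0, 1, n)

# 1. estimate the propensity of seeing the notification from the covariates
ps = LogisticRegression(max_iter=1000).fit(X, saw).predict_proba(X)[:, 1]

# 2. inverse-probability-weighted contrast (analogous to the ITT-style estimand 1.)
effect = (np.average(y, weights=saw / ps)
          - np.average(y, weights=(1 - saw) / (1 - ps)))
print(effect)   # close to the true 0.5
```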
null
CC BY-SA 4.0
null
2023-05-18T21:00:14.513
2023-05-18T21:04:09.483
2023-05-18T21:04:09.483
388290
388290
null
616277
1
null
null
0
18
I don't have a strong background in statistics. I am reading two papers which mention Wilcoxon tests, signed-rank and rank-sum, and they are applied in two different situations. I found it a bit cumbersome to understand what they do, and how to choose between them for appropriate significance testing. Could you please help me understand the points below, based on the assumptions of the tests? - https://www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/assumptions-of-the-wilcox-sign-test/ - https://data.library.virginia.edu/the-wilcoxon-rank-sum-test/ -- - One paper uses signed-rank to test if there is a true difference between two machine learning classifiers. The researcher applies the test to the pairs of scores (f1-scores) of the two classifiers, for each K-fold in a cross-fold validation. As far as I understand, like this: ``` for each K-fold: compute f1-score for A compute f1-score for B compute signed-rank for (A, B) ``` In this example, I would like to understand: - Why is statistical testing of this kind necessary or useful to appreciate that a machine learning model is consistently better than another one? - Why is comparing the metrics not enough, and how does the signed-rank test address the issue? - One paper uses the rank-sum test to check whether sequences of syllables of animals follow the same distributions. But unfortunately, it does not explain what the actual input is. For my purpose, I would like to replicate the result, but I am unsure what to pass as input and which test to choose. Consider the situation where I have one animal emitting sequences of sounds in two different contexts. The two contexts are independent, which is an assumption for the rank-sum test. But the emitter is the same, so I could consider the sequences to be dependent. - Am I misunderstanding the meaning of independence / dependence? - What kind of input could I use to compare sequences of syllables, according to the assumptions of the tests: should frequencies of syllables be the representative value, to test whether the order of syllables in sequences is drawn from similar distributions?
Wilcoxon test - which is difference and applicability of signed-rank and rank-sum?
CC BY-SA 4.0
null
2023-05-03T22:05:28.337
2023-05-23T16:45:04.470
2023-05-23T16:45:04.470
11887
107116
[ "hypothesis-testing", "wilcoxon-mann-whitney-test" ]
616278
1
null
null
1
15
I am currently working with a VAR model in R using the second difference of some variables (it only becomes stationary after differencing twice). So far I'm trying to see if the model fits one of the variables which seems to be the case to a reasonable amount. However, when I'm trying to transform the fitted values back to levels they look terrible (basically a straight downtrend). I managed to transform the differenced y values used by the model back to their level scale without problems. I'm now trying to assess if the model simply fails to fit or if I missed something. Below I included the code I've used: ``` model <- VAR(df_diff2[3:nrow(df_diff2),], p = 1, type = "none", ic = "SC") fit <- model$varresult$Variable_of_interest$fitted.values fit_levels <- diffinv(x = fit, differences = 2, xi = df_raw[c(1,2),2]) ``` In the first line, I start with the third row of the data as the first two rows contain NAs due to the differencing. In the last line using the diffinv function, I add back the first two values of the variable of interest from the raw (level) data. Applying the same function to the y values used in the model object returns the undifferenced series. Below I attach what the predictions (red) for the variable of interest look like with the original series included in black (first in second differences and then transformed back to levels). [](https://i.stack.imgur.com/AK2bZ.png) [](https://i.stack.imgur.com/HtBHP.png) Can you spot any problems with my approach or is there something wrong with the model? Please let me know if I can help you with any further information and I'm looking forward to your support!
Transform Second Difference Predictions from VAR model back to levels
CC BY-SA 4.0
null
2023-05-18T21:58:45.450
2023-05-18T21:58:45.450
null
null
310256
[ "r", "time-series", "data-transformation", "vector-autoregression", "differences" ]
616279
1
null
null
4
627
I'm trying to figure out how to get the probability that you reach a certain point in my game. The way it works is that you start off with 22 things (including yourself), and each time you press enter a random thing explodes and you get a point. The twist is that if something explodes, that thing cannot be exploded again. So now there are 21 options. If you explode yourself, the game obviously ends. Here's my simplified code: ``` #Dynamite simulator import random, time killOptions = ["Yourself","A Bush","Rodeen","A Tree","Your Laptop","Your Water","Your Friend","John Paul","Your Teacher","A Rastin","Peter Yao","The Earth","The ISS","Armagedon","Thor","Jacob's IPad","The Bomb","A Random Short Person Who's Name Is Coincidentally Jehan","Grass","The Imposter","Bob The Builder","Math Equations"] killLength = len(killOptions) points = 0 def GameOver(): time.sleep(3) print("\nYou blew yourself up, so you died. Duh.") if points != killLength: print(f'\nYou got {points} points.') else: print(f'\nYou got the maximum score of {killLength}. Lucky...') quit() def BlowUp(): print(random.choice(boomOptions)) thingToBlowUp = random.choice(killOptions) killOptions.remove(thing) print(f'\nYou killed {thing}.') if thing == "Yourself": GameOver() print('Press Enter To Blow Dynamite') while True: input() BlowUp() points += 1 print(f'There are {len(killOptions)} more things to blow up.') ``` I'm kinda bad at math, so how would you get the probability of someone getting n points? > I asked at stack overflow, but they redirected me here
Probability of a Program
CC BY-SA 4.0
null
2023-05-18T23:23:36.573
2023-05-19T16:06:45.870
2023-05-19T08:13:30.050
362671
388297
[ "probability" ]
616280
1
null
null
2
29
I'm processing several time series in an effort to identify meaningful trends over time. More specifically, I'm computing several time- and frequency-domain metrics (e.g. RMS, Peak-to-Peak, Integral, Median and Peak frequencies) over sequential same-length windows/segments (no overlap) and performing a 1st order polynomial fit (using the least-squares method). However, I'm struggling to understand how to objectively quantify the statistical significance of the polynomials I've obtained. For instance, a specific metric, computed for 4 separate series, over 31 intervals varies like: [](https://i.stack.imgur.com/cpwXX.png) How do I check whether these trends are statistically significant? 1 and 3 are visually increasing, but the original data (from which the metrics are computed) is inherently noisy, and therefore the evaluation metrics fluctuate between subsequent repetitions, which may influence the slope of these lines. - What kind of preliminary tests should I perform on either the original time series or the metric value distribution (regarding normality and autocorrelation of the time series and the resulting metrics distribution)? - Which statistical test(s) are indicated to evaluate the existence (or not) of significant trends in the metric values? I've looked into the Mann-Kendall test (which, in the example above, results in very low p-values for sets 1 and 3, moderate for 2, and high for 4), which tests for monotonic trends without making any normality assumptions, but requires the data not to have any autocorrelation. In addition, the original 4 time series are in truth rows in the coefficient matrix obtained after factorization (NNMF is used) of higher-dimensional (10) sensor data, and I'm unsure as to what implications this has on the independence of each curve/trend.
Which statistical tests to use for identifying trends in multiple time series?
CC BY-SA 4.0
null
2023-05-18T23:25:53.377
2023-05-18T23:25:53.377
null
null
388119
[ "time-series", "statistical-significance", "trend" ]
616282
2
null
268934
1
null
The sample is not normally split into training and test set in clustering, because, as said in other answers, the test set will not have "true labels" available, so you can't check predictions from the training set on it. There are however some uses for data set splits in cluster analysis. Particularly one may be interested in whether a clustering structure that is found on one part of the data set corresponds to what goes on in the other half. This isn't quite as easy to formalise though as in supervised classification. We have a paper on this [here.](https://wires.onlinelibrary.wiley.com/doi/full/10.1002/widm.1444) Ullmann, T., Hennig, C., & Boulesteix, A.-L. (2022). Validation of cluster analysis results on validation data: A systematic framework. WIREs Data Mining and Knowledge Discovery, 12( 3), e1444. [https://doi.org/10.1002/widm.1444](https://doi.org/10.1002/widm.1444) There is also some work that uses cross-validation, i.e., data set splitting, for estimating the number of clusters, although this is more sophisticated than just using a single training/test split, see, e.g., Wang, J. β€œConsistent Selection of the Number of Clusters via Crossvalidation.” Biometrika 97, no. 4 (2010): 893–904. [http://www.jstor.org/stable/29777144](http://www.jstor.org/stable/29777144), Fu, W. & Perry, P. O. (2020) Estimating the Number of Clusters Using Cross-Validation, Journal of Computational and Graphical Statistics, 29:1, 162-173, DOI: 10.1080/10618600.2019.1647846 A note on terminology: In some answers you read that "clustering is not classification", but that's a somewhat inappropriate use of terminology. Clustering classifies and can therefore well be called classification, as is done in some literature. A better distinction is between supervised and unsupervised classification, the latter being clustering.
null
CC BY-SA 4.0
null
2023-05-19T00:05:26.923
2023-05-20T09:59:42.517
2023-05-20T09:59:42.517
247165
247165
null
616283
2
null
401322
0
null
Here is the [MICE algorithm](https://stefvanbuuren.name/fimd/sec-FCS.html), from the book that one of the MICE authors wrote about using MICE. A detailed explanation is above my pay grade, but the gist is: - Initially populate the missing values with random draws of non-missing values. - Iterate (up to maxit): For each variable: Take draws of new and (hopefully) improved values, conditioned on this variable's observed data, current "complete"-by-imputation other variables' data, and a draw from the phi distribution (not sure about that part... I think it's the currently estimated distribution of the variable currently being imputed. Like I said, above my pay grade.). Also, another section from the book, about [convergence](https://stefvanbuuren.name/fimd/sec-algoptions.html), which also discusses iterations and the order that the variables are visited for the imputation looping. Reference: Flexible Imputation of Missing Data, Second Edition. 2018. Stef van Buuren.
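A heavily simplified sketch of that loop in Python (continuous variables only, using a Bayesian linear regression so that the re-imputation step can draw from a predictive distribution rather than plug in a mean; this illustrates the structure, not the mice package's exact algorithm):
```python
import numpy as np
from sklearn.linear_model import BayesianRidge

def mice_sketch(X, n_iter=10, seed=0):
    """Very rough chained-equations loop for a numeric array X containing NaNs."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    missing = np.isnan(X)

    # initial fill: random draws from each column's observed values
    for j in range(X.shape[1]):
        obs = X[~missing[:, j], j]
        X[missing[:, j], j] = rng.choice(obs, size=missing[:, j].sum())

    # iterate: re-impute each column from the other, currently "complete", columns
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not missing[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            model = BayesianRidge().fit(others[~missing[:, j]], X[~missing[:, j], j])
            mean, std = model.predict(others[missing[:, j]], return_std=True)
            # draw from the predictive distribution instead of plugging in the mean
            X[missing[:, j], j] = rng.normal(mean, std)
    return X
```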
null
CC BY-SA 4.0
null
2023-05-19T00:11:35.923
2023-05-19T02:24:37.820
2023-05-19T02:24:37.820
312101
312101
null
616284
1
null
null
1
23
I am trying to write the neural network as written in the book "Neural Networks and Deep Learning" by Michael Nielsen. I have finally gotten the code to run but every epoch results in the same output no matter where it starts it gets stuck at a similar, very inaccurate value. The code and results are below. ``` import random import numpy as np def sigmoid(z): return 1/(1-np.exp(z)) def sigmoid_prime(z): return sigmoid(z)*(1-sigmoid(z)) class Network(object): def __init__(self, sizes): self.num_layers = len(sizes) self.sizes = sizes self.biases = [np.random.randn(y, 1) for y in sizes [1:]] self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])] def feedForward(self, a): for b, w in zip(self.biases, self.weights): a = sigmoid((np.dot(w, a)+b)) return a def update_mini_batch(self, mini_batch, eta): """Update the network's weights and biases by applying gradient descent using backpropagation to a single mini batch. The "mini_batch" is a list of tuples "(x, y)", and "eta" is the learning rate.""" nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] for x, y in mini_batch: delta_nabla_b, delta_nabla_w = self.backprop(x, y) nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)] nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)] self.weights = [w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)] self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)] def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None): """Train the neural network using mini-batch stochastic gradient descent. The "training_data" is a list of tuples "(x, y)" representing the training inputs and the desired outputs. The other non-optional parameters are self-explanatory. If "test_data" is provided then the network will be evaluated against the test data after each epoch, and partial progress printed out. This is useful for tracking progress, but slows things down substantially.""" training_data = list(training_data) n = len(training_data) if test_data: test_data = list(test_data) n_test = len(test_data) for j in range(epochs): random.shuffle(training_data) mini_batches = [training_data[k:k+mini_batch_size] for k in range(0, n, mini_batch_size)] for batch in mini_batches: self.update_mini_batch(batch, eta) if test_data: print ("Epoch {}: {} / {}".format(j, self.evaluate(test_data), n_test)) else: print ("Epoch {} complete".format(j)) def backprop(self, x, y): """Return a tuple ``(nabla_b, nabla_w)`` representing the gradient for the cost function C_x. ``nabla_b`` and ``nabla_w`` are layer-by-layer lists of numpy arrays, similar to ``self.biases`` and ``self.weights``.""" nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] # feedforward activation = x activations = [x] # list to store all the activations, layer by layer zs = [] # list to store all the z vectors, layer by layer for b, w in zip(self.biases, self.weights): z = np.dot(w, activation)+b zs.append(z) activation = sigmoid(z) activations.append(activation) # backward pass delta = self.cost_derivative(activations[-1], y) * \ sigmoid_prime(zs[-1]) nabla_b[-1] = delta nabla_w[-1] = np.dot(delta, activations[-2].transpose()) # Note that the variable l in the loop below is used a little # differently to the notation in Chapter 2 of the book. Here, # l = 1 means the last layer of neurons, l = 2 is the # second-last layer, and so on. 
It's a renumbering of the # scheme in the book, used here to take advantage of the fact # that Python can use negative indices in lists. for l in range(2, self.num_layers): z = zs[-l] sp = sigmoid_prime(z) delta = np.dot(self.weights[-l+1].transpose(), delta) * sp nabla_b[-l] = delta nabla_w[-l] = np.dot(delta, activations[-l-1].transpose()) return (nabla_b, nabla_w) def evaluate(self, test_data): """Return the number of test inputs for which the neural network outputs the correct result. Note that the neural network's output is assumed to be the index of whichever neuron in the final layer has the highest activation.""" test_results = [(np.argmax(self.feedForward(x)), y) for (x, y) in test_data] return sum(int(x == y) for (x, y) in test_results) def cost_derivative(self, output_activations, y): """Return the vector of partial derivatives \partial C_x / \partial a for the output activations.""" return (output_activations-y) def sigmoid(z): return 1.0/(1.0-np.exp(z)) def sigmoid_prime(z): return sigmoid(z)*(1-sigmoid(z)) ``` Output in Shell ``` net.SGD(training_data, 10, 10, 3.0, test_data=test_data) Epoch 0: 1010 / 10000 Epoch 1: 1010 / 10000 Epoch 2: 1010 / 10000 Epoch 3: 1010 / 10000 Epoch 4: 1010 / 10000 Epoch 5: 1010 / 10000 Epoch 6: 1010 / 10000 Epoch 7: 1010 / 10000 Epoch 8: 1010 / 10000 Epoch 9: 1010 / 10000 ```
Neural Network is always giving same result
CC BY-SA 4.0
null
2023-05-19T01:05:16.470
2023-05-19T09:32:53.753
2023-05-19T09:32:53.753
60613
388301
[ "neural-networks", "python", "error" ]
616285
2
null
616279
5
null
This is described by the negative hypergeometric distribution, but you can also derive this by hand. Example: - The probability you explode on turn 1 is $P(t_1)= 1/22$. This gives you 0 points. - To explode on $t_2$, you have to first not explode at $t_1$. The probability of that is $P(!t_1)=21/22$ (the ! sign means "not"). The probability you explode on turn 2, is $P(t_2) = P(t_2 | !t_1)P(!t_1)$, which is the probability you explode on turn 2 given (the | sign) you didn't explode on turn 1, multiplied by the probability you didn't explode at turn 1. Overall, this is $P(t_2) = P(t_2 | !t_1)P(!t_1) = \frac{1}{21}\frac{21}{22}=1/22$. This gives you 1 point. - The equations get a bit longer at this stage (exploding on $t_3$, $t_4$ etc.), but basically - the probability that you explode on any stage is 1/22 - or $1/N$ assuming that there is only one failure case (exploding yourself) out of $N$ things to explode. Overall, the probability of getting $x$ points (where $x$ ranges from the minimum, 0 points, to the maximum, 21 points), is always $1/22 = 0.0454545\cdots$.
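If you want to convince yourself numerically, here is a minimal Python simulation of the game (a hypothetical sketch; the 22 items and the single "exploding" item are taken from the setup described above). The empirical frequency of each possible point total should come out near 1/22.
```
import numpy as np

rng = np.random.default_rng(42)
n_items, n_sims = 22, 200_000

# Label the single exploding item as 0 and shuffle the draw order each game;
# the number of points collected is the number of safe items drawn before it.
points = np.empty(n_sims, dtype=int)
for i in range(n_sims):
    order = rng.permutation(n_items)
    points[i] = np.argmax(order == 0)   # draw index of the exploding item

print(np.bincount(points, minlength=n_items) / n_sims)  # each entry is roughly 1/22 ~ 0.045
```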
null
CC BY-SA 4.0
null
2023-05-19T01:12:58.047
2023-05-19T01:12:58.047
null
null
369002
null
616286
2
null
616187
1
null
It appears the fixed effects fully absorb your policy variable. This may boil down to a simple coding error. > $EXPANSION_{ist}$ is a binary treatment variable that equals 1 for states that adopted Medicaid expansion and 0 for states that did not Not quite. I have two concerns. First, the subscripts suggest you have a policy that varies at the level of the individual. But, in fact, this is a state level policy change, which exhibits variation across states and time. Thus, it should be $s$- and $t$-subscripted (e.g., $EXPANSION_{st}$). In short, just drop the $i$ and you're good to go. Second, this variable is not 1 for states that adopt, 0 otherwise. If I am taking this literally, then you instantiated a policy dummy with no variation over time; software would definitely drop it in the presence of the state fixed effects. To code it up properly, the variable should "switch on" (i.e., change from 0 to 1) when a treated state $s$ adopts in year $t$, 0 otherwise. In other words, it should only change from 0 to 1 in those state-year combinations when the policy is active, and strictly 0 otherwise. For example, say California adopts in 2015. For this state, it is 0 in the pre-period, then changes from 0 to 1 in 2015 and stays equal to 1 until 2019. Similarly, if Colorado adopt in 2018, it equals 0 from 2011 through 2017, then changes from 0 to 1 in 2018 and stays equal to 1. For the non-adopters, they consistently equal 0; we do not know when any non-expanding state would have adopted. Note how we have variation across states and time, whereas your earlier description suggests the variable will only equal 1 for "treated" states (i.e., equal to 1 in all time periods). The language matters, as I suspect this is what went awry. As a final word, I highly recommend comparing TWFE with methods proposed by, e.g., [Callaway and Sant'Anna 2021](https://www.sciencedirect.com/science/article/abs/pii/S0304407620303948). > I suspect that could be due to the fact that my data is repeated cross-sections and maybe by aggregating the variable by calculating mean of each variable at state and year might help. This is not necessary, though nothing is stopping you from aggregating your data up to a higher geographic level. By aggregating the data up to the state level you create a "pseudo-panel" of sorts, but this will not absolve you of the collinearity issue. Note, DiD methods may be used with repeated cross-sections of individuals, so long as you have a state level policy. In short, keep the repeated cross-sections of individuals and focus on the coding of the policy variable.
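To make the coding concrete, here is a minimal sketch (in Python/pandas, with made-up state names, years, and column names; adapt it to whatever software you are using) of how the policy dummy should switch on in the adoption year and stay on, while never-adopting states remain 0 throughout.
```
import pandas as pd

# Hypothetical repeated cross-section: one row per respondent, tagged with state s and year t
df = pd.DataFrame({
    "state": ["CA", "CA", "CA", "CO", "CO", "TX", "TX"],
    "year":  [2014, 2015, 2016, 2017, 2018, 2014, 2018],
})

# Hypothetical adoption years; never-adopting states are simply missing from this map
adoption_year = {"CA": 2015, "CO": 2018}

# EXPANSION_st: 0 before adoption, 1 from the adoption year onward, always 0 for never-adopters
df["expansion"] = [
    int(t >= adoption_year.get(s, float("inf"))) for s, t in zip(df["state"], df["year"])
]
print(df)
```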
null
CC BY-SA 4.0
null
2023-05-19T02:39:53.937
2023-05-19T16:02:09.067
2023-05-19T16:02:09.067
246835
246835
null
616287
2
null
615114
1
null
The attrition issues you are facing here are common issues in trials involving a treatment imposed by the researchers. In many contexts this is dealt with by considering causal effects on the basis of [intention-to-treat](https://en.wikipedia.org/wiki/Intention-to-treat_analysis) rather than on the basis of progression into the substantive treatment. This is a trade-off, insofar as you ask a less valuable question (i.e., what is the causal effect of the initial notification), but you can make a robust inference about this effect that does not hinge on assumptions that progression to the substantive treatment is independent of confounding effects. Having said this, your big problem here is not attrition --- it is the fact that you do not appear to have a proper control group. If you want to proceed using an RCT with intention-to-treat analysis, you ought to have a larger group that is divided randomly into a control and treatment group, and you would only send the notification to the treatment group. My recommendation would be to rethink the underlying experimental design, and consider adopting an alternative design where you have a control group composed of people who do not receive the notification for treatment.
null
CC BY-SA 4.0
null
2023-05-19T04:19:36.043
2023-05-19T04:19:36.043
null
null
173082
null
616288
1
616326
null
2
32
[](https://i.stack.imgur.com/wA83Z.png) In the Definition 3.4.1 of Pearl's causal inference book (Primer), the second rule for the front door criterion is "There is no backdoor path from $X$ to $Z$". But from my understanding, there EXISTS one backdoor path from $X$ to $Z$: $X \leftarrow U \rightarrow Y \leftarrow Z$. And this backdoor path is blocked by the collider $Y$. Can anyone help me understand this rule? Thank you.
How to understand the second rule of front door criterion?
CC BY-SA 4.0
null
2023-05-19T04:43:14.120
2023-05-19T13:29:33.307
2023-05-19T05:09:04.433
44269
382269
[ "causality", "graphical-model", "causal-diagram" ]
616289
2
null
607006
0
null
Here are some links showing the use of survival model with the `lung` dataset from the `survival` package: [Correctly simulating an extreme value distribution for survival analysis?](https://stats.stackexchange.com/questions/616068/correctly-simulating-an-extreme-value-distribution-for-survival-analysis) and [How to appropriately model the uncertainty of the exponential distribution model when running survival simulations?](https://stats.stackexchange.com/questions/615657/how-to-appropriately-model-the-uncertainty-of-the-exponential-distribution-model)
null
CC BY-SA 4.0
null
2023-05-19T05:01:18.207
2023-05-19T05:01:18.207
null
null
378347
null
616290
1
null
null
0
40
I am interested in learning about how to calculate the Standard Deviation of the Weighted Mean. From Wikipedia ([https://en.wikipedia.org/wiki/Pooled_variance](https://en.wikipedia.org/wiki/Pooled_variance), [https://wikimedia.org/api/rest_v1/media/math/render/svg/0224c1c53591c619794682f2bc3560dc86530e2b](https://wikimedia.org/api/rest_v1/media/math/render/svg/0224c1c53591c619794682f2bc3560dc86530e2b)), I obtained the formula for the Weighted Mean: $$\mu_x = \frac{1}{\sum_i N_{x_i}} \left(\sum_i N_{x_i} \mu_{x_i}\right)$$ $$\sigma_x = \sqrt{\left(\frac{1}{\sum_i N_{x_i}} - 1\right) \sum_i \left[(N_{x_i} - 1)\sigma^2_{x_i} + N_{x_i}\mu^2_{x_i}\right] - \left[\sum_i N_{x_i}\right]\mu^2_x}$$ Since the Variance of the Mean is equal to the Variance of X divided by sample size, I think you should be able to adapt this formula to calculate the Standard Deviation of the Weighted Mean like this: $$ \sigma_{\bar{x}} = \frac{\sigma_x}{\sum_i N_{x_i}} = \frac{\sqrt{\left(\frac{1}{\sum_i N_{x_i}} - 1\right) \sum_i \left[(N_{x_i} - 1)\sigma^2_{x_i} + N_{x_i}\mu^2_{x_i}\right] - \left[\sum_i N_{x_i}\right]\mu^2_x}}{\sum_i N_{x_i}}$$ Now, I am trying to confirm if what I have done is correct. Suppose I have this dataset in R which contains the sample mean, variance of the sample mean and sample size for a group of cities: ``` df = structure(list(N_xi = c(4120L, 1625L, 4885L, 4095L, 3560L, 3940L, 3390L, 2245L, 4615L, 4315L, 3330L, 5085L, 6395L, 2515L, 2595L, 3400L, 3260L, 4560L, 2320L, 2755L), mu_xi = c(71100L, 109000L, 74600L, 133200L, 128000L, 91300L, 102500L, 169000L, 96400L, 103600L, 117600L, 74900L, 88500L, 97800L, 137000L, 103400L, 94200L, 83800L, 97900L, 109000L), sigma2_xi = c(17340.37563, 34357.14686, 28331.53559, 65836.85976, 45266.8106, 46644.72409, 34488.63432, 142796.9963, 105193.3962, 69626.62771, 71367.20046, 32616.67612, 79785.14219, 39352.22193, 86820.86048, 191484.2953, 34418.06533, 18949.11971, 38041.609, 65462.82934)), class = "data.frame", row.names = c(NA, -20L)) ``` Part 1: I can calculate the Standard Deviation using the adapted formula from above: ``` mu_x <- sum(sapply(1:nrow(df), function(i) df$N_xi[i] * df$mu_xi[i])) / sum(df$N_xi) factor1 = (df$N_xi - 1) * (df$sigma2_xi) + df$N_xi*df$mu_xi^2 factor2 = sum(df$N_xi) * mu_x^2 part1 = sqrt((sum(factor1) - factor2) / (sum(df$N_xi) - 1)) part2 = sqrt(sum(df$N_xi)) final = part1/part2 [1] 81.47042 ``` Part 2: To me this number seemed a bit low - so I decided to try and use bootstrapping to estimate the Standard Deviation: ``` library(dplyr) my_list = list() for(i in 1:100000) { sampled_df <- df %>% sample_n(nrow(df), replace = TRUE) mu_x_i <- sum(sapply(1:nrow(sampled_df), function(i) sampled_df$N_xi[i] * sampled_df$mu_xi[i])) / sum(sampled_df$N_xi) my_list[[i]] = mu_x_i print(i) } f = do.call(rbind, my_list) #quantile(f, c(0.05, 0.95)) sqrt(var(f)) [1] 4897.309 ``` The estimate from Part 2 seems a lot more reasonable compared to the estimate from Part 1. I am now trying to understand why there is such a large difference between the estimate from Part 1 compared to Part 2. The only thing that comes to mind is that perhaps I incorrectly derived the formula for the Variance of the Weighted Mean - and in fact, I need to divide by the "number of total cities" and not the "total number of observations in each city", i.e. 
``` mu_x <- sum(sapply(1:nrow(df), function(i) df$N_xi[i] * df$mu_xi[i])) / sum(df$N_xi) factor1 = (df$N_xi - 1) * (df$sigma2_xi) + df$N_xi*df$mu_xi^2 factor2 = sum(df$N_xi) * mu_x^2 part1 = sqrt((sum(factor1) - factor2) / (sum(df$N_xi) - 1)) part2 = sqrt(nrow(df)) final = part1/part2 [1] 4922.223 ``` Now the estimates from both approaches seem to be closer to one another - however, I am still not sure if what I am doing is correct. Can someone please help me understand this? Thanks!
Variance of Weighted Mean: Formula vs Bootstrapping
CC BY-SA 4.0
null
2023-05-19T05:38:08.050
2023-05-19T06:39:01.127
2023-05-19T06:39:01.127
77179
77179
[ "r", "variance", "mean" ]
616291
2
null
616272
3
null
Consider the GEV distribution for the standardised value $s = (x-\mu)/\sigma$. This distribution has CDF given by: $$F(s|\xi) = \begin{cases} \exp(-\exp(-s)) & & & \text{for } \xi=0, \\[6pt] \exp(-(1+\xi s)^{-1/\xi}) & & & \text{for } s > -1/\xi \ \text{ and } \ \xi \neq 0, \\[6pt] 0 & & & \text{for } s \leqslant -1/\xi \ \text{ and } \ \xi > 0, \\[6pt] 1 & & & \text{for } s \leqslant -1/\xi \ \text{ and } \ \xi < 0. \\[6pt] \end{cases}$$ Inverting this function gives the corresponding quantile function for the standardised value: $$Q(p|\xi) = \begin{cases} -\log(-\log(p)) & & & \text{for } \xi=0, \\[6pt] \frac{1}{\xi} (|\log(p)|^{-\xi}-1) & & & \text{for } \xi \neq 0. \\[6pt] \end{cases}$$ Substituting the original parameters then gives the quantile function for the non-standardised value: $$Q(p|\mu, \sigma, \xi) = \begin{cases} \mu - \sigma \log(-\log(p)) & & & \text{for } \xi=0, \\[6pt] \mu + \frac{\sigma}{\xi} (|\log(p)|^{-\xi}-1) & & & \text{for } \xi \neq 0. \\[6pt] \end{cases}$$ The mathematical form in your code does not appear to correspond to this function, so I suspect that this is the source of the problems you are having.
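As a quick numerical sanity check of the quantile function above, here is a small Python sketch (the parameter values are arbitrary) that verifies $F(Q(p)) = p$ for a few values of $\xi$. If you compare against `scipy.stats.genextreme`, keep in mind that its shape parameter uses the opposite sign convention, i.e. $c = -\xi$.
```
import numpy as np

def gev_quantile(p, mu, sigma, xi):
    # Quantile function derived above; the xi = 0 case is the Gumbel limit
    if np.isclose(xi, 0.0):
        return mu - sigma * np.log(-np.log(p))
    return mu + sigma / xi * ((-np.log(p)) ** (-xi) - 1.0)

def gev_cdf(x, mu, sigma, xi):
    s = (x - mu) / sigma
    if np.isclose(xi, 0.0):
        return np.exp(-np.exp(-s))
    return np.exp(-np.maximum(1.0 + xi * s, 0.0) ** (-1.0 / xi))

p = np.linspace(0.01, 0.99, 99)
for xi in (-0.3, 0.0, 0.4):
    q = gev_quantile(p, mu=10.0, sigma=2.0, xi=xi)
    assert np.allclose(gev_cdf(q, 10.0, 2.0, xi), p)   # round trip recovers p
```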
null
CC BY-SA 4.0
null
2023-05-19T05:40:12.193
2023-05-19T06:27:05.660
2023-05-19T06:27:05.660
173082
173082
null
616292
1
null
null
2
27
Does there exist a point process $X$ in the plane with the following two properties? - $X$ is hard core. Discs of radius $h$ can be centered on the points in $X$ without overlapping. - $X$ is encountered as Poisson arrivals. Propel a disc of radius $r \leq h$ at constant velocity along a randomly selected line across the plane. For any time that the disc does not contain a point from $X$, let $T$ be the time to the disc's next encounter with a point from $X$. Then $T$ has an exponential distribution. Property (2) on its own is satisfied by a Poisson point process. Indeed in this case the caveat of "For any time that the disc does not contain a point from $X$" can be weakened to "At any time". My intuition is that a suitably specialized Strauss process might do the job? But I can't see how to establish this, or discussion in literature. Many thanks.
A hard core spatial point process that a vehicle encounters as Poisson arrivals?
CC BY-SA 4.0
null
2023-05-19T05:46:17.347
2023-05-29T02:13:08.580
2023-05-29T02:13:08.580
388228
388228
[ "stochastic-processes", "spatial", "poisson-process", "point-process" ]
616293
1
null
null
1
37
Could you please help me understand whether this interpretation of OR 1.318 and 95% CI 0.72-2.42 is correct? In particular, I want to understand how we determine the positive or negative direction: do we focus on 1 and whether it is included in the CI or not? The interpretation: the odds of depression among personnel who worked more than 8 hours per day are 1.318 times the odds of those who worked 8 hours per day or less, with the observed effect ranging from moderately negative to strongly positive. Thanks in advance!
Interpretation of 95 % CI
CC BY-SA 4.0
null
2023-05-19T06:15:51.380
2023-05-19T06:15:51.380
null
null
388315
[ "interpretation" ]
616294
2
null
616272
3
null
Probably the difference is in the sign of $c$. This is explained in the notes of the scipy `genextreme` function (source on [GitHub](https://github.com/scipy/scipy/blob/main/scipy/stats/_continuous_distns.py)). ``` Notes ----- For :math:`c=0`, `genextreme` is equal to `gumbel_r` with probability density function .. math:: f(x) = \exp(-\exp(-x)) \exp(-x), where :math:`-\infty < x < \infty`. For :math:`c \ne 0`, the probability density function for `genextreme` is: .. math:: f(x, c) = \exp(-(1-c x)^{1/c}) (1-c x)^{1/c-1}, where :math:`-\infty < x \le 1/c` if :math:`c > 0` and :math:`1/c \le x < \infty` if :math:`c < 0`. Note that several sources and software packages use the opposite convention for the sign of the shape parameter :math:`c`. `genextreme` takes ``c`` as a shape parameter for :math:`c`. ```
null
CC BY-SA 4.0
null
2023-05-19T06:28:07.210
2023-05-19T06:28:07.210
null
null
164061
null
616295
2
null
616267
0
null
Using the data you provided: look at the model summary first: ``` summary(model.01) Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest'] Formula: SNR ~ RT * MOVIES + (1 | ID) Data: res_table REML criterion at convergence: 636 Scaled residuals: Min 1Q Median 3Q Max -1.5020 -0.7392 -0.1719 0.8393 1.7881 Random effects: Groups Name Variance Std.Dev. ID (Intercept) 13.03 3.610 Residual 62.24 7.889 Number of obs: 90, groups: ID, 30 Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 38.51328 6.96568 80.81879 5.529 3.84e-07 *** RT 0.15065 0.09162 80.03000 1.644 0.10404 MOVIESB 29.10257 10.32089 77.10354 2.820 0.00611 ** MOVIESC 16.89379 10.63170 80.86384 1.589 0.11596 RT:MOVIESB -0.38552 0.13743 77.85638 -2.805 0.00635 ** RT:MOVIESC -0.24316 0.13840 81.47951 -1.757 0.08268 . --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 ``` The summary suggests that the relation between RT and SNR is different for movie A vs. movie B at least. However you probably want to compare the slope of RT for each different movie. You can do this via emmeans: ``` library(emmeans) em<-emtrends(model.01, "MOVIES", var="RT") em MOVIES RT.trend SE df lower.CL upper.CL #RT.trend gives you the RT coefficient A 0.1507 0.0934 80.2 -0.0352 0.3365 #separately for each movie B -0.2349 0.1058 80.3 -0.4454 -0.0244 C -0.0925 0.1047 80.3 -0.3008 0.1158 #then you can compare the trends by pairs(em) contrast estimate SE df t.ratio p.value A - B 0.386 0.140 78.2 2.756 0.0197 A - C 0.243 0.141 81.6 1.722 0.2033 B - C -0.142 0.151 82.5 -0.945 0.6135 ``` This suggests that 1) RT is only predicting SNR significantly for movie type B, and 2) RT effect on SNR for movie A is statistically significantly different from RT effect on SNR for movie B (at p < .05), however there is no significant difference in RT effect between A and C or B and C.
null
CC BY-SA 4.0
null
2023-05-19T07:20:49.397
2023-05-19T08:19:59.123
2023-05-19T08:19:59.123
357710
357710
null
616298
1
null
null
1
23
I am new to using mixed effect models and all the information online has me quite confused. I hope someone can help me. I have data of patients who did the same test at three different timepoints. I want to use a mixed effect model (lme4 package) to look at effect of time on change in score. I added an example data set below. I have 3 variables. (1) The person who did the test (2) the change in score (3) if the change in score was between timepoints 1 and 2 (t12) or 1 and 3 (t13). I fitted the mixed effect model below with 'Timepoint' as a fixed effect and 'Patient' as a random effect (hoping this is correct). ``` model <- lmer(Change_score ~ Timepoint + (1|Patient), data=Example_data) ``` If im correct I now have the model but im struggling to plot the standardized residuals and do the Levene's test to see if the data has homoscedasticity. Example dataset: ``` structure(list(Patient = c(1, 1, 2, 2, 3, 3, 4, 4, 5, 5), Timepoint = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), levels = c("t12", "t13"), class = "factor"), Change_score = c(10, 0, 7, 15, -3, 13, 9, -4, 2, NA)), row.names = c(NA, -10L), class = c("tbl_df", "tbl", "data.frame")) ``` To plot the standardized residuals I found some information online suggesting I should do use the code below. This does give me a plot but I don't understand at all what im seeing on the plot and I feel like this is not what im looking for. I want to visualise the residuals from the model to asses homoscedasticity. ``` plot(model, resid(., type="pearson") ~ fitted(.), abline=0) ``` I tried doing the Levene's test (see code below) but due to missing values I get the error "Error in model.frame.default(form) : variable lengths differ (found for 'Example_data$Timepoint')" and I can't find a way to do the analysis with missing data. ``` Example_data$Timepoint <- as.factor(Example_data$Timepoint) leveneTest(residuals(model) ~ Example_data$Timepoint) ``` I hope someone can help me with some sugestion on how to visualise the residuals and test for homoscedasticity in my data.
How to check for homoscedasticity in a mixed effect model with longitudinal data in r
CC BY-SA 4.0
null
2023-05-19T07:35:25.977
2023-05-19T07:35:25.977
null
null
388321
[ "r", "mixed-model", "lme4-nlme", "heteroscedasticity", "levenes-test" ]
616299
1
null
null
0
34
I'm writing code to calculate if the correlation between two random variables is significant. I've recently come across Fisher's z transformation as a method for finding significance. But from reading around: - SAS on Fisher's z - Wikipedia on Fisher's z it seems this transform only applies to normal variables. A lot of the variables I'm working with aren't normal. Is there a corresponding transform for non-normal random variables? ## Background The variables I'm dealing with - Most of my variables have some amount of skew and so are not perfectly normally distributed. - My dataset also has binary indicator variables, with Bernoulli distributions. The excerpt from Wikipedia I'm concerned about > If $(X, Y)$ has a bivariate normal distribution with correlation ρ and the pairs $(X_i, Y_i)$ are independent and identically distributed, then $z$ is approximately normally distributed with mean $${1 \over 2}\ln \left({{1+\rho } \over {1-\rho }}\right),$$ and standard error $${1 \over {\sqrt {N-3}}},$$
Is there a z transformation for the correlation of non-normal distributions?
CC BY-SA 4.0
null
2023-05-19T08:23:45.867
2023-05-19T09:27:08.320
2023-05-19T09:27:08.320
363857
363857
[ "correlation", "sampling", "z-statistic" ]
616300
2
null
616173
2
null
Autocorrelation as a function: is that function random or deterministic? It is from the point of view of the autocorrelation function and its nature that one can shed some light on the idea of an autocorrelated time series. When I first read the notion raised in the question, I thought, maybe like the OP, that talking about a time series being autocorrelated was awkward at best, both mathematically and semantically. Autocorrelation, as the other answers' authors have reminded us, refers to the correlation of the series with a time- (or sample-) shifted copy of itself. How could two identical signals not be correlated? In a long-gone (and missed) past I worked in physical optics, where one measures the duration of ultrafast laser pulses using an autocorrelation operation; it was only possible because the laser pulses were exactly correlated with one another (they were copies of themselves). So the idea that a time series (or more generally a signal or a function) is correlated with itself sounded obvious: there is no difference between it and itself. But as hinted, that ignores the fact that autocorrelation should more accurately be referred to as an autocorrelation function. If that function is predictable (or even deterministic), then the signal can be said to be autocorrelated. The article actually provides a good definition: > When a time series is autocorrelated, this means that the current value of the series parameter is dependent on preceding values, and can be predicted (at least in part) on the basis of knowledge of those values. I'd like to illustrate the idea with a few plots of time series and their autocorrelation functions. - A pure sine wave [](https://i.stack.imgur.com/FcOYf.png) - the same sine wave with added amplitude Gaussian noise [](https://i.stack.imgur.com/Ff78c.png) - the same pure sine wave with the same added amplitude Gaussian noise and a Gaussian phase noise. [](https://i.stack.imgur.com/RzcFY.png) As expected, the autocorrelation function has its peak when the shift (lag) is 0, for all versions of the sine wave (pure and noisy). For the pure sine wave, the autocorrelation has a cosine waveform. The interesting observation concerns the autocorrelation functions of the noisy sine waves: while these plots are also noisy, they still show an attenuated cosine waveform, which could be predicted (after proper filtering and/or approximation); this shows that samples in the time series at a time t have a certain degree of correlation with samples at earlier times. Following the definition of an autocorrelated time series proposed in the article, one can say the 3 sine wave time series shown above are all autocorrelated. For a final visual illustration, here is the autocorrelation of the amplitude noise that was added to the pure sine wave. The autocorrelation is very low; this time series is not autocorrelated. [](https://i.stack.imgur.com/W5c8m.png)
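For readers who want to reproduce the gist of these plots numerically, here is a minimal Python sketch (the signal length, noise level, and lags are arbitrary choices, not the ones used for the figures above). The noisy sine keeps a recognizable, attenuated cosine-shaped autocorrelation, while the pure noise has an autocorrelation close to zero away from lag 0.
```
import numpy as np

def acf(x):
    """Autocorrelation function, normalized to 1 at lag 0."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[x.size - 1:]
    return full / full[0]

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)          # 10 periods, 200 samples per period
pure = np.sin(t)
noise = rng.normal(scale=0.8, size=t.size)
noisy = pure + noise

for name, series in [("pure sine", pure), ("noisy sine", noisy), ("noise only", noise)]:
    print(name, np.round(acf(series)[[0, 100, 200]], 2))  # lag 0, half period, full period
```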
null
CC BY-SA 4.0
null
2023-05-19T08:46:22.270
2023-05-19T08:51:44.953
2023-05-19T08:51:44.953
205027
205027
null
616301
1
null
null
1
47
Let $P$ be a [finitely additive probability measure](https://link.springer.com/chapter/10.1007/978-0-387-21659-1_5). I learned from a friend that: $[P(X>Y)=1 \implies \mathbb E_P[X]>\mathbb E_P[Y]]\iff$ $P$ is countably additive. This seems to be a very useful result. Is it covered in textbooks or in the literature?
First order stochastic ordering implies countably additive probability measure?
CC BY-SA 4.0
null
2023-05-19T08:49:15.960
2023-05-20T07:08:24.507
2023-05-20T07:08:24.507
270937
270937
[ "probability", "references", "measure-theory", "stochastic-ordering" ]
616302
2
null
328798
1
null
Ensemble diversity is a measure of variation in the ensemble members, with respect to changes in their training data, or other random variables involved in their construction. It is arguable that it can only be referred to as "diversity" when the measure is independent of the target variable y. The notion of correlation being high/low/independent is the special case that applies only for the squared loss function. In the more general case of other losses, it turns out the formulation of diversity is only possible [1] when the ensemble combination is the so-called "centroid" combiner rule, derived directly from the loss function. In this case, the formalisation of diversity is exactly the same as in the bias-variance decomposition, but the diversity is an extra dimension created by their agreements/disagreements. It turns out that for 0-1 loss (the loss of most interest with ensemble classifiers) such a decomposition is provably not possible. To quote the paper referenced below: > "Overall, we argue that diversity is best understood as a measure of model fit, in precisely the same sense as bias and variance, but accounting for statistical dependencies between ensemble members. With single models, we have a bias/variance trade-off. With an ensemble we have a bias/variance/diversity trade-offβ€”varying both with individual model capacity, and similarities between model predictions." [1] Wood et al, 2023. "A Unified Theory of Diversity in Ensemble Learning arXiv:2301.03962v1, [https://arxiv.org/pdf/2301.03962.pdf](https://arxiv.org/pdf/2301.03962.pdf) Full disclosure: I'm an author.
null
CC BY-SA 4.0
null
2023-05-19T08:52:15.863
2023-05-19T08:59:13.750
2023-05-19T08:59:13.750
388324
388324
null
616303
2
null
614877
0
null
Below is how I take EdM's advice per his comments above, where I work with the variance-covariance matrix using the standard `survreg()` format and transform to the original scale using the `weibCurve()` function per the below code. See for complete explanation: [How to generate multiple forecast simulation paths for survival analysis?](https://stats.stackexchange.com/questions/614198/how-to-generate-multiple-forecast-simulation-paths-for-survival-analysis) Code: ``` library(survival) simNbr <- 10 # converts from survreg() log-linear scale to standard parameterization used by dweibull() weibCurve <- function(time, survregCoefs) { exp(-(time / exp(survregCoefs[1]))^exp(-survregCoefs[2])) } # Fit the Weibull model to the dataset fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "weibull") # Generate bootstrap samples and fit Weibull models to each sample bootstrap_fits <- lapply( 1:simNbr, function(i) { sample_data <- lung[sample(nrow(lung), replace = TRUE), ] fit <- survreg(Surv(time, status) ~ 1, data = sample_data, dist = "weibull") return(fit) } ) # Generate random Weibull parameter estimates for simulations simParams <- sapply(bootstrap_fits, function(fit) { newCoef <- fit$icoef params <- MASS::mvrnorm(1, mu = newCoef, Sigma = vcov(fit)) return(params) }) # Simulate uncertainty in the parameters of the Weibull distribution and plot results simPaths <- sapply( 1:simNbr, function(i) { params <- simParams[, i] return(weibCurve(time, params)) } ) # Create the plotSims dataframe and label column headers plotSims <- data.frame(time = time, simPaths) colnames(plotSims)[-1] <- paste0("surv", 1:simNbr) ```
null
CC BY-SA 4.0
null
2023-05-19T08:56:27.207
2023-05-19T08:56:27.207
null
null
378347
null
616304
1
616308
null
0
21
I need help statistically testing whether tosses of 7 independent coins are evenly distributed between both sides. Here is the table: ``` Coin_nb Heads Tails Nb_toss Coin1 86 73 159 Coin2 42 63 105 Coin3 52 62 114 Coin4 53 55 108 Coin5 73 48 121 Coin6 60 77 137 Coin7 66 43 108 ``` How could I test globally whether the different coins land evenly on both sides?
Test if different coins are distributed evenly on both sides in R
CC BY-SA 4.0
null
2023-05-19T09:08:09.547
2023-05-19T10:03:45.480
null
null
197361
[ "r", "statistical-significance", "bernoulli-distribution" ]
616305
1
616310
null
1
25
I have the intuition that applying a deterministic function to a pair of random variables cannot increase their mutual information, because the function can only decrease each of their entropies. I would like to know if this is true and how to prove it. So far I've been trying to expand the definition of mutual information to reach the inequality, with no success.
Mutual information I(X, Y) >= I(f(X), f(Y)) for deterministic f?
CC BY-SA 4.0
null
2023-05-19T09:45:18.547
2023-05-19T10:23:54.143
2023-05-19T10:23:54.143
204068
155110
[ "markov-process", "information-theory", "mutual-information" ]
616306
2
null
616279
4
null
Let $k$ be the number of points You collect. To collect exactly $k$ points You must survive $k$ steps and explode on step $k+1$, so it's a sequence of events: `NN...ND` where N is 'not dead', D means 'dead' and 'N' is present $k$ times in the sequence. At each step the probability of 'N' gets smaller as the number of items decreases; the draws are made without replacement, so we multiply conditional probabilities. At the first step there's a $\frac{21}{22}$ chance to get N and a $\frac{1}{22}$ chance to get D. At the second step, there's a $\frac{21}{22} \cdot \frac{20}{21}$ chance of getting the NN sequence and, given survival of the first step, a $\frac{1}{21}$ chance to get D, and so on. The probability of surviving $k$ steps and then exploding is $\frac{21}{22} \cdot \frac{20}{21} \cdot \dots \cdot \frac{22 - k}{22 - k + 1} \cdot \frac{1}{22 - k}$. As You may notice, the neighbouring terms cancel their numerators and denominators. Finally, You'll get the formula $\frac{22 - k}{22} \cdot \frac{1}{22 - k} = \frac{1}{22}$. So in general, the probability of getting $k$ points is $\frac{n - k}{n} \cdot \frac{1}{n - k} = \frac{1}{n}$, where $n$ is the number of items at the beginning. The probability is constantly equal to $\frac{1}{22}$. $k$, seen as a random variable, is uniformly distributed on the discrete scale 0, 1, ..., 21.
null
CC BY-SA 4.0
null
2023-05-19T09:49:34.583
2023-05-19T09:49:34.583
null
null
311545
null
616308
2
null
616304
1
null
Run `prop.test` for each row of Your data table: ``` lapply(seq_len(nrow(coins_tbl)), function(i) { prop.test(x = as.matrix(coins_tbl[i, c(2, 3)]), n = coins_tbl$Nb_toss[i], p = 0.5) }) ``` As a result You'll get a list of proportion confidence intervals and `p-values` like this: ``` [[1]] 1-sample proportions test with continuity correction data: as.matrix(coins_tbl[i, c(2, 3)]), null probability 0.5 X-squared = 0.90566, df = 1, p-value = 0.3413 alternative hypothesis: true p is not equal to 0.5 95 percent confidence interval: 0.4602640 0.6194908 sample estimates: p 0.5408805 ```
null
CC BY-SA 4.0
null
2023-05-19T10:03:45.480
2023-05-19T10:03:45.480
null
null
311545
null
616309
2
null
225380
0
null
As correctly said in the previous answer, overfitting occurs when your model is not learning but memorizing data. This leads to poor performance on the test set. If the training set is small, the risk of overfitting is very high. The purpose of data augmentation is to add variability to your dataset. I am more familiar with images, but the concept is the same. At each epoch you apply "some transformation" to your input data to prevent memorization. In the case of images, you can apply rotations or add noise. The informative content is not changed, but the model will not "see" the same image; it cannot simply memorize, and you are aiding generalization. I will use a metaphor. During school, a common strategy is to memorize formulas and other material to pass an exam. But if you do not learn the logic behind an equation (how to derive one term from another), you will probably fail the exam despite having memorized everything. Data augmentation is the equivalent of doing exercises on the same topic while changing the point of view each time, so that you learn the general approach instead of memorizing the information.
null
CC BY-SA 4.0
null
2023-05-19T10:09:46.867
2023-05-19T10:09:46.867
null
null
379875
null
616310
2
null
616305
1
null
Yes, your intuition is correct. See [this](http://people.ece.cornell.edu/zivg/ECE_5630_Lectures10.pdf) source, Property 4. It follows from the Markov chain property, i.e. If we have a markov chain of the following form: $$X\rightarrow Y\rightarrow Z$$ then, $$I(X; Z)\leq I(X;Y)$$ This can be specialized to $Z=f(Y)$. The same property can be applied for the other random variable, $X$, as well, and you get $$I(X;Y)\geq I(f(X);g(Y))$$ in general, and letting $f=g$ yields what you ask.
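If a numerical sanity check helps, here is a small Python sketch (the alphabet size, noise level, and the particular coarsening function are arbitrary choices) estimating both mutual informations from simulated discrete data; the estimate for the transformed pair should never exceed the one for the original pair, up to simulation noise.
```
import numpy as np

def mutual_info(joint):
    """Mutual information (in nats) from a joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px * py)[nz])))

def joint_table(a, b, k):
    return np.histogram2d(a, b, bins=[np.arange(k + 1), np.arange(k + 1)])[0] / a.size

rng = np.random.default_rng(0)
n, k = 500_000, 4
x = rng.integers(0, k, size=n)
y = np.where(rng.random(n) < 0.25, rng.integers(0, k, size=n), x)  # noisy copy of x

f = lambda v: v // 2                                 # deterministic, many-to-one
print(mutual_info(joint_table(x, y, k)))             # I(X; Y)
print(mutual_info(joint_table(f(x), f(y), k // 2)))  # I(f(X); f(Y)), no larger
```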
null
CC BY-SA 4.0
null
2023-05-19T10:23:39.337
2023-05-19T10:23:39.337
null
null
204068
null
616311
2
null
602442
2
null
The inputs $\mathbf x \in\mathcal X$ are not random variables, but deterministic "indices" : for instance, if $\mathcal X =[0,T]$, then they could represent different times, or if $\mathcal X= \mathbb [0,1]^2$, they could represent different positions on a map. As you said, in Gaussian process regression, the values $f(\mathbf x)$ for each $\mathbf x \in \mathcal X$ are assumed to be random variables instead of deterministic (that is, we assume that $\big(f(\mathbf x)\big)_{x\in\mathcal X}$ is a [Gaussian process](https://en.wikipedia.org/wiki/Gaussian_process)). Therefore, the "sources of randomness" in the regression problem will be both the target $f(\mathbf x_i)$ and the noise $\varepsilon(\mathbf x_i)$ for all $1\le i\le N$. With that being said, I understand your confusion about the notations commonly found in the literature. The short story is simply that when we write $p(\mathbf y \mid\mathbf x,D)$, the $x$ in the conditioning should be mostly understood as a notation to mean that we are looking at the conditional distribution of $y$ specifically at point $\mathbf x$, and not over the whole space $\mathcal X$. To illustrate my point, consider the standard example : let $\mathbf x^*\in \mathcal X$ be a "new point" (i.e. not in the dataset), and consider the problem of finding the conditional distribution of $ f(\mathbf x^*)$ given our observations $D$. Using standard notations, we denote $f(\mathbf x^*)\equiv f^*$ and $$\mathbf f \equiv \left[f(\mathbf x_1),\ldots,f(\mathbf x_N)\right]^T \sim \mathcal N(\boldsymbol\mu_N, \Sigma_N) $$ So now, we can write the conditional distribution of $f^*$ given $D$ by marginalizing as follows : $$P(f(\mathbf x^*)\mid D) \equiv P(f^*\mid \mathbf x^*,D) = \int P(f^*\mid \mathbf f, \mathbf x^*,D) P(\mathbf f\mid \mathbf x^*,D)\ d\mathbf f $$ Now to compute this integral, we need to express the two factors in the integrand in closed-form. For the second factor, $\mathbf f$ is not related to $\mathbf x^*$ hence it simplifies to $ P(\mathbf f\mid \mathbf x^*,D)= P(\mathbf f\mid D)$, which may readily be found using Bayes' rule and the known distributions of $\mathbf f$ and $(y_i)_{1\le i \le N}$. For the first factor however, we have $$\begin{bmatrix}\mathbf f\\ f^*\end{bmatrix} \sim \mathcal N(\boldsymbol \mu(\mathbf x^*),\Sigma(\mathbf x^*))$$ From which $ P(f^*\mid \mathbf f, \mathbf x^*,D)$ can be readily computed using properties of Gaussian distributions. So, as you can see, "conditioning on $\mathbf x^*$" is mostly a notational device which helps keeping track of what is varying and what isn't, and can be helpful in computations, but everything would work the exact same without it. If you still have doubts, having a look at the first two pages of [these notes](https://www.csie.ntu.edu.tw/%7Ecjlin/mlgroup/tutorials/gpr.pdf) should hopefully help clarify some of the above explanations.
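Since everything above is Gaussian, the marginalization works out to the familiar closed-form posterior. Here is a minimal numpy sketch of that closed form (assuming an RBF kernel and i.i.d. Gaussian noise; the data, kernel hyperparameters, and test points are all made up for illustration), which may help connect the notation to a computation.
```
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(1)
x_train = np.array([0.1, 0.4, 0.7, 1.0, 1.6, 2.3])
y_train = np.sin(2 * x_train) + rng.normal(scale=0.1, size=x_train.size)
x_star = np.array([0.9, 1.3, 2.0])
noise_var = 0.1 ** 2

K = rbf(x_train, x_train) + noise_var * np.eye(x_train.size)  # covariance of noisy observations
K_s = rbf(x_train, x_star)                                     # cross-covariance train vs. test
K_ss = rbf(x_star, x_star)

post_mean = K_s.T @ np.linalg.solve(K, y_train)                # E[f* | D]
post_cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)              # Cov[f* | D]
print(post_mean, np.diag(post_cov))
```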
null
CC BY-SA 4.0
null
2023-05-19T10:28:31.617
2023-05-19T10:28:31.617
null
null
305654
null
616312
1
null
null
0
25
I am looking for help with which statistical test I should use to analyse my data. I want to know if there is a statistically significant difference in the animals' behaviour before and after interaction with a tour vessel. There are five types of behaviour, and for each encounter I know what the behaviour was pre-approach and post-approach. I have made a table of straight Y/N for whether there was a change, and also the proportion of encounters where each behaviour was occurring pre and post. But I'm not sure where any means fit in and how I could analyse this.
Statistical test for categorical data dealing with behaviour pre/post
CC BY-SA 4.0
null
2023-05-19T10:33:40.400
2023-06-01T05:07:02.077
2023-06-01T05:07:02.077
121522
388330
[ "categorical-data", "pre-post-comparison" ]
616313
1
null
null
1
11
I have a database of defects on plastic films. Defects are burns, holes, and similar things. Some defects are direction-specific: a vertical mark on the material represents a scratch made during the extrusion process. Data augmentation is a useful tool, but it can be harmful for these defects. I was thinking about implementing a custom data augmentation that depends on the label in the training set. For example, a hole can be rotated without problems, but a scratch cannot. So I would tell the preprocessing layers how to do the augmentation by also passing the labels. Implementing this solution gives me a big doubt: can this label-dependent data augmentation cause a leakage of information?
Data augmentation specific per class
CC BY-SA 4.0
null
2023-05-19T10:46:25.943
2023-05-19T10:46:25.943
null
null
379875
[ "data-augmentation", "data-leakage" ]
616314
1
null
null
3
99
Is it possible, using custom cost functions or otherwise, to run xgboost regression for a single target variable, but rather than outputting just the conditional mean (conditioned on the feature values) of the variable as a prediction, to output condition distribution parameters. For example, the way to interpret Poisson regression is that if xgboost predicts 0.67 based on your feature values, that you would expect the true value of your target variable to be distributed according to a Poisson distribution with $\mu=0.67$. In this case, that is also the expected value which in some sense masks some of the richness of what is going on under the hood. But what if instead, we wanted xgboost to take our features and output $(p(x),r(x))$ and we would then interpret this as our target variable's true value being negative binomially distributed according to $f(y;p(x),r(x))$ The built-in cost functions include gamma which allows the user to do "gamma-regression". But when you do a gamma regression an then hit `reg.predict(x)`, you only get 1 value, rather than two as one might expect (see [here](https://stats.stackexchange.com/a/484567) for more details) As a reminder, this is the update equation for determining the optimal value to output for a node generated by any given partitioning (see [docs](https://xgboost.readthedocs.io/en/stable/tutorials/model.html)): $w_{k} = - \frac{\sum_{i \in \mathbf{R}_{k}}\frac{\partial \ell}{\partial \hat{y}}|_{\hat{y}_{i}, f_{t}(x_{i})}}{\lambda + \sum_{i \in \mathbf{R}_{k}}\frac{\partial ^{2} \ell}{\partial \hat{y}^{2}}|_{\hat{y}_{i}, f_{t}(x_{i})}}$ which now, if we assume each node outputs two numbers $(w_{k}, v_{k})$ and we generally define the loss function as $L = \sum_{i=1}^{N}\ell(y_{i}, \theta_{1}(x_{i}), \theta_{2}(x_{i}))$ (c.f. $L=\sum_{i=1}^{N}\ell(y_{i}, \hat{y}(x_{i}))$) I think a very similar update equation can be derived along the lines of $\left(\begin{matrix}w^{*}_{k} \\ v^{*}_{k}\end{matrix}\right)=\left(\begin{matrix}\sum_{i \in \mathbf{R}_{k}}(\frac{\partial ^{2}\ell}{\partial \theta_{1}^{2}})+\lambda & \sum_{i \in \mathbf{R}_{k}}\frac{\partial ^{2}\ell}{\partial \theta_{1}\partial \theta_{2}}\\ \sum_{i \in \mathbf{R}_{k}}\frac{\partial ^{2}\ell}{\partial \theta_{1}\partial \theta_{2}} & \sum_{i \in \mathbf{R}_{k}}(\frac{\partial ^{2}\ell}{\partial \theta_{2}^{2}})+\lambda\end{matrix}\right)^{-1}\cdot \left(\begin{matrix}\sum_{i \in \mathbf{R}_{k}}\frac{\partial \ell}{\partial \theta_{1}} \\ \sum_{i \in \mathbf{R}_{k}}\frac{\partial \ell}{\partial \theta_{2}}\end{matrix}\right)$ which clearly reduces to the original equation if there is no $\theta_{2}$ derivative. Is this something which is implemented somewhere in a dark and poorly documented corner of the xgboost project and if not, is there a reason why this might be difficult to do in practice (or other statistical reason why this isn't a desirable thing to do)? Clearly this does require the inversion of a matrix for every possible split point, but this matrix has dimensions `num_params x num_params` which should be tiny (for realistic cases, always <=4), to the point where this could be replaced with a hard-coded analytical expression for the inverse of a matrix. 
I don't know what fraction of the compute time in single-parameter xgboost is spent passing the individual derivative and hessian terms into different sides of the split as different partitions are evaluated, Vs evaluating the xgboost update equation vs calculating the implied loss, but I'm guessing that calculating the update equation is not completely dominant, so introducing a new update equation which performs a few more floating point operations would not be completely prohibitive.
Is there an implementation of xgboost for a single target variable but using multiple regression parameters
CC BY-SA 4.0
null
2023-05-19T11:19:17.483
2023-05-19T16:51:37.767
null
null
103003
[ "maximum-likelihood", "boosting", "gamma-distribution", "open-source" ]
616316
1
null
null
0
19
I have a multivariate time series (VAR), and I found a break near the end (at 84.7% of the sample). I wanted to use a dummy variable in this situation, but even if I choose a 90% training set, the dummy is trained on only about 100 values (I have 1900 values in total). Could this skew the results?
Structural break near the end of training sample
CC BY-SA 4.0
null
2023-05-19T11:39:46.000
2023-05-19T11:39:46.000
null
null
361080
[ "time-series", "vector-autoregression", "change-point" ]
616317
1
null
null
0
34
I plotted some time series data that looks non-linear as can be seen below. ![Text] [![Open stock prices over time period](https://i.stack.imgur.com/z94Rc.png)Looks pretty non-linear, but I decided to implement a linear regression model for learnings sake. ``` train_data = df["High"].iloc[:math.ceil(10382*0.8)].to_numpy().reshape(8306, 1) y_data = df["High"].iloc[1:math.ceil(10382*0.8)+1].to_numpy().reshape(8306, 1) ``` As you can see, I have actually decided to try using the exact same series data as the X and Y values. Just I have shifted the positions of data by 1. ``` from sklearn.linear_model import LinearRegression model_lr = LinearRegression().fit(train_data, y_data) ``` When I scored the training results, `model_lr.score(train_data, y_data)`, the coefficient of determination is good (0.9996742517879881), nearly 1. I thought it was just overfitting but when I predicted values, ``` pred = model_lr.predict(test_data) model_lr.score(pred, y_test_data) ``` The coefficient is good as well (0.9992371497078137). If the data I used was non-linear, is the problem here that I am using the same series data to model it? As I don't think a linear model should be performing well on non-linear data. I thought it would be ok using the same series data to model if it mimicks actual inflow of stock price data. So as data flows in, just plug that into the model. Is the linear regression model simply taking the previous series data and adding a small bias only? Is that how it is getting so accurate? I plotted the predictions to the test data, and it looks like this weird straight line. Note also that this is not multiple regression. [](https://i.stack.imgur.com/3ScHe.png)
Too good results with linear regression on a non-linear dataset due to training on seen data?
CC BY-SA 4.0
null
2023-05-19T11:40:20.183
2023-05-19T12:10:59.657
null
null
388336
[ "regression", "linear-model", "supervised-learning" ]
616318
2
null
616240
1
null
The unique rows of your fixed effects design matrix under your first contrast specification can be obtained via ``` library(MASS) condition <- gl(3, 1, length = 6, labels = c("predicted", "implausible", "plausible")) contrasts(condition) <- cbind(c(1/3, -2/3, 1/3), c(1/3, 1/3, -2/3)) highlow <- gl(2, 3, labels = c("high", "low")) X_1 <- model.matrix(~ condition * highlow) rownames(X_1) <- paste(condition, highlow, sep = "_") fractions(X_1) # (Intercept) condition1 condition2 highlowlow condition1:highlowlow condition2:highlowlow # predicted_high 1 1/3 1/3 0 0 0 # implausible_high 1 -2/3 1/3 0 0 0 # plausible_high 1 1/3 -2/3 0 0 0 # predicted_low 1 1/3 1/3 1 1/3 1/3 # implausible_low 1 -2/3 1/3 1 -2/3 1/3 # plausible_low 1 1/3 -2/3 1 1/3 -2/3 ``` Each row of `X_1` shows how the corresponding (population) cell mean, indicated by the row name, is a linear combination of the (fixed) regression coefficients. To interpret the regression coefficients in terms of the cell means, we need the inverse of `X_1` ``` fractions(solve(X_1)) # predicted_high implausible_high plausible_high predicted_low implausible_low plausible_low # (Intercept) 1/3 1/3 1/3 0 0 0 # condition1 1 -1 0 0 0 0 # condition2 1 0 -1 0 0 0 # highlowlow -1/3 -1/3 -1/3 1/3 1/3 1/3 # condition1:highlowlow -1 1 0 1 -1 0 # condition2:highlowlow -1 0 1 1 0 -1 ``` Now, each row shows the representation of the corresponding regression coefficient as a linear combination of the cell means. This confirms your output interpretation except for the estimated interaction coefficients, which (reading off from the corresponding rows) are estimates of \begin{align} &-1 \cdot \mu_\text{predicted, high} + 1 \cdot \mu_\text{implausible, high} + 1 \cdot \mu_\text{predicted, low} -1 \cdot \mu_\text{implausible, low} \\ &= \left(\mu_\text{implausible, high} - \mu_\text{predicted, high} \right) - \left(\mu_\text{implausible, low} - \mu_\text{predicted, low} \right) \end{align} and \begin{align} &-1 \cdot \mu_\text{predicted, high} + 1 \cdot \mu_\text{plausible, high} + 1 \cdot \mu_\text{predicted, low} -1 \cdot \mu_\text{plausible, low} \\ &= \left(\mu_\text{plausible, high} - \mu_\text{predicted, high} \right) - \left(\mu_\text{plausible, low} - \mu_\text{predicted, low} \right), \end{align} respectively. The output for your second contrast specification ``` contrasts(highlow) <- c(-0.5, 0.5) X_2 <- model.matrix(~ condition * highlow) rownames(X_2) <- paste(condition, highlow, sep = "_") fractions(X_2) # (Intercept) condition1 condition2 highlow1 condition1:highlow1 condition2:highlow1 # predicted_high 1 1/3 1/3 -1/2 -1/6 -1/6 # implausible_high 1 -2/3 1/3 -1/2 1/3 -1/6 # plausible_high 1 1/3 -2/3 -1/2 -1/6 1/3 # predicted_low 1 1/3 1/3 1/2 1/6 1/6 # implausible_low 1 -2/3 1/3 1/2 -1/3 1/6 # plausible_low 1 1/3 -2/3 1/2 1/6 -1/3 fractions(solve(X_2)) # predicted_high implausible_high plausible_high predicted_low implausible_low plausible_low # (Intercept) 1/6 1/6 1/6 1/6 1/6 1/6 # condition1 1/2 -1/2 0 1/2 -1/2 0 # condition2 1/2 0 -1/2 1/2 0 -1/2 # highlow1 -1/3 -1/3 -1/3 1/3 1/3 1/3 # condition1:highlow1 -1 1 0 1 -1 0 # condition2:highlow1 -1 0 1 1 0 -1 ``` shows that your interpretation of the coefficient estimate `HighLowLow` (which I interpret as `HighLow1`) is off by a factor of $2$ (it should be the same as with the first contrast specification) and your interpretation of the estimated interaction coefficients is off by a factor of $-1$ (it should, again, be the same as with the first contrast specification).
null
CC BY-SA 4.0
null
2023-05-19T11:47:58.243
2023-05-19T11:47:58.243
null
null
136579
null
616319
2
null
616317
1
null
> Looks pretty non-linear, but I decided to implement a linear regression model for learnings sake. [...] As you can see, I have actually decided to try using the exact same series data as the X and Y values. Just I have shifted the positions of data by 1. Then your initial plot is quite useless. The function $E[y|x] = f(x)$ is supposed to be [linear in parameters](https://stats.stackexchange.com/questions/279424/gauss-markov-theorem-what-do-you-mean-by-linearity-in-parameters). It's irrelevant that $E[y|t] = g(t)$, where $t$ is time, is non-linear. So the plot you should look at is $y$ against $x$, so in this case against lagged $y$. > When I scored the training results, model_lr.score(train_data, y_data), the coefficient of determination is good (0.9996742517879881), nearly 1. Now try calculating $R^2$ where `y_pred = train_data`, without any model. Likely the result would be equally good. In time-series like this $y_t$ and $y_{t-1}$ are highly correlated, so you can easily get a "highly performing" model that didn't learn anything useful. Also, keep in mind that taking the last value as your prediction is a valid way of forecasting, it's called the [naΓ―ve forecasting method](https://otexts.com/fpp3/simple-methods.html#na%C3%AFve-method), the only problem with it is that if you try to predict more than one step ahead, you would be predicting a constant. > Is the linear regression model simply taking the previous series data and adding a small bias only? Is that how it is getting so accurate? Adding small bias, or not adding anything at all, as discussed above. > I plotted the predictions to the test data, and it looks like this weird straight line. Note also that this is not multiple regression. Because you used the wrong plot. Use a scatter plot instead. There is no need whatsoever to "connect the dots". From the plot, we already can see that the points need to lie in a straight line.
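To see how little the fitted model adds over the naïve lag-1 prediction, here is a small Python sketch (on a simulated random walk rather than the poster's stock data) where the "prediction" is literally the previous value and no model is fit at all; the $R^2$ still comes out close to 1.
```
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
price = 100.0 + np.cumsum(rng.normal(size=5000))  # a random-walk "price" series

y_true = price[1:]      # value at time t
y_naive = price[:-1]    # "prediction": just the value at time t-1, no model
print(r2_score(y_true, y_naive))  # typically very close to 1
```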
null
CC BY-SA 4.0
null
2023-05-19T12:10:59.657
2023-05-19T12:10:59.657
null
null
35989
null
616320
1
null
null
0
23
Suppose the research problem is like this: I have some n number of high school kids, each taking a math test over 10 weeks (every week a different test). Test is graded on 1-7 point scale (1-bad, 7-very good). Starting from week 6, special preparatory courses became available for kids, which they can attend as many times as they want before taking a math test. So there are 3 groups of kids whose test outcomes I am interested with: - kids who on the week 4 and 5 received HIGH points (3-4) on test AND started attending preparatory courses; - kids who on the week 4 and 5 received LOW points (1-2) on test AND started attending preparatory courses; - kids who on the week 4 and 5 received HIGH points (3-4) on test BUT did not attend preparatory courses. I want to know the rates of improvement for every group in the weeks 6-10 (after preparatory courses became available) RELATIVE TO the other groups. It looks as a regression model with an interaction term. The variables are: Y - 1-7 points outcome of a math test (continuous variable); Initials – the average of an outcome of a math test in weeks 4 and 5 (continuous variable); Preparatory – the number of preparatory courses which a kid attended in weeks 6-10 (continuous variable ranging from 0 to infinity). The model is: Y ~ B0 + B1Initials + B2Preparatory + B3Initials*Preparatory Suppose the outcome is as follows: Y ~ B0 + B1(results are insignificant) + B2(results are insignificant) + 0.1Initials*Preparatory Would I be right if I interpret these results in the following way: For a kid in a group 1, who received an average score of 3 in weeks 4 and 5, and attended in total 6 preparatory courses between weeks 6-10, the rate of improvement RELATIVE TO the kids from the other two groups is: B0 + 0 + 0 + 0.1*(3)*(6) = 1.8 points per week,in weeks 6-10; For a kid in a group 2, who received an average score of 1 in weeks 4 and 5, and attended in total 6 preparatory courses between weeks 6-10, the rate of improvement is: B0 + 0 + 0 + 0.1*(1)*(6) = 0.7 points per week, in weeks 6-10; For a kid in a group 3, who received an average score of 3 in weeks 4 and 5, and attended in total 0 preparatory courses between weeks 6-10, the rate of improvement is: B0 + 0 + 0 + 0.1*(3)*(0) = B0 (intercept) points per week, in weeks 6-10; Do you guys think this interpretation is correct? I have doubts regarding the interpretation for a kid in a group 3
help in interpreting results from a regression model with an interaction term
CC BY-SA 4.0
null
2023-05-19T12:28:45.447
2023-05-19T12:28:45.447
null
null
351794
[ "bayesian", "interaction", "interpretation", "multilevel-analysis" ]
616321
2
null
616314
3
null
This is definitely desirable to have, but not trivial to get. The way XGBoost, LightGBM, CatBoost et al. are created is with a single output at a leaf node in mind. When there's extra distributional parameters (e.g. shape for accelerated failure time models), these are hyperparameters. There's a few options with the way the package is, but these all involve multiple model fits (e.g. [quantile regression](https://developer.ibm.com/articles/prediction-intervals-explained-a-lightgbm-tutorial/), but it's hard to enforce that the quantiles stay ordered if it's not from a single model, fitting for different hyperparameter values and then taking the distribution to be based on how well the different hyperparameters fit the data etc.). In general, I think it's hard because of how boosting works: you fit a model, then fit another model to the residuals, again and again. While you tend to sample observations for each model you create, because of the way each model builds on the previous one, the left-out observations don't directly provide an estimate of out-of-bag error (like e.g. in random forest, which can more easily give prediction intervals). There's been data science competitions that targeted such an output (from whatever model you wanted) such as [this one](https://www.kaggle.com/competitions/osic-pulmonary-fibrosis-progression/overview/evaluation) (see also [discussions like these](https://www.kaggle.com/competitions/osic-pulmonary-fibrosis-progression/discussion/181632)). These offer sources of inspiration for how people approach this kind of problem.
null
CC BY-SA 4.0
null
2023-05-19T12:32:12.080
2023-05-19T12:32:12.080
null
null
86652
null
616322
1
null
null
0
21
In my project I generate synthetic energy demand data using a neural network. I want to check the cross-correlation of two time series: the original and the synthetic one. I found that the detrended cross-correlation coefficient is a good measure of cross-correlation between two non-stationary time series. I want to calculate the cross-correlation coefficient rho using the R package "SlidingWindows" (in Python), based upon the work of Zebende: ``` rhodcca_SlidingWindows = SlidingWindows.rhodcca_SlidingWindows(y1, y2, w = 98, k = 10,nu=0) ``` The function asks for the parameters WINDOW SIZE (w), A VALUE INDICATING THE BOUNDARY OF THE DIVISION (N/k) and POLYNOMIAL ORDER (nu) - how do I find the optimal parameters? Is it trial and error, as when tuning hyperparameters in machine learning models, or is there a precise way to calculate them? One idea was to use a training set of 10 years, optimize the parameters using the optuna Python package and test them on a test set of 2 years. Does that make sense?
How do I choose parameters for WINDOW SIZE and POLYNOMIAL ORDER to calculate the detrended cross-correlation coefficient rho?
CC BY-SA 4.0
null
2023-05-19T12:34:25.023
2023-05-19T12:35:47.230
2023-05-19T12:35:47.230
387998
387998
[ "time-series", "optimization", "cross-correlation" ]
616323
1
null
null
1
16
For given data $y_{i}$, with $i=1, \dots, n$, we consider the following signal approximation: $$ \hat{y} = \arg \min_{w}\sum_{i=1}^{n}(y_{i}-w_{i})^{2} + \lambda \sum_{(i,j)\in E}|w_{i} - w_{j}|, $$ where $E$ is the set of edges of the two-dimensional grid. The question: for image denoising, what are the typical metrics and the typical way to choose the $\lambda$ parameter?
Fused lasso for image denonising
CC BY-SA 4.0
null
2023-05-19T13:17:43.613
2023-05-19T16:26:28.623
2023-05-19T16:26:28.623
262948
262948
[ "optimization", "lasso", "regularization", "image-processing", "signal-processing" ]
616324
2
null
616266
1
null
You'll see this discussed repeatedly and in depth on this site. In general, using hypothesis tests for normality or homogeneity of variance as a criterion for deciding whether ANOVA is appropriate is not a good idea. See perhaps [towardsdatascience.com/stop-testing-for-normality-dba96bb73f90](https://towardsdatascience.com/stop-testing-for-normality-dba96bb73f90). So, yes, you are much better off examining the Q-Q plot of residuals and the plot of residuals vs. predicted values.
null
CC BY-SA 4.0
null
2023-05-19T13:18:13.497
2023-05-19T18:09:44.793
2023-05-19T18:09:44.793
166526
166526
null
616325
2
null
616314
2
null
I don't think this is immediately possible with XGBoost as you would have to write a multi-output / multi-parameter boosting variant of it. At best, XGBoost (and other usual boosting routines learners) are able to do multi-output predictions (for example estimating the parameters for a Gamma distribution) by having one model for each target and then putting meta-estimators on top. That said, what you primarily want is [NGboost](https://stanfordmlgroup.github.io/projects/ngboost/) and the concept of "natural gradients" (NGs). NGs allow us to take into account the geometry of the underlying probability space, they are implemented in the package [ngboost](https://pypi.org/project/ngboost/). In a standard NGBoosts application, we use Maximum Likelihood Estimation as our scoring function to try and minimize the [Kullback-Leibler (KL) divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between the current distribution and the target distribution. NGBoost explores a divergence (scoring function) -specific statistical manifold where each point in the manifold corresponds to a probability distribution. This is a significant change to standard point estimation and allows our booster to output a full probability distribution over the entire outcome space instead of a point estimate. We can then use the NGBoost provided `pred_dist` function such that we get estimated distributional parameters. With `ngboost` for example, for a Gamma distribution we estimate the $\alpha$ (shape) and $\beta$ (rate) parameters aiming to minimize the KL-divergence. In contrast, XGBoosts (and other boosters) if minimising a Gamma distributed target focus directly on the mean Gamma deviance ([mean_gamma_deviance](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_gamma_deviance.html) of the point estimates.)
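A minimal usage sketch of the `ngboost` package (assuming its standard `NGBRegressor`/`pred_dist` API and using the Normal distribution for safety; a Gamma target follows the same pattern where the installed version provides it, and all data here are simulated):
```
import numpy as np
from ngboost import NGBRegressor
from ngboost.distns import Normal

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=np.exp(0.3 * X[:, 2]))

ngb = NGBRegressor(Dist=Normal, n_estimators=300, verbose=False)
ngb.fit(X, y)

dist = ngb.pred_dist(X[:5])   # a fitted distribution object, not a point estimate
print(dist.params)            # per-observation distributional parameters, e.g. loc and scale
print(ngb.predict(X[:5]))     # the usual point prediction
```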
null
CC BY-SA 4.0
null
2023-05-19T13:19:06.170
2023-05-19T16:51:37.767
2023-05-19T16:51:37.767
11852
11852
null
616326
2
null
616288
1
null
I can understand your confusion. In the Front-Door Criteria of Definition 3.4.1, the original text for #2 read, "There is no unblocked path from $X$ to $Z$." The book's errata changed that to "There is no backdoor path from $X$ to $Z$." I admit it's a bit confusing either way: Pearl was a little more careful in his 2009 book Causality: Models, Reasoning, and Inference, 2nd Ed. On page 82 of that book, Definition 3.3.3 (Front-Door), the second item on the list reads: "there is no unblocked back-door path from $X$ to $Z;...$" It certainly is the case that $Y$ satisfies the Backdoor Criterion (Def. 3.3.1 on page 61) relative to $(X,Z).$ This is what enables Pearl to say, in the derivation of the Front-Door criterion, that $$P(z|\operatorname{do}(x))=P(z|x).$$ That plus Def. 3.4.1, #1 and #3 are sufficient to enable the Front-Door procedure to work.
null
CC BY-SA 4.0
null
2023-05-19T13:29:33.307
2023-05-19T13:29:33.307
null
null
76484
null
616327
2
null
616212
0
null
### Decomposing the weight based on stages of sampling/nonresponse The overall weight for a person is meant to represent the overall probability that a person in your target population had of taking the survey, accounting for random sampling as well as the probability that they responded to the survey, given that you asked them to. To determine this overall probability for person $i$, denoted $p_i$, we have to break the survey process into four distinct stages, each of which impacts the probability someone had of taking the survey. Stages: - Random selection to invite a person to join the panel. $p_{1,i}$ - Joining or declining to join the panel, given that they were selected $p_{2|1,i}$ - Random selection of panel members who are requested to complete the particular survey, given that they joined the panel $p_{3|2,i}$ - Response or nonresponse to the particular survey $p_{4|3,i}$, given that you requested that they complete the survey As the survey designer, you control the random selection probabilities (stages 1 and 3) but you don't control the response/nonresponse probabilities (stages 2 and 4). The overall probability for individual $i$ is thus: $$ p_i = p_{1,i} \times p_{2|1,i} \times p_{3|2,i} \times p_{4|3,i} $$ The weight for this person is then $w_i = 1/p_i$. This weight is used to make sure that your estimates aren't biased as a result of oversampling some groups or because of some groups being more or less likely to take the survey. It's common practice though to further adjust this weight using raking adjustments, which can help further reduce nonresponse bias as well as potentially reduce other kinds of bias such as coverage error. ### Determining Stage-specific Probabilities Because you designed the survey, you should know the value of $p_{1,i}$ and $p_{3|2,i}$ for everyone in your sample. For the stages of nonresponse, you have to estimate response propensities: that is, you have to estimate $p_{2|1,i}$ and $p_{4|3,i}$. Common approaches include logistic regression or using observed response rates for key demographic groups (e.g., if a person is from a racial/ethnic group with a 15% response rate, then you estimate that person's response propensity to be 15%). This R package vignette gives an example of how to estimate response propensities and do weighting adjustments in R. > https://bschneidr.github.io/svrep/articles/nonresponse-adjustments.html ### References Chapter 2 of β€œApplied Survey Data Analysis” provides an excellent overview of survey weighting to account for sampling probabilities and nonresponse. > Heeringa, S., West, B., Berglund, P. (2017). Applied Survey Data Analysis, 2nd edition. Boca Raton, FL: CRC Press. Chapter 13 of β€œPractical Tools for Designing and Weighting Survey Samples” also provides an excellent overview of weighting methods to account for nonresponse. > Valliant, R., Dever, J., Kreuter, F. (2018). Practical Tools for Designing and Weighting Survey Samples, 2nd edition. New York: Springer.
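As a rough illustration, here is a minimal R sketch of the weight construction described above; the toy data, the single covariate, and the response rates are all made up, and a real adjustment would typically use richer covariates and possibly raking on top.
```
# Toy sample-level data: design probabilities are known, covariate is observed
set.seed(1)
n <- 500
samp <- data.frame(
  p1        = 0.01,                              # stage 1: panel selection prob (by design)
  p3        = 0.25,                              # stage 3: within-panel selection prob (by design)
  age_group = sample(c("18-34", "35-64", "65+"), n, replace = TRUE),
  joined    = rbinom(n, 1, 0.6),                 # stage 2 outcome (joined the panel or not)
  responded = NA
)
samp$responded[samp$joined == 1] <- rbinom(sum(samp$joined), 1, 0.5)  # stage 4 outcome

# Estimate the stage 2 and stage 4 response propensities with logistic regressions
m2 <- glm(joined ~ age_group, family = binomial, data = samp)
m4 <- glm(responded ~ age_group, family = binomial,
          data = subset(samp, joined == 1))

samp$p2_hat <- predict(m2, newdata = samp, type = "response")
samp$p4_hat <- predict(m4, newdata = samp, type = "response")

# Overall probability of ending up as a respondent, and the resulting weight
samp$p_overall <- samp$p1 * samp$p2_hat * samp$p3 * samp$p4_hat
samp$weight    <- ifelse(samp$joined == 1 & samp$responded %in% 1,
                         1 / samp$p_overall, NA)
head(samp)
```
The weights are defined only for people who both joined the panel and responded to the survey, since they form the analysis sample.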
null
CC BY-SA 4.0
null
2023-05-19T13:33:53.623
2023-05-19T13:33:53.623
null
null
94994
null
616328
2
null
616312
1
null
Based on your description of the data, I am assuming these behaviors aren't ordered, and that only one behavior was recorded for each animal at pre and post exposure. If this is the case, you have "nominal paired data". You may want to consider what is called a marginal homogeneity or Stuart-Maxwell test. The null hypothesis of this test in the context of your experiment would be that the probability of observing each behavior before the interaction with a tour vessel is the same as the probability of observing each behavior after the interaction with a tour vessel. You are also making the assumption that the behaviors of one animal are independent of the behaviors of other animals, so they should not be affecting each other's outcomes. [Here](http://statkat.com/stat-tests/marginal-homogeneity-stuart-maxwell-test.php) is a solid reference on this test.
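For concreteness, here is a minimal R sketch of the Stuart-Maxwell statistic computed by hand; the behaviour categories and counts below are made up, and in practice you would use your observed pre/post contingency table.
```
# k x k table of paired nominal outcomes (rows = behaviour before, cols = after)
tab <- matrix(c(12, 6, 2,
                 3, 9, 5,
                 1, 4, 8),
              nrow = 3, byrow = TRUE,
              dimnames = list(pre  = c("rest", "travel", "forage"),
                              post = c("rest", "travel", "forage")))

k <- nrow(tab)
d <- (rowSums(tab) - colSums(tab))[1:(k - 1)]   # marginal differences
S <- matrix(0, k - 1, k - 1)
for (i in 1:(k - 1)) for (j in 1:(k - 1)) {
  S[i, j] <- if (i == j) rowSums(tab)[i] + colSums(tab)[i] - 2 * tab[i, i]
             else        -(tab[i, j] + tab[j, i])
}
stat <- as.numeric(t(d) %*% solve(S) %*% d)     # chi-squared statistic, df = k - 1
pval <- pchisq(stat, df = k - 1, lower.tail = FALSE)
c(statistic = stat, df = k - 1, p.value = pval)
```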
null
CC BY-SA 4.0
null
2023-05-19T13:45:25.180
2023-05-19T13:45:25.180
null
null
388290
null
616329
1
616334
null
0
25
I'm working with a GNN model for link prediction and using `precision_recall_curve` and `roc_auc_score` from `sklearn.metrics`. For the GNN definition and use, I'm using [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/). The last layer of my GNN is a dot product between aggregated node features, which I'm treating as the score. This means I have a final tensor with a score (float value) indicating whether I have a link (class 1) or not (class 0), together with a `ground_truths` tensor containing the actual classes (0 or 1). Assuming that my model is well-defined, I'm using the following code for evaluation:
```
# val_loader is an iterator for the validation dataset, and model is the actual GNN
import torch
import tqdm
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve, auc

preds = []
ground_truths = []

for sampled_data in tqdm.tqdm(val_loader):
    with torch.no_grad():
        sampled_data.to(device)
        preds.append(model(sampled_data))
        # (node1, link, node2) is the edge type of interest
        ground_truths.append(sampled_data[node1, link, node2].edge_label)

pred = torch.cat(preds, dim=0).cpu().numpy()
ground_truth = torch.cat(ground_truths, dim=0).cpu().numpy()

# Metric 1 - ROC
roc_auc = roc_auc_score(ground_truth, pred)
print(f"Validation ROC_AUC: {roc_auc:.4f}")

# Metric 2 - Precision Recall
precision, recall, _ = precision_recall_curve(ground_truth, torch.sigmoid(torch.from_numpy(pred)))
pr_auc = auc(recall, precision)
```
After this code, I'm using these values to calculate the optimal threshold for predicting the missing links in my graph. It's possible to see in the code that for `roc_auc_score` I'm using the scores (not sure if I can call them the logits, but I guess so) returned by my model (the `preds` variable), while in `precision_recall_curve` I'm using the probabilities instead, obtained by passing the scores through a `sigmoid` function. Looking at the functions' documentation ([1](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) and [2](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html)), I understand that I can use them interchangeably. My question is: Is this assumption correct independently of my GNN model? If so, how does this work, since they are completely different values (the score is a float tensor with no apparent meaning, while the probabilities are values between 0 and 1 that I can consider as probabilities of being class 1 - an existent link)? An extra question is: Is there any difference when calculating the optimal threshold if I use scores or probabilities, or is there an equivalence there too?
Is there any difference between using scores or probabilities with the roc_auc_score and precision_recall_curve functions?
CC BY-SA 4.0
null
2023-05-19T13:52:19.103
2023-05-19T14:18:02.927
2023-05-19T14:00:49.950
103715
103715
[ "machine-learning", "scikit-learn", "roc", "precision-recall", "threshold" ]
616330
1
null
null
0
16
I'm trying to think of a descriptive statistic in the context of sports betting. Some background: Sportsbooks post odds such as -150, +200, etc. for moneyline bets (i.e., betting on the winner). These odds can be converted to explicit probabilities of who the winner will be. Source: [https://www.sportsbookreviewsonline.com/sportsbettingarticles/sportsbettingarticle10.htm](https://www.sportsbookreviewsonline.com/sportsbettingarticles/sportsbettingarticle10.htm) I'd like to measure how well a sportsbook's probabilities have behaved in the long run historically. I can think of binning these values into various probability ranges and seeing how well they behave.
- So have games with an explicit winning probability of 55-60% been won about 58% of the time historically? Surely there will be some variation between sportsbooks - maybe some sportsbooks have it at 50% and others at 60% over some period of time.
- I can do the above for all increments of 5% from 0%-100%.
Example of the dataset I'm using
[](https://i.stack.imgur.com/guVoC.png)
Is there a good measure to answer the above question without binning the data, i.e., keeping it continuous?
Describing historical performance of probability predictions
CC BY-SA 4.0
null
2023-05-19T13:53:43.883
2023-05-19T15:28:49.007
2023-05-19T15:28:49.007
368625
368625
[ "probability", "descriptive-statistics" ]
616331
1
616336
null
0
37
In ordinary least squares linear regression, given a set of data points $(x_1,y_1),(x_2,y_2),...(x_N,y_N)$, that we want to fit to the function $y=\beta_0 + \beta_1 x$, we would usually write the linear model as the matrix equation $$ \begin{align} \mathbf{Y} &= \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon} \\ \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} &= \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{pmatrix} \cdot \begin{pmatrix} \beta_0 \\ \beta_1\end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_N \end{pmatrix} \end{align} \tag{1} $$ where $\beta_0$ and $\beta_1$ are the intercept and slope of the model, respectively, and $\varepsilon_k$ are the residuals between the model and each data point. Minimizing the sum of the squares of these residuals (by differentiating with respect to $\boldsymbol{\beta}$ and setting the result equal to zero) then gives the equation $$ \mathbf{X}^\textrm{T} \mathbf{Y} = (\mathbf{X}^\textrm{T}\mathbf{X})\boldsymbol{\beta} \tag{2} $$ and so the typical equation to solve to find the best fit parameters is $$ \boldsymbol{\beta} = (\mathbf{X}^\textrm{T}\mathbf{X})^{-1}\mathbf{X}^\textrm{T}\mathbf{Y} \tag{3} $$ (for example, see [this link](https://towardsdatascience.com/the-matrix-algebra-of-linear-regression-6fb433f522d5)). This is fine for me so far. However, I am confused that it seems like the exact same result is obtained by solving instead $$ \mathbf{Y} = \mathbf{X}\boldsymbol{\beta} \tag{4} $$ The following Matlab script illustrates this, where the outputs for `beta1` and `beta2` are identical: ``` x_data = [0.1 ; 3.6 ; 5.2 ; 8.1]; y_data = [0.15 ; 3.5 ; 5.4 ; 7.6]; X = [ones(length(x_data),1) x_data]; Y = y_data; % First method beta1 = (X.'*X) \ (X.'*Y); % instead of backslash, can also do beta = inv(X.'*X)*(X.')*Y; % Second method beta2 = X \ Y; ``` Equation (2) arises from minimizing the residuals, whereas Eq. (4) seems to have just neglected the residuals altogether? Is it as simple as the fact that the $\mathbf{X}^\textrm{T}$ can simply cancel on both sides of Eq. (2)? In which case, why do we see the form given in Eq. (2) used so much? (It is obviously much more computationally expensive to do method 1, compared with method 2.) What am I missing here? Thanks
How to reconcile these two matrix equations for obtaining the coefficients for a linear least squares fit?
CC BY-SA 4.0
null
2023-05-19T13:57:03.023
2023-05-19T15:11:05.143
null
null
290032
[ "regression", "least-squares", "linear-model", "matrix", "fitting" ]
616332
1
616342
null
2
215
Let $\mathcal{L}(\theta\mid x_1,\ldots,x_n)$ be the likelihood function of parameters $\theta$ given i.i.d. samples $x_i$ with $i=1,\ldots,n$. I know that under some regularity conditions the $\theta$ that maximizes $\mathcal{L}$ converges in probability to the true value $\theta_0$ as the sample size becomes infinite (i.e., the MLE is consistent). However, this does not mean that the likelihood for other values of $\theta$ is $0$: it just means that all those other values will be less than the one attained at $\theta=\theta_0$. My question is whether the likelihood actually becomes $0$ in the limit when $n \to \infty$ for all values $\theta \neq \theta_0$. I suppose that in order for this to hold (if it does), some regularity conditions would be required (possibly the same ones required to claim that the MLE is consistent and asymptotically efficient). Feel free to consider just the cases in which such conditions are met. EDIT: Maybe the question is too general to be answered, so I want to specify that I am working with the covariance matrix of a multivariate Gaussian, nothing too fancy. In particular, for a one-dimensional random variable, I see that the behavior I described above is apparently true: [](https://i.stack.imgur.com/ISUmA.png) The problem is that I do not know how to describe this behavior mathematically. I mean, considering distributions like the Gaussian that meet the necessary regularity conditions, what does the likelihood function converge to when $n \to \infty$?
What does the likelihood function converge to when sample size is infinite?
CC BY-SA 4.0
null
2023-05-19T14:08:30.593
2023-05-20T01:16:20.813
2023-05-19T15:08:10.550
122077
122077
[ "maximum-likelihood", "convergence", "estimators", "asymptotics", "consistency" ]
616333
1
null
null
0
20
I came across a proof in a statistics book using the following: if
$$
S=P(|X|\geq\epsilon,|B|\leq b)+P(|X|\geq\epsilon,|B|\geq b)
$$
then
$$
S\leq P\left(\frac{X}{|B|}\geq\frac{\epsilon}{b}\right)+P(|B|\geq b)
$$
I want to verify whether my thinking is correct. The event $\{\frac{X}{|B|}\geq\frac{\epsilon}{b}\}$ is a subset of the event $\{|X|\geq\epsilon\cap |B|\leq b\}$, which is also true for the second term. The inequality follows since if $X\subset A$ then
$$
P(X)\leq P(A)
$$
Am I right? I also stated, by intuition, that the events in the second equation are subsets of those in the first. Is there any need to make this more formal? Or is my line of thinking sufficient?
Probability inequality between subsets of events
CC BY-SA 4.0
null
2023-05-19T14:16:44.327
2023-05-19T14:43:27.263
2023-05-19T14:43:27.263
338644
338644
[ "probability" ]
616334
2
null
616329
2
null
The sigmoid is a strictly increasing function, so the ordering of the scores is the same as that of the "probabilities"*. Both the ROC curve and the precision-recall curve only care about the ordering of scores, so no, there will be no difference between the curves generated by the two versions. With thresholds, there's a little more wiggle room, since between any two score values there are infinitely many thresholds that produce the same hard classifier and the same point in ROC (resp. PR) space. But the score thresholds correspond bijectively to probability thresholds (just the sigmoid and its inverse), so any optimal threshold for one will have a corresponding optimal threshold in the other. In many packages, the midpoint between consecutive unique score values gets used as the threshold between them, and those won't quite correspond: $\sigma((a+b)/2) \neq (\sigma(a)+\sigma(b))/2$. *depending on your network architecture, these might not be well-calibrated probabilities
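A quick numerical illustration of the ordering argument (a minimal sketch in R, with simulated labels and scores; `plogis` plays the role of the sigmoid): the AUC is unchanged by the sigmoid because it depends only on the ranking of the scores.
```
# AUC via the rank (Wilcoxon) formula, which depends only on the ordering
auc_rank <- function(score, label) {
  r  <- rank(score)
  n1 <- sum(label == 1); n0 <- sum(label == 0)
  (sum(r[label == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

set.seed(1)
label <- rbinom(200, 1, 0.4)
score <- rnorm(200, mean = 2 * label)   # raw "logit"-like scores

auc_rank(score, label)                  # AUC from the raw scores
auc_rank(plogis(score), label)          # identical AUC from sigmoid(scores)
```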
null
CC BY-SA 4.0
null
2023-05-19T14:18:02.927
2023-05-19T14:18:02.927
null
null
232706
null
616336
2
null
616331
2
null
Under the condition that $X \in \mathbb{R}^{n \times p}$ is of full column rank and $Y = X\beta$ has a solution (which is implied by the consistency condition $\operatorname{rank}(\begin{bmatrix}X & Y \end{bmatrix}) = \operatorname{rank}(X)$), you do make a good point: the solution $\hat{\beta}$ to the linear system $Y = X\beta$ indeed coincides with the closed-form least-squares estimate expression $\hat{\beta} = (X^TX)^{-1}X^TY$, which follows readily from Eq. (2). However, the reason is slightly more involved than what you proposed ("Is it as simple as the fact that the $X^T$ can simply cancel on both sides of Eq. (2)?"). Simply cancelling $X^T$ from both sides of Eq. (2), as if it were a non-zero real number, is not rigorous enough. You will need some linear algebra machinery here. One argument goes as follows: another way of viewing the normal equation $X^TX\beta = X^TY$ is \begin{align} X^T(Y - X\beta) = 0. \tag{$*$} \end{align} By the consistency assumption there is some $\beta^*$ with $Y = X\beta^*$, so $Y - X\beta = X(\beta^* - \beta)$, and $(*)$ becomes $X^TX(\beta^* - \beta) = 0$. Hence $\beta^* - \beta$ lies in the null space of $X^TX$, which coincides with the null space of $X$ and has dimension $p - \operatorname{rank}(X) = p - p = 0$ (this is where you need to use linear algebra theory). This implies that if $\beta$ solves $(*)$, then $\beta = \beta^*$ and $Y - X\beta = 0$, i.e., $\beta$ must be a solution to $Y = X\beta$. The converse direction is trivial. An alternative argument uses the QR decomposition of $X$ -- this is the default numerical recipe R uses to get the least-squares estimate. Suppose the QR decomposition of $X$ is $X = QR$, where $Q \in \mathbb{R}^{n \times p}$ is a column-orthogonal matrix (i.e., $Q^TQ = I_{(p)}$), and $R$ is an invertible (as $\operatorname{rank}(X) = p$) upper-triangular matrix. Eq. (2) can then be rewritten as $R^TQ^TQR\beta = R^TQ^TY$, which in turn by the orthogonality of $Q$ and invertibility of $R$ reduces to $R\beta = Q^TY$. On the other hand, in terms of $Q$ and $R$, Eq. (4) is $Y = QR\beta$, which becomes $Q^TY = R\beta$ after multiplying $Q^T$ on both sides. Therefore both Eq. (2) and Eq. (4) are essentially the equation $R\beta = Q^TY$, hence must yield the same numerical solution. Finally, Eq. (3) is used so much because it gives an explicit expression of the least-squares estimate of $\beta$ in terms of the raw inputs/observations $X$ and $Y$, which is valuable in theory and perhaps in teaching/communication (of course, as you mentioned, in implementation, no sensible software would really try to invert the matrix $X^TX$). By comparison, it is less convenient to describe the LSE as, say, "a solution to the linear system $Y = X\beta$". In addition, Eq. (2) helps to reveal (or is a natural consequence of) the nature of the LSE: the LSE $\hat{\beta}$ aims to minimize the Euclidean distance $\|Y - X\beta\|^2 = (Y - X\beta)^T(Y - X\beta)$, and you can see that the term $X^TX$ naturally emerges from this objective function.
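A small numerical sketch in R (using the same four data points as in the question) that checks the equivalence: the normal equations, the QR route $R\beta = Q^TY$, and `lm()` all return the same coefficients.
```
x <- c(0.1, 3.6, 5.2, 8.1)
y <- c(0.15, 3.5, 5.4, 7.6)
X <- cbind(1, x)

beta_normal <- solve(t(X) %*% X, t(X) %*% y)          # normal equations, Eq. (2)/(3)

qrX     <- qr(X)                                      # QR route: solve R beta = Q^T y
beta_qr <- backsolve(qr.R(qrX), t(qr.Q(qrX)) %*% y)

beta_lm <- coef(lm(y ~ x))                            # reference fit

cbind(beta_normal, beta_qr, beta_lm)                  # all three columns agree
```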
null
CC BY-SA 4.0
null
2023-05-19T14:59:42.870
2023-05-19T15:11:05.143
2023-05-19T15:11:05.143
362671
20519
null
616338
1
null
null
1
40
In class, my professor said that a recurrent neural network is for problems where the state changes as a function of the past state and new input, i.e. $h_{t+1} = f(h_t, x_t)$, for any $f$. Then he said we could place, in each RNN cell, a universal function approximator (e.g. a deep neural network) to learn any $f$ and thus solve all of these problems. But then he said that this actually won't work for many reasons (e.g. it is too slow and too unstable). Instead, usually only a single matrix multiplication is used, or some variant on this like LSTM/GRU. And to add complexity, multiple RNN layers can be stacked one after another. I am having a hard time seeing whether this is as powerful as using a deep NN in each cell (if we somehow got that to work). In other words, can a deep RNN learn the recurrence for any $f$? This seems much more non-trivial to show. I am also not seeing what different inductive biases, if any, this approach has compared with the theoretical deep-NN-in-each-cell approach.
Expressive power of RNNs and deep RNNs
CC BY-SA 4.0
null
2023-05-19T15:05:16.320
2023-05-20T10:16:17.213
null
null
361233
[ "machine-learning", "neural-networks", "recurrent-neural-network" ]
616339
1
616363
null
1
16
I have a within-subjects study where for each participant I have 10 datapoints from the control condition and 10 datapoints from the treatment condition. The treatment effect is calculated as [mean of outcomes from treatment condition] - [mean of outcomes from control condition]. I am trying to use bootstrap to calculate a confidence interval around the treatment effect. I've only seen bootstrap used on paired data when each participant only has 1 datapoint from the treatment and 1 datapoint from the control. In that setting, you can simply bootstrap the distribution of paired differences. But in my setting, this does not work because there are 10 datapoints from each participant (and individual datapoints do not have natural pairs). Is the following a valid procedure? ``` Repeat n times: For each participant: Bootstrap sample 10 datapoints (from their 20) to be their control datapoints Bootstrap sample 10 datapoints (from their 20) to be their treatment datapoints Calculate the test statistic Calculate the p-value using the distribution of n test statstics ```
Bootstrap confidence interval in within-subjects study
CC BY-SA 4.0
null
2023-05-19T15:16:55.640
2023-05-19T18:54:17.523
null
null
388350
[ "confidence-interval", "bootstrap" ]
616340
1
null
null
0
17
I want to run a linear mixed model (LMM) testing the effects of two continuous variables (FD and FI) over time (Year:FD, Year:FI) on my response variable (QMD). For LMMs I typically use lme4, but as the variance in this model increases over time, I want to use nlme to weight the model variance by year (using varIdent by Year). However, when I run the ANOVA table of the model, something does not seem correct: the denominator degrees of freedom (DenDF) do not look right. When I compare the estimated DenDF of the lmer (lme4) and lme (nlme) models, they are very different. The other estimates vary as well, but I assumed this was because, as I said before, in the lme model I weighted the variance by year. Do you know why the two models report such different DenDFs? Here is the lmer model without weighting the variance by year
```
mod_lmer <- lmer (QMD ~ FD*Year + FI*Year + (1|Block) + (1|Plot), data=QMD_data, REML=TRUE)
```
[](https://i.stack.imgur.com/cpiae.png)
Now, running the lme model - following Ben Bolker's approach for including crossed random factors in lme models ([this Stack Overflow answer](https://stackoverflow.com/questions/36643713/how-to-specify-different-random-effects-in-nlme-vs-lme4)):
```
mod_lme = lme (QMD ~ FdisPC1 * Inv + CWM_PC1 * Inv, random=list(Dummy = pdBlocked(list(pdIdent(~Plot-1), pdIdent(~Block-1)))), data=QMD_data, weights = varIdent(form = ~1 | Inv), method="REML", control = list(msMaxIter = 1000, msMaxEval = 1000))
```
[](https://i.stack.imgur.com/B7n9J.png)
Does someone know why the estimated DenDF are so different from each other? Also, I realized that while in the lmer model the DenDF for FDisPC1 and CWM_PC1 differ from those of the rest of the variables (34), in the lme model both have the same DenDF as the other effects (1591).
Why do the estimated denominator degrees of freedom differ so much between lme and lmer models?
CC BY-SA 4.0
null
2023-05-19T15:37:18.113
2023-05-19T15:44:07.040
2023-05-19T15:44:07.040
360028
360028
[ "r", "mixed-model", "anova", "lme4-nlme" ]
616341
2
null
616155
0
null
The `gls` and `lme` models have an important difference. As this [Stack Overflow answer](https://stackoverflow.com/a/1437343/5044791) quotes from [Pinheiro and Bates](https://www.springer.com/gp/book/9780387989570): > The gls function... can be viewed as an lme function without the argument random. Although the matrix of error correlations among observations in your `gls` model has no imposed internal structure, it has a very strict structure within the model: except for your allowing different variances within `Timef` groups via the `varIdent()` you supplied to the `weights` argument, the very same correlation matrix is assumed for all individuals. The `lme` model that eventually worked for you assumes by default no within-individual correlations. It instead models individuals as having random `Timef` coefficients that have a strictly normal distribution within each value of `Timef`. Although in this case the fixed-effect coefficients seem to be the same in the `gls` and `lme` models (as in the linked [Stack Overflow answer](https://stackoverflow.com/a/1437343/5044791)), I don't think that you can necessarily expect the "unstructured" error-correlation matrix in `gls` to be exactly "reproduced" by the variance-covariance matrix of the sets of `Timef` coefficients that are normally distributed among individuals, although they do come close here. (That might be the case asymptotically in the limit of large numbers of observations.) Maybe more important is the difference in the ways to think about the models. [This answer](https://stats.stackexchange.com/a/486980/28500) emphasizes the marginal (`gls`) versus conditional (`lme`) interpretations, and has a link to why some results reported for `lmer` models aren't the same as for corresponding `lme` models.
null
CC BY-SA 4.0
null
2023-05-19T15:40:48.003
2023-05-19T15:40:48.003
null
null
28500
null
616342
2
null
616332
7
null
The [Bernstein–von Mises theorem](https://en.wikipedia.org/wiki/Bernstein%E2%80%93von_Mises_theorem) provides the asymptotic form (under regularity conditions) of the likelihood function. The likelihood itself does not become zero for $\theta \ne \theta_0$, but the posterior probability does. Notice that the posterior probability is essentially the same as the normalized likelihood in the limit $n \to \infty$, given a prior that is "regular enough" in the neighborhood of $\theta_0$. In other words, the likelihood becomes a Gaussian function centered at the true parameter $\theta_0$ with covariance matrix given by $n^{-1}I(\theta_0)^{-1}$, where $I(\theta_0)$ is the Fisher information matrix. The width of the Gaussian consequently tends towards zero when $n \to \infty$.
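As a rough illustration of this concentration, here is a minimal R sketch for the mean of a $N(\theta_0, 1)$ sample (where $I(\theta_0) = 1$): the normalized likelihood visibly narrows around $\theta_0$ as $n$ grows.
```
theta0 <- 2
theta  <- seq(1, 3, length.out = 400)
set.seed(42)

plot(NULL, xlim = range(theta), ylim = c(0, 1),
     xlab = expression(theta), ylab = "likelihood / max likelihood")
ns <- c(10, 100, 1000)
for (i in seq_along(ns)) {
  x <- rnorm(ns[i], mean = theta0, sd = 1)
  loglik <- sapply(theta, function(t) sum(dnorm(x, mean = t, log = TRUE)))
  lines(theta, exp(loglik - max(loglik)), lty = i)   # normalized likelihood curve
}
legend("topright", legend = paste("n =", ns), lty = seq_along(ns))
```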
null
CC BY-SA 4.0
null
2023-05-19T15:46:08.467
2023-05-19T15:46:08.467
null
null
348492
null
616343
1
null
null
0
8
I am studying the incidence of different types of stroke in a hospital. I have information about the time of arrival of each patient and the type of stroke. I divided the whole day into 6 time slots (00-04, 04-08, 08-12, 12-16, 16-20, 20-00) and counted the number of each type of stroke (ischemic stroke, hemorrhagic stroke, transient ischemic attack and mimic) in each time slot. I obtained the following table:

|Time slot |Ischemic stroke |Hemorrhagic stroke |TIA |Mimic |
|---------|---------------|------------------|---|-----|
|00-4 |32 |5 |1 |5 |
|4-8 |50 |18 |2 |15 |
|8-12 |271 |58 |22 |86 |
|12-16 |259 |32 |28 |92 |
|16-20 |191 |32 |15 |60 |
|20-00 |205 |49 |9 |100 |

The hypothesis I want to test is whether the frequency of each type of stroke is constant over time. I performed a chi-squared test, obtaining a p-value = 0.00175. However, a reviewer at a journal says the test is not appropriate for this. He says: "The chi squared test tells you about the correlation of stroke types with each other, but not with the time of day." As far as I know, a chi-squared test could be appropriate for these data. What do you think? Do you think I could perform another type of analysis to improve it? (I also have the raw data.)
Contrast for hourly counts between 4 groups
CC BY-SA 4.0
null
2023-05-19T15:49:56.693
2023-05-19T15:49:56.693
null
null
326551
[ "chi-squared-test" ]
616344
2
null
616279
3
null
Suppose that we don't stop the game when you blow up; we just stop accumulating points. This will not change the probability of getting a certain number of points, and it makes it easier to analyze. All games now take exactly 22 turns, and can be described by an ordering of the things that get blown up. There are 22! such orderings, all equally probable. There are 21! orderings in which you blow up on the last turn. So the probability that you get one of those orderings is 21!/22! = 1/22. Even more simply, we don't need to work out all the combinations. We can select the elements in whatever order we want, so long as we aren't permitted to choose something we already blew up, so let's start with the last one instead of the first. Since we haven't eliminated any possibilities yet, there are 22 options (selected from uniformly), and thus a 1/22 chance that that'll be the turn where you blow up.
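A quick simulation check of the permutation argument (a minimal R sketch): in a uniform random ordering of 22 items, the designated "blow up" item is last about 1/22 of the time.
```
set.seed(1)
# sample(22) is a uniform random permutation of the 22 items;
# check how often the designated item (labelled 1 here) ends up in the last slot
mean(replicate(1e5, sample(22)[22] == 1))   # approx. 0.0455
1 / 22                                      # 0.04545...
```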
null
CC BY-SA 4.0
null
2023-05-19T16:01:41.537
2023-05-19T16:06:45.870
2023-05-19T16:06:45.870
126679
126679
null
616348
1
null
null
0
16
Let's say I have the below time series data (it is return data from a financial derivative):
```
x<-structure(c(-0.030727, -0.009815, -0.001215, 0.010281, -0.03471,
0.01946, -0.027811, -0.002559, -0.001176, 8.3e-05, 0.026288, -0.015959,
0.010825, -0.022024, 0.008582, 0.023524, 0.00139, -0.022638, -0.002818,
0.00118, -0.033008, -0.018363, -0.003505, 0.003468, -0.003918, 0.016427,
0.022444, 0.021874, -0.010313, 0.003757, 0.014585, -0.014774, 0.008629,
0.008678, 0.001803, -0.007109, 0.028474, 0.002943, 0.00085, 0.002037,
-0.001861, -0.012703, 0.005483, -0.002618, 0.018344, 0.000381, -0.00456,
0.007437, 0.002311, 0.004317, 0.002753, 0.002656, -0.011011, 0.000973,
-0.001407, 0.016608, 0.004667, 0.000115, 0.009135, -0.004638, -0.009834,
0.015728, -0.014812, 0.000478, -0.003571, 0.007973, 0.015343, -0.000309,
-0.001552, 0.004405, -0.003978, 0.001578, -0.000453, -0.008051, -0.00213,
-0.001535, -0.005156, -0.011967, -0.006248, 0.008804, -0.01135, -0.007921,
-0.001811, 0.004035, 0.002962, 0.012483, -0.01028, -0.004919, -0.004157,
0.012173, -0.012587, 0.004948, -0.005627, 0.012029, -0.000793, 0.019795,
0.006935, 0.001405, 0.006452, 0.002945, 0.000848, 0.003851, -0.005818,
0.005285, -0.001402, 0.002595, -0.003226, -0.013005, -0.009465, -0.001009,
-0.001781, 0.002064, -0.009244, 0.007651, 0.001353, -0.002158, 0.015749,
-0.042023, -0.024412, 0.020984, 0.018452, 0.013183, 0.004101, -0.008192,
0.00749, 0.003626, 0.016261, 0.006411, 0.006828, -0.003408, 0.005644,
-0.000888, 0.005194, -0.003846, 0.010579, -0.003154, 0.005162, -0.000496,
0.002433, 0.005807, 0.002947, 0.001386, 0.004266, -0.009004, 0.004275,
0.001261, 0.010565, -0.00153, 0.002364, -0.004008, 0.004566, 0.000858,
0.005551, -0.006656, 0.000297, 0.002195, -0.000338, 0.001187, 0.002947,
-0.008091, -0.001053, 0.001288, 0.002566, -0.001787, -0.001872, 0.00268,
0.004331, 0.004942, 0.001519, -0.004638, -0.025724, 0.016634, -0.010927,
0.003586, 0.014567, -0.000976, -0.001821, 0.001207, 0.010218, 0.008339,
-0.006345, -0.009137, 0.00913, 0.002417, -0.009332, 0.008099, -0.002097,
-0.002117, 0.004971, -0.001726, -0.002727, 0.00683, -0.015485, -0.001482,
-0.004916, 0.000159, -0.002754, 0.00843, 0.00049, -0.000873), tsp = c(1, 200, 1))
```
I want to determine whether there is structural change in the time series. First, I determine the ARIMA model using the auto.arima procedure. Then I apply the OLS-CUSUM procedure below, following [https://stats.stackexchange.com/questions/104324/strucchange-package-on-arima-model](https://stats.stackexchange.com/questions/104324/strucchange-package-on-arima-model), to determine whether there is a structural break. The code is below:
```
library(forecast)     # for auto.arima
library(strucchange)  # for plotting the efp object

z <- auto.arima(x, max.p = 3, max.q = 3)  # auto.arima selects the order; an AR(1) is then fitted explicitly

fit <- arima(x, order = c(1,0,0), include.mean = FALSE)
e <- residuals(fit)
sigma <- sqrt(fit$sigma2)
n <- length(x)
cs <- cumsum(e) / sigma
retval <- list()
retval$coefficients <- coef(fit)
retval$sigma <- sigma
retval$process <- cs
retval$type.name <- "OLS-based CUSUM test"
retval$lim.process <- "Brownian bridge"
retval$datatsp <- tsp(x)
class(retval) <- c("efp")
plot(retval)
```
The result is as below:
[](https://i.stack.imgur.com/qC4y3.png)
The plot shows that there are structural breaks in the data. It seems strange to me. I wonder how to fit an ARIMA model to this data without structural breaks. How should I deal with this data? Thanks a lot.
What should I do about structural breaks in a return time series? - R
CC BY-SA 4.0
null
2023-05-19T16:24:52.777
2023-05-19T16:24:52.777
null
null
73724
[ "r", "time-series", "arima", "change-point" ]
616349
1
null
null
1
19
I'm looking to learn about extreme value theory, starting from the univariate case and then moving on to the multivariate case. I have tried the textbook by de Haan, but I'm constantly lost trying to read the proof of the Fisher–Tippett–Gnedenko theorem in the first chapter. I have also tried Statistics of Extremes: Theory and Applications, but I cannot get through the first few chapters either. I have read EXTREMES AND RELATED PROPERTIES OF RANDOM SEQUENCES AND PROCESSES and found the section on the Fisher–Tippett–Gnedenko theorem understandable, but the book does not contain any section on multivariate analysis, and I'm not much interested in time series. I have also tried D-Norms & Multivariate Extremes but found it hard to follow once it got to the multivariate extremes part. I'm looking for a comprehensive resource that covers multivariate extreme value theory. If anyone has any recommendations, I'd be grateful.
Resource recommendation for extreme value theory
CC BY-SA 4.0
null
2023-05-19T16:46:05.873
2023-05-19T16:46:05.873
null
null
260660
[ "references", "extreme-value", "extremal-dependence" ]
616350
2
null
547895
0
null
The problem with this approach is that you are implicitly sidestepping the necessary multiple hypothesis correction. Imagine a scenario where you have 100 groups instead of just 3, and all of the groups have the same hazard rate. By chance alone, some of those groups will have better-than-expected survival and some will have worse. If you look at the curves and then choose to compare the best survivors and the worst survivors, you're implicitly ignoring all the other tests that you ran "by eye" - you've visually identified which curves aren't very different and specifically chosen to ignore them, but your p-value calculation pretends those other curves don't even exist. It's poor statistical practice to run lots of tests and only report the most significant ones, which is effectively what's happening when you filter out the other curves by the "eye test".
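To make this concrete, here is a minimal R simulation sketch (the number of groups, group sizes, and hazard are arbitrary choices): every group shares the same exponential survival, yet comparing only the best-looking and worst-looking groups rejects far more often than the nominal 5%.
```
library(survival)
set.seed(1)

one_run <- function(n_groups = 20, n_per = 30) {
  d <- data.frame(group  = rep(seq_len(n_groups), each = n_per),
                  time   = rexp(n_groups * n_per, rate = 0.1),  # identical hazard everywhere
                  status = 1)
  med  <- tapply(d$time, d$group, median)
  # keep only the "best" and "worst" looking groups, as if chosen by eye
  keep <- d$group %in% c(which.min(med), which.max(med))
  sub  <- d[keep, ]
  pchisq(survdiff(Surv(time, status) ~ group, data = sub)$chisq,
         df = 1, lower.tail = FALSE)
}

pvals <- replicate(500, one_run())
mean(pvals < 0.05)   # well above the nominal 0.05 false-positive rate
```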
null
CC BY-SA 4.0
null
2023-05-19T17:20:49.777
2023-05-19T17:20:49.777
null
null
76825
null
616351
1
616368
null
1
24
This is a follow-on to post [Correctly simulating an extreme value distribution for survival analysis?](https://stats.stackexchange.com/questions/616068/correctly-simulating-an-extreme-value-distribution-for-survival-analysis), as I work towards adapting that code to the Weibull distribution. In the code below I generate random numbers for $W$ for the Weibull model that takes the form $\log T = \alpha + \sigma W$, where $\alpha$ is the linear predictor and $W$ represents a standard minimum extreme value distribution. I generate random numbers for the regression intercept, but I also need reasonable log scale values to go with the randomly generated intercepts to feed into the Weibull survival formula. I assume the variance-covariance matrix (`vcov(fit)`) from the `survreg()` function has the necessary information for providing reasonable log scale values. Is the code below a reasonable way to generate the corresponding log scale values? It makes a very simplistic assumption of linearity. There must be a better way to do this. I wonder if it wouldn't be much easier to simply use `MASS::mvrnorm()` to generate both the intercepts and the corresponding log scale values, but unless I am mistaken my code below should generate the desired wider dispersion of outcomes via its use of `log(rexp(...))` in the `simFX` function. Code:
```
library(survival)

fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "weibull")
fitCoef <- fit$icoef[1] # extract intercept value from fit
vcov_mat <- vcov(fit)

# Function to generate W value from extreme value distribution
simFX <- function(){
  W <- log(rexp(50))
  fitW <- survreg(Surv(exp(W))~1,dist="exponential")
  params <- fitCoef + fitW$icoef
  return(params)
}

r.intercept <- simFX() # assign random value to object

# Calculate the corresponding random log scale value
r.log_scale <- fit$icoef[2]+sqrt(vcov_mat[2, 2])*(r.intercept-fit$icoef[1])/sqrt(vcov_mat[1,1])

print(paste("Random Intercept:", r.intercept))
print(paste("Random Log Scale:", r.log_scale))
```
EDIT 1: Below is my attempt to simulate the uncertainty of $W$ only (no simulation of $\beta_0$, I don't think), representing a standard minimum extreme value distribution, for a Weibull model. Repeatedly run the last line of code to add simulation lines. The plot image below shows 10 simulations. Trying to keep the code as simple as possible! Running more iterations shows more dispersion than in my other posts where I simulate $\beta_0$ only. Also note the plot image (10 simulations) and code further below for the exponential model, from which the Weibull code below was adapted and which I believe is correct.
[](https://i.stack.imgur.com/o4WBQ.png)
Code for Weibull model:
```
library(survival)

time <- seq(0, 1000, by = 1)

fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "weibull")
fitCoef <- fit$icoef

weibCurve <- function(time, survregCoefs) {
  exp(-(time/exp(survregCoefs[1]))^exp(-survregCoefs[2]))
}
survival <- weibCurve(time, fitCoef)

# Generate random distribution parameter estimates for simulations
simFX <- function(){
  W <- log(rexp(100))
  fitW <- survreg(Surv(exp(W))~1,dist="weibull")
  params <- fitCoef + fitW$icoef
  return(weibCurve(time, params))
}

plot(time,survival,type="n",xlab="Time",ylab="Survival Probability",
     main="Lung Survival (Weibull) by sampling from W extreme-value distribution")
lines(survival, type = "l", col = "red", lwd = 3) # plot base fitted survival curve

### Click on the below to add simulation lines ###
lines(simFX(), col = "blue", lty = 2)
```
Now for exponential model
---
[](https://i.stack.imgur.com/hoafH.png)
Code for exponential model:
```
library(survival)

time <- seq(0, 1000, by = 1)

fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "exponential")
fitCoef <- fit$icoef
survival <- 1 - pexp(time, rate = 1/exp(fitCoef))

# Generate random distribution parameter estimates for simulations
simFX <- function(){
  W <- log(rexp(50))
  fitW <- survreg(Surv(exp(W))~1,dist="exponential")
  params <- fitCoef + fitW$icoef
  return(1 - pexp(time, rate = 1 / exp(params)))
}

plot(time,survival,type="n",xlab="Time",ylab="Survival Probability",
     main="Lung Survival (exponential) by sampling from W extreme-value distribution")
lines(survival, type = "l", col = "red", lwd = 3) # plot base fitted survival curve

### Click on the below to add simulation lines ###
lines(simFX(), col = "blue", lty = 2)
```
EDIT 2: Replace the `simFX()` functions in the above code examples for Weibull and exponential with the below in order to correctly reflect the parametric survival form $\log T \sim \alpha + \sigma W$ for Weibull and $\log(T) \sim \alpha + W$ for exponential:
Weibull:
```
simFX <- function(){
  W <- log(rexp(100)) # W = std min extreme value for parametric survival form logT ~ alpha + sigma*W
  newTimes <- exp(fitCoef[1] + exp(fitCoef[2])* W)
  fitNewTimes <- survreg(Surv(newTimes)~1,dist="weibull")
  return(weibCurve(time,fitNewTimes$icoef))
}
```
Exponential:
```
simFX <- function(){
  W <- log(rexp(100)) # W = std min extreme value for parametric survival form logT ~ alpha + W
  newTimes <- exp(fitCoef + W)
  fitNewTimes <- survreg(Surv(newTimes)~1,dist="exponential")
  return(1 - pexp(time, rate = 1 / exp(fitNewTimes$icoef)))
}
```
How to assign reasonable scale parameters to randomly generated intercepts for the Weibull distribution?
CC BY-SA 4.0
null
2023-05-19T17:27:25.463
2023-05-21T09:30:56.287
2023-05-21T09:30:56.287
378347
378347
[ "r", "survival", "random-variable", "extreme-value", "weibull-distribution" ]
616352
1
null
null
2
70
I am thinking of running a regression of $Y \sim X_i$, but for each $X_i$ I have a pre-determined "score" $S_i$ describing the quality of the data. The higher the score, the more confidence I have in $X_i$. So I want to run a modified regression, finding the $w_i$ that minimize $$\| y - \sum_i w_i X_i \|^2 + \lambda \sum_i w_i^2/S_i^2.$$ Basically, the penalty term makes me give preference to the weights on the $X_i$ that have a "high score of confidence". I wonder, is there a problem with this formulation? The closed-form solution is $w = (X^TX + \lambda \Lambda)^{-1}X^Ty$ with $\Lambda = \operatorname{diag}(1/S_i^2)$, if I am not mistaken, but I am not sure how to interpret this. I'd also like to know if there are already similar formulations of this kind of "data quality regression" problem.
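For what it's worth, here is a minimal numerical sketch in R of the closed form above (the data and scores are made up); it also shows that the problem is equivalent to ordinary ridge regression after rescaling each column of $X$ by its score.
```
set.seed(1)
n <- 100; p <- 4
X <- matrix(rnorm(n * p), n, p)
y <- X %*% c(1, -2, 0.5, 0) + rnorm(n)
S <- c(2, 1, 0.5, 0.25)           # "quality" scores, one per regressor
lambda <- 1

# Closed form: w = (X'X + lambda * diag(1/S^2))^{-1} X'y
Lambda   <- diag(1 / S^2)
w_closed <- solve(t(X) %*% X + lambda * Lambda, t(X) %*% y)

# Equivalent view: ordinary ridge after rescaling each column by its score
Xs      <- sweep(X, 2, S, "*")
w_ridge <- S * solve(t(Xs) %*% Xs + lambda * diag(p), t(Xs) %*% y)

cbind(w_closed, w_ridge)          # identical columns
```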
Regression that factors in the "quality" of the regressors
CC BY-SA 4.0
null
2023-05-19T17:35:57.103
2023-05-20T17:19:28.820
2023-05-19T22:49:52.377
296410
296410
[ "regression", "multiple-regression", "optimization" ]
616353
1
null
null
0
14
I am currently working on a logistic regression model using the mgcv package. The model contains 3 variables: x1 and x2 (continuous) and x3 (categorical), and I believe there is an interaction between x1 and x2. This is my base model: `gam_model <- gam(Output ~ s(x1) + s(x2) + x3, method = "REML", data = data, family = binomial, select = TRUE)` And this is the output:
```
Formula:
Output ~ s(x1) + s(x2) + x3

Parametric coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)   2.5464     0.4327   5.886 3.97e-09 ***
x31          -0.1607     0.5105  -0.315    0.753    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
         edf Ref.df Chi.sq p-value    
s(x1) 3.9614      9  34.72  <2e-16 ***
s(x2) 0.9836      9  51.69  <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.664   Deviance explained = 63.9%
-REML = 65.942  Scale est. = 1         n = 263

> gam.check(gam_model)

Method: REML   Optimizer: outer newton
full convergence after 20 iterations.
Gradient range [-2.632544e-05,1.312618e-05]
(score 65.9424 & scale 1).
Hessian positive definite, eigenvalue range [1.746351e-07,0.7026972].
Model rank =  20 / 20

Basis dimension (k) checking results. Low p-value (k-index<1) may
indicate that k is too low, especially if edf is close to k'.

         k'   edf k-index p-value
s(x1) 9.000 3.961    1.01   0.470
s(x2) 9.000 0.984    0.92   0.095
```
Now, when introducing ti, the output changes drastically, but the deviance explained remains relatively constant:
```
gam_model <- gam(Output ~ ti(x1) + ti(x2) + ti(x1, x2) + x3, method = "REML", data = data, family = binomial, select = TRUE)

Formula:
Output ~ ti(x1) + ti(x2) + ti(x1, x2) + x3

Parametric coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)   1.9417     0.4147   4.682 2.84e-06 ***
x31          -0.3893     0.5092  -0.764    0.445    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                edf Ref.df Chi.sq p-value    
ti(x1)    5.161e-06      4   0.00   0.475    
ti(x2)    9.774e-01      4  36.07  <2e-16 ***
ti(x1,x2) 5.519e+00     16  39.94  <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.66   Deviance explained = 64.9%
-REML = 65.132  Scale est. = 1         n = 263

> gam.check(gam_model)

Method: REML   Optimizer: outer newton
full convergence after 15 iterations.
Gradient range [-2.806258e-05,9.209817e-06]
(score 65.132 & scale 1).
Hessian positive definite, eigenvalue range [3.756572e-07,1.244895].
Model rank =  26 / 26

Basis dimension (k) checking results. Low p-value (k-index<1) may
indicate that k is too low, especially if edf is close to k'.

                 k'      edf k-index p-value  
ti(x1)    4.00e+00 5.16e-06    0.98   0.425  
ti(x2)    4.00e+00 9.77e-01    0.91   0.085 .
ti(x1,x2) 1.60e+01 5.52e+00    0.89   0.025 *
```
Would somebody help me interpret the results and the changes? Q1: x1 and x2 are closely correlated, and in other applications of this model I often see s(x1) shrinking to a value close to zero while s(x2) increases when I add select = TRUE to the call. Is this due to the interaction? Q2: In this specific case, I am not certain how to interpret the strong drop in ti(x1) and the large interaction term, especially in the context of a low p-value in the gam.check output. My initial thought is that the effect of ti(x1) is mostly driven by interaction via ti(x2), which is now "separated", but is this true, and could I use this model? Thank you very much for your help
Interpretation of Changes introduced by ti() in Logistic GAM Modeling
CC BY-SA 4.0
null
2023-05-19T17:58:03.777
2023-05-21T07:51:57.573
2023-05-21T07:51:57.573
388359
388359
[ "logistic", "interpretation", "generalized-additive-model", "mgcv" ]
616355
1
null
null
0
17
I have a panel data set, which contains data for 5 months of the year observed over 3 different years. We have many individuals observed through these periods. The problem is that many individuals are missing observations on certain days. I want to interpolate the data so that, at the individual level and for each hour of the day, I can take the same hour from the previous month and the subsequent month, average them, and impute the missing observation.
What is the best way to impute missing data at the individual level for each hour of the day using interpolation in R (RStudio)?
CC BY-SA 4.0
null
2023-05-19T17:25:14.220
2023-05-19T20:02:24.933
null
null
388369
[ "r", "interpolation", "data-imputation" ]
616356
1
null
null
1
41
How do I interpret the Pr(>|t|) results? Can I consider "speed" significant for the regression even though the "intercept" does not show a clear statistical difference? Or is the linear regression model only reliable if there is a significant "intercept"? Example
```
summary(lm(dist~speed,data=cars))
# 
# Call:
# lm(formula = dist ~ speed, data = cars)
# 
# Residuals:
#     Min      1Q  Median      3Q     Max 
# -29.069  -9.525  -2.272   9.215  43.201 
# 
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)    
# (Intercept) -17.5791     6.7584  -2.601   0.0123 *  
# speed         3.9324     0.4155   9.464 1.49e-12 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# Residual standard error: 15.38 on 48 degrees of freedom
# Multiple R-squared:  0.6511,    Adjusted R-squared:  0.6438 
# F-statistic: 89.57 on 1 and 48 DF,  p-value: 1.49e-12
```
How to interpret Pr(>|t|) results?
CC BY-SA 4.0
null
2023-05-19T17:20:31.090
2023-05-19T20:31:34.643
2023-05-19T20:02:44.617
28500
null
[ "r", "regression", "lm" ]
616357
1
null
null
1
34
Does ANCOVA require homogeneity of regression slopes? In other words, do the slopes of the lines need to be the same in order to use this method? Per [this](http://www.biostathandbook.com/ancova.html) blog post ANCOVA can be used to discriminate between models, with different slopes, but [this](https://www.reneshbedre.com/blog/ancova.html) blog post says the slopes must be parallel for ANCOVA to be used. Please explain the discrepancy in plain language if possible. I have some simulated data with Python code below as an example for context. Example Data: ``` ## Module Imports import numpy as np import matplotlib.pyplot as plt import pandas as pd import statsmodels.api as sm import statsmodels.formula.api as smf ## Generate sample data for Z statistic method g1_count = 100 # group 1 number of datapoints g2_count = 100 # group 2 number of datapoints x1 = np.arange(5, 15, 10/g1_count) # array of x values for group 1 y1 = x1 + 3 + np.random.normal(0, 1, g1_count) # array of x values for group 2 x2 = np.arange(0, 10, 10/g2_count) # array of y values for group 1 y2 = 2*x2 + 3 + np.random.normal(0, 1, g2_count) # array of y values for group 2 df = pd.DataFrame({"x1": x1, "x2": x2, "y1": y1, "y2": y2}) # create dataframe ## Plot Data fig, ax = plt.subplots() ax.scatter(x1,y1) ax.scatter(x2,y2) ``` [](https://i.stack.imgur.com/DEbQUm.png) ``` ## Sample data for ANCOVA, combines x and y values to a single column each and adds a categorical column x_c = np.concatenate((x1, x2)) y_c = np.concatenate((y1, y2)) group_list = np.concatenate((np.array((len(x1)*["A"])),np.array((len(x2)*["B"])))) df_c = pd.DataFrame({"x": x_c, "y": y_c, "group": group_list}) df_c ``` [](https://i.stack.imgur.com/ruwRBm.png) Fit Model and See Summary Statistics ``` lm_ancova = smf.ols('y ~ group + x', data=df_c).fit() lm_ancova.summary() ``` [](https://i.stack.imgur.com/hk1WAm.png) Null hypothesis is rejected. The two linear models are likely not the same given the low p-values. However, the assumption that there is no interaction between the group and covariate (x) fails according to an ANOVA test as described [here](https://www.reneshbedre.com/blog/ancova.html). ``` inter_lm = smf.ols('y ~ group * x', data=df_c).fit() # fit linear interaction model sm.stats.anova_lm(inter_lm, typ=3) # ANOVA test on interaction model ``` [](https://i.stack.imgur.com/48ISNm.png) If you have any suggestions for a more appropriate test implemented in Python it would be much appreciated.
Validity of ANCOVA for Linear Models with Different Slopes
CC BY-SA 4.0
null
2023-05-19T18:19:35.407
2023-05-19T19:09:59.810
null
null
388361
[ "regression", "hypothesis-testing", "ancova", "statsmodels" ]
616358
2
null
614935
-2
null
Do you only want to remove highly correlated variables? If so, the following solution uses less than 6GB of RAM, assuming everything is float32, and runs in a few seconds. I used torch because it often seems to parallelize work better than numpy does. The basic idea is to calculate the correlation matrix, take its absolute value, keep only the strict upper triangle, and remove any column whose maximum absolute correlation with an earlier column exceeds the threshold.
```
import torch

COR_LIMIT = 0.8

data = torch.rand(35000, 12000)

# correlations between columns (variables); take abs first so that strong
# negative correlations are caught as well
cor = torch.abs(torch.corrcoef(data.T))
cor = torch.triu(cor, 1)  # strict upper triangle: compare each column to earlier ones only

keep_mask = cor.max(dim=0).values < COR_LIMIT

data = data[:, keep_mask]
```
null
CC BY-SA 4.0
null
2023-05-19T18:21:13.780
2023-05-19T18:21:13.780
null
null
40513
null
616359
1
null
null
0
16
For one experiment, I have two data sources A and B contributing data points, and a function that takes the union of the sets of data points and assigns a quality weight to each point. I'm trying to analyze the quality of A vs B over many such experiments to evaluate whether B is "as good" a data source as A. One basic thing to do is to count in how many experiments A versus B had the higher sum of weights. But that doesn't capture cases where A and B end up each capturing about half of the total weight. Is there some statistical method I could use to quantify B's relative goodness vs A? I also tried making a histogram of the ratio of B's total weight to A's total weight, but that ends up being such a wide distribution of values that it wasn't very easy to analyze.
How to compare relative weight of two subsets?
CC BY-SA 4.0
null
2023-05-19T18:35:42.780
2023-05-19T18:35:42.780
null
null
286951
[ "probability", "distributions", "weights", "methodology" ]
616360
2
null
616357
0
null
The formal answer to this question is "yes", the ANCOVA test requires homogeneity of slopes (equal slopes for the groups). But with regards to the underlying motivation for the question, the answer is "no", equal slope is NOT a requirement...if you are asking if the groups are different (in a broad sense). Briefly, the question driving the null hypothesis statistical test (NHST) for the ANCOVA (analysis of covariance) is this: ¿Are my groups different on the dependent variable AFTER having controlled for a covariate that comparably impacts that dependent variable? And the power of this test is that it allows you to "correct" for variation across the groups if the covariates are not the same from group to group. For this analysis, you do indeed need a common slope (particularly if you intend to calculate the adjusted means for the groups). However, if the core of your question is this: ¿are my groups different from each other? ...then the requirement for equal slope is less pressing. If the slopes are not equal, then you have a situation of moderation (where the covariate moderates the relationship between the groups and the dependent variable). Thus, your groups are REALLY different from each other...it is not just the intercepts that are (may be) different, but the slopes ALSO are different. Hope this helps clarify.
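A minimal sketch of this check in R (with made-up data in which the slopes genuinely differ): comparing the additive ANCOVA model to the interaction model tests whether the common-slope assumption is tenable, i.e., whether the covariate moderates the group effect.
```
set.seed(1)
dat <- data.frame(group = rep(c("A", "B"), each = 50),
                  x     = runif(100, 0, 10))
dat$y <- ifelse(dat$group == "A", 3 + 1 * dat$x, 3 + 2 * dat$x) + rnorm(100)

m_add <- lm(y ~ group + x, data = dat)   # ANCOVA: common slope, separate intercepts
m_int <- lm(y ~ group * x, data = dat)   # moderation: slopes allowed to differ

anova(m_add, m_int)                      # significant interaction => slopes differ
```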
null
CC BY-SA 4.0
null
2023-05-19T18:37:27.787
2023-05-19T18:37:27.787
null
null
199063
null
616361
1
null
null
2
61
I am trying to understand how to actually implement the Harville-Fellner-Schall algorithm for the estimation of the P-spline penalties presented in Appendix E (Sec. E.2) of Marx and Eilers [1]. Let me summarize here the main points. Given a P-spline basis matrix $B$, the data vector $y\in \mathcal{R}^m$ and the discrete derivative operator matrix $D$ of order $d$, the estimate $\hat{\alpha}$ of the spline coefficient vector is given by the solution of the linear least squares problem
$$
\DeclareMathOperator*{\argminB}{argmin}
\hat{\alpha} = \argminB_\alpha \left(\Vert y - B\alpha\Vert_2^2 + \lambda \Vert D \alpha\Vert_2^2\right),
$$
where $\lambda$ is the parameter that controls how wiggly the smooth regression function will be. It is possible to reparameterize this problem in terms of the mixed model
$$
y = X\beta + Zc + e \quad {\rm with} \quad c=B\alpha \sim \mathcal{N}(0,\tau^2I) \quad {\rm and} \quad e \sim \mathcal{N}(0,\sigma^2 I).
$$
Expressions for $X$ and $Z$ are in [1, Appendix E]. It turns out that
$$
\lambda = \frac{\sigma^2}{\tau^2}.
$$
At this point Marx and Eilers propose an iterative algorithm that alternately estimates $\{\hat{\beta}, \hat{c}\}$ and $\{\hat{\tau}^2, \hat{\sigma}^2\}$ until convergence. This algorithm is based on works by Harville, Fellner and Schall. The estimating equations for the variances are
$$
\DeclareMathOperator*{\trace}{trace}
\hat{\tau}^2 = \frac{\Vert \hat{c}\Vert_2^2}{d - \trace(T)}, \quad \quad \hat{\sigma}^2 = \frac{\Vert y-X\hat{\beta} -Z\hat{c}\Vert_2^2}{m - \trace(T)},
$$
where
$$
S = R^{-1} - R^{-1}X(X^{\rm T}R^{-1}X)^{-1}X^{\rm T}R^{-1},
$$
and
$$
T = I - (I + Z^{\rm T}S Z G)^{-1}.
$$
The problem is that when I look at the implementation of this method in the sample code accompanying the book, I see, for example from [Harville_extended.R](https://psplines.bitbucket.io/Support/Harville_extended.R), something like
```
lambda = 1
BtB = t(B) %*% B
Bty = t(B) %*% y
n = ncol(B)
D = diff(diag(n), diff = d)
P = t(D) %*% D
for (it in 1:nit) {
  a = solve(BtB + lambda * P, Bty)
  G = solve(BtB + lambda * P, BtB)
  ed = sum(diag(G))
  yhat = B %*% a
  sig2 = sum((y - yhat) ^ 2) / (m - ed)
  tau2 = sum((D %*% a) ^2) / (ed - d)
  lambda = sig2 / tau2
}
```
It is easy to see in the code above that `sum(diag(G))` is the trace of the hat matrix
$$
H = B(B^{\rm T}B + \lambda D^{\rm T}D)^{-1}B^{\rm T}
$$
However, I do not see an immediate relation between $H$ and the matrix $T$ defined above. Any ideas? [1] Eilers and Marx. Practical smoothing: The joys of P-splines. Cambridge University Press, 2021.
How to compute P-splines penalties with Harville's algorithm?
CC BY-SA 4.0
null
2023-05-19T18:53:10.137
2023-05-22T03:37:40.383
null
null
10876
[ "mixed-model", "regularization", "splines" ]
616362
1
null
null
0
21
From the summary of my fitted generalized additive model (GAM) I see two edfs. The first edf is the model edf and the second is the reference edf. However, for AIC estimation, the reference edf is used. What is the reason behind that? As the model is estimated using the `edf`, why are we using the `reference edf`? This is a sample R code:
```
set.seed(123)
library(mgcv)
x1=runif(100)
x2=runif(100)
y=exp(x1*pi) + x2^3 + rnorm(100)
M = gam(y~s(x1)+s(x2), method="REML")
summary(M)

# AIC that takes the reference edf as edf!
-2*logLik.gam(M) + 2*(sum(M$edf2)+1) == AIC(M)
[1] TRUE
```
What is the reference edf for generalized additive models?
CC BY-SA 4.0
null
2023-05-19T18:53:53.337
2023-05-20T13:39:28.690
null
null
387609
[ "generalized-additive-model", "degrees-of-freedom", "mgcv" ]