Columns (in record order): Id, PostTypeId, AcceptedAnswerId (nullable), ParentId (nullable), Score, ViewCount (nullable), Body, Title (nullable), ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate (nullable), LastEditorUserId (nullable), OwnerUserId (nullable), Tags (list)
615762
1
616720
null
1
35
I have the following problem: I have a time series with the prices of a few futures, which is non-stationary (according to the ADF test). If I apply the first difference of logs, the ADF test shows stationarity. But I also have one column in the time series with news sentiment scores (ranging from -1 to 1), and it seems I can't apply the first difference of logs to it. This time series will later be used in ARIMAX. What should I do? I think the whole dataset should be integrated of the same order, I(1) or I(0).
First difference of logs of negative numbers causes trouble
CC BY-SA 4.0
null
2023-05-13T12:47:43.597
2023-05-23T18:10:59.573
2023-05-13T18:20:36.313
53690
361080
[ "stationarity", "logarithm", "augmented-dickey-fuller", "differencing", "sentiment-analysis" ]
615764
1
615774
null
4
166
In the attention mechanism, as described in the paper "Attention is all you need", normally the model learns weights $W_Q$ and $W_K$, i.e. the Queries and the Keys. The input of the attention layer is multiplied with both weights to get the Queries and Keys \begin{equation} Q = W_Q^TX~\text{and}~K = W_K^TX \end{equation} before computing the attention weights, called $a_w$ in this context, using the softmax function: \begin{equation} a_w = \operatorname{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right) \end{equation} where $\sqrt{d_k}$ is the square root of the dimension of the input. To my current understanding, and described in very newbie terms, the idea of using different weights for the Queries and Keys, is to project the input to different spaces, to potentially find useful connections between each element in the input. (A useful connection corresponds to a high softmax entry.) My question is now, how important actually is the aspect of "different spaces"? What if we, in theory, without computational issues, use $4$ weight matrices? What if we only use $1$ weight matrix? That is, compute the keys and queries like so: \begin{array}{cc} Q = W_Q^TX & K = W_Q^TX \end{array} Wouldn't we still be able to find useful connections for each input?
Attention Mechanisms in Transformers: effect of using more or fewer weight matrices for Keys and Queries
CC BY-SA 4.0
null
2023-05-13T13:25:21.653
2023-05-13T19:40:35.517
2023-05-13T19:40:35.517
22311
198044
[ "neural-networks", "attention" ]
615765
1
null
null
0
29
I am trying to use the `pdp` package in R to plot partial dependence plots for a `lightGBM` model. However, it seems that the `pdp::partial()` function does not want to work with lgb.Booster objects. Does anyone have any workarounds or suggestions for other functions? The code is below...

============ Model ============

```
dtrain1 <- lgb.Dataset(
  # ensure that 'data' includes only the predictor variables
  data = as.matrix(encoded.train.claims %>% select(-lnexpo, -adj.nclaims, -amount, -average, -expo, -nclaims)),
  label = as.matrix(encoded.train.claims$nclaims),
  init_score = as.matrix(log(encoded.train.claims$expo)), # this is essentially the offset term
  categorical_feature = c(4:12)
)

param <- list(
  objective = "poisson",
  metric = "poisson",
  num_iterations = 100,
  learning_rate = 0.5)

lgb.claims.1 <- lgb.train(
  params = param,
  data = dtrain1,
  verbose = 1)
```

===================== pdp plot =============================

```
partial(lgb.claims.1, pred.var = "ageph", train = as.matrix(encoded.train.claims), plot = TRUE)
```
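One possible direction (an editorial sketch, not from the question author, and untested): `pdp::partial()` has an optional `pred.fun` argument taking a function of `object` and `newdata`, so supplying a custom prediction function may let it call `predict()` on an `lgb.Booster`. The aggregation via `mean()` is an assumption about how `partial()` treats a single returned value.

```r
# Hedged sketch: custom prediction function for pdp::partial().
# Assumes predict() on an lgb.Booster accepts a numeric matrix of new data.
lgb_pred <- function(object, newdata) {
  mean(predict(object, as.matrix(newdata)))  # one aggregated prediction per grid point
}

partial(lgb.claims.1,
        pred.var = "ageph",
        pred.fun = lgb_pred,
        train    = as.matrix(encoded.train.claims),
        plot     = TRUE)
```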
Plotting Partial Dependence Plots for LightGBM models in R
CC BY-SA 4.0
null
2023-05-13T13:30:21.797
2023-05-13T13:31:54.270
2023-05-13T13:31:54.270
387883
387883
[ "r", "lightgbm", "partial-dependency-plots" ]
615766
2
null
615757
4
null
Yes, some of the output has been removed. And while it would be much more straightforward to obtain the sample size with the information that was removed, there is another way to obtain the requested information. As I am more than 95% confident that this is a homework question (it is the type of homework question I give to my students), I will provide a framework for how to answer it. If you had the degrees of freedom from this output, you could use the fact that the sum of the error degrees of freedom and the model degrees of freedom gives the total degrees of freedom: $$df_{\text{total}} = df_{\text{model}} + df_{\text{error}}$$ The model degrees of freedom is the number of predictor variables in the model (the number of rows in the coefficients table except for the intercept). The total degrees of freedom is always one less than the sample size. Again, the part of the output that was omitted is the part that lists these two degrees of freedom, or the part that lists just the error degrees of freedom. So, we will use the protocol recommended by @COOLSerdash. I will provide an explanation using R and the quantile function `qt()`. Using the p-value for the advert partial slope, plug it into the function

```
qt(0.00105/2, 5)
```

You will note that I divided the p-value in half (as the output reports a two-tailed p-value) and I chose 5 as my degrees of freedom. Here is the issue. You need to play with this last value to get the answer. That is to say, you need to plug in different values until you obtain 3.417. (Actually, as this function reports the left critical value, the output would be -3.417... but the idea is the same... keep changing the degrees of freedom until you obtain the corresponding t-ratio reported in the output.) I have confirmed that the other partial slopes' p-values and t-ratios return the same degrees of freedom. So, you could do the same with any of the p-values in the last 3 rows of the coefficients table from this output. I hope this helps.
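To make the trial-and-error concrete, a small editorial sketch (assuming the two-tailed p-value of 0.00105 and t-ratio of 3.417 quoted above) that scans candidate degrees of freedom in one call:

```r
# Scan candidate error degrees of freedom until qt() reproduces the
# reported t-ratio (-3.417 for the left tail)
sapply(3:15, function(df) qt(0.00105 / 2, df))
```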
null
CC BY-SA 4.0
null
2023-05-13T13:44:01.243
2023-05-13T13:44:01.243
null
null
199063
null
615767
1
null
null
0
6
In the paper [Isolation-based Anomaly Detection](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/tkdd11.pdf) by Liu, Ting and Zhou, figure 3a compares the density and path length for a cluster of anomalies. Figure 3a along with its sub-text has been provided as an image below (figure 3b has not been provided). [](https://i.stack.imgur.com/BqCfk.png) [](https://i.stack.imgur.com/bzxUl.png) I have some confusion regarding the symbols and the interpretation of this figure. My understanding from the sub-text (and the paper) is that:

- "x" marks indicate the position of data points along the x-axis. There might be multiple x-marks overlapping at a particular position.
- The "x" mark on the left denotes a dense cluster of 20 anomalous data points on the extreme left. The normal points form a spread-out cluster (following a Gaussian distribution) on the right.
- There are probably 20 diamond marks at the position of the anomalous cluster, ranging from a density value (right y-axis scale) of ~400 to ~3200.

What do these diamond marks denote? Do they stand for the density around each data point? If yes, shouldn't the 20 data points have nearly the same density, given that they are all clustered together at the same position? Why the large spread of density values over such a large range along the right-side y-axis for these 20 points?
Figure comparing path length and density (figure 3a) in paper "Isolation-based Anomaly Detection" by Liu, Ting and Zhou
CC BY-SA 4.0
null
2023-05-13T13:54:23.103
2023-05-13T13:54:23.103
null
null
331772
[ "machine-learning", "random-forest", "anomaly-detection", "isolation-forest" ]
615768
1
null
null
0
33
I'm conducting a randomized 2x2 cross-over trial of 8 participants measuring the effect of a specific diet (intervention) vs a normal diet (control) on the number of sleep hours. The study design consists of one-week periods: run-in, intervention/control, washout, intervention/control, and then washout. Thus 5 weeks in total. Whether intervention or control comes first is randomized 1:1, i.e. 4 participants received control in the second week and intervention in the fourth week, and 4 participants received them in the opposite order. Thus:

| |Period 1 |Period 2 |
|--------|--------|--------|
|Sequence AB |Treatment A |Treatment B |
|Sequence BA |Treatment B |Treatment A |

The data would then look something like this:

|subject |sleep_hours |sequence |period |treatment |
|-------|-----------|--------|------|---------|
|1 |5 |AB |runin |0 |
|2 |4 |AB |runin |0 |
|3 |7 |AB |runin |0 |
|4 |7 |AB |runin |0 |
|5 |4 |BA |runin |0 |
|6 |4 |BA |runin |0 |
|7 |5 |BA |runin |0 |
|8 |7 |BA |runin |0 |
|1 |5 |AB |1 |A |
|2 |4 |AB |1 |A |
|3 |7 |AB |1 |A |
|4 |7 |AB |1 |A |
|1 |4 |AB |2 |B |
|2 |5 |AB |2 |B |
|3 |6 |AB |2 |B |
|4 |5 |AB |2 |B |
|5 |4 |BA |1 |B |
|6 |4 |BA |1 |B |
|7 |5 |BA |1 |B |
|8 |7 |BA |1 |B |
|5 |9 |BA |2 |A |
|6 |9 |BA |2 |A |
|7 |9 |BA |2 |A |
|8 |7 |BA |2 |A |

I would like to see if the intervention (A) affects sleep_hours compared with the control (B). As I've understood it, it is not advisable to compare mean change in these circumstances, but rather to compare the results and adjust for the run-in period. Thus, is a linear mixed-effects model the correct test?

```
install.packages("lme4")
install.packages("lmerTest")
library(lmerTest)  # provides lmer() with p-values (wraps lme4)

model <- lmer(sleep_hours ~ treatment * period + sequence + (1|subject), data = db)
```

Or is a usual linear regression better?

```
model <- lm(sleep_hours ~ treatment + period + sequence + subject, data = db)
```

And whichever I use, how can I get not only a P value for the comparison, but also produce the means and 95% confidence intervals of the intervention vs the run-in, so that I can produce a table such as:

| |Run-in |Intervention period |P |
|------|------|-------------------|-|
|Sleep hours (hours) |4.4 (4.2-4.6) |6 (5.1-6.6) |.001 |
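One way the requested means and 95% CIs might be obtained (an editorial sketch, not from the question: it assumes the `emmeans` package and the lmer model defined above):

```r
# Hedged sketch: estimated marginal means with 95% CIs per treatment level,
# plus pairwise comparisons for p-values
library(emmeans)
emm <- emmeans(model, ~ treatment)
confint(emm)   # means and 95% confidence intervals
pairs(emm)     # pairwise tests between treatment levels
```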
Preferred test for a 2x2 cross-over trial: lmer() or lm()?
CC BY-SA 4.0
null
2023-05-13T14:03:51.823
2023-05-16T15:27:02.187
2023-05-16T15:27:02.187
11887
346798
[ "r", "lm", "lme4-nlme", "crossover-study" ]
615769
1
null
null
0
5
How is the VC dimension of an SVM with a radial basis function kernel bounded, even though the data are projected into an infinite-dimensional space?
VC dimension of an SVM with Radial basis function as kernel
CC BY-SA 4.0
null
2023-05-13T14:26:08.557
2023-05-13T14:26:08.557
null
null
348149
[ "svm", "vc-dimension" ]
615770
1
615843
null
1
25
I am trying to implement the Louvain algorithm in PySpark. An important part of the algorithm involves calculating the modularity gain of taking node $i$ out of its current community $C_0$ and placing it in a neighboring community $C_1$. On page four of the [original paper](https://arxiv.org/pdf/0803.0476.pdf), the authors describe the modularity gain as a two-step process: - Calculating the modularity gain of removing $i$ from its current community; and - Calculating the modularity gain of placing isolated node $i$ to the new community. The second step is described in equation 2 (page 4), but the first step is only addressed as: > A similar expression is used in order to evaluate the change of modularity when i is removed from its community. I have been unable to find the expression for the first step. Does anyone know how to compute the modularity gain of removing node $i$ from its current community? Could you please share where you found it?
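One observation that may help (an editorial note, not a quotation from the paper): because modularity depends only on the current partition, the gain of removing $i$ from its community $C_0$ is simply the negative of the gain of inserting the isolated node $i$ back into $C_0 \setminus \{i\}$, i.e. equation 2 evaluated with the community sums recomputed without $i$'s contributions:

$$\Delta Q_{\text{remove}}(i, C_0) = -\,\Delta Q_{\text{insert}}\big(i,\; C_0 \setminus \{i\}\big).$$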
Calculating modularity gain of switching a node from one community to another (Louvain algorithm)
CC BY-SA 4.0
null
2023-05-13T14:34:33.527
2023-05-14T16:00:54.893
2023-05-13T14:35:46.073
22311
242191
[ "clustering", "references", "graph-theory", "networks" ]
615771
2
null
615760
0
null
These are essentially equivalent, except for a different interpretation of $z_i$ in the two representations. For [the exponential function](https://proofwiki.org/wiki/Exponential_of_Sum/Real_Numbers#Theorem), $$\exp(x+y)=\exp(x) \exp(y) .$$ If you take $\exp(z_i)$ from the first representation $$h_i(t|x_i,z_i)=h_0(t)\exp(\beta x_i+z_i)$$ you get $z_i$ in the second representation $$h_i(t|x_i,z_i)=z_ih_0(t)\exp(\beta x_i).$$ The first representation is used for modeling in common R packages. In the R [survival package](https://cran.r-project.org/package=survival), the `coxph()` function can model $z_i$ in $\exp(\beta x_i + z_i)$ with either a Gaussian, gamma, or t distribution. That's explained in Chapter 9 of [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842). The [coxme package](https://cran.r-project.org/package=coxme) extends what you show as $z_i$ in the first representation to more complicated random effects with Gaussian distributions.
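For concreteness, a hedged sketch of how such a frailty term is typically specified with `survival::coxph()` (the data frame `df` and its columns `time`, `status`, `x`, `id` are hypothetical placeholders, not from the question):

```r
# Hedged sketch: gamma frailty on a grouping variable `id`,
# i.e. the multiplicative z_i formulation discussed above
library(survival)
fit <- coxph(Surv(time, status) ~ x + frailty(id, distribution = "gamma"),
             data = df)
summary(fit)
```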
null
CC BY-SA 4.0
null
2023-05-13T14:35:41.853
2023-05-13T14:35:41.853
null
null
28500
null
615772
1
null
null
1
40
Here are the results of a multi-variable regression analysis run in Stata to test the effects of the three factors on the price elasticity of supply, which is the dependent variable. The coefficients appear to be realistic and the "Prob>F" value seems acceptable. As this is an interdisciplinary study and I don't have a solid statistical background, I would like to confirm whether this result can be considered robust, or whether there are any obvious warnings that I may have missed. If you require further context, please let me know, and I can provide additional information. [](https://i.stack.imgur.com/DIJnv.png) Edit: I read in another thread that this could be due to the possibility that DR and SHR alone are not correlated, but they are when controlled for other factors. So I ran another regression dropping SHR just to test this; I then get the following result: [](https://i.stack.imgur.com/Zu4nu.png) You can see here that the F-test shows an even lower value, but the P-value for the variable DR is still very high. What could be the explanation for this?
Interpretation of results of a regression analysis
CC BY-SA 4.0
null
2023-05-13T14:54:29.447
2023-05-14T10:32:14.220
2023-05-14T10:32:14.220
22047
387886
[ "regression", "econometrics", "elasticity" ]
615773
1
null
null
0
12
I'm finding what seems like an absurd number of default center points with the following Response Surface DoE setup in Minitab. Can someone please help me understand why they're needed and if I can reduce the number of center points without losing too much information: Minitab DoE: Response Surface, Central Composite, 2 Continuous Factors, 1 continuous response, 1 Binary Response, Full Resolution, Blocked = 14 data points with no replicates. These data points are distributed as: 4 cube points (all different) 4 axial points (all different) 6 center points (all the same point?) For 2 continuous factors, the center point for both cube and axial are the same so I would've expected just 1 center point before adding replicates. Even if I keep a center point for both cube and axial arrangements that would just be two points. Why does it default to 6?
DoE: Why so many center points for Response Surface design in Minitab?
CC BY-SA 4.0
null
2023-05-13T14:59:18.870
2023-05-13T14:59:18.870
null
null
387885
[ "experiment-design", "minitab", "response-surface-methodology" ]
615774
2
null
615764
5
null
The answer to your question is yes: you'd find useful connections still. Setting $W_Q \triangleq W_K$ is one of the suggestions for parameter efficiency from [Kitaev et al. (2020 at ICLR)](https://openreview.net/forum?id=rkgNKkHtvB). It has the effect of making the queries and keys identical. The authors call this "shared-QK attention" and note in Β§5 that this parameter-tying approach is not worse than the standard attention mechanism: > A shared query-key space does not perform worse than regular attention; in fact, for enwik8 it appears to train slightly faster. In other words, we are not sacrificing accuracy by switching to shared-QK attention.
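To make "shared-QK attention" concrete, a small numerical sketch (editorial addition; it follows the question's convention $Q = W_Q^\top X$ with the columns of $X$ as tokens, and all numbers are made up):

```r
set.seed(1)
d_k <- 4; n_tok <- 3
X <- matrix(rnorm(d_k * n_tok), nrow = d_k)   # each column is a token
W <- matrix(rnorm(d_k * d_k), nrow = d_k)     # one shared projection matrix
Q <- t(W) %*% X                               # queries
K <- t(W) %*% X                               # keys: identical to Q under shared-QK
scores <- t(Q) %*% K / sqrt(d_k)              # token-by-token scaled dot products
a_w <- t(apply(scores, 1, function(r) exp(r) / sum(exp(r))))  # row-wise softmax
round(a_w, 3)
```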
null
CC BY-SA 4.0
null
2023-05-13T15:05:32.350
2023-05-13T15:05:32.350
null
null
155836
null
615776
1
null
null
0
25
Let $x$ and $y$ be correlated negative binomial random variables, $x\sim NB(p_1,n_1)$ and $y\sim NB(p_2,n_2)$ respectively, with correlation $\rho$. Can we derive their joint distribution analytically? Subsequently, can we also do so for 2 correlated binomial random variables?
Joint Distribution of Correlated Binomial and Negative Binomial Variables
CC BY-SA 4.0
null
2023-05-13T15:47:16.693
2023-05-13T15:47:16.693
null
null
387888
[ "probability", "correlation", "binomial-distribution", "negative-binomial-distribution", "joint-distribution" ]
615777
2
null
24720
2
null
The example in whuber's answer is very much to the point (+1); I want to elaborate on the rationale behind it from a theoretical perspective. For a better exposition, suppose the number of regressors is $2$ and the number of observations is $n$, so the model can be written as: \begin{align} y_i = \beta_0 + \beta_1X_{1, i} + \beta_2X_{2, i} + \epsilon_i, \quad i = 1, 2, \ldots, n. \end{align} Further suppose the relation among $y = (y_1, \ldots, y_n)$, $X_1 = (X_{1,1}, \ldots, X_{1, n})$, $X_2 = (X_{2, 1}, \ldots, X_{2, n})$ is as set up by whuber: $X_1$ and $X_2$ are orthogonal, $y$ and $X_1$ are highly correlated, $y$ and $X_2$ are (almost) uncorrelated. We are interested in testing:

- The single $\beta_1$ is $0$:
\begin{align} H_0: \beta_1 = 0 \text{ v.s. } H_1: \beta_1 \neq 0. \tag{1} \end{align}
- All $\beta$s are $0$:
\begin{align} H_0: \beta_1 = \beta_2 = 0 \text{ v.s. } H_1: \beta_1 \neq 0 \text{ or } \beta_2 \neq 0. \tag{2} \end{align}

It is well known that the testing procedures applying to problems $(1)$ and $(2)$ are the $t$-test and the $F$-test respectively. However, the key to explaining the posed paradox is recognizing that the $t$-test for problem $(1)$ can also be viewed as an equivalent $F$-test, so that we are actually applying the partial $F$-test to problem $(1)$ and the overall $F$-test to problem $(2)$ (a good reference for this is Applied Linear Statistical Models, Section 7.3, by Kutner et al.), whose test statistics are \begin{align} F^{(1)} = \frac{MSR(X_1|X_2)}{MSE} = \frac{SSR(X_1, X_2) - SSR(X_2)}{MSE} \tag{3} \end{align} and \begin{align} F^{(2)} = \frac{MSR(X_1, X_2)}{MSE} = \frac{SSR(X_1, X_2)}{2MSE} \tag{4} \end{align} respectively. At the significance level $\alpha$, $F^{(1)}$ and $F^{(2)}$ are compared with critical points (the $(1 - \alpha)$ $F$-quantiles) $q_1^* = F_{1, n - 3}(1 - \alpha)$ and $q_2^*= F_{2, n - 3}(1 - \alpha)$ respectively to determine whether $H_0$ should be rejected. Under this specific setting, it is clear that $SSR(X_2) \approx 0$, which implies that $F^{(1)}$ is approximately $2$ times $F^{(2)}$; however, the ratio of $q_1^*$ to $q_2^*$ is less than $2$, making the null hypothesis in $(1)$ more likely to be rejected than the null hypothesis in $(2)$: for example, if $F^{(1)}$ is slightly greater than $q_1^*$, hence the null hypothesis in $(1)$ is (barely) rejected, then $F^{(2)} \approx 0.5F^{(1)}$ is slightly greater than $0.5q_1^*$, which will be less than $q_2^*$, since $q_2^*$ is considerably greater than $0.5q_1^*$. Hence the null hypothesis in $(2)$ cannot be rejected at the same significance level $\alpha$.

Now let's replicate whuber's example to illustrate the above point. As can be read from the output below, $F^{(1)} = 3.012^2 = 9.072 > q_1^* = 7.597663$ and $F^{(2)} = 4.684 < q_2^* = 5.420445$, hence at $\alpha = 0.01$, $H_0$ in $(1)$ is rejected but $H_0$ in $(2)$ is not rejected. The reason is that the ratio of $F^{(1)}$ to $F^{(2)}$ is $F^{(1)}/F^{(2)} = 1.94$, which is considerably greater than the ratio of cutoff points $q_1^*/q_2^* = 1.40$.

```
> set.seed(17)
> p <- 5 # Number of explanatory variables
> x <- as.matrix(do.call(expand.grid, lapply(as.list(1:p), function(i) c(-1,1))))
> y <- x[,1] + rnorm(2^p, mean=0, sd=2)
> X <- as.data.frame(x)
>
> m <- lm(y ~ Var1 + Var2, X)
> summary(m)

Call:
lm(formula = y ~ Var1 + Var2, data = X)

Residuals:
     Min       1Q   Median       3Q      Max
-3.13861 -1.34150  0.09369  1.10478  2.85457

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.5185     0.2980   1.740  0.09244 .
Var1          0.8975     0.2980   3.012  0.00534 **
Var2          0.1624     0.2980   0.545  0.58982
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.686 on 29 degrees of freedom
Multiple R-squared:  0.2442,    Adjusted R-squared:  0.192
F-statistic: 4.684 on 2 and 29 DF,  p-value: 0.01726

> qf(0.99, 1, 29)
[1] 7.597663
>
> qf(0.99, 2, 29)
[1] 5.420445
```

To illustrate how the "insignificance of the overall $F$-test" effect inflates as the number of regressors $p$ increases, consider the same scenario as above, but now let $p$ range from $2$ to $29$ (the penultimate maximum number of regressors for which the $F$-test still works).

```
q1 <- qf(0.99, 1, 29:2)
q2 <- qf(0.99, 2:29, 29:2)
F1 <- q1 + 0.5   # This simulates the case that the t-test barely rejects H0.
F2 <- F1/(2:29)  # F2 is approximately 1/p of F1 by setting.
plot(2:29, q2, xlab = "# of predictors", ylab = "")
points(2:29, q1, pch = 2)
lines(2:29, F1)
lines(2:29, F2, lty = "dashed")
legend("topleft", c("t-test threshold (q1)", "F-test threshold (q2)", "t-test stat (F1)", "F-test stat (F2)"),
       pch = c(2, 1, NA, NA), lty = c(NA, NA, "solid", "dashed"))
```

The graph is as follows. It can be seen that the dashed line is always below the thresholds (hence the $F$-tests are always insignificant), and as $p$ increases, the gap between the dots and the dashed line also increases, indicating that the $F$-test becomes more insignificant. [](https://i.stack.imgur.com/LIfTL.png)
null
CC BY-SA 4.0
null
2023-05-13T15:57:06.693
2023-05-13T23:15:23.027
2023-05-13T23:15:23.027
20519
20519
null
615778
1
null
null
0
16
I am following Introduction to Linear Regression Analysis by Montgomery, Peck and Vining, and on p. 430 they present the likelihood ratio test. It says that $$ \text{LR} = 2 \left[\ln(L(\text{FM})) - \ln(L(\text{RM})) \right] $$ The formula then claims that this is equal to: $$ LR = 2 \left[ \left (\sum_{i=1}^n y_i \ln(\hat{\pi_i}) + \sum_{i=1}^n (n_i -y_i)\ln(1-\hat{\pi}_i) \right) - \left( y \ln(y) + (n-y)\ln(n-y) - n\ln(n) \right) \right] $$ When I calculate the log-likelihood using `logLik` I get different numbers than when I use the formula above. I just don't understand how to calculate the log-likelihood by hand. Here are my workings.

```
# Data
exposure = c(5.8, 15.0, 21.5, 27.5, 33.5, 39.5, 46.0, 51.5)
cases = c(0,1,3,8,9,8,10,5)
miners = c(98,54,43,48,51,38,28,11)
successes = cases
failures = miners - cases

# Full model (FM)
m = glm(cbind(successes, failures) ~ exposure, family=binomial(link='logit'))

# Reduced model (RM)
m0 = glm(cbind(successes, failures) ~ 1, family=binomial(link='logit'))

# Likelihoods
ln_L_FM = logLik(m)[1]
ln_L_RM = logLik(m0)[1]
LR = 2*(ln_L_FM - ln_L_RM)
print(LR)
```

Now, to calculate the log-likelihood of the full model: $$ \ln[L(FM)] = \sum_{i=1}^n y_i \ln(\hat{\pi}_i) + \sum_{i=1}^n (n_i -y_i)\ln(1-\hat{\pi}_i) $$

```
y = cases/miners
n = miners
p = fitted(m)
t1 = y*log(p)
t1[1] = 0
t1 = sum(t1)
t2 = (n-y)* log(1-p)
t2 = sum(t2)
loglikFM = t1 + t2
```

However, this gives `-52.70837` whereas `logLik(m)` gives -14.43861. What am I doing wrong? How can I get `logLik(m)` by hand? How can I get `logLik(m0)` by hand? I can't seem to get the same numbers for either of them.
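A hedged check (editorial note, not part of the original question): as far as I know, R's `logLik()` for a binomial GLM fitted with `cbind(successes, failures)` works with the binomial counts, including the $\log\binom{n_i}{y_i}$ terms, so one way to try to reproduce it by hand is with `dbinom()` on the counts rather than the proportions:

```r
# Should come close to logLik(m): binomial log-likelihood on counts, not proportions
sum(dbinom(cases, size = miners, prob = fitted(m), log = TRUE))
# And similarly for the reduced model
sum(dbinom(cases, size = miners, prob = fitted(m0), log = TRUE))
```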
Simple logistic regression - calculate likelihood by hand
CC BY-SA 4.0
null
2023-05-13T16:15:56.527
2023-05-13T16:15:56.527
null
null
137303
[ "r", "regression", "logistic", "generalized-linear-model", "nonlinear-regression" ]
615780
2
null
567069
1
null
As I see it, there is no reason why these models cannot be used in these domains. My feeling is that there is a degree of techniques falling in and out of fashion. Often students will learn the techniques of their lecturers, mentors and colleagues, and I suspect that in recent years LMMs have gained traction in certain disciplines but not yet in others. Here are a few examples of their applications to engineering and physics, along with one of my own applications in business.

- Productivity estimation of bulldozers using GLMM: here they are modelling the productivity of bulldozers, including different properties of the bulldozer and the operator. They consider soil type as a random effect.
- In my own work I have used LMMs to estimate the time it takes workers to complete some process. Very roughly speaking: we want to know whether some change in the process (treatment) leads to a reduction in the time to complete the process. A huge source of variability is the worker themselves; some are naturally a lot faster than others, therefore the specific worker is a random effect.
- Here is an example in engineering: An Analysis of defective products in auto parts factories with generalized linear mixed models. I was not able to load a copy of the paper. However, from reading the abstract I can imagine how they might be useful here. The data were collected from 12 machines, and so you could imagine it might make sense to have machine as a random effect. You would have to read the article to be sure what they do, though.
- Spatiotemporal dynamics of NO2 concentration with linear mixed models: A Bangladesh case study, published in the journal Physics and Chemistry of the Earth. I highlight this as the journal name specifically mentions physics and chemistry.

---

Update following Ben's comment. I am not sure I can give a good answer for when to use them. For most of the examples above I tried to explain how they might be used there, because I think a good way to learn is to see examples; as you become familiar with the examples, you will start to see how your potential use cases match examples you have seen before. I am not sure whether you have done many tutorials/hands-on examples using these models, but I think working through a few of these would really help you to see how and why they are useful. I would recommend [this](https://ourcodingclub.github.io/tutorials/mixed-models/#what) tutorial, and note also that the 'What' section gives an explanation of why you might want to use them. There is also this [answer](https://stats.stackexchange.com/questions/275450/when-to-use-mixed-effect-model). To quote from the current top answer: > "I believe that a multilevel model makes sense when there is reason to believe that observations are not necessarily independent of one another. Whatever "cluster" accounts for this non-independence can be modeled." In my use case my observations are the times it takes workers to complete some task. The times are not independent, and times from the same worker are likely to be similar (some workers are fast, some are slow). To account for this I group the times by worker. The overall point of this was to determine whether some treatment had led to a speed-up.
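For the worker-time example above, a minimal sketch of how such a model might be written with `lme4` (the variable and data frame names are made up for illustration):

```r
# Hedged sketch: random intercept per worker, fixed effect of the treatment
library(lme4)
fit <- lmer(completion_time ~ treatment + (1 | worker), data = process_times)
summary(fit)
```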
null
CC BY-SA 4.0
null
2023-05-13T16:52:16.463
2023-05-21T14:28:39.780
2023-05-21T14:28:39.780
358991
358991
null
615781
1
null
null
0
17
Suppose a linear regression setting with $n$ observations and $p$ explanatory variables. The hypothesis test $H_{0}: \beta_{j} = 0$ vs $H_{1}: \beta_{j} \ne 0$ at level $\alpha$ uses the following test statistic: $$ F_{calc} = \frac{\hat{\beta}_{j}^2}{\text{Var}[\beta_{j}]} \sim F_{(1,n-[p+1])} $$ Then, for the same model, we can find confidence intervals for $\beta_{j}$ using the following pivotal quantity: $$ t = \frac{\hat{\beta_{j}} - \beta_{j}}{\sqrt{\text{Var}[\beta_{j}]}} \sim t_{(n-[p+1])} $$ From which we can find the confidence interval at confidence level $1-\alpha$: $$ C.I. : \left[ \hat{\beta_{j}} \pm t_{(n-[p+1])}^{(1-\frac{\alpha}{2})}\sqrt{\text{Var}[\beta_{j}]} \right] $$ Now, just looking at the two statistics, we can see that under the hypothesis that $H_{0}$ is true, the pivotal quantity and the F-statistic are very similar. Furthermore, we already know the result that relates the $t$ distribution and the $F$ distribution: if $ X \sim t_{(k)} $, then $X^2 \sim F_{(1,k)}$, which seems to be key in this problem. However, I can't find an argument to conclude that if the $C.I.$ of $\beta_{j}$ contains 0, then $X_{j}$ is not significant. I hope you can help me.
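A sketch of the missing step (editorial addition, using the quantities already defined in the question): the following statements are equivalent, which is exactly the link between the confidence interval and the test,

$$0 \in \left[\hat{\beta}_j \pm t^{(1-\frac{\alpha}{2})}_{(n-[p+1])}\sqrt{\text{Var}[\beta_j]}\right] \;\iff\; \frac{|\hat{\beta}_j|}{\sqrt{\text{Var}[\beta_j]}} \le t^{(1-\frac{\alpha}{2})}_{(n-[p+1])} \;\iff\; F_{calc} = \frac{\hat{\beta}_j^2}{\text{Var}[\beta_j]} \le \left(t^{(1-\frac{\alpha}{2})}_{(n-[p+1])}\right)^2 = F^{(1-\alpha)}_{(1,\,n-[p+1])},$$

and the right-hand condition is precisely "fail to reject $H_0: \beta_j = 0$ at level $\alpha$", i.e. $X_j$ is not significant.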
Proving that in a linear regression, if the confidence interval of a coefficient contains $0$, then the variable associated is not significant
CC BY-SA 4.0
null
2023-05-13T18:07:03.653
2023-05-13T18:07:03.653
null
null
387893
[ "regression", "hypothesis-testing", "distributions", "statistical-significance", "confidence-interval" ]
615782
2
null
615762
2
null
You cannot take logs of negative numbers, but you also do not need to. Use sentiment scores as they are.
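A minimal sketch of what this could look like in practice (editorial addition; `prices` and `sentiment` are hypothetical aligned vectors, and the ARIMA order is arbitrary): difference the log prices only, and pass the raw sentiment score as the exogenous regressor.

```r
# Hedged sketch: log-difference the price series, keep sentiment untransformed
dlp <- diff(log(prices))
fit <- arima(dlp, order = c(1, 0, 1), xreg = sentiment[-1])  # ARIMAX via stats::arima
fit
```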
null
CC BY-SA 4.0
null
2023-05-13T18:19:25.503
2023-05-13T18:19:25.503
null
null
53690
null
615783
1
615787
null
2
86
> A random variable $Z$ is obtained as follows. Let $X$ follow $U(0, 1)$, and $Y$ given $X = x$ be Bernoulli with probability of success $x$. If $Y = 1, Z$ is defined to be $X$. Otherwise, the experiment is repeated until a pair $(X, Y )$ with $Y = 1$ is obtained. Then, find the probability density function of $Z$ on $(0, 1).$ Let $N$ be the trial number at which we get $Y=1$. $F_Z(z)=\sum_{n=1}^{\infty}P(Z \leq z | N=n)P(N=n)=\sum_{n=1}^{\infty}P(Z \leq z,y_n=1)P(y_1 \neq1)^{n-1}.$ Now, $P(Z \leq z,y_n=1)=P(Z \leq z |y_n=1)P(y_n=1)=P(X_n \leq z |y_n=1)P(y_n=1)=P(y_n=1 | X_n \leq z)P(X_n \leq z)$ I know $Y|X=x$ follows $Ber(x)$, how can I proceed to find $P(y_n=1 | X_n \leq z)$ or $P(y_n=1, X_n \leq z)$? Source: [Problem 29, ISI PSA 2021](https://www.isical.ac.in/%7Eadmission/IsiAdmission/PreviousQuestion/MStat-PSA-2021.pdf)
Finding the probability density function
CC BY-SA 4.0
null
2023-05-13T18:27:30.603
2023-05-13T22:41:22.557
2023-05-13T19:06:33.413
339153
339153
[ "probability", "density-function" ]
615784
1
null
null
0
9
For more on conditional disintegration, cf. e.g. the references [1] [2], or these questions [[a]](https://math.stackexchange.com/questions/108740/conditional-probability-and-the-disintegration-theorem) [[b]](https://math.stackexchange.com/questions/3834037/from-disintegration-to-conditioning) on math stackexchange. > Question: For the von Mises-Fisher distribution, is there a conditional disintegration via antipodal pairs of points? I.e. so each conditional distribution would be a Bernoulli and have as values the two points of an antipodal pair? Observation: At least in the case that $\kappa$ (concentration) equals $0$, and thus the [von Mises-Fisher distribution reduces to the uniform distribution](https://en.wikipedia.org/wiki/Von_Mises%E2%80%93Fisher_distribution#The_uniform_hypersphere_distribution) on the (hyper)sphere, the answer would clearly seem to be yes. Namely, the resulting conditional distributions of the disintegration would just be "fair coins", i.e. Bernoullis giving equal probability to each antipode. Comment: A reference would suffice, although something like a derivation / calculation would be amazing of course. If it's relevant, I'm most interested in the case that $p=3$ (i.e. distributions on the 2D sphere in 3D space). Slightly related question: [Statistical model for axis angle rotations](https://stats.stackexchange.com/questions/123886/statistical-model-for-axis-angle-rotations) (conditioning is on a single direction, not an antipodal pair of directions) [1] [Conditioning as disintegration, J. T. Chang and D. Pollard, Statistica Neerlandica (1997) Vol. 51, no. 3, pp. 287–317](http://www.stat.yale.edu/%7Ejtc5/papers/ConditioningAsDisintegration.pdf) [2] Sections 2.2 and 2.3 of [Exact Bayesian Inference by Symbolic Disintegration, Chung-chieh Shan and Norman Ramsey, POPL'17, January 15–21, 2017, Paris, France, ACM. 978-1-4503-4660-3/17/01](https://homes.luddy.indiana.edu/ccshan/rational/disintegrator.pdf). [http://dx.doi.org/10.1145/3009837.3009852](http://dx.doi.org/10.1145/3009837.3009852)
Is there an "antipodal" conditional disintegration of the von Mises Fisher distribution?
CC BY-SA 4.0
null
2023-05-13T18:48:56.523
2023-05-13T18:48:56.523
null
null
290892
[ "mathematical-statistics", "references", "circular-statistics", "conditioning", "von-mises-distribution" ]
615786
1
null
null
0
11
In reference to this question: [interpreting level-log model that has a percentage variable](https://stats.stackexchange.com/questions/283421/interpreting-level-log-model-that-has-a-percentage-variable) If I have a regression equation: child_test_score = 400 + 25log(family_income) Is this the correct interpretation: A one percent increase in family_income is expected to increase child_test_score by 0.25 percentage points? or A one percent increase in family_income is expected to increase child_test_score by 0.25 points (0.25 units)? or both? I tend to see the latter interpretation so I am wondering if the former interpretation is incorrect. Thanks
Regression interpretation with logarithms
CC BY-SA 4.0
null
2023-05-13T19:25:05.817
2023-05-13T19:25:05.817
null
null
319565
[ "regression-coefficients", "logarithm" ]
615787
2
null
615783
4
null
This is an interesting problem. To formalize the definition of $Z$, it is helpful to introduce the positive integer-valued random variable $\tau$ (known as "stopping time" in probability theory): \begin{align} \tau = \min\{i: Y_i = 1\}, \end{align} where $Y_i | X_i = x \sim B(1, x)$ and $X_1, X_2, \ldots$ are i.i.d. $U(0, 1)$ random variables. With these notations, it is easy to see that $Z = X_\tau$. By the law of total probability, the CDF of $Z$ is given by ($0 < z < 1$): \begin{align} & F(z) = P[X_\tau \leq z] = \sum_{i = 1}^\infty P[X_i \leq z, \tau = i] \\ =& P[X_1 \leq z, Y_1 = 1] + \sum_{i = 2}^\infty P[X_i \leq z, Y_1 = \cdots = Y_{i - 1} = 0, Y_i = 1], \tag{1} \end{align} where \begin{align} P[X_1 \leq z, Y_1 = 1] = \int_{[X_1 \leq z]}P[Y_1 = 1|X_1]dP = \int_{[X_1 \leq z]}X_1dP = \int_0^zxdx = \frac{1}{2}z^2. \tag{2} \end{align} Here we used the definition of measure-theoretic conditional probability $P(A|\mathscr{G})$: for any $G \in \mathscr{G}$, $P(A \cap G) = \int_GP(A | \mathscr{G})dP$. Similarly, \begin{align} P[Y_1 = 1] = \int_\Omega P[Y_1 = 1|X_1]dP = \int_\Omega X_1dP = \int_0^1 xdx = \frac{1}{2}. \end{align} It thus follows by independence that for $i \geq 2$ \begin{align} & P[X_i \leq z, Y_1 = \cdots = Y_{i - 1} = 0, Y_i = 1] \\ =& \prod_{k = 1}^{i - 1}P[Y_k = 0]P[X_i \leq z, Y_i = 1] = \left(\frac{1}{2}\right)^{i - 1}\frac{1}{2}z^2 = \left(\frac{1}{2}\right)^iz^2 . \tag{3} \end{align} Substituting $(2)$ and $(3)$ into $(1)$ yields \begin{align} F(z) = z^2\sum_{i = 1}^\infty\left(\frac{1}{2}\right)^i = z^2. \end{align} Hence the density of $Z$ is $f(z) = 2z, 0 < z < 1$.
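A quick Monte Carlo sanity check of the result $f(z) = 2z$ (editorial addition; it just simulates the stated accept-on-$Y=1$ procedure):

```r
set.seed(42)
draw_Z <- function() {
  repeat {
    x <- runif(1)
    if (rbinom(1, 1, x) == 1) return(x)  # keep X only when Y = 1
  }
}
z <- replicate(1e4, draw_Z())
mean(z)          # should be close to E[Z] = integral of z * 2z dz = 2/3
mean(z <= 0.5)   # should be close to F(0.5) = 0.25
```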
null
CC BY-SA 4.0
null
2023-05-13T19:34:32.253
2023-05-13T22:41:22.557
2023-05-13T22:41:22.557
20519
20519
null
615788
1
null
null
3
209
Let us assume we have a magical 1D line of length 1 cm. And we have an unbiased coin. There are points on the line, where the probability of getting head is high. Example: if the coin lands on 0.5 cm, then it has high probability of getting head. Initial idea: Discretize the line into intervals (say 10). Then we have 10 bins of 0.1 cm. Randomly pick a bin, flip the coin inside that bin and observe the outcome(head/tail.) We count the number of times it was head in each bin. Assume that the number of flips is limited to 30. After 30 flips, we should be able to say $P(\text{head in }bin_i) = X$ Is there any better way to handle this? What if we want to discretize the line into 100 bins?
How do we find probability of a binary event occurring in continuous space?
CC BY-SA 4.0
null
2023-05-13T19:52:34.540
2023-05-14T21:51:36.697
null
null
327906
[ "probability" ]
615790
1
null
null
0
10
Is there a clear and precise explanation of why minimising the variance of the weights in SIS with respect to a proposal ensures that the samples generated from the empirical distribution induced by the normalised weights will be closer to the posterior/target distribution? Also, is there a reference which proves something about the distribution (mass function) induced by SIS in relation to the target/posterior density? For example, that random variables sampled from SIS with increasing particle counts converge in distribution to random variables sampled from the target? Apologies for any abuse of notation or misuse of terminology. It just seems like most results for SIS / SIR are stated in terms of consistency with respect to arbitrary test functions, instead of the actual samples generated under the normalised weights. Most of my understanding stems from [https://www.cs.ubc.ca/~arnaud/doucet_johansen_tutorialPF.pdf](https://www.cs.ubc.ca/%7Earnaud/doucet_johansen_tutorialPF.pdf), so if I missed something obvious here let me know, and thank you in advance!
Why do we want to minimise the variance of our importance weights in SIS with respect to the proposal distribution
CC BY-SA 4.0
null
2023-05-13T20:42:04.240
2023-05-31T20:45:03.887
null
null
195683
[ "variance", "importance-sampling", "particle-filter", "sequential-monte-carlo" ]
615791
1
null
null
0
10
I have a small data set with missing values and decided to use multiple imputation:

```
id <- mice(df, m=5)
analysis <- with(id, lm(a ~ b + c + d))
summary(pool(analysis))
```

In my model with the missing values, the assumptions (normality, linearity, homoscedasticity) are violated. How can I check these assumptions in each of the 5 imputed data sets? Should I check each data set separately? If the assumptions are also violated there, what method can I use in this case? Thank you very much for your help!
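One way to inspect each completed data set separately (a hedged sketch assuming the `mice` object `id` from the code above):

```r
# Hedged sketch: refit the model on each completed data set and
# look at the usual residual diagnostics per imputation
library(mice)
for (i in 1:5) {
  completed <- complete(id, i)
  fit_i <- lm(a ~ b + c + d, data = completed)
  plot(fit_i, which = 1:2)  # residuals vs fitted, QQ plot
}
```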
Multiple imputation and regression assumptions
CC BY-SA 4.0
null
2023-05-13T21:04:11.480
2023-05-15T12:56:59.930
2023-05-15T12:56:59.930
109647
387900
[ "multiple-regression", "assumptions" ]
615792
1
null
null
0
11
I am learning multi-fidelity modeling and have a question about Emukit's `mixed_noise` parameters, or more generally, how we should determine the noise when the training data are simulation results, which can be assumed to be exact/noise-free according to the book Gaussian Processes for Machine Learning. In Emukit's tutorial, it says: > Note: The model implementation defaults to a MixedNoise noise likelihood whereby there is independent Gaussian noise for each fidelity. This can be modified upfront using the 'likelihood' parameter in the model constructor, or by updating them directly after the model has been created. In the example below, we choose to fix the noise to '0' for both fidelities in order to reflect that the observations are exact. and in the example:

```
gpy_lin_mf_model.mixed_noise.Gaussian_noise.fix(0)
gpy_lin_mf_model.mixed_noise.Gaussian_noise_1.fix(0)
```

In my work, the high- and low-fidelity observations are both simulation results from two different mesh refinement levels. Should I simply fix both noise variances to 0 like this tutorial example, "in order to reflect that the observations are exact"? Or should I fix only the high-fidelity noise variance to zero and try to quantify/determine the noise for the low fidelity? If so, how should I derive a reasonable parameter? Also, I'm not sure if this mental image is correct: as I now see it, the discrepancy term $\gamma(\mathbf{x})$ in linear multi-fidelity modeling, $f_t(\mathbf{x}) = \rho f_{t-1}^* (\mathbf{x}) + \gamma(\mathbf{x})$, is itself a Gaussian process; it is fit at the points shared between the high and low fidelities (most methods assume a nested DoE), with the differences at those shared points as its target values. Hence the two fidelities can both be exact, right? In fact, it makes the prediction of the errors more confident, doesn't it? I appreciate your comments and insights!
How should I understand the noise variance parameter(s) in multifidelity modeling using Emukit
CC BY-SA 4.0
null
2023-05-13T21:35:31.107
2023-05-13T21:35:31.107
null
null
366562
[ "gaussian-process", "hyperparameter", "noise" ]
615793
2
null
615788
1
null
While this may not be directly answering your question, I can provide an idea using a conditional probability approach, because you want to know the probability of getting head given that you are at some specific position. As far as I understand, your probability of landing on a bin itself is a random variable, say $Y$. I would consider this random variable to follow a multinoulli (or multinomial distribution), with parameters $p{_i}$ corresponding to the probability of landing on the bin $i$. Then you do the flipping experiment. Let's represent this experiment by $X$. Given that it is an unbiased coin, this follows a uniform distribution with parameter $p$=0.5. Then your probability of getting head given that your coin lands on a particular bin is the conditional probability $P(X|Y)$, which equals to $P(X\bigcap Y)/P(Y)$. This is the probability that you want to estimate (you do not estimate probabilities though, you estimate parameters). Your question obviously implies that the probability of getting head and being in the bin $i$ is not independent, e.g. they have a functional relationship. You can find the distribution of $P(Y)$ by sampling where your coin flips end up at, and then do distribution fitting. If your parameters $p{_i}$ turn out to have a well-defined functional relationship, you may generalize this to an increasing number of intervals. However, even at this point you are unable to know the exact probability distribution of $P(X\bigcap Y)$, and in practice joint probability distributions are hard to define exactly (precisely the reason we develop regression models). If you have an intuition on how this joint probability distribution changes with respect to the result of experiments, you can choose a suitable distribution and estimate its parameters using your observations. I want to conclude by saying that generally, probabilistic questions like this actually begin with you defining a probability space following your theorization about a phenomenon. Depending on how the probability space is constructed, you may have different solutions. See: [https://www.wikiwand.com/en/Bertrand_paradox_(probability)](https://www.wikiwand.com/en/Bertrand_paradox_(probability))
null
CC BY-SA 4.0
null
2023-05-13T21:51:48.723
2023-05-13T21:51:48.723
null
null
282477
null
615794
2
null
615788
5
null
What you have here is a [binary regression model](https://en.wikipedia.org/wiki/Binary_regression) with a single continuous input variable on the interval $[0,1]$. Your regression model is: $$Y|X=x \sim \text{Bern}(f(x)) \quad \quad \quad f: [0,1] \rightarrow [0,1],$$ and you are trying to infer the form of the regression function $f$. At present this model is highly general and so you will need to think about what type of binary regression model and inference method you want to use. Usually this would mean considering what kind of functional form or smoothness properties the function $f$ might be expected to have. You might decide to narrow down the form of this function to some parametric form and then you would have a parametric binary regression model, or you might simply assume some simple smoothness properties and decide to use a nonparametric binary regression model (see e.g., [Diaconis and Freedman 1993](https://www.jstor.org/stable/2242332)). Once you decide on a modelling and inference approach, you should sample from your unit interval at appropriate places (determined by experimental design principles in regression)to get a set of data showing the coin-flip outcomes at the sampled points. You would then take that data and use it to estimate the function $f$ using the relevant inference methods for your chosen model.
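A minimal sketch of such an approach (editorial addition; the data are made up, and `mgcv::gam()` is used here as one possible choice of smooth binary regression, not as the only option):

```r
set.seed(123)
x <- runif(200)                               # sampled positions on [0, 1]
f_true <- function(x) plogis(6 * (x - 0.5))   # "head" probability used only to simulate data
y <- rbinom(200, 1, f_true(x))                # observed coin flips at those positions

library(mgcv)
fit <- gam(y ~ s(x), family = binomial)       # smooth estimate of the regression function f
plot(fit, trans = plogis)                     # estimated head probability along the line
```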
null
CC BY-SA 4.0
null
2023-05-13T22:09:18.913
2023-05-14T21:51:36.697
2023-05-14T21:51:36.697
173082
173082
null
615795
2
null
615205
1
null
If your intention is to summarize the spread of your data, then just the standard deviation, variance, range or coefficient of variation can be sufficient. In your case, you should compute modes to check whether your data follow a multimodal distribution as well. However, I cannot see any reason why simply subtracting the standard deviation from the mean would inform you about the dataset. I would use the median, mode and range values to summarize that dataset. This operation is probably inspired by the construction of confidence intervals for the estimation of a population mean. However, these intervals are constructed randomly and utilize information about the distribution of the random variable in question to infer how likely the true population parameter is to lie in many intervals calculated in the same manner. In essence, it tests how stable your estimation of the mean is. That is often done to support hypothesis tests for population parameters. I see that distribution information is completely discarded here, which may be the reason that you obtain irrelevant, negative values. Edit: I also cannot see any reason to prefer "standard deviations above and below the mean", as some answers have suggested, over computing the first and third quartiles or the interquartile range (in general, L-estimators); then again, only if your goal is to describe the spread in your data.
null
CC BY-SA 4.0
null
2023-05-13T22:43:52.293
2023-05-13T22:59:06.723
2023-05-13T22:59:06.723
282477
282477
null
615796
1
615817
null
1
40
In a binary pairwise MRF, the joint distribution is as follows: \begin{align} p(x\mid\theta) & = \exp\left(\sum_{s \in N} \theta_s x_s + \sum_{(s,t) \in E} \theta_{st} x_s x_t - \Phi(\theta)\right) \\ \Phi(\theta) & = \log\left(\sum_{x}\exp\left( \sum_{s \in N} \theta_s x_s + \sum_{(s,t) \in E} \theta_{st} x_s x_t \right) \right) \end{align} $$x_s^{(\ell)} = \begin{cases} 1 & \text{if person $s$ votes yes on matter $\ell$}, \\ 0 & \text{otherwise.} \end{cases} $$ I've heard that this model is known as the Ising model in statistical physics. However, I'm not sure how to interpret it or estimate its values. Say I want to find the MLE of $\theta_s$ and I have a disconnected graph ($E = \emptyset$); how would I find a closed-form expression?
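A hedged sketch of the disconnected case (editorial note, following directly from the model above when $E = \emptyset$; the symbol $L$ for the number of observed vote configurations $x^{(\ell)}$ is introduced here): the distribution factorizes into independent Bernoulli terms, so the MLE of each $\theta_s$ is the empirical log-odds of voting yes,

$$p(x\mid\theta) = \prod_{s\in N}\frac{e^{\theta_s x_s}}{1+e^{\theta_s}}, \qquad \hat{p}_s = \frac{1}{L}\sum_{\ell=1}^{L} x_s^{(\ell)}, \qquad \hat{\theta}_s = \log\frac{\hat{p}_s}{1-\hat{p}_s}.$$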
Understanding the Ising Model and finding the MLE
CC BY-SA 4.0
null
2023-05-13T23:05:19.850
2023-05-14T18:20:54.577
2023-05-14T18:20:54.577
5176
387902
[ "maximum-likelihood", "graphical-model", "markov-random-field", "ising-model" ]
615797
2
null
615772
0
null
Two of your explanatory variables, SHR and DR, have standard errors that are large relative to their estimated coefficients, so their individual t-tests are insignificant, yet the overall F-test rejects its null hypothesis. This may be a sign of multicollinearity. Are these variables defined through each other, e.g. is one of them a linear combination of the other? I suggest you compute the correlation matrix of your variables and check whether these variables show high correlation. This is okay if your intention is to do prediction (see Gujarati's Basic Econometrics), but in this state you cannot proceed with statistical inference, as two of your estimates are highly imprecise. You can drop variables based on the correlation matrix, or based on your theory. I would proceed with the latter before the former, as a model showing signs of multicollinearity can be a consequence of sampling ([https://online.stat.psu.edu/stat462/node/177/](https://online.stat.psu.edu/stat462/node/177/)).
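A hedged sketch of the two checks suggested above (the data frame `mydata` and the fitted model object `fit` are hypothetical placeholders for an R refit of the regression; `vif()` assumes the `car` package):

```r
# Pairwise correlations among the regressors discussed in the question
cor(mydata[, c("DR", "SHR")])

# Variance inflation factors for the fitted regression
library(car)
vif(fit)
```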
null
CC BY-SA 4.0
null
2023-05-13T23:11:21.237
2023-05-14T00:00:52.060
2023-05-14T00:00:52.060
282477
282477
null
615798
2
null
615754
0
null
ANOVA is done to test whether there is a difference between groups; there isn't necessarily a direction. That is, you will not be testing whether there is an advantage of right-handed players over left-handed players; you will be testing whether their performance differs significantly. The same can be said for the surface as well. As for your other questions, you can choose other explanatory variables as you wish. Note that if you use more than 2 variables you will formulate a different experimental design, which gets further complicated depending on the model you formulate.
null
CC BY-SA 4.0
null
2023-05-13T23:20:32.340
2023-05-13T23:20:32.340
null
null
282477
null
615799
2
null
615728
1
null
@whuber gives the correct explanation for this question. The data are categorical, discrete or continuous depending on your construction of the probability space, i.e. it is a modelling choice. For example, all colours visible to the human eye have an order of frequency in the visible electromagnetic spectrum, but you can consider either "blue, green, purple" or their respective frequencies as your data, depending on your problem. However, if you want a practical solution, I would try (and have done before) plotting a frequency table or a histogram to see if there are numerical values in between them. If there are not, i.e. if I saw the same values counted many times, I would conclude that they are in fact categorical variables. I can see where you are coming from; in my case this was a numerical assignment (when a measurement was recorded, if I recall correctly).
null
CC BY-SA 4.0
null
2023-05-13T23:31:37.990
2023-05-13T23:31:37.990
null
null
282477
null
615800
1
null
null
0
10
I am trying to get the values for true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN) for my confusion matrix when I run the k-nearest neighbours algorithm in Matlab. However, when I calculate them, the values do not add up to the total sum of the elements of the confusion matrix, which is 500. Also, my true negative value comes out as a negative number. Am I doing something wrong? Below are the values for TP, FN, FP, and TN that I calculated:

- TP: 398
- FN: 102
- FP: 102
- TN: -102

[](https://i.stack.imgur.com/1qZLf.png)
Confusion matrix : Values for TN are negative
CC BY-SA 4.0
null
2023-05-13T23:57:16.363
2023-05-14T00:08:19.090
2023-05-14T00:08:19.090
387904
387904
[ "matlab", "confusion-matrix" ]
615802
1
null
null
0
20
I'm trying to measure the effect of the included IVs on an individual's self-esteem (binary: positive/negative). This is panel data across three waves. I am running an LPM FE model. I'm not sure I understand the 2WFE model entirely. So, are the results I have obtained under the assumption that time and individual effects are removed? And so, do my IVs have no effect (or a negative one) on an individual's self-esteem?

```
                          Dependent variable:
               --------------------------------------------
                              POSITIVITY
                    1WFE (Year)              2WFE
---------------------------------------------------------
INCOME              0.008***                -0.006
                    (0.001)                 (0.006)
HOMEOWN             0.028***                -0.023
                    (0.003)                 (0.025)
EDUC                0.003***                -0.019
                    (0.001)                 (0.033)
INSURANCE           0.015                   -0.011
                    (0.011)                 (0.029)
MARRIED            -0.001                    0.006
                    (0.007)                 (0.024)
PARENT              0.035***                 0.013
                    (0.009)                 (0.052)
---------------------------------------------------------
Observations        4,942                    4,942
R2                  0.010                    0.002
Adjusted R2         0.008                   -1.999
F Statistic         8.025*** (df = 6; 4933)  0.438 (df = 6; 1645)
=========================================================
Note:               *p<0.1; **p<0.05; ***p<0.01
```
Can someone explain why the signs of my coefficient estimates flip between a 1-way and 2-way fixed effects model, and why they lose significance?
CC BY-SA 4.0
null
2023-05-14T00:27:49.920
2023-05-16T12:08:57.110
2023-05-16T12:08:57.110
11887
null
[ "r", "interpretation", "panel-data" ]
615803
1
null
null
0
15
I want to assess the impact of a specific type of event on a business's revenue, for example receiving first press coverage. I am thinking about interrupted time series analysis, but I'm not sure how to use it for many time series; I have data for many businesses, so it is more like panel data. Are there any packages or methods I should check out? I'm mainly doing this analysis in Python but I can use R too. There are some data questions too:

1. The time they received the first coverage will differ (A received the first coverage in 2010/10 while B in 2020/10).
2. The baseline of the revenue for each business will differ too (A could go from 400 to 700 while B goes from 50000 to 10000). I'm thinking about whether I should have fixed effects, but I have only used fixed effects in linear regression, not in time series models.
3. Overall, there is a trend for revenues to grow for every business, so there will be autocorrelation.

I'm wondering what methods or data normalization are suited to answer my RQ. Right now I normalized the time so that each time series is converted to a time index ranging from -6 to 6, with 0 being the value when the intervention happened (regardless of the actual date). I'm not sure what to do with point 2: should I normalize the actual numbers to a relative percent increase or not? Any insights will be helpful. I'm not very fixated on causal inference; I want to explore the impulse impact rather than validate it, and generalize it across many time series. Thanks in advance.
Interrupted Time Series for Panel with Fixed Effects?
CC BY-SA 4.0
null
2023-05-14T03:00:17.770
2023-05-14T03:00:17.770
null
null
321529
[ "r", "regression", "time-series", "python", "intervention-analysis" ]
615804
1
615900
null
3
109
I am interested in peak models that are observed in instrumental analysis. The term "generalized" is commonly used in the context of statistical distributions, referring to a class of distributions that are based on a particular distribution, but are modified to include additional parameters or changes to its shape or properties. For example, we have generalized normal distribution and normal distribution. (i) Does the term "generalized" have a more formal meaning than the one given above? (ii) Is there a specific protocol in statistics to generalize a simpler distribution to a more complex one? By that I mean how do we add another shape parameter to an existing distribution?
Is there a formal definition of the term "generalized" when used with distributions?
CC BY-SA 4.0
null
2023-05-14T03:26:54.057
2023-05-15T10:27:26.773
null
null
251478
[ "distributions", "terminology" ]
615806
1
null
null
0
15
Say I want to predict continuous Y with continuous X and dichotomous D1 and D2 with a multiple linear regression model. The hypothesis is that X negatively predicts Y on one level of D1 (the control level), and the negative predictive power should reduce on the other level of D1 (the treatment level). These effects should not differ between the two levels of D2 (i.e., D2 does not interact with X or D1). If D1 and D2 are dummy coded and with the control level coded as 0, one cannot interpret the coefficient of X as its simple effect on Y on the control level of D1, because there is also D2. If D1 and D2 are sum coded, then one cannot read directly from the model output whether X significantly and negatively predicts Y on the control level of D1. The question: how should one code D1 and D2 to properly test the hypotheses? If testing them requires building multiple models, what models would make sense, and in what order should one build them? Thank you very much.
Coding dichotomous variables in multiple linear regression
CC BY-SA 4.0
null
2023-05-14T04:08:44.320
2023-05-14T04:08:44.320
null
null
291506
[ "hypothesis-testing", "multiple-regression", "categorical-encoding" ]
615809
1
null
null
1
18
I am new to statistical modelling and I need some help with Bayesian model averaging. I have 3 models and I would like to derive a BMA of these models. I am using the BIC estimate for the different models and a uniform prior probability of 0.33. Model 1, BIC= 4420 Model 2, BIC= 3940 Model 3, BIC = 4325 I am stuck trying to calculate the Posterior probability for each model and I don’t know how to use that in estimating the BMA. Any help would be greatly appreciated.
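A common approximation (editorial sketch, not from the question): with equal prior model probabilities, posterior model probabilities can be approximated from BIC differences as $p(M_k \mid y) \approx \exp(-\tfrac{1}{2}\Delta \text{BIC}_k) / \sum_j \exp(-\tfrac{1}{2}\Delta \text{BIC}_j)$, and the BMA estimate of a quantity is then the weighted average of the model-specific estimates.

```r
bic <- c(4420, 3940, 4325)
delta <- bic - min(bic)                            # BIC differences
w <- exp(-0.5 * delta) / sum(exp(-0.5 * delta))    # approximate posterior model probabilities
w
# BMA of some quantity estimated under each model (theta_hat is a hypothetical vector):
# theta_bma <- sum(w * theta_hat)
```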
Help with Bayesian model averaging
CC BY-SA 4.0
null
2023-05-14T06:00:42.700
2023-05-14T06:13:40.987
2023-05-14T06:13:40.987
387916
387916
[ "bayesian", "model-averaging" ]
615810
1
null
null
1
17
I am using the `R` programming language. I want to manually fit the D-vine copula for tree level 2 using `BiCopHfunc()`, but I get a different result (optimal parameters and copula families) from the one `RVineStructureSelect()` reports. Could someone please provide the `R` code to fit the 3-dimensional vine copula in tree level 2, where `V` is the conditioning variable? My code is:

```
copQV <- BiCop(family = 14, par = 13.37)
h1 <- BiCopHfunc2(pData[,1], pData[,2], copQV)

copVD <- BiCop(family = 1, par = 0.77)
h2 <- BiCopHfunc1(pData[,2], pData[,3], copVD)

summary(BiCopSelect(h1, h2))
```
Fitting Vine Copula tree by tree
CC BY-SA 4.0
null
2023-05-14T06:45:10.513
2023-06-02T12:17:33.473
2023-06-02T12:17:33.473
361403
387921
[ "r", "time-series", "copula", "vine-copula" ]
615811
1
null
null
0
11
I am familiar with the dual form of the soft-margin SVM when there is only one parameter $C$, but I cannot find the dual form of the class-weighted soft-margin SVM, which has the following objective with parameters $C_1$ and $C_2$: $$\frac12\Vert w\Vert^2 + C_1\sum_{i\,:\,y_i=-1}\xi_i + C_2\sum_{i\,:\,y_i=1}\xi_i$$ Can the dual form of this problem be derived?
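For reference, the standard result (stated here as an editorial note, with the usual Lagrangian derivation omitted): the dual is the same as for the ordinary soft-margin SVM, except that the box constraint on each multiplier depends on the class of its example,

$$\max_{\alpha}\; \sum_{i=1}^{l}\alpha_i - \frac12\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j y_i y_j x_i^\top x_j \quad \text{s.t.}\quad \sum_{i=1}^{l}\alpha_i y_i = 0,\qquad 0 \le \alpha_i \le C_2 \text{ if } y_i = 1,\qquad 0 \le \alpha_i \le C_1 \text{ if } y_i = -1.$$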
How do you find the dual form of the class weighted soft-margin SVM?
CC BY-SA 4.0
null
2023-05-14T07:02:54.883
2023-05-29T08:19:37.310
2023-05-29T08:19:37.310
362671
387922
[ "svm" ]
615812
1
null
null
0
6
The following program tries to show how a stateless load balancer creates variability when spreading balls between bins (with default settings below, the `max/min` ratio is ~2). Obviously, the more balls we throw, the smaller the spread. Question: how do I convert it to a continuous example? i.e. instead of using balls, I would like to load balance a "traffic", but I am failing to model this. My goal is to simulate a behavior showing how load-balanced traffic is spread non-uniformly even using a perfectly uniform sharding function. ``` #!/usr/bin/env python3 import numpy as np import matplotlib.pyplot as plt import webbrowser def uniform_bin_filling(balls, bin_len): bins = np.zeros(bin_len, dtype=int) for i in range(balls): bins[np.random.randint(0, bin_len)] += 1 return bins if __name__ == "__main__": import argparse parser = argparse.ArgumentParser() parser.add_argument("--balls", type=int, default=150) parser.add_argument("--bins", type=int, default=10) args = parser.parse_args() bins = uniform_bin_filling(args.balls, args.bins) print("Expected average is {:.2f}".format(args.balls / args.bins)) print("The number of balls in each bin is:") print(bins) r = bins.max() / bins.min() print("\nMax/Min ratio: {:.2f}".format(r)) ``` ```
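One way to make the example continuous (a sketch under the assumption that "traffic" arrives as many flows whose sizes follow some distribution, here exponential, and that the balancer hashes each flow uniformly at random to a bin):
```
import numpy as np

def continuous_load(total_traffic, n_flows, n_bins, rng=None):
    """Split total_traffic into n_flows flows with random (exponential) sizes and
    assign each flow uniformly at random to a bin, mimicking a stateless balancer."""
    rng = rng or np.random.default_rng()
    sizes = rng.exponential(size=n_flows)
    sizes *= total_traffic / sizes.sum()          # normalise to the requested volume
    bins = np.zeros(n_bins)
    np.add.at(bins, rng.integers(0, n_bins, n_flows), sizes)
    return bins

bins = continuous_load(total_traffic=150.0, n_flows=1000, n_bins=10)
print(bins)
print("Max/Min ratio: {:.2f}".format(bins.max() / bins.min()))
```
The spread now depends on the number of flows and the skew of the flow-size distribution rather than on the total volume, which is arguably the point such a simulation is trying to make.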
simulate load-balancer variability
CC BY-SA 4.0
null
2023-05-14T07:54:00.167
2023-05-14T07:54:00.167
null
null
41354
[ "mathematical-statistics", "python", "simulation" ]
615813
1
null
null
0
22
## Notation:
- $\mu$: true population parameter to be estimated
- $T_i$: mutually independent unbiased estimators of $\mu$
- $Var(T_i) = \sigma_i$
- $T = a + \sum_i^n b_iT_i$
- $a$, $b_i$ $\in \mathbb{R}$

## Question:
Conditions on $a$ and $b_i$ for $T$ to be an unbiased estimator of $\mu$.

## Attempt:
Say $T$ is an unbiased estimator of $\mu$ $$\implies E[a + \sum_i^n b_iT_i] = \mu$$ $$\implies a + \sum_i^n b_iE[T_i] = \mu$$ $$\implies \mu\left(\sum_i^n b_i\right) = \mu-a$$ $$\implies \sum_i^n b_i = 1-\frac{a}{\mu}$$ Now, $\mu$ is the unknown true population parameter, hence the values of the constants cannot depend on $\mu$. Thus, this forces $a=0$ and $\sum_i^n b_i=1$.
Conditions For Unbiased Linear Combination of Estimators
CC BY-SA 4.0
null
2023-05-14T07:59:29.277
2023-05-14T07:59:29.277
null
null
387887
[ "self-study", "unbiased-estimator" ]
615814
2
null
615642
5
null
TL;DR The article that you refer to makes things look worse than they actually are. Their bootstrapping procedure is not a good way to apply bootstrapping. In the case of OLS there shouldn't be big problems with high dimensionality if the sample size is large. If you cannot get correct results with OLS, where a correct confidence interval can easily be computed analytically, then something must be wrong with the implementation of the bootstrapping method. It is good, though, to be reminded that the residuals are not the same as the errors and that we can use simulations with OLS to test (potentially wrong) implementations of bootstrapping.
### Simple reproduction of the article results
The article that you refer to simulates errors by bootstrapping/resampling the residuals. Below is a simple example that reproduces this. The model is a linear regression with $n=500$ samples (or 250 pairs) and $p=125$ parameters. The distributions that are plotted here are just for the first parameter estimate $\hat{\beta}_1$.
### Discrepancy in estimated sample variance
The third image, resampling the true errors, gives a correct indication of the sampling distribution of the coefficient. The first and second images, resampling all residuals or resampling the pairs, have distributions with a different variance. They lead to errors in the estimates of standard errors and confidence intervals. The reason for the discrepancy is that bootstrapping only works when the bootstrapped samples are a good representation of the true distribution. This is not the case when $p/n$ is large.
- Resampling residuals: the bootstrap samples are created by simulating errors by sampling from the residuals; however, the variance of the residuals is lower than the variance of the errors $$\text{Var}(r_i) \approx \left(1-\frac{p}{n}\right) \text{Var}(\epsilon)$$
- Pairwise resampling: in this case the distribution is effectively a scaled binomial distribution. The variance will be $$\text{Var}(r_{i,paired}) \approx \frac{1}{2} \text{Var}(\epsilon)$$ The factor $0.5$ stems from the Bernoulli variable having variance $0.25$, and the scaling is the difference of the two pairs, which has variance $2\text{Var}(\epsilon)$.
### Discussion
You should only use bootstrapping when the sampling is done from a distribution that represents the population distribution. This is not the case with paired resampling, which samples a distribution with half the variance, nor with residual resampling, which samples a distribution with smaller variance when $p/n$ is large. Bootstrapping is often performed when the distribution of a statistic is difficult to compute. This is the case when 1) the assumptions about the error distribution are false, or the error distribution is unknown, or 2) the propagation of errors is difficult to compute. For ordinary linear regression, the second case is not an issue. The statistic is a linear sum of the data and its sampling distribution will often approximate a normal distribution. With different cost functions the behaviour might not be too far off. The problem is just to estimate the variance, and the residuals are often a good indication for this, but one has to apply the right corrections. The problem is more difficult in the situation where the error distribution has heavy tails and the variance is not easily estimated with a small sample. In this case the typical remedy is to simply gather more data.
Potentially one could do an advanced semi-parametric bootstrapping by combining the residuals with a normal distribution that relates to the residuals being the errors with the estimate subtracted (the estimate being some correlated normal distribution). ### Plot of reproduction [](https://i.stack.imgur.com/xFI7R.png) ``` set.seed(1) n = 500 # data samples p = 125 # parameters m = 1000 # times resampling ### create paired data X = matrix(rnorm(n*p/2),ncol =p) X = rbind(X,X) Y = rnorm(n) solve(t(X)%*%X)[1,1] ### this is the theoretic variance ### compute main model mod = lm(Y~X+0) ### variables used for resampling Y_m = predict(mod) res = mod$residuals err = Y ### perform resampling of residuals b_residuals = sapply(1:m, FUN = function(i) { Y_s = Y_m + sample(res,n) lm(Y_s~X+0)$coefficients[1] }) ### perform resampling of errors b_errors = sapply(1:m, FUN = function(i) { Y_s = Y_m + sample(err,n) lm(Y_s~X+0)$coefficients[1] }) ### perform paired resampling b_paired = sapply(1:m, FUN = function(i) { selection = rep(1:(n/2),2)+rbinom(n,1,0.5)*n/2 Y_s = Y_m + res[selection] lm(Y_s~X+0)$coefficients[1] }) ### plot histograms layout(matrix(1:3,3)) hist(b_residuals, breaks = seq(-0.5,0.5,0.02), freq = 0, ylim = c(0,10), main = "resampling of residuals", xlab = expression(beta[1])) lines(mod$coefficients[1]*c(1,1),c(0,10), lty = 2, col = 2) hist(b_paired, breaks = seq(-0.5,0.5,0.02), freq = 0, ylim = c(0,10), main = "resampling of residual pairs", xlab = expression(beta[1])) lines(mod$coefficients[1]*c(1,1),c(0,10), lty = 2, col = 2) hist(b_errors, breaks = seq(-0.5,0.5,0.02), freq = 0, ylim = c(0,10), main = "resampling of true errors", xlab = expression(beta[1])) lines(mod$coefficients[1]*c(1,1),c(0,10), lty = 2, col = 2) var(b_residuals)/var(b_errors) var(b_paired)/var(b_errors) ```
null
CC BY-SA 4.0
null
2023-05-14T08:42:51.603
2023-05-15T07:16:49.380
2023-05-15T07:16:49.380
164061
164061
null
615817
2
null
615796
0
null
For a generic graph, finding that MLE is a hard task, you will most likely need to resort to some approximation. In the literature you can find this referred to as the Inverse Ising problem. If the graph is disconnected, you are left with a set of independent binary random variables, for which the MLE estimator is straightforward to compute. You should get $$\theta_s^{MLE}=\log\frac{\mu_s}{1-\mu_s}$$ where $$\mu_s=\frac1L\sum_{l=1}^Lx_s^{(l)}$$
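A minimal numpy sketch of that disconnected-graph estimator (the data here are simulated placeholders):
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 5))     # L samples of S binary spins x_s in {0, 1}

mu = X.mean(axis=0)                        # empirical means mu_s
theta_mle = np.log(mu / (1 - mu))          # node-wise MLE when the graph has no edges
print(theta_mle)
```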
null
CC BY-SA 4.0
null
2023-05-14T10:37:32.763
2023-05-14T10:38:49.613
2023-05-14T10:38:49.613
268623
268623
null
615818
1
null
null
0
10
Is it possible to generate a projection matrix from Quadratic Discriminant Analysis, as Linear Discriminant Analysis does? I have found a library called [mataveid](https://github.com/DanielMartensson/Mataveid) that can do Linear Discriminant Analysis (and much more) and I notice that at the end of the code, the projection matrix `W` is multiplied with the data `X` and generates a projection called `P`. Is it possible that Quadratic Discriminant Analysis can do the same? ``` function [P,W] = lda(varargin) % Check if there is any input if(isempty(varargin)) error('Missing inputs') end % Get impulse response if(length(varargin) >= 1) X = varargin{1}; else error('Missing data X') end % Get the sample time if(length(varargin) >= 2) y = varargin{2}; else error('Missing class ID y'); end % Get the sample time if(length(varargin) >= 3) c = varargin{3}; else error('Missing amount of components'); end % Get size of X [row, column] = size(X); % Create average vector mu_X = mean(X, 2) mu_X = mean(X, 2); % Count classes amount_of_classes = y(end) + 1; % Create scatter matrices Sw and Sb Sw = zeros(row, row); Sb = zeros(row, row); % How many samples of each class samples_of_each_class = zeros(1, amount_of_classes); for i = 1:column samples_of_each_class(y(i) + 1) = samples_of_each_class(y(i) + 1) + 1; % Remove +1 if you are using C end % Iterate all classes shift = 1; for i = 1:amount_of_classes % Get samples of each class samples_of_class = samples_of_each_class(i); % Copy a class to Xi from X Xi = X(:, shift:shift+samples_of_class - 1); % Shift shift = shift + samples_of_class; % Get average of Xi mu_Xi = mean(Xi, 2); % Center Xi Xi = Xi - mu_Xi; % Copy Xi and transpose Xi to XiT and turn XiT into transpose XiT = Xi'; % Create XiXiT = Xi*Xi' XiXiT = Xi*XiT; % Add to Sw scatter matrix Sw = Sw + XiXiT; % Calculate difference diff = mu_Xi - mu_X; % Borrow this matrix and do XiXiT = diff*diff' XiXiT = diff*diff'; % Add to Sb scatter matrix - Important to multiply XiXiT with samples of class Sb = Sb + XiXiT*samples_of_class; end % Find the eigenvectors - by solving the generalized eigenvalue problem: Sb*v = Sw*v*lambda L = chol(Sw, 'lower'); Y = linsolve(L, Sb); Z = Y*inv(L'); [V, D] = eig(Z); % Sort eigenvectors descending by eigenvalue [D, idx] = sort(diag(D), 1, 'descend'); V = V(:,idx); % Get components W W = V(:, 1:c); P = W'*X; end ```
Is it possible to generate a projection matrix from Quadratic Discriminant Analysis?
CC BY-SA 4.0
null
2023-05-14T10:43:36.963
2023-05-14T10:43:36.963
null
null
275488
[ "dimensionality-reduction", "discriminant-analysis" ]
615820
1
null
null
0
12
I'm new to Bayesian analysis. My predictor is a logarithmically increasing variable (10, 100, 1000..) and the outcome is a categorical variable which indicates correct/incorrect. I need to analyze this data using Bayesian analysis. What have I done? I have analyzed this data using a general linear model (GLM). I can see a linear relationship which shows that correct responses increase with length. What I need I need to develop this further using Bayesian analysis. - Can I treat my predictor as a continuous variable - I only have finite values though (10, 100, 1000...)? - Can I send the coefficient obtained using the GLM into the Bayesian as the likelihood? - What would be the assumptions needed for this to be valid? For example, should the data be transformed in a specific way? I have heard that this is a popular technique. I searched everywhere for 2. and 3., but couldn't find a good reference for it.
Bayesian analysis - logarithmically increasing predictor, categorical outcome
CC BY-SA 4.0
null
2023-05-14T10:52:57.093
2023-05-14T11:02:55.110
2023-05-14T11:02:55.110
362671
387884
[ "r", "bayesian", "generalized-linear-model", "references", "ordinal-data" ]
615821
1
null
null
0
7
I'm looking at the CausalPy documentation, and the algorithm of interest is Difference in Differences. I see that they build two different models. In general, how should one select the features and/or interactions to condition on in a DiD model? [https://causalpy.readthedocs.io/en/latest/notebooks/did_pymc_banks.html](https://causalpy.readthedocs.io/en/latest/notebooks/did_pymc_banks.html)
How to choose features to condition on in DiD causal model?
CC BY-SA 4.0
null
2023-05-14T11:07:33.943
2023-05-14T11:07:33.943
null
null
288172
[ "causality", "generalized-did" ]
615823
1
null
null
0
11
When using the ADF method for unit root testing, suppose a variable is stationary only at the 10% level, with a p-value just over .05. How can one be more confident that it is stationary? Could you, for example, use a test for autocorrelation in the residuals of the ADF equation to make sure the error term is white noise, e.g. a Breusch–Pagan test? Appreciate every answer :)
ADF test stationary at 10% critical value
CC BY-SA 4.0
null
2023-05-14T12:02:10.120
2023-05-14T12:02:10.120
null
null
383188
[ "augmented-dickey-fuller", "breusch-pagan", "critical-value" ]
615824
1
null
null
0
19
I am a statistical noobie and I will need to do some testing of the hypotheses of child patients but I am not sure which tests should be performed and how. I have measures of different metrics per each child on day 1 at a few particular hours and then other measures on the second day of the same patients at the same hours but with different conditions. Also, some measures have multiple values as there has been a small workout at each hour and those values are measured before and after for some metrics. So I have ended up with the data like this per each child: [](https://i.stack.imgur.com/YPslN.png) My hypotheses are (ordered from easiest to perform to hardest in my opinion): - Children fall asleep sooner with white noise in the background. - Children have lower heart rates when white noise is played in the background. - Children have better vital functions when white noise is being played in the background (This one is a result of an HR, Number of breaths, and Saturation at once) Thank you very much for any suggestions and help to perform these tests :) I am pretty lost due to having those multiple measures as I can't find many articles that would work with data like this :/
Hypothesis testing with repeated measures per subject per condition + multiple conditions with repeated measures + hypothesis of previous hypotheses
CC BY-SA 4.0
null
2023-05-14T12:13:33.207
2023-05-14T12:13:33.207
null
null
387943
[ "hypothesis-testing", "multiple-comparisons" ]
615825
1
null
null
0
8
I learned how to get the embedding features from wav2vec from this [question](https://stackoverflow.com/questions/69266293/getting-embeddings-from-wav2vec2-models-in-huggingface), but I have some questions.
- What is the difference between using those fixed embedding features in a downstream task and fine-tuning the model? I know that I can use fixed embedding features with text models (for example, BERT), but does the same apply to audio data and wav2vec?
- The resulting embeddings are long sequences of vectors (for example, the sequence length is 1676 for a 30-second audio file). What is the best way to shorten that sequence, given that my data contain audio files of different lengths (see the sketch below)?
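One common way to get a fixed-size clip embedding regardless of audio length is to pool the hidden states over time; a hedged sketch with HuggingFace transformers (the checkpoint name, the zero-filled placeholder waveform and mean pooling are assumptions; attention pooling or task-specific fine-tuning are alternatives):
```
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

# waveform: 1-D float tensor sampled at 16 kHz (5 seconds of silence as placeholder data)
waveform = torch.zeros(16000 * 5)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # shape (1, seq_len, 768)

clip_embedding = hidden.mean(dim=1)              # shape (1, 768), fixed size per file
```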
How can I use the features embedding from wav2vec model?
CC BY-SA 4.0
null
2023-05-14T12:20:00.127
2023-05-14T12:20:00.127
null
null
116480
[ "neural-networks", "transformers" ]
615829
1
null
null
0
11
In transformer network ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)), the feedforward networks have equation: $\mathrm{FNN}(x) = \max(0, xW_1 + b_1) W_2 + b_2$ where $x \in \mathbb{R}^{n \times d_\mathrm{model}}$, $W_1 \in\mathbb{R}^{d_\mathrm{model} \times d_{ff}}$, $W_2 \in\mathbb{R}^{d_{ff} \times d_\mathrm{model}}$. We know that the biases $b_1$ and $b_2$ are vectors. But, for the equation to work the shape of $b_1$ and $b_2$ must agree, i.e., $b_1 \in\mathbb{R}^{n \times d_{ff}}$ and $b_2 \in\mathbb{R}^{n \times d_\mathrm{model}}$. My question: is it true that $b_1 = \begin{bmatrix} (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}}\\ (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}} \\ \vdots & \vdots & & \vdots \\ (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}} \end{bmatrix}$ and $b_2 = \begin{bmatrix} (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}}\\ (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}} \\ \vdots & \vdots & & \vdots \\ (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}} \end{bmatrix}$?
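In typical implementations the biases are stored as vectors of length $d_{ff}$ and $d_\mathrm{model}$ and added to every row by broadcasting, which is numerically the same as the row-repeated matrices written above; a small numpy sketch (shapes chosen arbitrarily for illustration):
```
import numpy as np

n, d_model, d_ff = 4, 8, 32
x  = np.random.randn(n, d_model)
W1 = np.random.randn(d_model, d_ff); b1 = np.random.randn(d_ff)
W2 = np.random.randn(d_ff, d_model); b2 = np.random.randn(d_model)

# b1 and b2 are vectors; broadcasting adds the same row to every position
ffn = np.maximum(0, x @ W1 + b1) @ W2 + b2       # shape (n, d_model)
print(ffn.shape)
```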
Shape of biases in Transformer's Feedforward Network
CC BY-SA 4.0
null
2023-05-14T12:46:32.733
2023-05-14T12:46:32.733
null
null
387019
[ "neural-networks" ]
615830
1
null
null
0
40
The sign of the partial correlation coefficient is consistent with the sign of the corresponding estimated parameter. For example, in the estimated regression equation $\widehat{Y}=\hat{b}_{0}+\hat{b}_{1} X_{1}+\hat{b}_{2} X_{2}$, the sign of $r_{YX_{1}\cdot X_{2}} $ is consistent with the sign of $\hat{b}_{1}$, and the sign of $r{_{YX_{2}\cdot X_{1}}} $ is consistent with the sign of $\hat{b}_{2}$. --- I don't understand why we have $r_{YX_{1}\cdot X_{2}} $ is consistent with the sign of $\hat{b}_{1}$ and the sign of $r{_{YX_{2}\cdot X_{1}}} $ is consistent with the sign of $\hat{b}_{2}$. I know that $$r^{2}_{YX_{1}\cdot X_{2}}=\frac{\left(X^{\prime}_{1}N_{(\mathbf{1_{n}},X_{2})}Y\right)^{2}}{\left(X^{\prime}_{1}N_{(\mathbf{1_{n}},X_{2})}X_{1}\right)\cdot\left(Y^{\prime }N_{(\mathbf{1_{n}},X_{2})}Y\right)},\quad r^{2}_{YX_{2}\cdot X_{1}}=\frac{\left(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}Y\right)^{2}}{\left(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}X_{2}\right)\cdot\left(Y^{\prime }N_{(\mathbf{1_{n}},X_{1})}Y\right)},$$ Where $$N_{(\mathbf{1_{n}},X_{1})}=I_{n}-P_{(\mathbf{1_{n}},X_{1})},$$$$ P_{(\mathbf{1_{n}},X_{1})}=(\mathbf{1_{n}},X_{1})\cdot\left[(\mathbf{1_{n}},X_{1})^{\prime}(\mathbf{1_{n}},X_{1})\right]^{-1}\cdot(\mathbf{1_{n}},X_{1})^{\prime};$$$$\quad N_{(\mathbf{1_{n}},X_{2})},\quad P_{(\mathbf{1_{n}},X_{2})}\quad \text{similarly}.$$ So I think $r_{YX_{1}\cdot X_{2}} $ is consistent with the sign of $X^{\prime}_{1}N_{(\mathbf{1_{n}},X_{2})}Y$ and the sign of $r{_{YX_{2}\cdot X_{1}}} $ is consistent with the sign of $X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}Y.$ Do the following relationship hold? $$\text{sign}(X^{\prime}_{1}N_{(\mathbf{1_{n}},X_{2})}Y)=\text{sign}(\hat{b}_{1})\quad\text{and}\quad\text{sign}(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}Y)=\text{sign}(\hat{b}_{2}).$$ --- $$(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})})\cdot\widehat{Y}$$$$=(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})})\cdot(\hat{b}_{0}+\hat{b}_{1} X_{1}+\hat{b}_{2} X_{2})$$$$=(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}X_{2})\cdot \hat{b}_{2} \Rightarrow \text{sign}(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}\widehat{Y})=\text{sign}(\hat{b}_{2}).$$ If $$\text{sign}(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}\widehat{Y})=\text{sign}(X^{\prime}_{2}N_{(\mathbf{1_{n}},X_{1})}{Y}),$$ then the conclusion will be hold.
The sign of the partial correlation coefficient
CC BY-SA 4.0
null
2023-05-14T13:11:49.240
2023-05-14T13:43:13.517
2023-05-14T13:43:13.517
371966
371966
[ "regression", "self-study", "multiple-regression", "partial-correlation" ]
615831
1
null
null
0
12
I have the following R code, I am fitting a VAR(10) models to a bivariate time series, comprising two variables, gaz and nuc. Yet, when trying to forecast, I had negative values, whereas my series is the nuclear electricity production and the total imports of gaz by France, from november 2003 to november 2022. So negative values are impossible. I wonder the cause of this, as my model passed the ARCH test, the serial tests too, but not the Jarque-Bera test. Maybe it is not stationary, but I do not know how to interpret the value of the roots of the characteristic polynomial in R, which you can see here, performed on 4 different models [](https://i.stack.imgur.com/fFUHn.png). I tried to perform an RMSE test, but it didn't work, because R doesn't recognise that start_date is before end_date : ``` library(lubridate) start_date <- ymd("2003-11-01") end_date <- ymd("2021-12-01") subset_data <- window(energy.bv, start = start_date, end = end_date) Modelenergy2_test <- VAR(subset_data, p = 10, season = 12, type = "both") forecast_2022 <- predict(Modelenergy2_test, n.ahead = 11, ci = 0) actual_2022 <- window(energy.bv, start = ymd("2022-01-01"), end = ymd("2022-11-01")) library(Metrics) rmse <- RMSE(forecast_2022$NuclΓ©aire, actual_2022$NuclΓ©aire) ``` I get the following error after subset data : Error in window.default(x, ...) : 'start' ne peut pas Γͺtre aprΓ¨s 'end' In addition: Warning message: In window.default(x, ...) : valeur de 'end' inchangΓ©e Here is my code : ``` lagSelect<-VARselect(energy.bv,lag.max = 10,type="both") lagSelect<-VARselect(energy.bv,lag.max = 10,type="both")$selection lagSelect Modelenergy2 <- VAR(energy.bv, p = 10, type = "both", season = 12) summary(Modelenergy2) ``` Here is the summary of the model :[](https://i.stack.imgur.com/nZHFM.png) [](https://i.stack.imgur.com/nGySG.png) [](https://i.stack.imgur.com/fEs8I.png) [](https://i.stack.imgur.com/oERSr.png) Any help would be welcome, thanks in advance !
problem forecasting in R using a VAR model : interpretation of characteristic polynomial roots
CC BY-SA 4.0
null
2023-05-14T13:34:27.410
2023-05-14T13:34:27.410
null
null
364061
[ "r", "time-series", "forecasting", "rms", "bivariate" ]
615832
2
null
615491
2
null
Because there are typographical errors in the statement here and (apparently) in the paper, and nothing is explained here about $F$ (it's not a CDF), let's begin by simplifying the notation. Suppose you have a positive integrable real-valued function $F:[0, \epsilon]\to\mathbb (0,\infty)$ and for all $x\in(0,\epsilon)$ the inequality $$F(x) \le \mu + \sigma\int_x^\epsilon F(y)\,\mathrm dy\tag{1}$$ is satisfied. Here, $\mu$ and $\sigma$ can be identified by inspection from (B4) in the paper, $$\mu = \frac{K}{1 - 2R\epsilon/\alpha},\quad \sigma = \frac{2CR}{\alpha - 2R\epsilon} \gt 0.$$ From this information alone the paper purports to conclude that $F$ is bounded on $[0,\epsilon].$ That this does not follow can be established with a simple counterexample. Take, for instance, $$F(x) = 1/x;\quad \sigma = 1;\quad \mu = -\log\epsilon.$$ Then $(1)$ reduces to $1/x \le -\log(x),$ which always holds for $0 \lt x \le 1,$ but obviously $F$ is not bounded on $[0,\epsilon]$ no matter how small $\epsilon$ may be. This is not a counterexample to [GrΓΆnwall's Inequality](https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s_inequality). The problem here is that the order of integration in $(1)$ is the reverse of that required to apply the Inequality. Changing that order would negate the integral, effectively making its multiplier $-\sigma$ negative, and the inequality requires the multiplier be positive. --- Thus, Proposition (4) in the paper might be true, but it's not true for the reasons given at this part of the paper. If I'm following the argument correctly, I think the error begins with the second line of inequalities following (B3), where "min" ought to read "max." When the remaining lines are fixed up, a similar expression will emerge where GrΓΆnwall's Inequality does apply.
null
CC BY-SA 4.0
null
2023-05-14T13:51:54.227
2023-05-14T13:51:54.227
null
null
919
null
615834
2
null
615612
1
null
A comment by @StephanKolassa let me answer my own question. There are serious doubts about the usefulness (and correctness) of p-rep. Here are critiques and discussions of p-rep: R.R. Macdonald, [Commentary: Why replication probabilities depend on prior probability distributions: A rejoinder to Killeen (2005)](https://psycnet.apa.org/record/2005-15678-016) Psychological Science, 16(12), 1007–1008. "Any simulation of replication probabilities has to incorporate a distribution for [prior probabilities]. Killeen's approach ignores such distributions and as a result, seems appropriate only when there is no relevant prior information." P.R. Killeen: [Replicability, Confidence, and Priors](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1440522/), Psychol Sci. 2005 Dec; 16(12): 1009–1012. Killeen responds to some critics. Gat, Y., [p-rep, mathematical error, pure and simple](https://probonostats.wordpress.com/2007/09/15/p-rep-mathematical-error-pure-and-simple/) 2007. "It turns out that Killeen really is utterly and honestly mistaken about his proposed statistic, the p-rep. His entire analysis is wrong because of a relatively simple mathematical error." G.J. Iverson, M. D. Lee, and E-J Wagenmakers, [prep misestimates the probability of replication](https://www.ejwagenmakers.com/2009/IversonEtAl2009PBR.pdf) Psychonomic Bulletin & Review 2009, 16 (2), 424-429. doi:10.3758/PBR.16.2.424. "Our results show that, over any plausible series of experiments, the true probabilities of replication will be very different from those predicted by prep. We discuss some basic problems in the formulation of prep that are responsible for its poor performance, and conclude that prep is not a useful statistic for psychological science."
null
CC BY-SA 4.0
null
2023-05-14T13:59:57.167
2023-05-14T18:44:03.213
2023-05-14T18:44:03.213
25
25
null
615835
1
null
null
0
30
What is the correct way to quantify the loss of information when we approximate a likelihood based on a multivariate normal distribution with a full covariance matrix by a product of univariate Gaussian pdfs? I am doing some Bayesian inference and I am comparing the results I get when I use the multivariate distribution with the ones I get when I assume the data points are independent. The results are very similar, but I would like to know if there is some procedure that can justify this approximation. I have read that I can use the Kullback-Leibler divergence, as in [KL divergence between two multivariate Gaussians](https://stats.stackexchange.com/questions/60680/kl-divergence-between-two-multivariate-gaussians): $$ \begin{aligned} KL &= \int \left[ \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} (x-\mu_1)^T\Sigma_1^{-1}(x-\mu_1) + \frac{1}{2} (x-\mu_2)^T\Sigma_2^{-1}(x-\mu_2) \right] \times p(x) dx \\ &= \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} \text{tr}\ \left\{E[(x - \mu_1)(x - \mu_1)^T] \ \Sigma_1^{-1} \right\} + \frac{1}{2} E[(x - \mu_2)^T \Sigma_2^{-1} (x - \mu_2)] \\ &= \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} \text{tr}\ \{I_d \} + \frac{1}{2} (\mu_1 - \mu_2)^T \Sigma_2^{-1} (\mu_1 - \mu_2) + \frac{1}{2} \text{tr} \{ \Sigma_2^{-1} \Sigma_1 \} \\ &= \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - d + \text{tr} \{ \Sigma_2^{-1}\Sigma_1 \} + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right]. \end{aligned} $$ Is it correct to reduce this solution for two multivariate Gaussians to the case I want by taking $\Sigma_2$ to be a diagonal matrix?
Kullback-Leibler divergence between product of independent gaussians and a multivariate normal distribution
CC BY-SA 4.0
null
2023-05-14T14:04:58.037
2023-05-14T14:43:26.350
2023-05-14T14:43:26.350
7224
275569
[ "normal-distribution", "multivariate-normal-distribution", "kullback-leibler" ]
615836
1
615840
null
4
638
I have a data set that is analogous to a survival analysis dataset. I have experimental animals, and these animals are modelled as having two states, where the second state is absorbing; i.e. each individual will only transition once, and when it reaches state 2, it stays there. An individual may also never enter state 2 for the whole experiment. I understand this to be a basic setup for a survival analysis using Markov chains.

I'm worried about the effect of individual variance in this setup. For example… those individuals with a tendency to transition quickly into state 2 will have a small impact on the transition parameter estimate, while those individuals who never transition will have an over-represented effect on the transition parameter estimates (as they are in state 1 for a longer period, they are evaluated more times by the likelihood of the state transition). From my understanding of Markov chains, I'm assuming the parameter estimates will be biased toward representing those individuals who are less prone to transitioning into state 2.

When dealing with individual variance… my intuition is to add a random effect to capture that variance and integrate it out of the likelihood. However, in this context, since each individual can only transition once (or never), each random-effect level will have a sample size of 1 or 0… which feels problematic.

I was wondering if there are any standard methods to deal with this kind of bias in Markov chain-style survival analysis. Or perhaps a Markov chain is just a bad choice when there is expected individual variance that can't be accounted for with covariate data. Also, please let me know if I'm misunderstanding something about Markov chains here.

### Edit: Why not a Cox proportional hazards model?

In this dataset, the experimental individuals share the same environment and hazard covariates. Hence, when one individual encounters high-hazard covariates, all other individuals are exposed to the same high-hazard covariates. My understanding of the Cox proportional hazards model is that covariates are only evaluated at event times (death) across individuals. So in this context, the Cox model will always be comparing individuals exposed to the same covariates. My understanding is that a Markov-chain approach will get around this, as Markov chains take into account the whole covariate history of all individuals.

### Comments

Sorry for describing this setting in rather abstract terms... but this is a field experiment with animals. I'm worried that if I start going into the intricacies of my field setup, it will just confuse matters, as it will take a lot of text to describe the experimental setting and expected behaviours.
Fitting a Markov chain with variable path lengths (survival analysis)
CC BY-SA 4.0
null
2023-05-14T14:13:18.953
2023-05-14T19:24:23.880
2023-05-14T15:58:15.057
35780
35780
[ "mixed-model", "survival", "markov-process" ]
615837
1
null
null
0
18
I have data concerning the number of posts about a certain hashtag on Instagram by influencers and non-influencers per hour. The time period is one week; the dataset looks like this:

|Hours |2 PM |3 PM |4 PM |5 PM |6 PM |7 PM |
|-----|----|----|----|----|----|----|
|Influencers |8 |15 |10 |20 |13 |25 |
|Not influencers |38 |58 |30 |28 |49 |75 |

I want to perform a statistical test to figure out whether there is a difference in the posting patterns between the two groups; for instance, whether we could tell which group is which just by looking at the pattern of the distribution. But I am confused about which kind of test to perform. What kind of test can I choose, taking into account that the data are not normally distributed and that one group can probably influence the other?
Test the distribution of data of two categories over time statistically
CC BY-SA 4.0
null
2023-05-14T14:35:19.940
2023-05-14T14:56:33.060
2023-05-14T14:56:33.060
286141
286141
[ "time-series", "distributions", "mathematical-statistics" ]
615838
1
615879
null
0
37
I have fitted a VAR model to first-differenced financial data, and conducted a Granger-Causality test on stationary data. The results can be seen below: [](https://i.stack.imgur.com/8oXku.png) The above results show that only one variable seems to Granger-Cause the price return of gold, highlighted in green. However, there are many other variables that are quite close to rejecting the Null-hypothesis. In these cases, could one say that the other variables partially Granger-Cause the forecasting variable, and therefore include them in a multi-variate model?
Granger-Causality test result interpretation
CC BY-SA 4.0
null
2023-05-14T15:38:41.140
2023-05-15T08:54:55.837
2023-05-15T08:54:55.837
53690
387945
[ "time-series", "interpretation", "model-selection", "vector-autoregression", "granger-causality" ]
615839
1
null
null
0
15
If a log-transformed variable has a unit root, can you then difference it? For an ARDL model, variables should not be I(2) or higher, and variables in log form are I(2) here. Would it be a problem to difference them? What I am asking is: is it the variables included in the actual model that need to be I(0) or I(1), so that it does not matter that the raw data were I(2) as long as you use the first difference, which is I(1) (or I(0)), in the model/analysis? Stated differently: if log X is I(2), it is not appropriate to include it, but can you include the differenced log X if that is I(1) or I(0)?
Can I transform variables to comply with conditions on order of integration in an ARDL model?
CC BY-SA 4.0
null
2023-05-14T15:41:58.510
2023-05-15T06:55:42.760
2023-05-15T06:55:42.760
53690
383188
[ "time-series", "logarithm", "differencing", "ardl" ]
615840
2
null
615836
5
null
In the case of a single terminating event, a comparison with the Cox PH model as @sextus-empiricus suggested is a great idea and you'll find that the two will provide virtual identical standard errors of comparable quantities. My impression is that you are worrying too much about variances being unrealistic. When the vast majority of subjects stay in one state for a prolonged period, the state transition probability associated with that (start in state A and end in state A in the next time period) is near 1.0 and on the log-likelihood scale this contributes nearly 0 to the log-likelihood, which ultimately is used to estimate the variance. In other words, near-redundancies in the data are already accounted for by Markov modeling. For that reason the likelihood doesn't change whether or not you carry observations forward once a subject reaches an absorbing state. For numerical efficiency, don't carry such records forward.
null
CC BY-SA 4.0
null
2023-05-14T15:47:35.797
2023-05-14T15:47:35.797
null
null
4253
null
615842
1
null
null
2
60
I've constructed a Bayesian Generalized Difference in Differences model. I model an intercept as well as three coefficients; one for treatment_group assignment, one for post-start period, and one for the interaction of treatment_group assignment and post_start period (aka exposure.) First, I'd like to validate that I've done this appropriately. And second, I have a question below about the output. Namely, I'm curious how my lift can be positive but my interaction term could be negative? Note that, pre-experiment, the control group had a lower rate (K/N) than the treatment group but post experiment, the control group had a higher rate. So it appears that the time period effect might have been stronger for the control group than the treatment group. I understand that GDD is designed for this use case. But I'm also confused how it could handle this when there are no observations of the treatment group without exposure during the post-start period. ``` bayesian_model = """ // GDD data { int<lower=0> N; int<lower=0,upper=1> treatment_group[N]; int<lower=0,upper=1> post_start[N]; int<lower=0> successes[N]; int<lower=0> trials[N]; } parameters { real alpha; real beta_treatment_group; real beta_post_start; real gamma_exposure; } model { // priors alpha ~ normal(0, 10); beta_treatment_group ~ normal(0, 10); beta_post_start ~ normal(0, 10); gamma_exposure ~ normal(0, 10); // likelihood for (i in 1:N) { successes[i] ~ binomial_logit(trials[i], alpha +treatment_group[i]*beta_treatment_group +post_start[i]*beta_post_start +treatment_group[i]*post_start[i]*gamma_exposure); } } generated quantities { real lift; lift = inv_logit(alpha + beta_treatment_group + beta_post_start + gamma_exposure) - inv_logit(alpha + beta_post_start); } ``` And here is the output of the sampler ``` mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat alpha 1.135 0.038 1.064 1.206 0.001 0.001 1464.0 1985.0 1.0 beta_treatment_group 0.209 0.060 0.102 0.322 0.002 0.001 1499.0 1847.0 1.0 beta_post_start 0.252 0.052 0.155 0.348 0.001 0.001 1541.0 1931.0 1.0 gamma_exposure -0.168 0.081 -0.311 -0.008 0.002 0.001 1655.0 1998.0 1.0 lift 0.007 0.009 -0.010 0.022 0.000 0.000 3656.0 3211.0 1.0 ``` My question is: How can I have a negative interaction term (treatment_group * post_start) gamma exposure but a positive lift? Have I computed lift wrong? Edit: In retrospect my original lift computation deviates from the traditional DiD lift computation and might not be reliable. Here is a new attempt, which I believe tis closer to the mark if not accurate- but would still appreciate feedback. ``` generated quantities { real pre_diff; real post_diff; real lift; pre_diff = inv_logit(alpha + beta_treatment_group) - inv_logit(alpha); post_diff = inv_logit(alpha + beta_treatment_group + beta_post_start + gamma_exposure) - inv_logit(alpha + beta_post_start); lift = post_diff - pre_diff; } ``` The lift is now negative and does not contradict the exposure coefficient.
Generalized Difference in Differences model: time*group interaction contradicts lift
CC BY-SA 4.0
null
2023-05-14T15:57:19.400
2023-05-16T18:31:45.507
2023-05-14T16:23:44.930
288172
288172
[ "bayesian", "generalized-linear-model", "causality", "stan", "generalized-did" ]
615843
2
null
615770
1
null
I contacted the author (Renaud Lambiotte) and he shared the answer with me. The change in modularity of removing node $x$ from its community $C_x$ is: $$\triangle Q_{remove} = - \frac{1}{m} \sum_{i \in C_x \setminus \{x\}} \Big[ A_{ix} - \frac{k_i k_x}{2m} \Big] $$ The change in modularity of inserting node $x$, which is now alone in its community, into $C_1$ is: $$ \triangle Q_{insert} = \frac{1}{m} \sum_{i \in C_1} \big[ A_{ix} - \frac{k_i k_x}{2m} \Big]$$ Where $A_{ix}$ is the weight of the edge between vertices $i$ and $x$, $m = \sum_{i,j} A_{ij}$, and $k_i$ and $k_x$ are the degrees of vertices $i$ and $x$ respectively.
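A direct numpy transcription of these formulas (using the answer's definitions of $m$ and $k$; the community arguments are lists of node indices, and the toy graph at the end is only an illustration):
```
import numpy as np

def delta_q_remove(A, x, C_x):
    """Modularity change from removing node x from its community C_x (x included in C_x)."""
    m = A.sum()                 # m = sum_{i,j} A_ij, as defined above
    k = A.sum(axis=1)           # (weighted) degrees
    others = [i for i in C_x if i != x]
    return -sum(A[i, x] - k[i] * k[x] / (2 * m) for i in others) / m

def delta_q_insert(A, x, C_1):
    """Modularity change from inserting the (now isolated) node x into community C_1."""
    m = A.sum()
    k = A.sum(axis=1)
    return sum(A[i, x] - k[i] * k[x] / (2 * m) for i in C_1) / m

# tiny example: 4-node path graph, node 1 currently in community [0, 1]
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
print(delta_q_remove(A, 1, [0, 1]), delta_q_insert(A, 1, [2, 3]))
```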
null
CC BY-SA 4.0
null
2023-05-14T16:00:54.893
2023-05-14T16:00:54.893
null
null
242191
null
615844
2
null
615836
3
null
Technically, according to [Wikipedia](https://en.wikipedia.org/wiki/Markov_chain), > A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. I suppose that you could re-define your data in a way to incorporate a series of "events" that represents covariate exposures or patterns of exposures, but that seems unnecessarily complicated here. There is no reason to go beyond your simple two-state model, provided that you take care in defining your predictor variables. You are correct that a Cox model only evaluates predictor values that are in place at event times. Nothing, however, prevents you from defining time-varying predictors that incorporate the histories of covariates in some way: cumulative values, running averages, averages weighted toward more recent observations, anything that makes sense based on your understanding of the subject matter. Then, as Frank Harrell points out in another answer (+1), it won't really matter whether you set the problem up as a two-state Markov model or as a Cox model.
null
CC BY-SA 4.0
null
2023-05-14T16:14:26.240
2023-05-14T16:14:26.240
null
null
28500
null
615845
1
null
null
3
70
I am currently looking into the correlation between academic freedom (my independent variable) and university rankings (my dependent variable) using OLS. I find a significant negative correlation, but since the model has only one explanatory variable, this result suffers from omitted variable bias. In order to take this into account, I want to introduce some control variables into my model (variables that are correlated with both my independent and dependent variables). One of these controls is the presence of democracy/civil liberties (as measured by the Freedom House dataset) in the country. When I regress academic freedom on these variables, I find a very high correlation (the coefficient for both democracy and civil liberties is 0.95). Because of this high correlation, I was wondering whether it is a good idea to use democracy/civil liberties as a control variable to isolate the effect of academic freedom. Is there a risk of a biased coefficient because of this high correlation? If there is a risk, would using proxies for institutional quality that are less correlated with academic freedom be a solution to still control for democracy in my model?
Is it a bad idea to use a variable that is strongly correlated with my independant variable of interest as a control variable?
CC BY-SA 4.0
null
2023-05-14T16:23:09.683
2023-05-14T17:39:12.673
null
null
382870
[ "confounding", "controlling-for-a-variable" ]
615847
1
null
null
1
21
I am reading chapter 10 of the Handbook of Markov Chain Monte Carlo (Chapman and Hall/CRC, 2011), by James P. Hobert, on the Data Augmentation algorithm, which honestly just looks like a Gibbs sampler to me. In section 10.2.4 (page 268), on the Central Limit Theorem, they calculate the covariance between $X_1$ and $X_0$, assuming that $X_0$ is sampled from the invariant distribution $f_X$, which implies that $X_1$ is also distributed according to $f_X$. If this is the case, shouldn't they both be independent? In what sense are they both sampled from an invariant distribution if they're not independent samples?
What does it mean for a Markov chain to have an invariant distribution but not be iid?
CC BY-SA 4.0
null
2023-05-14T16:57:53.383
2023-05-14T17:01:38.337
2023-05-14T17:01:38.337
44269
387957
[ "markov-chain-montecarlo" ]
615848
1
615854
null
4
725
Given that $X_1, \ldots, X_n$ are conditionally independent and identically distributed random variables and that, given a value of $\theta$, $X_i \mid \theta \sim \operatorname{Bernoulli}(\theta), i = 1, \ldots, n$, and given that $$r(i) = \mathbb{P}(X_i = 1 \mid X_1, \ldots, X_{i-1} = 1)$$ how do I prove that $r(i)$ is always increasing?
# My approach
In order to prove that $r(i)$ is always increasing, I need to prove that $r(i) - r( i -1 )> 0, \forall i$. I know that $X \sim \operatorname{Bernoulli}(\theta)$, but I don't know the "closed form" of $r(i)$ for a given $i$.
Proving that a function is always increasing
CC BY-SA 4.0
null
2023-05-14T17:22:13.317
2023-05-16T00:07:50.947
2023-05-14T18:29:19.280
285155
285155
[ "probability", "distributions", "bayesian", "posterior" ]
615849
2
null
615845
3
null
In OLS you have [omitted-variable bias](https://en.wikipedia.org/wiki/Omitted-variable_bias#In_linear_regression) if the omitted predictors would have non-zero coefficients in the model and are correlated with the included predictors. Then the included predictors are correlated with the error term in the model. To minimize bias in your study, you want to include all predictors that are correlated with academic freedom and would still be associated with the outcome when academic freedom is taken into account. The problem with including other predictors correlated with academic freedom isn't bias. Including a predictor whose omission would lead to omitted-variable bias would, if anything, reduce bias. The problem with including multiple correlated predictors is that their individual coefficient estimates will then have high variances, due to [multicollinearity](https://stats.stackexchange.com/tags/multicollinearity/info). Depending on the data, you might thus find that neither predictor individually has a "significant" association with outcome. That's not necessarily a problem. You could, for example, evaluate the combination of the measures of academic freedom and democracy together instead of trying to isolate them individually. The negative covariances among the coefficient estimates can offset the high variances of the individual estimates when you consider them together. An alternative might be to treat the measure of democracy as an [instrumental variable](https://stats.stackexchange.com/tags/instrumental-variables/info), as explained in detail in a [Wikipedia article](https://en.wikipedia.org/wiki/Instrumental_variables_estimation) and more heuristically on this [Cross Validated page](https://stats.stackexchange.com/q/563/28500). In that case you want the instrumental variable to be highly correlated with the included predictor of primary interest but not with the model's error term and not to be associated with outcome except insofar as the instrument is associated with the included predictor. Whether this makes sense in your situation requires careful application of your understanding of the subject matter. If you want to take that approach you should consult with a statistician or econometrician experienced in such modeling, as there are many underlying assumptions and ways that you might be led astray.
null
CC BY-SA 4.0
null
2023-05-14T17:39:12.673
2023-05-14T17:39:12.673
null
null
28500
null
615850
1
null
null
1
32
I'm aware of the issues underlying using PCA to feature select ([https://blog.kxy.ai/5-reasons-you-should-never-use-pca-for-feature-selection/](https://blog.kxy.ai/5-reasons-you-should-never-use-pca-for-feature-selection/)) and ([https://towardsdatascience.com/pca-is-not-feature-selection-3344fb764ae6](https://towardsdatascience.com/pca-is-not-feature-selection-3344fb764ae6)). However, I need to consider potential alternative methods of feature selection rather than just looking at correlation, so for now I'm looking at using PCA to see if it might be useful. My original dataset consists of thousands of proteins ( rows= samples, columns= the different proteins), where the values correspond to their concentration values. Values were centered and log transformed prior to PCA. Using PCA as an unsupervised way to "feature select", I have selected the first 30 PCs that account for ~80% of the explained variation. For each feature, I have multiplied the loadings by the 'proportion of variance explained'. Then I took the sum of all loadings for each feature, I then use a cut off point and select only those features which have a final loading value >= to that cut off ( supposedly the most important in relation to the PCs). My confusion lies with what exactly does a feature with a high loading mean in relation to the feature in the original dataset? For example, from this link ([https://towardsdatascience.com/pca-is-not-feature-selection-3344fb764ae6](https://towardsdatascience.com/pca-is-not-feature-selection-3344fb764ae6)), he explains "The only way PCA is a valid method of feature selection is if the most important variables are the ones that happen to have the most variation in them". In other words does this mean features with high loadings would show the most variation in the original dataset? Ultimately, I would like to select features which do not show a large variation in the original dataset across samples, so should I select those with a lower loading? If I'm on the wrong tracks entirely, considering the definition of loadings as "the covariances/correlations between the original features and the unit-scaled components", if features with low loadings are not informative/not important for the PCA, what is that telling me about the variance or otherwise of the feature in the original dataset? I'm aware of similar questions on Stack/Cross-Validated, but none that clarify this point. [Using PCA for feature selection?](https://stats.stackexchange.com/questions/506474/using-pca-for-feature-selection?rq=1) [Using principal component analysis (PCA) for feature selection](https://stats.stackexchange.com/questions/27300/using-principal-component-analysis-pca-for-feature-selection) Any nudges in the right direction would be appreciated.
What do my feature's PCA loadings tell me about the features in the original dataset (using PCA for feature selection)?
CC BY-SA 4.0
null
2023-05-14T17:46:32.510
2023-05-14T20:07:10.580
2023-05-14T20:07:10.580
361346
361346
[ "pca", "feature-selection" ]
615851
1
null
null
0
11
I have data gathered from a longitudinal study, where each participant answered multiple questionnaires per day. What I would like to do is a simple hypothesis test (something like a Welch's t-test) to check whether the average answers (numerical values) given by different groups (e.g. male vs. female; students vs. workers, etc.) differ significantly from each other. My problem is that I am not sure how the fact that we have repeated measurements of each participant affects the applicability of 'standard' tests for a difference in means. Usually, I use mixed linear models with participant id as a random effect, but I would prefer to use a simpler model/test for reporting. My questions are therefore:
- Are there any pitfalls when applying statistical tests for a difference of group means (such as a t-test) naively to such longitudinal data?
- Do variations of such tests exist that address the within-subject covariance?
- If I have to use mixed linear models (with an lmer formula like y ~ 1 + sex + (1|subject)): the grouping variables of interest (sex etc.) do not vary within subjects; does this affect the model?
I'm relatively inexperienced with analyzing longitudinal data, so any advice would be appreciated. :)
Simple hypothesis test for group differences in a longitudinal studies
CC BY-SA 4.0
null
2023-05-14T18:25:08.403
2023-05-14T18:25:08.403
null
null
161989
[ "hypothesis-testing", "mixed-model", "panel-data" ]
615852
1
null
null
1
22
## Problem Statement
Let's say I have multiple solved OLS regressions of the form $Y \sim X_1$, $Y \sim X_2$, ..., $Y \sim X_n$. I want to find which two predictors combined are the best, using MSE as the measure of prediction quality, such that $Y \sim X_i + X_j$ yields the lowest MSE over all $i, j \in [1,n]$.
---
## Approach 1: Intuition
The intuitive and immediate answer is something along the lines of: "find the two with the least correlation and the lowest MSEs, and that will yield a lower MSE". The problem I'm having with that is formalizing the balance between "lowest MSEs" and "least correlation". If I pick the two with the lowest MSEs, for example, they might be perfectly collinear and that won't help. If I pick the two with the lowest pairwise correlation, how will I know that each is predictive enough so that their combined MSE is the lowest?
---
## Approach 2: Brute Force
This seems straightforward enough: for every $X_i, X_j$, run a regression, calculate the MSE, and choose the lowest (a brute-force sketch is given below).
---
## Approach 3: Better way?
Is there some approach out there that solves this in a more rigorous/formal way? Maybe some axiom that says "the least correlated pair will always yield the lowest MSE, only if something something..."? I feel like there's some way of figuring this out, but everything I'm thinking of leads me to some pairwise correlation... Is the best approach just trying out all pairs?
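For Approach 2, a brute-force sketch with scikit-learn (this uses in-sample MSE for simplicity; swap in cross-validation if out-of-sample comparisons are wanted):
```
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def best_pair(X, y):
    """Fit y ~ X_i + X_j for every pair and return the pair with the lowest MSE.
    X is an (n_samples, n_features) array, y an (n_samples,) array."""
    best_pair_idx, best_mse = None, np.inf
    for i, j in combinations(range(X.shape[1]), 2):
        Xij = X[:, [i, j]]
        pred = LinearRegression().fit(Xij, y).predict(Xij)
        mse = mean_squared_error(y, pred)
        if mse < best_mse:
            best_pair_idx, best_mse = (i, j), mse
    return best_pair_idx, best_mse
```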
Using multiple individual regressions Y~X_i to find the best two combination of predictors
CC BY-SA 4.0
null
2023-05-14T18:50:39.413
2023-05-14T18:50:39.413
null
null
224247
[ "regression", "correlation", "multiple-regression", "error" ]
615854
2
null
615848
9
null
By the law of iterative expectations and conditional independence, for any $n \in \mathbb{N}$ and $u_i \in \{0, 1\}$, $i = 1, \ldots, n$, we have \begin{align} & P(X_1 = u_1, \ldots X_n = u_n) = E[P(X_1 = u_1, \ldots, X_n = u_n|\theta)] \\ =& E[P(X_1 = u_1|\theta)\cdots P(X_n = u_n|\theta)] \\ =& E[\theta^s(1 - \theta)^{n - s}], \tag{1} \end{align} where $s = u_1 + \cdots + u_n$. It then follows by $P(A|B) = P(A \cap B)/P(B)$ and $(1)$ that \begin{align} r(i) = \frac{P(X_1 = \cdots = X_i = 1)}{P(X_1 = \cdots = X_{i - 1} = 1)} = \frac{E[\theta^{i}]}{E[\theta^{i - 1}]}, \quad i = 1, 2, \ldots. \end{align} So to show $r(i + 1) \geq r(i)$, it suffices to show that $(E[\theta^i])^2 \leq E[\theta^{i + 1}]E[\theta^{i - 1}]$. But this is a direct consequence of the [Cauchy-Schwarz inequality](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality#Probability_theory): \begin{align} E[\theta^i] = E\left[\theta^{(i + 1)/2}\theta^{(i - 1)/2}\right] \leq \sqrt{E[\theta^{i + 1}]E[\theta^{i - 1}]}. \end{align} This completes the proof.
null
CC BY-SA 4.0
null
2023-05-14T18:53:14.903
2023-05-15T04:38:26.737
2023-05-15T04:38:26.737
20519
20519
null
615855
1
null
null
0
35
Assume you have two samples: The first one has a significantly positive average, the second one has a significantly negative average. Does this directly imply that the averages of the first and second sample differ, with the average of the first sample being higher?
Sample comparison via t-test
CC BY-SA 4.0
null
2023-05-14T19:17:24.913
2023-05-14T19:17:24.913
null
null
383321
[ "t-test" ]
615856
2
null
615836
3
null
> I'm worried about the effect of individual variance in this setup. For example… those individuals with a tendency to quickly transition into state 2, they will have a small impact on the transition parameter estimate, and those individuals who never transition, will have an over represented effect on the transition parameter estimates The individual variations can be absorbed into a single survival curve. Whether or not an individual is susceptible to transitioning quickly or not might be unknown, yet that uncertainty can be expressed into the survival curve. Some individuals transition early others do not. That's exactly what the survival curve is about, it models that variation between different individuals. A similar situation is described in the question: [How to introduce uncertainty in fitting the original data when simulating survival curves?](https://stats.stackexchange.com/questions/615412/)
null
CC BY-SA 4.0
null
2023-05-14T19:24:23.880
2023-05-14T19:24:23.880
null
null
164061
null
615857
2
null
600918
1
null
- No. It means there is no cointegration at all. You need to reject the middle null hypothesis to conclude cointegration.
- If you do have cointegration, you can take the first eigenvector and get the weights: `weights = list(jres.evec.T[0] / jres.evec.T[0][0])`
- You need to fit a VECM next, but not on this data (because of the lack of cointegration): `vecm.VECM(timeseries, deterministic="n", k_ar_diff=order, coint_rank=rank)`. The rank comes from the Johansen test and the order from a VAR, or from `vecm.select_order(timeseries, maxlags=max_lag, deterministic="co")`.
null
CC BY-SA 4.0
null
2023-05-14T19:32:01.580
2023-05-14T19:32:01.580
null
null
127115
null
615858
2
null
615132
2
null
> Can someone please tell if this approach to Bootstrapping Longitudinal/Repeated Measures data is statistically valid? It works if the bias of the coin is assumed to be the same among the different students. In that case the results from each student are like a repetition of the sampling/experiment and multiple samples from a population give insight into the statistical variation in the sampling distribution. This variation can be expressed by using bootstrapping (a direct calculation would be easier, it is not clear whether this question is actually about the bootstrapping, and instead more about considering the students as independent samples that are an indication of the sampling distribution). --- Sidenote 1: in your code you use expressions like `h_given_h`, which is not very clear language. The frequency of HH, which you analyse, is different from the frequency of H given H, which you use in your wordings. Example: Consider the case where the bias is fully correlated "There is a coin where if it lands head then the probability of the next flip being heads is 1 (and if tails then the next flip being tails is also 1)", then the frequencies of HH and TT will be around 0.5 (depending on the first flip being H or T), but H given H will be one, and not 0.5. --- Sidenote 2: The method works, but is not very powerful. With the same example above, you have students with either only TT or only HH. You could have a very clear table like ``` 1 HH 343 2 HT 0 3 TH 0 4 TT 479 ``` But if these results are from only a few students (consider the extreme case of only 2 students, one flipped 343 HH's and the other flipped 479 TT's), then your bootstrapping will generate very large confidence intervals. The confidence intervals express the frequencies of HH and TT, and that includes the first flip. So while you have 343 and 479 results, many different measurements, you treat the variability here as only the 2 students first flips.
null
CC BY-SA 4.0
null
2023-05-14T21:10:43.470
2023-05-14T21:20:57.497
2023-05-14T21:20:57.497
164061
164061
null
615859
1
null
null
1
31
I am trying to compare the asset price forecasting abilities of SSMs with ARIMA and VAR models. To keep it brief, this is the plan that I am following: - Collect multivariate data - Perform ADF stationarity test on the data, apply differencing where needed - Estimate a VAR model on the differenced data - Perform Granger causality test, select variables that Granger causes the forecast variable or is very close to doing so - Estimate ARIMA model - Perform forecast of ARIMA and VAR, and back-difference to original scale (price), calculate RMSE - Calibrate SSM (univariate and multi-variate through identification) - Forecast using the SSM with the original undifferenced data, since SSMs do not require stationary series, and calculate RMSE - Compare results In an overall brief sense, this is what I am currently doing. It's my first time working with such models, and I am therefore interested in getting input. Does it make sense, or is something missing? Edit: The SSM is a LTI SSM with Deterministic and Stochastic elements, given by the equations: [](https://i.stack.imgur.com/iJC5X.png) Where the Stochastic elements matrices C and F can be used to find the Kalman gain matrix K by: K = CF^-1 The VAR and ARIMA models use their standard formulations
ARIMA, VAR and State Space Model (SSM) forecasting comparison
CC BY-SA 4.0
null
2023-05-14T21:37:06.080
2023-05-15T09:39:31.503
2023-05-15T09:39:31.503
387945
387945
[ "time-series", "forecasting", "arima", "vector-autoregression", "state-space-models" ]
615860
1
null
null
1
22
I have a numerical process that generates percentile values of a certain random variable $X$, meaning that I have $x_i$ values and their corresponding probability values $\hat{F}(x_i)$. I want to fit a smooth non-parametric distribution function to these data, and I am looking for a function or package in R, Matlab or Python that can do this, similar to what is available for kernel density estimation. Is there something like this available?
Is there a package/function in R, Matlab or Python to perform smooth non-parametric fitting of empirical CDF data?
CC BY-SA 4.0
null
2023-05-14T21:47:04.023
2023-05-14T21:47:04.023
null
null
387964
[ "nonparametric", "quantiles", "cumulative-distribution-function" ]
615861
1
null
null
0
39
I am simulating the revenues of a portfolio of items using one input variable. This variable is randomly drawn from a normal distribution n times, where n is the number of Monte Carlo simulations. Revenue is the output parameter, and it does depend on this random variable. My understanding is that, in general, as per the Central Limit theorem, the output parameter of a Monte Carlo simulation should have a normal distribution. However, my case is peculiar: although the revenues depend on the randomly drawn variable mentioned above, for particular values of this variable the revenues are simply set to 0. This does not mean they don't depend on the random variable, just that for those values they are coded to be 0. As a result, the distribution of the revenues at the end of the simulations is not normal. In my opinion this makes sense, since this is how the model is supposed to work and it reflects reality. In the end, I use the obtained distribution of revenues to extract statistics from it (mean, median, percentiles, ...), regardless of its shape. Does this make sense, or are the concepts of "Monte Carlo" and "normal distribution of the output parameter at the end of the simulations" necessarily tied together?
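A stripped-down illustration of what I mean, in Python; the distribution parameters and the zero-revenue rule are made up, but they mimic the structure of my model:

```
import numpy as np

rng = np.random.default_rng(0)
driver = rng.normal(100.0, 15.0, size=100_000)        # the randomly drawn input variable

# revenues are coded to 0 for particular values of the input (made-up rule)
revenue = np.where(driver < 90.0, 0.0, 20.0 * (driver - 90.0))

print(revenue.mean(), np.median(revenue))
print(np.percentile(revenue, [5, 50, 95]))
# the distribution has a spike at 0 plus a continuous part, so it is clearly not normal
```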
Monte Carlo simulations and Central Limit theorem
CC BY-SA 4.0
null
2023-05-14T22:06:17.910
2023-05-14T22:06:17.910
null
null
383746
[ "random-variable", "simulation", "monte-carlo", "central-limit-theorem" ]
615862
1
null
null
0
15
I am searching for some studies to evaluate the effect of drug x use among patients using drug y. One study reports the odds ratio for the concurrent use of both drugs. Can I assume it is the same as the odds ratio for drug x use with drug y as the independent variable?
Odds ratio interpretation of dependent and independent variables
CC BY-SA 4.0
null
2023-05-14T22:16:01.277
2023-05-14T22:16:01.277
null
null
387848
[ "interpretation", "odds-ratio", "dependent-variable" ]
615863
2
null
615685
0
null
I have since found a solution that works quite efficiently and wanted to briefly state it here for anyone else who ends up with a similar question. The solution works, but it turned out to be a bit more complex than I initially anticipated (a rough code sketch follows the list):

- As in linear programming, find the polytope vertices defined by the inequalities above.
- Triangulate the polytope.
- Calculate the volume of each resulting simplex.
- Select one simplex at random, with each simplex weighted by its share of the overall polytope volume.
- Draw a sample from a flat (symmetric) Dirichlet distribution of the same dimension as the simplex.
- Transform the resulting barycentric coordinates into cartesian coordinates.
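A rough sketch of the whole procedure in Python (scipy + numpy); the inequalities below describe a unit square purely as a stand-in for the actual polytope, and the interior point has to be chosen feasible for your own constraints:

```
import numpy as np
from math import factorial
from scipy.spatial import Delaunay, HalfspaceIntersection

rng = np.random.default_rng(0)

# 1. Polytope in H-representation A x <= b (here: the unit square, as a placeholder)
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])
interior = np.array([0.5, 0.5])                    # any strictly feasible point
halfspaces = np.hstack([A, -b[:, None]])           # scipy expects A x + c <= 0
vertices = HalfspaceIntersection(halfspaces, interior).intersections

# 2.-3. Triangulate the vertex set and compute simplex volumes
d = vertices.shape[1]
simplices = vertices[Delaunay(vertices).simplices]               # (n_simplices, d+1, d)
vols = np.abs(np.linalg.det(simplices[:, 1:] - simplices[:, :1])) / factorial(d)

# 4.-6. Pick a simplex by volume, draw flat-Dirichlet barycentric weights, map to cartesian
def sample_point():
    i = rng.choice(len(vols), p=vols / vols.sum())
    w = rng.dirichlet(np.ones(d + 1))
    return w @ simplices[i]

samples = np.array([sample_point() for _ in range(1000)])
```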
null
CC BY-SA 4.0
null
2023-05-14T23:18:44.017
2023-05-14T23:18:44.017
null
null
191317
null
615864
1
615896
null
1
39
I want to apply an ARIMAX model to Google Trends data. I used a Python package to get daily data. However, this data contains a lot of zeros, so when I take the first difference of logs I get a lot of inf values in Python. How can I solve that? Ideally I would like an approach that works for a single keyword's search series, but if there is a fast way to build a weighted index (with rescaling), that may also help.
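A minimal illustration of the problem in Python (the numbers are made up, standing in for the daily trends series):

```
import numpy as np
import pandas as pd

s = pd.Series([12.0, 0.0, 7.0, 0.0, 0.0, 25.0])
print(np.log(s).diff())    # -inf / inf / nan wherever a zero is involved

# log1p maps 0 to 0 before differencing, so no inf appears, but whether that
# (or adding a small constant) is statistically appropriate is part of my question
print(np.log1p(s).diff())
```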
ARIMAX model for Google trends: trouble with lots of zeros
CC BY-SA 4.0
null
2023-05-14T23:24:48.570
2023-05-15T09:47:44.283
2023-05-15T06:45:30.240
53690
361080
[ "time-series", "arima", "logarithm", "zero-inflation" ]
615865
1
null
null
0
35
I have a log-likelihood that looks like the following.

$$ \log(p(Z|\Theta)) = -\sum_{p = 1}^{N} \left[ L \log\left(\pi\left(A F(p, \Theta) + \sigma_n^2\right) \right) + \frac{\sum_{l = 1}^{L} Z_l(p) }{A F(p, \Theta) + \sigma_n^2} \right] $$

It comes from an exponential distribution. The observations are $Z_l(p)$ and $A F(p, \Theta) + \sigma_n^2$ is the model. However, the scale factor $A$ and the additive factor $\sigma_n^2$ are estimated empirically or from prior knowledge before computing this likelihood.

Some real-world measurements $Z$ give me positive log-likelihood values. I want a quantitative measure of how well my model fits the measurements, which is why I compare the likelihoods of the different real-world measurements: a measurement with a log-likelihood close to $0$ (but negative) fits my model better than a measurement with a log-likelihood far from $0$ (but negative). However, now that I also get positive log-likelihood values, I am confused about how to judge whether the measurements fit my model. Sometimes, by visual inspection, I find that a log-likelihood of $-2000$ reconstructs my data better than a log-likelihood of $50$, for example. To emphasise again, this is from visual inspection. Any input is appreciated.

My suspicion: I think $A$ and $\sigma_n^2$ are the reason the log-likelihood can be larger than $0$. Am I right? If yes, how can I say something about my estimation (not just from visual inspection, but also from the likelihood)?
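For reference, this is how I evaluate the expression above (a direct transcription in Python; `Z`, `F_vals`, `A` and `sigma2` are placeholders for my measurements and pre-estimated factors):

```
import numpy as np

def log_likelihood(Z, F_vals, A, sigma2):
    """Z has shape (L, N); F_vals has shape (N,) and holds F(p, Theta) for p = 1..N."""
    L = Z.shape[0]
    mu = A * F_vals + sigma2                 # the model term A*F(p, Theta) + sigma_n^2
    return -np.sum(L * np.log(np.pi * mu) + Z.sum(axis=0) / mu)
```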
Why the log likelihood is positive in some cases
CC BY-SA 4.0
null
2023-05-14T23:27:25.483
2023-05-14T23:27:25.483
null
null
327104
[ "maximum-likelihood", "likelihood", "model", "scale-parameter" ]
615866
1
null
null
0
56
My instructor on a recorded online data-science learning platform described MSE as equal to RSE squared. After reviewing the formulas for MSE and RSE, however, I don't understand how or when this could be true. The formulas are:

$\text{RSE} = \sqrt{\dfrac{\text{SSE}}{n-p-1}}$

$\text{MSE} = \dfrac{\text{SSE}}{n}$

If RSE accounts for degrees of freedom and MSE does not, when/how can MSE be equal to RSE squared?
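Here is the kind of comparison I have in mind, using the formulas above on toy numbers (made up):

```
import numpy as np

y     = np.array([3.1, 4.9, 7.2, 8.8, 11.1])
y_hat = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
n, p = len(y), 1                     # one predictor

sse = np.sum((y - y_hat) ** 2)
mse = sse / n
rse = np.sqrt(sse / (n - p - 1))
print(mse, rse ** 2)                 # not equal, because the denominators differ
```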
In which contexts is MSE equal to RSE squared?
CC BY-SA 4.0
null
2023-05-14T23:38:25.307
2023-05-15T06:53:58.463
2023-05-15T04:42:08.223
347696
null
[ "python", "regression" ]
615867
2
null
615271
0
null
So I think what I was looking for was: $$\frac{y}{x}=\alpha+\varepsilon$$ which rewrites to: $$ y=\alpha x + x \varepsilon $$ I think this could be expressed as a weighted least squares regression with $1/x$ as the weights.
null
CC BY-SA 4.0
null
2023-05-15T00:38:57.910
2023-05-15T00:38:57.910
null
null
346961
null
615868
1
null
null
0
19
As far as I know, one of the main advantages of using an RNN is that it can learn from sequences of text in a (textual) data set. If I am correct, this is intuitive to understand when an RNN is applied to a task such as text generation (e.g. predicting the next word, building a Q&A bot, text summarization). I have seen a lot of blog posts and online course materials using RNNs for text classification tasks such as sentiment analysis or news article categorization.

My question: what is the benefit of using an RNN for these types of text classification tasks, when they can also be done with other ML algorithms such as SVM, Naive Bayes, etc.? The reason for asking is that I am not sure how learning the sequence of the text helps, or whether it is even needed, for classification tasks where word order does not seem to matter.
Advantages of Using Recurrent Neural Network (RNN) Over Other Machine Learning Algorithms for Classification Tasks
CC BY-SA 4.0
null
2023-05-15T00:41:31.923
2023-05-15T00:41:31.923
null
null
161495
[ "neural-networks", "classification", "recurrent-neural-network" ]
615871
2
null
438880
0
null
Not a lot of info to go off from, but a few things to note:

- OP mentions the marginal distribution is roughly symmetric and discrete. I've surmised this from their mention of a KDE plot.
- OP mentions "To me it doesn't seem to make sense to use a Poisson because the variance doesn't equal the mean". Since OP mentions examining the marginal distribution, I will assume OP has computed the sample mean and variance of the outcome and noticed they are not equal. The size of the discrepancy is not mentioned, but let's assume for now they are so different that sampling variability could not reasonably explain the difference.
- OP doesn't mention any covariates, so I'm inclined to think they are only looking at the marginal distribution.

There is no mention of what kind of predictions OP would like to make. Let's assume for now that they are interested mostly in the posterior and that they could make predictions of many types from there.

Perhaps OP's data looks like

```
set.seed(0)
n <- 250
x <- rnorm(n)
y <- rpois(n, exp(5 + 0.03*x))
hist(y)
```

[](https://i.stack.imgur.com/5ZxM0.png)

The data are roughly symmetric, discrete, and the mean is not equal to the variance. Can we model this with Bayesian statistics? Sure, why not. Let's assume a Gamma prior on the rate parameter and use a Poisson likelihood. I'll assume OP has enough data that the prior is negligible. Our model will be

$$ y_i \sim \mbox{Poisson}(\lambda_i) $$
$$ \lambda_i \sim \mbox{Gamma}(1, 1) $$

Because the prior is conjugate, we can easily sample from the posterior (the resulting posterior predictive is a negative binomial distribution, but let's pretend we don't know that for a moment). I will simulate 1000 datasets from the posterior predictive distribution and compare them to my `y` (again, pretending we don't know the `x` I've used). Here are some histograms from the posterior predictive, with the real data in dark blue. The replications look fine, nothing too different.

[](https://i.stack.imgur.com/NGdbX.png)

Additionally, we can look at the means and variances of the generated data. While the data are more variable than the replicates, it could be argued that the posterior predictive generates data that looks like the real data.

[](https://i.stack.imgur.com/oNXlC.png)

Anyway, I'm not saying OP should use this model. I'm trying to show OP how you can use Bayesian statistics with a discrete likelihood even though the mean and variance are not equal. I'm under the impression OP is being too restrictive with their modelling assumptions.
null
CC BY-SA 4.0
null
2023-05-15T03:46:58.490
2023-05-15T03:46:58.490
null
null
111259
null
615872
1
null
null
0
10
I have a question about error propagation in RMS.

My surface profilometer has a P-V error of 100 nm (it follows a Gaussian distribution), and I measure the z-axis positions of the object surface at many points (more than 10000 data points).

Now I want to calculate the RMS error of the object surface:

RMS = SQRT[ SUM{ (data - reference)^2 } / N ]

What is the uncertainty of this RMS value, and how can I calculate the error propagation?
Error propagation from P-V error to RMS error
CC BY-SA 4.0
null
2023-05-15T04:09:43.310
2023-05-15T04:09:43.310
null
null
387974
[ "mathematical-statistics", "rms", "error-propagation" ]
615873
1
null
null
0
11
I have a question about error propagation in RMS.

My surface profilometer has a P-V error of 100 nm (it follows a Gaussian distribution), and I measure the z-axis positions of the object surface at many points (more than 10000 data points).

Now I want to calculate the RMS error of the object surface:

RMS = SQRT[ SUM{ (data - reference)^2 } / N ]

What is the uncertainty of this RMS value, and how can I calculate the error propagation?
Error propagation from P-V error to RMS error
CC BY-SA 4.0
null
2023-05-15T03:21:16.450
2023-05-31T20:47:39.830
2023-05-31T20:47:39.830
11887
387974
[ "mathematical-statistics", "error-propagation" ]
615874
1
null
null
1
28
I am trying to build a machine learning model for spatiotemporal data. My predictors are all monthly climate variables and as such display spatial autocorrelation. The target dataset, however, has relatively low spatial autocorrelation (it is quite noisy). What is the procedure for a robust estimate?

One method I have seen mentioned is to combine a random forest with a kriging approach, i.e. fit a random forest to the training data and then combine the random forest prediction with kriging applied to the random forest residuals. Is this a robust way to tackle spatial autocorrelation? Others suggest applying spatial cross-validation. Is there a preferred way, and if so, are there any typical Python libraries that can help me achieve this? I know that in R there are libraries specifically for spatial datasets, but as far as I can tell they don't exist in Python.
Spatial autocorrelation machine learning python
CC BY-SA 4.0
null
2023-05-15T04:55:43.060
2023-05-15T04:55:43.060
null
null
387976
[ "machine-learning", "python", "random-forest", "autocorrelation", "spatial" ]
615875
1
null
null
1
56
A fair die is rolled until a $1$ is observed. Let $X$ be the number of rolls and $Y$ be the number of $6$s obtained in these $X$ rolls. Find (a) $E[Y|X=x]$, (b) $E[Y]$.

My solution: once we are given that $X=x$, we know that the first $1$ was obtained on the $x^{th}$ roll and that $1$ never appeared in the first $x-1$ rolls, so $Y|X=x$ follows a Binomial$(x-1, 1/5)$ distribution and hence $E[Y|X=x]=\frac{x-1}{5}$.

Now, by the tower property,
$$E[Y]=E[E[Y|X]]=E\left[\frac{X-1}{5}\right]=\frac{E[X]-1}{5}=\frac{6-1}{5}=1.$$

Is my solution correct?
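A quick simulation sketch to sanity-check the answer (in Python):

```
import numpy as np

rng = np.random.default_rng(0)
total = 0
n_sims = 200_000
for _ in range(n_sims):
    y = 0
    while True:
        roll = rng.integers(1, 7)    # fair die, faces 1..6
        if roll == 1:
            break
        y += (roll == 6)
    total += y
print(total / n_sims)                # should be close to 1 if E[Y] = 1
```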
Conditional expectation of random variables
CC BY-SA 4.0
null
2023-05-15T04:57:19.987
2023-05-15T17:13:36.523
null
null
376295
[ "random-variable", "conditional-probability", "binomial-distribution", "conditional-expectation" ]
615877
2
null
615866
0
null
Are you sure that they use different denominators? Some definitions use different denominators, and some don't. With the definitions you give, they would only be equal if $\mathrm{SSE} = 0$: since $(\sqrt{x})^2 = x$, equality would otherwise require $n = n-p-1$, i.e. $p=-1$, and degrees of freedom are usually defined as non-negative.
null
CC BY-SA 4.0
null
2023-05-15T05:43:08.037
2023-05-15T06:53:58.463
2023-05-15T06:53:58.463
35989
35989
null
615878
2
null
615839
0
null
You got it right. The variables that appear in the model must be well behaved (there are some conditions they must satisfy) so that the model and especially the estimator(s) would work as expected. The transformations that you apply outside of the model to make the variables well behaved are beyond the scope of the conditions.
null
CC BY-SA 4.0
null
2023-05-15T06:54:06.790
2023-05-15T06:54:06.790
null
null
53690
null
615879
2
null
615838
2
null
- Either a variable Granger-causes another one or it does not. There can be no partial causation. Think of it in terms of model equations; either a coefficient is zero or it is not, there is no other possibility. What can be partial is your certainty: you can be somewhat certain that a variable Granger-causes another one.
- You want to use the results of a Granger-causality test for feature selection when building a forecasting model. That boils down to using statistically significant lags of some variables. Rob Hyndman has explained in "Statistical tests for variable selection" why that is a bad idea. Instead, consider selecting your variables using information criteria such as AIC, which are optimized for feature selection for forecasting (a rough sketch of what I mean is at the end of this answer).
- Using a VAR for forecasting stock prices or the gold price sounds quite naive, unless it is meant as an exercise. If these prices were following VAR-type patterns, investors would be tempted to exploit them. By doing so (buying low and selling high), they would affect the supply and demand of these assets, so that the price patterns would disappear. As there are a lot of people and firms trying to do that all over the world, this happens very fast. Therefore, patterns disappear very fast, most of the time before you are able to exploit them.
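As a rough sketch of what I mean by AIC-based selection (Python/statsmodels; the data frame `df`, the candidate regressors and the ARIMA order are placeholders):

```
import itertools
from statsmodels.tsa.statespace.sarimax import SARIMAX

candidates = ["gold", "oil", "rates"]           # potential exogenous regressors (placeholders)
best = None
for k in range(len(candidates) + 1):
    for subset in itertools.combinations(candidates, k):
        exog = df[list(subset)] if subset else None
        res = SARIMAX(df["target"], exog=exog, order=(1, 1, 1)).fit(disp=False)
        if best is None or res.aic < best[0]:
            best = (res.aic, subset)
print(best)                                     # lowest-AIC set of regressors
```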
null
CC BY-SA 4.0
null
2023-05-15T07:01:29.410
2023-05-15T07:07:44.280
2023-05-15T07:07:44.280
53690
53690
null
615880
1
null
null
0
7
The separator function in SVM is: $f(\mathbf{x})=\sum\limits_{\mathbf{x_i} \in Support}\alpha_i\times K(\mathbf{x}, \mathbf{x_i})-b$ Depending on the kernel, this may correspond to adding any number of attributes to the original problem. However, there are only as many weights $\alpha_i$ as there are support vectors. How does this not limit the model's expressive power by forcing dependencies on the relative importance of the attributes?
Small number of weights in the SVM separator function
CC BY-SA 4.0
null
2023-05-15T07:05:09.800
2023-05-15T07:05:09.800
null
null
266189
[ "machine-learning", "svm" ]
615881
1
null
null
0
16
I am working with species distributions in R. As response variables, I have the area for three different patterns (filling, expansion, unfilling), which can be transformed into a proportion [0-1] by comparing it to the total area available. Two of the response variables are zero-inflated, while one of them is one-inflated. As predictors, I have body mass and dispersal.

As my data are bounded between 0 and 1 but also contain several 0s and 1s, I thought I needed a zero-one inflated beta regression model, which I looked up in R using the `zoib` package. It would be great if I could model the three response variables and the two covariates all together, but first I tried with only one response and one covariate, and I am already failing. As it's not a very common modelling technique, I am struggling to find help online.

This is a sample of my data:

```
myData <- structure(list(
  filling = c(0.3833, 0, 0.0103, 0, 0.0228, 0, 0, 0, 0, 0, 0.1975, 0.0559,
              0.0796, 0.0755, 0, 0, 0, 0.0572, 0.4701, 0.2258),
  expansion = c(0.5052, 0.2222, 0.9381, 0.75, 0.035, 0.9375, 1, 0.9855, 0.0038,
                0.25, 0.4459, 0.9441, 0.0845, 0.2956, 0.1, 0.8444, 1, 0.0804,
                0.2594, 0.3226),
  unfilling = c(0.0458, 0, 0, 0, 0.6717, 0, 0, 0, 0.2123, 0, 0.1624, 0, 0.6285,
                0.0063, 0, 0, 0, 0.0757, 0.1682, 0),
  adult_mass_g = c(92500, 45500, 277.615, 278, 77000, 21666.685, 21666.7,
                   19928.335, 131250, 160166.665, 52999.99, 10.6, 63612.305,
                   777.95, 771, 85, 140.045, 483.5, 12760.04, 19166.665),
  dispersal = c(12.785, 8.654, 0.524, 0.524, 11.558, 5.754, 5.754, 5.496, 17.291,
                17.291, 9.411, 0.7, 10.405, 5.748, 5.722, 0.273, 0.36, 5.737,
                4.301, 5.379)),
  row.names = c(NA, 20L), class = "data.frame")
```

And this is what I have tried so far:

```
zoib(filling ~ adult_mass_g | 0 | adult_mass_g, data = myData,
     zero.inflation = T, one.inflation = F)
```

The error I'm getting is:

```
Error in rnorm((p.xsum - 1) * 4, 0, 0.1) : invalid arguments
```

But I cannot understand what type of error this is. Any comment would be appreciated!

Cross-posted here: [https://stackoverflow.com/questions/76237611/zero-and-one-inflated-beta-regression-modeling-using-zoib-package-in-r](https://stackoverflow.com/questions/76237611/zero-and-one-inflated-beta-regression-modeling-using-zoib-package-in-r)
Zero- and one-inflated beta regression modeling using zoib package in R
CC BY-SA 4.0
null
2023-05-15T07:07:50.247
2023-05-15T07:07:50.247
null
null
365178
[ "r", "zero-inflation", "beta-regression", "one-inflation" ]
615882
1
null
null
0
7
Suppose you have the following situation: the total number of deaths is 120. Of these, 20 individuals had cancer, so the ratio is 20/120 = 0.16. The total number of individuals with cancer was 320. How can you relate 0.16 (i.e., the ratio) to the size, 320, of the group of all patients with cancer?

Thank you in advance
Relate ratios to sample size
CC BY-SA 4.0
null
2023-05-15T07:08:33.907
2023-05-15T07:08:33.907
null
null
14270
[ "descriptive-statistics", "odds-ratio", "ratio" ]
615883
2
null
615646
1
null
The terminology for basic plots like this is not entirely standardised, but I would say this is a fairly typical [dot chart (or Cleveland dot chart)](https://stats.stackexchange.com/a/149146/121522). It does, however, have a couple of features that are not seen in the prototypical dot chart:

- It shows uncertainties/confidence intervals.
- It uses size, colour, and shape to convey some additional pieces of information.

Point #2 may lead you to call this a bubble plot, but I don't think that would be accurate. Bubble plots usually refer to a scatterplot that shows variation in a third dimension using variation in point size.
null
CC BY-SA 4.0
null
2023-05-15T07:20:37.307
2023-05-15T07:20:37.307
null
null
121522
null
615884
2
null
266610
1
null
I also got confused when facing eq. 3.3.12 yesterday; it seemed not "so" obvious as suggested in the text. When I searched for answers, the only method I found was the one above by @Steve and @user106860, thanks a lot. But I soon realized that using $D_i(Y_{1i}-Y_{0i}) = Y_{1i}-Y_{0i}$ under the condition $D_i=1$ is a little tricky and hard to come up with. Moreover, we cannot use that idea to calculate the dual form $\mathrm{E}(Y_{1i}-Y_{0i}|D_i=0)$. Hence, I list a direct method below, which may be easier to follow.

First of all, we change $p(X_i)$ into the probability form $P(D_i=1|X_i)$ in eq. 3.3.12. Then, our target is to verify:
$$ \mathrm{E}\left(\frac{(D_i-P(D_i=1|X_i))Y_i}{P(D_i=0|X_i)P(D_i=1)}\right) = \mathrm{E}(Y_{1i}-Y_{0i}|D_i=1) $$

Similar to the idea used in verifying the formula for the $ATE$ in the footnote in the text, we need to apply the law of iterated expectations to the $LHS$:
$$ \begin{aligned} LHS = \frac{1}{P(D_i=1)}\mathrm{E}\left(\frac{1}{P(D_i=0|X_i)}\mathrm{E}\left((D_i-P(D_i=1|X_i))Y_i | X_i\right)\right) \end{aligned} $$

Let $(D_i-P(D_i=1|X_i))Y_i | X_i \equiv \Delta$; then
$$ \begin{aligned} \mathrm{E}\left(\Delta\right) &= \mathrm{E}\left(\Delta|X_i, D_i=1\right)P(D_i=1|X_i) + \mathrm{E}\left(\Delta|X_i, D_i=0\right)P(D_i=0|X_i) \\
&= (1-P(D_i=1|X_i))\mathrm{E}\left(Y_i |X_i, D_i=1\right)P(D_i=1|X_i) \\
&+ (0-P(D_i=1|X_i))\mathrm{E}\left(Y_i |X_i, D_i=0\right)P(D_i=0|X_i) \\
&= P(D_i=0|X_i)\mathrm{E}\left(Y_i |X_i, D_i=1\right)P(D_i=1|X_i) \\
&- P(D_i=1|X_i)\mathrm{E}\left(Y_i |X_i, D_i=0\right)P(D_i=0|X_i) \\
&= (\mathrm{E}\left(Y_i |X_i, D_i=1\right) - \mathrm{E}\left(Y_i |X_i, D_i=0\right))P(D_i=1|X_i)P(D_i=0|X_i) \\
&\mathop{=========} \limits_{Y_{1i},Y_{0i} \; \perp\!\!\!\!\perp \;D_i|X_i}^{Y_i=Y_{0i}+D_i(Y_{1i}-Y_{0i})} \mathrm{E}\left(Y_{1i}-Y_{0i} |X_i, D_i=1\right)P(D_i=1|X_i)P(D_i=0|X_i) \end{aligned} $$

Substituting into the expression for the $LHS$ above:
$$ \begin{aligned} LHS &= \frac{1}{P(D_i=1)}\mathrm{E}\left(\frac{1}{P(D_i=0|X_i)}\mathrm{E}\left((D_i-P(D_i=1|X_i))Y_i | X_i\right)\right) \\
&= \frac{1}{P(D_i=1)}\mathrm{E}\left(\mathrm{E}\left(Y_{1i}-Y_{0i} |X_i, D_i=1\right)P(D_i=1|X_i)\right) \\
&= \frac{1}{P(D_i=1)} \sum\limits_{x}\left(\sum\limits_{y}(Y_{1i}-Y_{0i})P(Y_{1i}-Y_{0i}|X_i, D_i=1)P(D_i=1|X_i)P(X_i)\right) \\
&= \frac{1}{P(D_i=1)} \sum\limits_{x}\left(\sum\limits_{y}(Y_{1i}-Y_{0i})P((Y_{1i}-Y_{0i}), X_i, D_i=1)\right) \\
&= \sum\limits_{x}\sum\limits_{y}(Y_{1i}-Y_{0i})P((Y_{1i}-Y_{0i}), X_i| D_i=1) \\
&= \sum\limits_{y}(Y_{1i}-Y_{0i})\left(\sum\limits_{x}P((Y_{1i}-Y_{0i}), X_i| D_i=1)\right) \\
&= \sum\limits_{y}(Y_{1i}-Y_{0i})P(Y_{1i}-Y_{0i}| D_i=1) \\
&= \mathrm{E}(Y_{1i}-Y_{0i}|D_i=1) \\
&= RHS \end{aligned} $$

Similarly, we can also verify that
$$ \mathrm{E}\left(\frac{(D_i-P(D_i=1|X_i))Y_i}{P(D_i=1|X_i)P(D_i=0)}\right) = \mathrm{E}(Y_{1i}-Y_{0i}|D_i=0) $$
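As a quick numerical sanity check of the identity, here is a small simulation sketch (Python; the data-generating process and parameters are made up, but they satisfy the conditional-independence assumption):

```
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.binomial(1, 0.4, n)
p_x = np.where(X == 1, 0.7, 0.3)                 # true propensity score P(D=1|X)
D = rng.binomial(1, p_x)
Y1 = 2.0 + 1.5 * X + rng.normal(size=n)
Y0 = 1.0 + 0.5 * X + rng.normal(size=n)
Y = np.where(D == 1, Y1, Y0)

att_direct   = (Y1 - Y0)[D == 1].mean()
att_weighted = np.mean((D - p_x) * Y / ((1 - p_x) * D.mean()))
print(att_direct, att_weighted)                  # both estimates should be close
```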
null
CC BY-SA 4.0
null
2023-05-15T07:23:13.890
2023-05-15T07:23:13.890
null
null
387980
null