Dataset columns (from the viewer summary):

| Column | Type summary |
|---|---|
| Id | string, 1-6 chars |
| PostTypeId | string, 7 classes |
| AcceptedAnswerId | string, 1-6 chars |
| ParentId | string, 1-6 chars |
| Score | string, 1-4 chars |
| ViewCount | string, 1-7 chars |
| Body | string, 0-38.7k chars |
| Title | string, 15-150 chars |
| ContentLicense | string, 3 classes |
| FavoriteCount | string, 3 classes |
| CreationDate | string, 23 chars |
| LastActivityDate | string, 23 chars |
| LastEditDate | string, 23 chars |
| LastEditorUserId | string, 1-6 chars |
| OwnerUserId | string, 1-6 chars |
| Tags | list |
616718
2
null
436124
0
null
This is fine. Ultimately, your neural network is layers of feature extraction followed by a regression on those extracted features. A more routine regression has no restrictions on the coefficients, and neither does the last layer of your neural network. It might just be that a small change to some of the neurons in that final hidden layer results in a large (magnitude exceeding $1$) change in the outcome. So be it.

If your concern is that the output neuron will give a value above $1$ or below $0$ (so no longer having an interpretation as a probability), note that this can happen even when all of the weights are contained within an interval like $(0,1)$. This is a known issue of the [linear probability model](https://en.wikipedia.org/wiki/Linear_probability_model).

If you want to transform the output values so that they are restricted to an interval like $(0,1)$ and can be interpreted as probabilities, various activation functions (quite analogous to link functions in generalized linear models) are viable. This is why you often see `sigmoid` in the output layer of neural network code, though other functions can play the same role.
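As a small illustration of that last point (toy numbers of my own, not the asker's network), a linear output unit can leave $(0,1)$ even when every weight and activation lies in $(0,1)$, while a sigmoid output cannot:

```
# Toy sketch: 5 observations, 10 hidden-layer activations in (0, 1),
# output weights also in (0, 1).
set.seed(1)
h <- matrix(runif(5 * 10), nrow = 5)   # hidden-layer activations
w <- runif(10)                         # output-layer weights
b <- 0.5                               # output-layer bias

linear_out  <- drop(h %*% w) + b       # "linear probability" outputs, unbounded
sigmoid_out <- plogis(linear_out)      # squashed back into (0, 1)

range(linear_out)                      # typically exceeds 1 here
range(sigmoid_out)                     # always strictly inside (0, 1)
```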
null
CC BY-SA 4.0
null
2023-05-23T17:39:43.820
2023-05-23T17:39:43.820
null
null
247274
null
616719
2
null
479415
1
null
(Since you describe this as an AB test, I am assuming that you randomly recommend each game type. If the players choose the game type, then there is an additional layer of complexity that my answer does not address.)

You are off to a good start by making a predictive model of `monetization` with high (validated) predictive accuracy, and that is where I would start. However, your goal is not to predict the `monetization` value given that a player has selected a particular game. Your goal is to recommend the game in order to get the highest `monetization` value. Thus, I would take the known characteristics (the values of the features other than the game) and make predictions using each game version. Then the game version yielding the highest predicted `monetization` is the game version to recommend.

You might want some kind of interval estimate for the difference in predicted `monetization` between recommending each game. Given your comment that the game version seems not to be an important feature, it might be that these intervals show neither game version to yield much of a higher `monetization` than the other. Aside from the interval estimate, it sounds like this is exactly what you are doing, so I think your strategy is a reasonable one.

To address some specific questions...

> but the problem is that the feature 'game_version' is more or less being ignored in this prediction

It might be that the game version just is not that important. This seems like valuable information.

> Finally, in most cases A is better than B (has more monetization than the best of B), except for a small minority (~ 190 cases out of 50000) where B is better than A.

It sounds like game version `A` is associated with greater `monetization`. While I struggle to reconcile this with the previous point, my interpretation is that game version `A` is usually going to lead to more money and should be the recommended game in the absence of highly compelling evidence that game version `B` should be recommended.
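A minimal sketch of that recommendation step in R, assuming (hypothetically) a data frame `players` with a `game_version` column coded "A"/"B" plus the other features, and some already validated predictive model `fit` of `monetization`:

```
# Predict monetization for every player under each game version,
# then recommend whichever version has the higher prediction.
pred_A <- predict(fit, newdata = transform(players, game_version = "A"))
pred_B <- predict(fit, newdata = transform(players, game_version = "B"))

recommended <- ifelse(pred_A >= pred_B, "A", "B")
summary(pred_A - pred_B)   # how much predicted monetization is at stake
table(recommended)
```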
null
CC BY-SA 4.0
null
2023-05-23T18:00:00.357
2023-05-23T18:00:00.357
null
null
247274
null
616720
2
null
615762
1
null
I agree with Richard; you don't need to take the log of the negative sentiment values. Just pass the values of the sentiment-factor time series as an exogenous regressor in the ARIMAX(.) model.
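For instance, a minimal sketch in R with simulated series (made-up lengths and orders), using the `xreg` argument of `arima()` to supply the raw sentiment values, negatives included:

```
set.seed(1)
n <- 200
sentiment <- rnorm(n)   # can be negative; no log transform needed
y <- 10 + 0.5 * sentiment + arima.sim(list(ar = 0.6), n = n)

fit <- arima(y, order = c(1, 0, 0), xreg = sentiment)
fit
```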
null
CC BY-SA 4.0
null
2023-05-23T18:10:59.573
2023-05-23T18:10:59.573
null
null
366818
null
616722
2
null
402016
0
null
One major issue with the ROCAUC is that it does not change upon applying monotonic transformations to the predictions. Thus, ROCAUC does not evaluate how accurate the predicted probabilities are (and your goal is to get accurate ones), just how well the two categories have their predictions separated. So, as the comments describe, ROCAUC probably is not the performance metric you want.

However, there are two common performance metrics that do get at the probabilities: Brier score and log loss ("crossentropy loss" in some circles). Both of these are examples of the strictly proper scoring rules that are discussed [here](https://stats.stackexchange.com/a/339993/247274).

$$ \text{Brier Score} = \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2\\ \text{Log Loss} = -\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left( y_i\log(\hat y_i) + (1-y_i)\log(1 - \hat y_i) \right) $$

A reasonable criticism of both the Brier score and log loss is that they are difficult to interpret. To that, I respond:

- You are just comparing models, where all that matters is the relative performance instead of any absolute sense of performance.
- Without a context, it is quite difficult to decide what qualifies as good performance.
- Both the Brier score and log loss can be transformed to give values that are analogous to the $R^2$ of linear regression fame. These are the Efron and McFadden pseudo $R^2$ values, respectively, discussed by UCLA. I also give the equations below.

$$ R^2_{\text{Efron}} = 1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right)\\ R^2_{\text{McFadden}} = 1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i\log(\hat y_i) + (1-y_i)\log(1 - \hat y_i) \right) }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i\log(\bar y) + (1-y_i)\log(1 - \bar y) \right) }\right) $$

Above, the $y_i$ are the true categories, coded as either $0$ or $1$; the $\hat y_i$ are the predicted probabilities of category $1$; and $\bar y$ is the unconditional probability of membership in category $1$ (so just the proportion of $1$s out of all instances).

Finally, I recommend a read of Benavoli et al. (JMLR 2017) for comparing models on multiple sets of out-of-sample comparisons.$^{\dagger}$ While the article makes a case for using Bayesian methods, it also discusses classical approaches.

REFERENCE

[Benavoli, Alessio, et al. "Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis." The Journal of Machine Learning Research 18.1 (2017): 2653-2688.](https://www.jmlr.org/papers/volume18/16-305/16-305.pdf)

$^{\dagger}$As one of the comments remarks, there are legitimate reasons not to like out-of-sample predictions unless the sample size is quite large.
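A minimal R sketch of these calculations on simulated data (my own toy example), following the formulas above with `y` the 0/1 outcomes and `p_hat` the predicted probabilities:

```
set.seed(1)
n     <- 1000
x     <- rnorm(n)
y     <- rbinom(n, 1, plogis(-1 + 2 * x))
p_hat <- predict(glm(y ~ x, family = binomial), type = "response")

brier    <- sum((y - p_hat)^2)
log_loss <- -mean(y * log(p_hat) + (1 - y) * log(1 - p_hat))

p_bar       <- mean(y)   # unconditional probability of category 1
r2_efron    <- 1 - sum((y - p_hat)^2) / sum((y - p_bar)^2)
r2_mcfadden <- 1 - sum(y * log(p_hat) + (1 - y) * log(1 - p_hat)) /
                   sum(y * log(p_bar) + (1 - y) * log(1 - p_bar))

c(brier = brier, log_loss = log_loss, efron = r2_efron, mcfadden = r2_mcfadden)
```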
null
CC BY-SA 4.0
null
2023-05-23T18:23:49.900
2023-05-23T18:23:49.900
null
null
247274
null
616723
2
null
540428
0
null
This is a common occurrence in machine learning and exactly the issue of [unbalanced-classes](/questions/tagged/unbalanced-classes) that gets discussed on here so often. The typical recommendation is to do nothing, as [class imbalance is minimally problematic](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he). Even the bounty-earning answer to the linked question discusses a solution to cost minimization through clever experimental design, rather than what to do once you have the data.

I have thought about whether a zero-inflated Bernoulli likelihood might be helpful, but this kind of hierarchy winds up not making sense to me: you can represent the low probability of category $1$ through the zero-inflated model, or you can just predict low probabilities and use a model that is not as complicated (one that deals with just a Bernoulli variable instead of some zero-inflated variable, since a binary variable is so simple).

An issue you might encounter is that all or almost all of the outcomes might be predicted to belong to the majority category (typically coded as $0$). This is an artifact of your software imposing an arbitrary threshold (typically a probability of $0.5$) to bin the continuous outputs into discrete categories. As you change the threshold, you can have predictions ranging from all predicted as category $0$ to all predicted as category $1$. However, this means that you no longer evaluate the original models but, instead, the models in conjunction with a decision rule that maps predictions below a threshold to one category and predictions above the threshold to the other.
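A toy illustration of that threshold artifact in R (simulated, heavily imbalanced data of my own): the probability model never changes, yet the share of cases labeled as category $1$ changes dramatically as the threshold moves.

```
set.seed(1)
n     <- 10000
x     <- rnorm(n)
y     <- rbinom(n, 1, plogis(-4 + x))   # roughly 2-3% of cases in category 1
p_hat <- predict(glm(y ~ x, family = binomial), type = "response")

# Share of cases that would be labeled category 1 at various thresholds
sapply(c(0.5, 0.1, 0.05, 0.01, 0.001), function(thr) mean(p_hat > thr))
```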
null
CC BY-SA 4.0
null
2023-05-23T18:35:17.833
2023-05-23T18:35:17.833
null
null
247274
null
616724
1
null
null
1
17
I am trying to work on a FAVAR model but can't find the dataset used for this replication: [https://jbduarte.com/blog/time%20series/r/favar/2020/04/24/FAVAR-Replication.html](https://jbduarte.com/blog/time%20series/r/favar/2020/04/24/FAVAR-Replication.html). I have looked everywhere but cannot find it. I just want to follow along before applying the approach to my own data.
dataset used in replication of Bernanke, Boivin and Eliasz (QJE, 2005) in R
CC BY-SA 4.0
null
2023-05-23T18:37:53.490
2023-05-23T18:37:53.490
null
null
331310
[ "r", "dataset" ]
616725
1
616769
null
1
18
I'm running a cross-lagged panel model analysis in Mplus. The cross-lagged path coefficients were constrained to be equal across time. Now, I'd like to test if the cross-lagged path from MSC to MA and the cross-lagged path from MA to MSC are statistically significantly different (i.e., (6) vs. (7) in the syntax below). I believe this is possible using a Wald chi-square test, but I wonder if someone can help me with the Mplus syntax. I'm adding the syntax that I have currently. I'd greatly appreciate your help!

```
VARIABLE:
Names =
  female g9achv g9ll g9iep g9like3 g9good3 g9diff3
  g6achv g6ll g6iep g6like3 g6good3 g6diff3
  g3achv g3ll g3iep g3like3 g3good3 g3diff3;
Missing = all(-999) ;
Usevar =
  female g3ll g3iep g6ll g6iep g9ll g9iep
  g9good3 g9diff3 g9like3 g6good3 g6diff3 g6like3 g3good3 g3diff3 g3like3
  g3achv g6achv g9achv ;
Categorical =
  g9good3 g9diff3 g9like3 g6good3 g6diff3 g6like3 g3good3 g3diff3 g3like3 ;

ANALYSIS:
Estimator = WLSMV;
Parameterization = theta;
Processors = 8;

MODEL:
! Measurement models
f_g3msc by g3good3 g3diff3 g3like3 (1-3);
f_g6msc by g6good3 g6diff3 g6like3 (1-3);
f_g9msc by g9good3 g9diff3 g9like3 (1-3);
g3good3 g3diff3 g3like3 pwith g6good3 g6diff3 g6like3 ;
g6good3 g6diff3 g6like3 pwith g9good3 g9diff3 g9like3 ;
g9good3 g9diff3 g9like3 pwith g3good3 g3diff3 g3like3 ;

! Autoregressive (AR) Terms
f_g9msc on f_g6msc (4) ;
f_g6msc on f_g3msc (4) ;
g9achv on g6achv (5) ;
g6achv on g3achv (5) ;

! Cross-lagged (CL) Terms
f_g9msc on g6achv (6) ;
f_g6msc on g3achv (6) ;
g9achv on f_g6msc (7) ;
g6achv on f_g3msc (7) ;

! Co-movements
f_g9msc with g9achv ;
f_g6msc with g6achv ;
f_g3msc with g3achv ;

! Covariates
g3achv f_g3msc on female g3iep g3ll ;
g6achv f_g6msc on female g6iep g6ll ;
g9achv f_g9msc on female g9iep g9ll ;
```
Comparing cross-lagged coefficients in Mplus (Wald chi-sq test)
CC BY-SA 4.0
null
2023-05-23T18:40:24.110
2023-05-24T13:36:05.640
2023-05-24T13:36:05.640
388671
388671
[ "panel-data", "structural-equation-modeling", "wald-test", "mplus" ]
616726
1
null
null
2
55
I have two questions in general here. Suppose I am recording data in time and the response that I am collecting is a monotonic curve that goes from 0 to 1 (sort of like a CDF). I was thinking of modeling the data as such, but when I think of it from the regression framework it is like modeling a nonlinear function, i.e., a model like:

$$y_i(t) = \Phi\left(\frac{t - \mu}{\sigma}\right) + \epsilon_i(t)$$

where $\Phi$ might be the CDF of the standard normal, and $\epsilon$ an error term that follows some distribution. If I wanted to fit this model, i.e., estimate the parameters of the model, do I need to use some sort of nonlinear regression procedure in a software package? And is a model like this typical in the literature?

Secondly, if this type of model does exist in the literature, is there something similar in the Bayesian world?
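For what it's worth, a model of exactly this form can be fit by nonlinear least squares; here is a minimal R sketch on simulated data (all values made up) using `nls()`:

```
set.seed(1)
t <- seq(0, 10, length.out = 100)
y <- pnorm((t - 5) / 1.5) + rnorm(length(t), sd = 0.05)   # true mu = 5, sigma = 1.5

fit <- nls(y ~ pnorm((t - mu) / sigma),
           start = list(mu = median(t), sigma = 1))
summary(fit)
```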
Fitting a nonlinear model for a CDF
CC BY-SA 4.0
null
2023-05-23T18:59:08.820
2023-05-26T11:43:32.640
null
null
227508
[ "bayesian", "nonlinear-regression", "cumulative-distribution-function" ]
616727
2
null
616699
1
null
There are 4 simulations here for each individual: the continuous, time-varying covariate $Z_i(t)$, the time-fixed binary covariate $X_i$, the event time $T_i$ (based on $X_i$, the assumed time-varying coefficient $\phi(t)$, and $Z_i(t)$), and the censoring time $C_i$. The censoring simulation needs to be handled last, after the rest of the simulation. This usually takes some playing around to meet any specific censoring percentage. Start with simulating $Z_i(t)$ among the individuals, as indicated in the first paragraph of "Theoretical Set up." You need to do that in continuous time for the simulation of event times, but in the final simulated data set for modeling you restrict the recorded values for each individual to the values at the specific observation times $R_t$. If $\max(R_i(t)) \le 20$, don't keep $Z_i(t)$ values in the final simulated data set after the time of the 20th point of the time grid. As the assumed form of $Z_i(t)$ has period 1 in $t$, I at first assumed that the authors intended `t=1` to represent the maximum observation time, $\tau$, the upper limit in the displayed integral. In that case, then the 25 total potential observation times would be `seq(from = 0.04, to = 1, length.out=25)`, and the restriction to at most the first 20 observations would mean that you don't record those discretely sampled values beyond `t=0.8` for $Z_i(t)$. Reconsideration: The proposed baseline hazard function has a median survival of about 5 time units, however, so that initial assumption seems to be wrong. I'm not sure what the authors intended the maximum observation time to be. The basic idea in the prior paragraph still holds, however: you have 25 evenly spaced potential observation times for event occurrence, but you only record values of $Z_i(t)$ for the first 20. For the integral, once you choose a set of the 3 $\lambda$ values, the integrand has a closed form amenable to numeric integration. As written, it seems that the argument to $\exp$ in the formula for $h(t)$ is fixed, simplifying the subsequent integration of $h(t)$ to get the cumulative hazard $H(t)$. Reconsideration: The formula for $h(t)$ presented by the authors doesn't make sense for a proportional-hazards model with time-varying covariates and time-varying coefficients. In fact, the formula presented by the authors has a time-fixed covariate for each individual that depends explicitly on the last observation time. I think that they intended to write: \begin{align*} h_{i}(t)=h_{0}(t) \exp{\left\{\alpha_{1} X_{i}+ \phi(t) Z_{i}(t) \right\} } \end{align*} for the instantaneous hazard, which would then be integrated over time to get the cumulative hazard, up to the last observation time $\tau$. That handles items 1 and 3, to the limits of what I can figure out. After you have chosen the random $\lambda$ values, $F_i(t)$ for individual $i$ is a continuous closed-form function, used along with the random value of the time-fixed binary covariate $X_i$ to do the integration (in one or the other of the forms discussed above) for each individual's hazard function. But for the $Z_i(t)$ values in the simulated data set, you start by only keeping the values of $Z_i(t)$ up to the 20th discrete observation time. To simulate each individual's event time, you sample from $U(0,1)$. The authors say to find the corresponding time from the inverse of the individual's cumulative hazard function over time, but I think that they meant to sample from the corresponding survival-time distribution, where $S(t)=\exp(-H(t))$. 
The integration to get the cumulative hazard only needs to be done out to the last observation time. If the random sampling indicates an event time beyond that, you record a censored value at the last observation time. Only then does it make sense to simulate from $U(0,C_{max})$ for censoring times. There is no way that I know to choose $C_{max}$ simply. Don't forget that some event times will be censored at the last observation time of the study, also. You try some value, find out what fraction of cases would be censored (based on $C_i < T_i$ or $T_i>$ last observation time), and if that doesn't work you keep on iterating. Once you have found a value of $C_{max}$ that gives an appropriate fraction of censored cases, omit from the final data set any data for individual $i$ for which the observation time is greater than the corresponding $C_i$.
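As a generic illustration of the event-time step described above (inverse-transform sampling from $S(t)=\exp(-H(t))$), here is a rough R sketch for a single individual. The baseline hazard, $\phi(t)$, and $Z_i(t)$ below are placeholders of my own, not the forms from the paper:

```
set.seed(1)
h0  <- function(t) rep(0.1, length(t))   # hypothetical baseline hazard
phi <- function(t) 0.5 + 0.2 * t         # hypothetical time-varying coefficient
Z   <- function(t) sin(2 * pi * t)       # hypothetical continuous covariate path
alpha1 <- 0.3; X <- 1                    # time-fixed covariate and its coefficient

h <- function(t) h0(t) * exp(alpha1 * X + phi(t) * Z(t))   # instantaneous hazard
H <- function(t) integrate(h, 0, t)$value                  # cumulative hazard
S <- function(t) exp(-H(t))                                # survival function

tau <- 20                 # last observation time
u   <- runif(1)
if (S(tau) > u) {
  T_i <- tau; event <- 0  # survives past follow-up: censored at tau
} else {
  T_i <- uniroot(function(t) S(t) - u, c(1e-8, tau))$root; event <- 1
}
c(time = T_i, event = event)
```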
null
CC BY-SA 4.0
null
2023-05-23T19:00:58.523
2023-05-25T13:47:37.633
2023-05-25T13:47:37.633
28500
28500
null
616729
2
null
438825
0
null
First, I am not convinced that such predictions have to exist. For instance, in the simulation below, I cannot achieve such performance for any of the coefficient combinations I tried, and I do not even get close (my best exceeds $87\%$). This leaves me pessimistic that such performance is possible on my simulated data, and perhaps on your real data as well.

```
library(MLmetrics)
set.seed(2023)
N <- 100
x <- runif(N)
Ey <- 25 + 2*x
e <- rt(N, 1)
y <- Ey + e
beta0s <- seq(-50, 50, 0.1)
beta1s <- seq(-5, 5, 0.1)

# Loop over possible regression coefficients
#
max_mape <- matrix(NA, length(beta0s), length(beta1s))
for (i in 1:length(beta0s)){

  if (i %% 25 == 0){
    print(paste(i/length(beta0s)*100, "% complete", sep = ""))
  }

  for (j in 1:length(beta1s)){

    # Make predictions
    #
    pred <- beta0s[i] + beta1s[j]*x

    # Loop over the predictions
    #
    each_mape <- rep(NA, N)
    for (k in 1:N){
      each_mape[k] <- MLmetrics::MAPE(pred[k], y[k])
    }

    # Store the maximum MAPE for this combination of coefficients
    #
    max_mape[i, j] <- max(each_mape)
  }
}

# Display the smallest MAPE across all coefficient combinations
#
min(max_mape) # I get 0.8733133, well above the requirement to be below 0.10
```

However, if you want to hunt for such a solution, if it exists, the math behind it is what you probably would expect. In an unconstrained regression, you seek out the parameter vector $\hat\beta$ giving $\hat y = X\hat\beta$ that minimizes the absolute loss.

$$ \hat\beta = \underset{\beta\in\mathbb R^{p+1}}{\arg\min}\left\{ \overset{N}{\underset{i = 1}{\sum}}\left\vert y_i - \beta^TX_i \right\vert \right\} $$

(Technically, the $=$ should be $\in$, since the $\arg\min$ need not be unique, but that muddies the notation for little gain.)

You, however, have restrictions on the parameter vector beyond just being a vector of $p+1$ coefficients corresponding to the intercept and the coefficients on the features. Thus, compute the constrained optimizer.

$$ \hat\beta = \underset{\beta\in A}{\arg\min}\left\{ \overset{N}{\underset{i = 1}{\sum}}\left\vert y_i - \beta^TX_i \right\vert \right\}\\ A = \left\{ \beta\in\mathbb R^{p+1}\mid \underset{i}{\max}\left\{\left\vert \dfrac{ y_i - \beta^TX_i }{ y_i } \right\vert\right\} \le 0.1 \right\} $$

If $A$ is the empty set, as I suspect is the case in my simulation, then you cannot achieve the required condition that the MAPE (really just the APE, since it applies to each observation-prediction pair instead of to the set of predictions) be less than $10\%$. If $A$, however, is not the empty set, then you have a space over which you can look for solutions that minimize the MAE. As far as software goes, you might be on your own to write specialized optimization software to perform this. However, that second $\hat\beta$ equation is what you want to solve with that software.
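If it helps, a crude way to search over that constrained set is a penalized objective handed to a general-purpose optimizer. This is only a heuristic sketch (it assumes a response `y` and a design matrix `X` with an intercept column already exist, and it is not a rigorous constrained solver):

```
objective <- function(beta, X, y) {
  pred    <- drop(X %*% beta)
  max_ape <- max(abs((y - pred) / y))
  if (max_ape > 0.10) {
    return(1e6 + max_ape)   # heavily penalize coefficient vectors outside A
  }
  mean(abs(y - pred))       # otherwise return the MAE to be minimized
}

# Hypothetical usage, starting from a least-squares fit:
# start <- coef(lm.fit(X, y))
# optim(start, objective, X = X, y = y)
```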
null
CC BY-SA 4.0
null
2023-05-23T19:24:23.573
2023-05-23T19:24:23.573
null
null
247274
null
616730
2
null
616702
2
null
These notations with subscripts are an abomination used only by people who don't know better. There are conditional probabilities, and hence conditional probability distributions, and one can speak of $\operatorname E(X\mid A)$ where $X$ is a random variable and $A$ is an event. The event in question may be that a particular random variable has a particular value, and then one can write $\operatorname E(X\mid Y=y),$ where $Y$ is a random variable and $y$ is any of the values it can take.

This conditional expected value, $\operatorname E(X\mid Y=y),$ depends on what number $y$ is, so one may write $\operatorname E(X\mid Y=y) = h(y).$ In that case, one can define $\operatorname E(X\mid Y)=h(Y),$ and that is a random variable in its own right, determined by the value of the random variable $Y$. Its expected value is the same as the expected value of $X,$ so we have $\operatorname E(\operatorname E(X\mid Y)) = \operatorname E(X).$

Often one sees $\displaystyle \int_S f_{X\,\mid\,Y} (x\mid y) \, dx$ but I think the notation $\displaystyle \int_S f_{X\,\mid\,Y=y} (x)\, dx$ is better. Those are good notations; the subscripted notation you ask about is nonsense.
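A small worked example of the $h(y)$ construction (my own illustration, not from the original answer): let $X$ be the face shown by a fair six-sided die and let $Y$ be the indicator that the face is even. Then

$$ \operatorname E(X\mid Y=1) = \frac{2+4+6}{3} = 4, \qquad \operatorname E(X\mid Y=0) = \frac{1+3+5}{3} = 3, $$

so $h(y) = 3 + y$ and $\operatorname E(X\mid Y) = 3 + Y$ is a random variable. Its expectation is $\operatorname E(3+Y) = 3 + \tfrac{1}{2} = 3.5 = \operatorname E(X),$ as the tower property requires.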
null
CC BY-SA 4.0
null
2023-05-23T19:26:59.720
2023-05-23T19:26:59.720
null
null
5176
null
616731
2
null
511100
0
null
This does not make sense to me. Your models all predict probabilities. Some might do a better job than others. The goal of your model comparison is to determine which models give the best predicted probabilities. If a model does a bad job of making such predictions, then it should be penalized for that poor performance, and the usual techniques like Brier score and log loss do exactly this.

You might find that one model makes poor probabilistic predictions but makes outstanding probabilistic predictions upon applying some calibration. In that case, you can compare the post-calibration predictions, considering the model to be the full pipeline of training the original model and then performing the calibration. [Multi-class calibration, however, seems to be very much an open question.](https://stats.stackexchange.com/a/557330/247274)
null
CC BY-SA 4.0
null
2023-05-23T19:33:13.347
2023-05-23T19:33:13.347
null
null
247274
null
616732
1
616972
null
2
68
Consider a set of 3D points $X = \{x_1, x_2, ...x_n\} $ with $ x_i\in\mathbb{R}^3$ on which we want to fit an arbitrary probability distribution. The distribution we want to fit models some geometrical shape, like a sphere, a torus or a cylinder, but in principle this detail is irrelevant for my question. Just assume we can evaluate the PDF of the distribution.

Assuming there is no closed form solution for the maximum likelihood estimate of the parameters of the distribution, we can use an iterative gradient descent method to minimize the negative log likelihood:

$$ \mathcal L = \sum_{x \in X} -\log(P(x \, | \,\theta)) $$

where $P(x \, | \,\theta)$ is the probability density function and $\theta$ is the vector of parameters to optimize.

However, imagine that the points we are trying to fit include some blobs of outliers, which cannot be modelled with our distribution. Inspired by this paper [1], I thought one way to handle the outliers could be to include a constant hyper-parameter $\Omega$ in the loss function which gives each point a minimum probability:

$$ \mathcal L = \sum_{x \in X} -\log(P(x \, | \,\theta) + \Omega) $$

This way, the loss won't go to infinity as soon as one outlier point is not well fitted by the distribution and has $P(x \, | \,\theta) \approx 0.$ Having tried this in practice, I can confirm this allows fitting the object of interest while ignoring the blobs of outlier points, even in cases where the outlier points outnumber the inliers by far.

Moreover, I understand this is somewhat equivalent to fitting a mixture model with two components: 1) our arbitrary distribution with PDF $P(x \, | \,\theta)$ and 2) a uniform PDF with constant value $\Omega$. In this regard, would it make sense to introduce two more parameters into the optimization problem to model the mixture weights? Or is this unnecessary, given that one of the mixture components has no parameters to optimize?

$$ \mathcal L = \sum_{x \in X} -\log(w_1 \, P(x \, | \,\theta) + w_2 \, \Omega) \\ s.t. ~~ w_1 + w_2 = 1 $$

In addition, are there other ways to make the fitting robust to outliers in a setup like this?

[1] [https://arxiv.org/abs/2303.15440](https://arxiv.org/abs/2303.15440) (see Eq. 7)
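To make the weighted loss concrete, here is a deliberately simplified 1-D R sketch (a Gaussian "shape" component plus a constant outlier density $\Omega$, with the mixture weight handled through a logit parameterization); the real 3-D geometric PDF would take the place of `dnorm`:

```
set.seed(1)
x <- c(rnorm(300, mean = 2, sd = 0.3), runif(100, -10, 10))   # inliers + an outlier blob
Omega <- 1 / 20                                               # uniform density on (-10, 10)

negloglik <- function(par) {
  mu <- par[1]; sigma <- exp(par[2]); w1 <- plogis(par[3])    # w2 = 1 - w1 implicitly
  -sum(log(w1 * dnorm(x, mu, sigma) + (1 - w1) * Omega))
}

fit <- optim(c(0, 0, 0), negloglik)
c(mu = fit$par[1], sigma = exp(fit$par[2]), w1 = plogis(fit$par[3]))
```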
Modeling outliers in maximum likelihood estimation with gradient descent
CC BY-SA 4.0
null
2023-05-23T19:42:32.410
2023-05-26T07:52:21.660
null
null
96216
[ "distributions", "maximum-likelihood", "outliers", "robust", "mixture-distribution" ]
616733
1
null
null
1
10
I am confused by the following page in Geostatistics for Environmental Scientists, Webster & Oliver: [book excerpt](https://i.stack.imgur.com/dRM9p.png)

### My understanding

Given locations specified by a vector $\mathbf{x}$, we assume an underlying distribution $Z(\mathbf{x})$ behind each observation. To obtain the given covariance function (4.10), we have assumed second order stationarity (i.e. all the $Z(\mathbf{x})$ have equal mean, variance, and a few other things). The author argues this is troublesome since the mean often varies across a region. To remedy this, we instead only assume the expectation of the difference in means to be 0, which should hold for locations close enough to each other.

### My confusion

It seems to me that replacing covariances with variances of differences allows us to relax the requirement $E(Z(\mathbf{x})) = \text{const.}$ to $E(Z(\mathbf{x})) - E(Z(\mathbf{x} + \mathbf{h})) = 0$, which seems useful (and perhaps even necessary) for conducting meaningful geostatistics. However, I don't see how we are "released from the constraints of second order stationarity": aren't we still assuming constant variance across the space of interest? Were the authors simply referring to the relaxed constraint on the means, or am I missing something?
Geostatistics: Covariance vs Semivariance
CC BY-SA 4.0
null
2023-05-23T19:43:15.047
2023-05-23T19:43:15.047
null
null
388678
[ "covariance", "stationarity", "assumptions", "geostatistics", "variogram" ]
616734
1
null
null
1
11
The intervention: before each new appointment with a student, the guidance counsellor is reminded to consult with the student about proper information on contraceptives. The control group measures how often the guidance counsellor consults with students about contraceptives without any reminders.

I was going to run tests that compare the pre- and post-intervention periods as independent groups, but I'm worried about carry-over. For example, if the counsellor has multiple appointments a day, he gets a new reminder with each appointment, so by the 7th appointment of the day he might be influenced to consult with the student even without the reminder, because he has already received the reminder 6 times with previous appointments. Is there a statistical test that accounts for that?
Best statistical test for comparing pre and post intervention (but the intervention happens repeatedly?)
CC BY-SA 4.0
null
2023-05-23T19:54:47.823
2023-05-23T19:54:47.823
null
null
368002
[ "t-test", "intervention-analysis" ]
616737
1
null
null
0
37
A typical laboratory in my region may report a result for e.g. the C-reactive protein (CRP) blood test in the following way (I'll omit the units here):

- if the result is below 10, they'll give the result as "<10"
- in all other cases, they'll give a (continuous) numeric answer

Let's say I want to treat that variable (CRP) as a continuous numeric variable for e.g. regression analysis. What is the most customary way to code those (statistically speaking) pesky "<10" results as numeric? I do understand that some kind of compromise is needed, but what might be the most usual approach? I use R for data analysis, if it's of relevance.
How should I code a laboratory variable that may occasionally have "under X.XX" type of character data as a continuous numeric variable?
CC BY-SA 4.0
null
2023-05-23T20:46:39.850
2023-05-23T21:03:53.757
2023-05-23T21:03:53.757
22311
388683
[ "continuous-data", "censoring" ]
616738
2
null
616679
10
null
I would suggest using a generating polynomial to do this. The idea is to represent the contribution of one die towards the total by the polynomial $(x^1 + x^2 + \dots + x^n)$, where $n$ is the number of faces on the die, then multiplying together one polynomial per die and calculating the coefficient of the power of the sum you're looking for, e.g., the coefficient of $x^{41}$ in your example. Then you can divide by the number of different possible die rolls to get the result as a probability.

This works because expanding all the terms in the product is analogous to listing all the possible combinations of die rolls on your dice, e.g., rolling a 1 on all 6 dice in the question is represented by choosing the $x^1$ term from the polynomial for each die and multiplying these together to get $x^6$. So with the 6 dice in the question, there is only 1 way to roll a total of 6. Similarly, there will be many different rolls that give a sum of 41, and the number of these will be equal to the coefficient of $x^{41}$. You can reduce the degree of combinatorial explosion by discarding terms where the power of $x$ exceeds the target, or cannot reach it based on the remaining terms in the product.

Wolfram Alpha pseudocode:

```
Coefficient(sum(x^n, n=1 to 10)^2 * sum(x^n, n=1 to 8)^4, x^41) / (10^2 * 8^4)
```

[Sample calculation](https://www.wolframalpha.com/input?i=Coefficient%28sum%28x%5En%2C+n%3D1+to+10%29%5E2+*+sum%28x%5En%2C+n%3D1+to+8%29%5E4%2C+x%5E41%29+%2F+%2810%5E2+*+8%5E4%29)
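The same calculation can be done locally; here is a small R sketch of the polynomial multiplication for the example in the question (2 ten-sided and 4 eight-sided dice, target total 41):

```
# Multiply two polynomials given as coefficient vectors (power 0 first)
poly_mult <- function(a, b) {
  out <- numeric(length(a) + length(b) - 1)
  for (i in seq_along(a))
    for (j in seq_along(b))
      out[i + j - 1] <- out[i + j - 1] + a[i] * b[j]
  out
}

die <- function(faces) c(0, rep(1, faces))   # x^1 + x^2 + ... + x^faces

total <- Reduce(poly_mult, c(replicate(2, die(10), simplify = FALSE),
                             replicate(4, die(8),  simplify = FALSE)))

ways <- total[41 + 1]        # coefficient of x^41 (index 1 holds power 0)
ways / (10^2 * 8^4)          # probability of rolling a total of 41
```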
null
CC BY-SA 4.0
null
2023-05-23T21:12:50.387
2023-05-24T17:43:16.913
2023-05-24T17:43:16.913
12942
12942
null
616740
1
616835
null
1
35
There is a "traditional" biomarker (binary predictor) used in the diagnosis of a disease (binary outcome) that has a high cost to perform for the clinical labs. I'm studying alternatives, which are all analyte measurements (continuous predictors). How can I compare the two? (as in, can we replace the binary predictor with one, multiple or a combination of the continuous predictors) I know I can evaluate the binary's performance by directly comparing to outcome (precision, recall, etc.) and the continuous by AUC but I can't think of any common measures between them. Do I fit them both in logistic regression and compare coefs/odds ratios? Do I need to find a cutoff (youden?) for the continuous variable and treat it as a binary? I'm probably overthinking this, but I appreciate the help.
Comparing binary and continuous predictors for diagnostic test
CC BY-SA 4.0
null
2023-05-23T21:46:41.407
2023-05-24T20:48:06.973
null
null
338678
[ "biostatistics", "model-comparison", "diagnostic", "medicine" ]
616741
1
null
null
1
30
I would like to estimate the joint probability of two variables from two different surveys conditional on other variables that the two surveys have in common. As an example I'm using data from this speed dating experiment [1] and the US Census microdata from 2004 [2]. Here is some code to illustrate: ``` import pandas as pd dating_raw = pd.read_csv(".../path/to/Speed Dating Data.csv", encoding='MacRoman') pums2004_raw = pd.read_csv(".../path/to/ss04pus.csv", nrows=100000, usecols=["SEX", "AGEP", "RAC1P", "JWTR", "HISP"]) dating = pd.concat([ dating_raw.age, dating_raw.gender.map({0: "female", 1: "male"}).rename("sex"), dating_raw.race.map({1: "black", 2: "white", 3: "hispanic", 4: "asian-pi", 5: "native", 6: "other"}), dating_raw.go_out.map({ 1: "Several times a week", 2: "Twice a week", 3: "Once a week", 4: "Twice a month", 5: "Once a month", 6: "Several times a year", 7: "Almost never", }), ], axis=1) pums = pd.concat([ pums2004_raw.AGEP.rename("age"), pums2004_raw.SEX.map({1: "male", 2: "female"}).rename("sex"), pums2004_raw.RAC1P.map({1: "white", 2: "black", 3: "native", 4: "native", 5: "native", 6: "asian-pi", 7: "asian-pi", 8: "other", 9: "other"}).rename("race"), pums2004_raw.JWTR.map({ 1: "Car, truck, or van", 2: "Bus or trolley bus", 3: "Streetcar or trolley car", 4: "Subway or elevated", 5: "Railroad", 6: "Ferryboat", 7: "Taxicab", 8: "Motorcycle", 9: "Bicycle", 10: "Walked", 11: "Worked at home", 12: "Other method", }).rename("transport"), ], axis=1,) pums.loc[pums2004_raw.HISP > 1, "race"] = "hispanic" ``` `dating` looks like this: |age |sex |race |go_out | |---|---|----|------| |26 |male |white |Twice a week | |23 |female |other |Several times a week | |28 |female |asian-pi |Twice a week | |31 |female |black |Several times a week | `pums` looks like this: |age |sex |race |transportation | |---|---|----|--------------| |18 |male |hispanic |Worked at home | |26 |female |white |Car, truck, or van | |15 |female |black |Motorcycle | How can I estimate the chance that an asian male over 25 goes out several times a week AND travels to work by any method other than by car/truck/van? I do not think that it is appropriate to assume that going out and mode of transportation are independent. How can I make use of additional variables that appear on both surveys to improve the estimate? What are the pitfalls? $$ P(go\_out=Several times a week, transport ~= Car, truck or van | race=asian, sex=male, age>25) = ? $$ [1] [https://data.world/annavmontoya/speed-dating-experiment](https://data.world/annavmontoya/speed-dating-experiment) [2] [https://www2.census.gov/programs-surveys/acs/data/pums/2004/csv_pus.zip](https://www2.census.gov/programs-surveys/acs/data/pums/2004/csv_pus.zip)
Estimating joint probabilities across two datasets
CC BY-SA 4.0
null
2023-05-23T21:54:21.043
2023-05-25T00:20:45.773
2023-05-23T23:45:51.057
388686
388686
[ "regression", "survey", "joint-distribution" ]
616742
2
null
616707
1
null
The median absolute deviation (MAD) will not commute with the monotonic function of the data, in most cases. Intuitively, this is because the absolute deviation operation "folds" the data around the median, so the monotonicity of the function is "lost" (cannot be utilized). Allow me to propose a slightly different, but practically similar solution: The MAD is a robust measure of dispersion (spread) of the data values. Therefore, practically speaking, perhaps your goal can be achieved using a different measure of dispersion (spread) of the data values, which can accommodate the monotonicity of the function. Specifically, I propose the [Inter-Quartile Range (IQR)](https://en.wikipedia.org/wiki/Interquartile_range), defined as: $\text{IQR}(X) = \text{Q}_3(X) - \text{Q}_1(X)$, where $\text{Q}_3(X)$ is the value below which 75% of the values exist, and $\text{Q}_1(X)$ is the value below which 25% of the values exist. Note: $\text{median}(X) = \text{Q}_2(X)$ is the value below which 50% of the values exist. The advantage of this proposal is that for all 3 measurements $\text{Q}_k(X)$, $k = \{1,2,3\}$, for a monotonic function $f(\cdot)$, we have that: $$\text{Q}_k(f(X)) = f(\text{Q}_k(X))$$ This fact is a generalization of what user @Henry already mentioned in a comment to the question: $\text{median}(f(X)) = f(\text{median}(X))$. In fact, every quantile (not just the 3 quartiles mentioned above) commutes with a monotonic function of the data, because a monotonic function will not change the sorted order of the data. Finally, note that for a symmetric distribution of the data $X$, the median will equal the average of the first and third quartiles, and therefore the MAD will equal half the IQR. But in general I doubt your pixel data $X$ will be symmetric; Nonetheless, if the users of the system/software expect values previously reported by MAD, then if you decide to use IQR, you can simply report half the IQR.
null
CC BY-SA 4.0
null
2023-05-23T22:24:33.387
2023-05-23T22:24:33.387
null
null
366449
null
616743
1
null
null
1
17
I am trying to prove some ideas about the identifiability of covariance function parameters for small samples. From various numerical experiments, it seems that the graphs of two Matérn covariance functions with different parameters (we may assume $\sigma^2=1$ for both) will intersect at most once on $\mathbb{R}^+$. Any insights as to whether this is definitively true, or how to show this? I have been staring at recurrence relations and derivatives of the modified Bessel function for days now, and no juice.

Edit as suggested by commenter @whuber: I am interested less in estimation than in a more causal-inference notion of identifiability: that is, assuming that we can fully observe certain population parameters such as $\mathbb{E}(Z)$ or $\text{Var } Z$, where $Z \in \mathbb{R}^n$ is assumed to be a Gaussian process with a particular parametric (say, Matérn) covariance, are we able to identify the parameters from the observed population parameter? In the simplest situation relevant to my research, we assume that $\text{Var } Z$ is observed and that $\sigma^2=1$ (the nugget is trivial).

Now, we say that a model specifying a family of covariance functions (such as the Matérn, exponential, or wave family covariance functions) for $Z$ is identifiable if there do not exist two sets of parameters that may give rise to the population parameter $\text{Var } Z$. Therefore, the model with the exponential covariance is identifiable so long as there is one nonzero distance, because $e^{-w/\phi}=e^{-w/\phi'}$ for nonzero $w$ only if $\phi = \phi'$. On the other hand, it can be shown that two wave covariance functions with distinct parameters intersect at an infinite number of points, and for my purposes we consider the wave covariance function to not allow identifiability.

I believe that two Matérn covariance functions with different range and shape may intersect at more than one point only if they are in fact identical. (Otherwise, if they differ only in range or only in shape, one will dominate the other on $\mathbb{R}^+$. This is easy to show.)
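A brief R sketch of the kind of numerical experiment mentioned above, counting grid crossings of two Matérn correlation functions with $\sigma^2 = 1$ (the $\sqrt{2\nu}\,d/\text{range}$ parameterization below is one common convention, and the parameter values are arbitrary):

```
matern <- function(d, range, nu) {
  s <- sqrt(2 * nu) * d / range
  out <- 2^(1 - nu) / gamma(nu) * s^nu * besselK(s, nu)
  out[d == 0] <- 1
  out
}

d  <- seq(1e-6, 10, length.out = 5000)
f1 <- matern(d, range = 1.0, nu = 0.5)
f2 <- matern(d, range = 2.0, nu = 2.5)

sum(diff(sign(f1 - f2)) != 0)   # number of sign changes of the difference on this grid
```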
Intersection of Matérn covariance functions with different parameters
CC BY-SA 4.0
null
2023-05-23T22:45:42.193
2023-05-25T21:05:39.220
2023-05-25T21:05:39.220
207809
207809
[ "covariance", "spatial", "function", "spatial-correlation" ]
616744
1
null
null
1
19
I am part of a study where we are trying to understand the effect of social interaction on weight loss. The idea is to have two sets of groups. In Group 1 there are groups of people joining as a family or as related individuals (friends etc.), who take part in a weight loss program and are asked to socially interact and take part in activities after the sessions as well. Then there is a second, similar group of people, but they aren't encouraged to interact with each other after the session. We want to investigate the impact of the weight changes of family and friends on an individual's weight change, e.g. larger weight loss of family/friends may lead to larger weight loss of an individual.

So I have data which is longitudinal in nature. For each individual there is information available on their weight for 10 different sessions and on which group they belong to. Analyzing this was doable using Linear Mixed Models, and I have seen plenty of examples on the net. But there is another dataset where, for each individual, I have information on which ID (person) is related to which person (this person's weight change info is also available). So my scenario is such that not only do I need to find whether the overall weight loss (or gain) was significant for people in group 1 and not for group 2, but I also need to make use of the relationship info. I am not able to find any use case where such a thing was done, nor able to figure out how to do it. My supervisor is suggesting using Linear Mixed Models here, but I am not able to figure out how to frame the data in such a way that I can use such a model and then interpret it.

The datasets are in this format. The first dataset df1: {'id', 'group', 'age', 'gender', 'Session_1', 'Session_2', 'Session_3', 'Session_4', 'Session_5', 'Session_6', 'Session_7', 'Session_8', 'Session_9', 'Session_10'}

A sample dataset df1:

```
'id', 'group', 'age', 'gender', 'Session_1', 'Session_2', 'Session_3', 'Session_4', 'Session_5', 'Session_6', 'Session_7', 'Session_8', 'Session_9', 'Session_10'
101, GroupA, 45, M, 76, 75, 76, 78, 75, 75, 74, 74, 73, 73
102, GroupA, 42, F, 56, 57, 56, 58, 55, 55, 54, 53, 55, 54
103, GroupA, 72, M, 80, 81, 80, 82, 79, 79, 78, 77, 79, 78
104, GroupA, 41, F, 60, 61, 60, 62, 59, 59, 58, 57, 59, 58
105, GroupB, 44, M, 76, 75, 76, 78, 75, 75, 74, 74, 73, 73
106, GroupB, 41, F, 56, 57, 56, 58, 55, 55, 54, 53, 55, 54
... and so on
```

And df2: {'id', 'id_relation', 'household', 'closefriend', 'acquint', 'closerel', 'extendfam', 'spouse_blood', 'spouse_notblood', 'group'}. Sample dataset:

```
'id', 'id_relation', 'household', 'closefriend', 'acquint', 'closerel', 'extendfam', 'spouse_blood', 'spouse_notblood', 'group'
101, 102, 1, 0, 0, 0, 0, 0, 0, GroupA
101, 103, 1, 0, 0, 0, 0, 0, 0, GroupA
102, 101, 1, 0, 0, 0, 0, 0, 0, GroupA
102, 103, 1, 0, 0, 0, 0, 0, 0, GroupA
103, 101, 1, 0, 0, 0, 0, 0, 0, GroupA
103, 102, 1, 0, 0, 0, 0, 0, 0, GroupA
104, 105, 0, 1, 0, 0, 0, 0, 0, GroupB
104, 106, 0, 1, 0, 0, 0, 0, 0, GroupB
105, 104, 0, 1, 0, 0, 0, 0, 0, GroupB
105, 106, 0, 1, 0, 0, 0, 0, 0, GroupB
106, 104, 0, 1, 0, 0, 0, 0, 0, GroupB
106, 105, 0, 1, 0, 0, 0, 0, 0, GroupB
... and so on
```

The other columns here give you information about the relation of the person in 'id' to the person in 'id_relation'. I am wondering whether just the relation info is enough, rather than the type of relationship. And just to be clear, there are different sets of related people here, not just one single group.
I can merge the datasets and convert df1 from wide form to long form, but then do what? How do I model my dataset, and what am I building the model on? What's the formula? weight ~ ...+...+...? Moreover, how would I determine whether there was overall weight loss or weight gain? Is using the mean/median value for all sessions and then looking at the trendline enough? Any kind of help, or a link to code, a study, or a methodology, is appreciated.
Longitudinal data analysis with relationships information
CC BY-SA 4.0
null
2023-05-23T23:24:02.267
2023-05-23T23:24:02.267
null
null
217662
[ "mixed-model", "panel-data", "multilevel-analysis", "clinical-trials" ]
616746
1
null
null
2
21
I want to find a solution to the following question: "Is there any impact of social media on the creation of entrepreneurial opportunity for rural women?" I have 2 groups: online-based businesses & traditional businesses. A 5-point Likert scale has been used to have their opinions evaluated. My variables are:

- Entrepreneurial opportunity (4 items)
- Network building (4)
- Cost efficiency (4)
- Promotional activities (4)
- Access to information (4)
- Work-life balance (4)

I am really confused about how I should proceed. My questions are the following:

- What could be a plausible hypothesis?
- My data is not normal, so can I use a t-test?
- Should I go for factor analysis?
Which statistical test should I use to compare two groups where their opinions are collected by using Likert Scale?
CC BY-SA 4.0
null
2023-05-23T23:30:05.227
2023-05-24T04:16:36.217
2023-05-24T03:22:48.950
345611
388689
[ "hypothesis-testing", "t-test", "factor-analysis", "normality-assumption", "likert" ]
616747
2
null
616658
0
null
#### Variable Definitions

The wording conveys, with caution, the nature of each variable. The dependent variable is the variable being acted upon: it is dependent on the influence of other factors. Generally speaking, you can sometimes refer to this as the outcome or response, which conveys that it is the result of some other influence. In other words, if $x\rightarrow{y}$, then $y$ is our outcome. The independent variable is the force acting upon the dependent variable. It is assumed that other forces are not acting upon it, hence it is "independent." This of course isn't entirely true; we simply model it this way for simplicity if we do not have a theoretical reason for modeling it as influenced by other factors.

Which of your variables plays which role largely depends on what you think is theoretically valid, as you could technically model either one as the independent or the dependent variable. Do deaths "cause" Facebook posts? Do Facebook posts "cause" more deaths? It's probably unlikely that Facebook causes deaths (at least not in the way implied here), so your more likely candidates here would be:

$$ \text{IV} = \text{COVID deaths} \\ \text{DV} = \text{Facebook posts} $$

#### Correlation and Causation

Keep in mind the old adage that "correlation does not imply causation" and that a correlation simply measures the covariance of two variables. For a Pearson correlation, that means:

$$ r = \frac{\Sigma(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\Sigma(x_i-\bar{x})^2 \Sigma(y_i-\bar{y})^2}} $$

which can alternatively be represented as:

$$ r = \frac{Cov(x,y)}{{S_x}{S_y}} $$

Both equations scale the covariance of the variables by their standard deviations. In essence, this means we can only say that the variables are associated with each other, but we don't know which causes which. Returning to one of your points:

> I want to determine whether the number of deaths influences the number of posts.

We can only say they are associated; we cannot directly infer that our IV of COVID deaths indeed influences or causes Facebook posts. Consider for example these two scatterplots, which simply have their axes reversed:

[](https://i.stack.imgur.com/36O2Q.png)

Does flower petal length cause petal width? Or vice versa? With that in mind, be careful with your wording when reporting a correlation. How you word your explanation matters, and only more sophisticated statistical analysis can attempt to answer more causal questions.
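A quick illustration of that symmetry in R, using the `iris` data shipped with R (my own example): the Pearson correlation is the same number no matter which variable you treat as the IV or the DV.

```
cor(iris$Petal.Length, iris$Petal.Width)   # correlation one way around
cor(iris$Petal.Width, iris$Petal.Length)   # identical the other way around
```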
null
CC BY-SA 4.0
null
2023-05-23T23:30:50.800
2023-05-23T23:30:50.800
null
null
345611
null
616748
2
null
616697
2
null
tl;dr A parametric bootstrap (#3 below), or a cheesy parametric bootstrap (#2 below), are probably the most straightforward approaches.

The components of uncertainty are (1) the sampling variance of the fixed effect components $\mathbf V$ (for a fixed-effect model matrix $\mathbf X$, the corresponding variances of the predictions are $\textrm{Diag}(\mathbf X \mathbf V \mathbf X^\top)$); (2) the variances of the random effects (for previously unmeasured levels, if $\boldsymbol \Sigma$ is the covariance matrix of the random effects, then $\textrm{Diag}(\mathbf Z \boldsymbol \Sigma \mathbf Z^\top)$ are the corresponding variances of the random-effects components of the prediction). Combining these components is easy, because they're Gaussian.

Notes so far: you can construct $\mathbf X$ with `model.matrix()`; you could use `lFormula()` and extract the `reTrms$Zt` component to get $\mathbf Z^\top$ (or, see the machinery in `vignette("lmer", package = "lme4")` and use `model.matrix()` + `Matrix::fac2sparse` + `Matrix::KhatriRao` to make your own). $\mathbf V$ comes from `vcov()`; $\boldsymbol \Sigma$ is `crossprod(getME(., "Lambda"))`. Once you go back from the link to the response scale (exponentiating the random effects, which are on the log scale) we end up with a log-Normal with the specified mean and variance. The hard part is combining the Gamma distribution. What we have now is a generalized logNormal-Gamma distribution (ugh).

1. If you just wanted the variance (and were prepared, e.g., to construct confidence intervals based on a Gaussian distribution), you could get the variance of the generalized distribution by combining the Gamma and log-Normal variance (the formula for the variance of a generalized distribution in terms of the mean and variance of its components can be derived from generating functions, and is in Pielou's Mathematical Ecology book, which I don't have with me right now ... it may appear elsewhere on the web but I couldn't find it in a quick search).
2. The next better approach, which would be pretty quick: draw MVN samples (using `MASS::mvrnorm` or one of the other alternatives in the R ecosystem) from the distribution with the combined FE sampling variance + RE variance, exponentiate them, and draw Gamma samples based on those mean values. This is basically the method described in section 7.5.3 of Bolker (2008), which Lande et al. 2003 call "population prediction intervals". This is pretty good, but ignores uncertainty in the RE covariance matrix. You might be able to implement this with `simulate()` ...
3. "True" parametric bootstrapping would simulate, re-estimate parameters, and then do the computation above including the RE variance but not the FE sampling variance (that part would be handled by the sampling/re-fitting).
4. Or you could go all the way Bayesian ...

These four choices are essentially ordered by computational effort (the actual Bayesian computation might not be slower than the parametric bootstrap, but might take more effort to set up).

---

Second thoughts after starting an implementation: I'm not sure that we actually want $\boldsymbol \Sigma$ from the model, which is the covariance matrix of the random effects in the fitted model. Instead, we want to create a new standalone model block and push the relevant elements of the `theta` vector into it ...
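As a bare-bones numerical sketch of option 2 (the "population prediction intervals" idea), with every input made up rather than extracted from a fitted `lmer`/`glmer` object: combine the Gaussian fixed-effect sampling variance and random-effect variance on the link (log) scale, exponentiate, then draw Gamma responses around those means.

```
set.seed(1)
nsim      <- 10000
eta_hat   <- 1.2    # linear predictor (log scale) for the new observation
var_fixed <- 0.05   # x' V x: sampling variance of the fixed-effect part
var_re    <- 0.30   # z' Sigma z: variance contributed by new random-effect levels
shape     <- 5      # assumed Gamma shape parameter

eta <- rnorm(nsim, mean = eta_hat, sd = sqrt(var_fixed + var_re))  # Gaussian on link scale
mu  <- exp(eta)                                                    # log-Normal means on response scale
y   <- rgamma(nsim, shape = shape, rate = shape / mu)              # Gamma draws with mean mu

quantile(y, c(0.025, 0.975))   # approximate 95% prediction interval
```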
null
CC BY-SA 4.0
null
2023-05-23T23:42:05.117
2023-05-29T18:04:39.257
2023-05-29T18:04:39.257
2126
2126
null
616749
1
null
null
1
28
In a 3 (condition) X 2 (time) mixed model ANOVA, if I hypothesised that anxiety in group A would increase from time 1 to time 2, and my results found no significant interaction but significant main effects of condition and time, can I accept the hypothesis or not? In addition, if the main effect of time showed paranoia increased from time 1 to time 2, and the main effect of condition showed condition A was higher than condition B but not C, can we infer causation? That is, can we infer that condition A caused an increase in anxiety, or can we only base this on the interaction between time and condition? I think I'm just confused about whether to accept the hypothesis based on the main effects or reject it based on the non-significant interaction.
Interpreting Mixed Model ANOVA
CC BY-SA 4.0
null
2023-05-24T00:18:18.513
2023-05-31T20:55:20.757
2023-05-31T20:55:20.757
121522
388690
[ "hypothesis-testing", "mixed-model", "anova", "causality", "psychology" ]
616750
1
null
null
0
7
Objective: Report summary statistics for each treatment x timepoint x condition sub-grouping when there is unequal sampling and repeated observations per subject. It is assumed that all the subjects come from the same population. I think I want to use this pooled standard deviation formula for spread, but am uncertain: [](https://i.stack.imgur.com/0ircJ.png) A 1% sample of "The Data": ``` df <- structure(list(id = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8), timepoint = c("t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t1", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2", "t2"), intervention = c("trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", 
"trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "trt", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl", "ctrl"), condition = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2), v_magnitude = c(0.75, 0.096, 0.38, 0.579, 0.164, 0.175, 0.309, 0.113, 0.293, 0.113, 0.594, 0.112, 0.434, 0.362, 1.121, 0.458, 0.287, 0.153, 0.258, 0.428, 0.218, 0.644, 0.254, 0.163, 0.13, 1.102, 0.163, 0.2, 0.348, 0.165, 0.361, 0.064, 0.481, 0.207, 0.16, 0.225, 0.275, 1.809, 0.366, 0.252, 0.335, 1.561, 1.173, 0.248, 0.706, 0.206, 0.347, 0.273, 1.57, 0.123, 0.279, 0.671, 0.11, 0.168, 0.31, 1.291, 0.356, 0.294, 1.902, 0.192, 0.291, 0.197, 0.543, 1.252, 0.201, 0.152, 0.221, 0.809, 3.09, 0.313, 0.832, 0.253, 0.648, 0.529, 1.179, 0.224, 0.322, 1.17, 1.436, 0.35, 0.178, 0.095, 1.657, 0.734, 0.17, 0.312, 0.287, 0.496, 0.618, 0.468, 0.453, 0.469, 0.364, 0.328, 0.387, 0.54, 0.279, 0.628, 0.253, 0.323, 0.602, 0.122, 0.323, 0.486, 0.351, 0.273, 0.751, 0.331, 1.003, 0.824, 2.79, 2.037, 0.637, 0.21, 0.466, 0.023, 1.133, 0.615, 0.331, 2.305, 0.521, 0.464, 0.355, 0.343, 0.734, 0.646, 0.263, 0.435, 3.169, 0.118, 0.92, 0.267, 0.285, 1.798, 0.187, 0.154, 0.92, 0.205, 0.292, 0.45, 0.149, 2.082, 0.721, 0.528, 0.542, 0.192, 0.083, 0.174, 0.163, 0.33, 0.281, 0.256, 0.246, 0.409, 0.133, 0.156, 0.183, 0.137, 0.413, 0.281, 0.727, 0.18, 0.076, 0.11, 0.101, 0.051, 0.225, 0.363, 0.158, 0.322, 0.298, 0.36, 0.253, 0.336, 0.127, 0.26, 0.167, 0.353, 0.252, 0.862, 2.506, 0.058, 0.318, 0.284, 0.437, 0.467, 0.127, 0.982, 3.257, 0.446, 0.154, 0.194, 3.849, 0.891, 0.778, 0.562, 0.24, 0.328, 0.186, 0.713, 0.274, 0.542, 0.331, 0.82, 0.364, 0.224, 0.637, 0.228, 0.719, 0.5, 0.598, 0.137, 1.393, 0.877, 0.325, 0.674, 1.302, 0.221, 0.145, 0.188, 0.248, 2.017, 
1.827, 0.301, 0.155, 0.149, 0.405, 0.7, 0.169, 0.078, 0.598, 1.302, 0.274, 0.209, 0.602, 0.102, 0.082, 0.134, 0.116, 0.268, 0.825, 0.297, 0.761, 0.461, 0.657, 0.659, 1.025, 0.666, 0.28, 0.525, 0.385, 0.598, 0.342, 0.203, 1.588, 1.038, 0.227, 0.186, 0.492, 0.239, 0.712)), row.names = c(NA, -261L), class = c("tbl_df", "tbl", "data.frame")) ``` 8 subjects, 4 per treatment condition. Every subject is measured multiple times at each timepoint and condition. For the mean, I would usually get the mean per id and then find the mean of that aggregated product. Here's what that routine looks like: ``` df |> group_by(id, intervention, timepoint, condition) |> summarise(subj_mean = mean(v_magnitude), subj_var = var(v_magnitude), n = n()) |> group_by(intervention, timepoint, condition) |> summarise(mean = mean(subj_mean), pooled_sd = pooled_sd(subj_var, n)) ``` [](https://i.stack.imgur.com/pvcGq.png) Where: ``` pooled_sd <- function(var, n, k = n()) sqrt(sum(var * ( n - 1 )) / (sum(n) - k)) ``` We ultimately analyze these data with a mixed effects model and perform hypothesis testing on the estimated marginal means.
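As a minimal sketch of the mixed-model route mentioned at the end (assuming `df` also has the `id` and `timepoint` columns used in the pipeline above; `lme4`/`emmeans` are just one possible toolset): ``` library(lme4) library(emmeans)  # random intercept per subject handles the unequal number of observations per id fit <- lmer(v_magnitude ~ intervention * condition + (1 | id), data = df)  # model-based (estimated marginal) means and SEs per cell, weighting subjects # rather than raw observations; timepoint could be added to the formula as well emmeans(fit, ~ intervention * condition) ```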
What's the correct way to report location and spread descriptive stats when you have repeated observations and unequal sampling?
CC BY-SA 4.0
null
2023-05-24T00:36:05.473
2023-05-24T00:36:05.473
null
null
285797
[ "r", "descriptive-statistics" ]
616751
2
null
190182
0
null
There are basically two kinds of splines: regression splines and smoothing splines. Both can become P-splines by adding penalties that balance fit to the data against smoothness of the curve (and without needing to choose the number and position of knots, as the penalty handles that). The difference between a regression spline and a smoothing spline is that the former includes both the spline (the non-linear function f(.)) and regression coefficients for linear variables, while the latter only has the spline (the non-linear function f(.)).
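As a rough illustration of the distinction (a minimal sketch with simulated data; the `ns()` knot count and the `bs = "ps"` penalized basis are just example choices): ``` library(splines)  # regression-spline bases with analyst-chosen knots library(mgcv)     # penalized (P-) splines where the penalty picks the smoothness  set.seed(1) n <- 200 x <- runif(n); z <- rnorm(n) y <- sin(2 * pi * x) + 0.5 * z + rnorm(n, sd = 0.3)  # Regression spline: spline basis for x plus an ordinary linear coefficient for z fit_rs <- lm(y ~ ns(x, df = 5) + z)  # Penalized spline: smoothness chosen by the penalty rather than by knot placement fit_ps <- gam(y ~ s(x, bs = "ps") + z, method = "REML") ```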
null
CC BY-SA 4.0
null
2023-05-24T00:57:34.070
2023-05-24T00:58:42.630
2023-05-24T00:58:42.630
388692
388692
null
616752
1
616846
null
4
216
Consider the following process for generating a random sample: - Sample $X_1, X_2, \dots, X_n \sim \mathcal{N}(0,1)$ - Compute $M = \max\limits_i |X_i|$ - Scale the values to get $Z_i = X_i / M$ Can we say anything about the convergence of distribution of $Z_i$ as $n\to\infty$? The $Z_i$s are clearly identically distributed, but they aren't independent since for any $n$ almost certainly exactly 1 $Z_i$ will take the value -1 or 1. Here's what the points are distributed like for various values of $n$ (65,000 trials each): $n=16$ [](https://i.stack.imgur.com/jHtbM.png) $n=64$ [](https://i.stack.imgur.com/5Os0f.png) $n=4,096$ [](https://i.stack.imgur.com/iqcuE.png) These all use 100 evenly spaced bins, hence the spikes at the ends for the smaller values of $n$. Does the limiting distribution have a name? Edit: I should clarify that the obvious candidate thing for it to converge to would be a point mass on 0, but I'm not sure whether it does that or not. Renamed $Y$ to $Z$ for consistency with the plots.
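For anyone who wants to reproduce the histograms, here is a minimal simulation sketch (the bin count and number of trials are arbitrary choices): ``` set.seed(1) n <- 4096; trials <- 1000 # each column is one trial: draw X_i, divide by the largest absolute value z <- replicate(trials, { x <- rnorm(n); x / max(abs(x)) }) hist(z, breaks = 100, freq = FALSE, main = paste("n =", n)) ```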
Does the following distribution converge to anything?
CC BY-SA 4.0
null
2023-05-24T01:03:07.413
2023-05-27T09:27:03.750
2023-05-24T19:17:08.423
82535
82535
[ "normal-distribution", "convergence", "extreme-value" ]
616753
1
null
null
0
15
I am trying to determine what procedure I should use to feature-engineer the most descriptive possible dataset to predict a binary outcome. The dataset has count-valued variables with different maximum values, as well as continuous variables. I know I can standardize or normalize the continuous variables, and the ones in my dataset are well approximated by a Gaussian distribution, but this makes the continuous values much smaller than the values of the count variables ($\geq 10$ for all count-valued covariates). Similarly, I don't know whether it is advisable to adjust count data (which in this case seem to follow Poisson distributions) via normalization or standardization. I have read that these problems are algorithm-specific; do these transformations affect neural network approaches?
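As a minimal sketch of the kind of preprocessing being asked about (hypothetical data; the `log1p` step for the counts is just one common option, not a recommendation): ``` set.seed(1) cont  <- rnorm(100, mean = 5, sd = 2)   # roughly Gaussian continuous feature count <- rpois(100, lambda = 20)        # Poisson-like count feature, values often >= 10  # standardizing both puts them on a comparable scale for a neural network scaled <- data.frame(cont = scale(cont), count = scale(log1p(count))) summary(scaled) ```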
How to normalize dataset with mixed data (continuous and count)?
CC BY-SA 4.0
null
2023-05-24T01:05:01.100
2023-05-24T01:05:01.100
null
null
246997
[ "neural-networks", "normalization", "count-data", "standardization", "mixed-type-data" ]
616754
2
null
616679
6
null
Using @Henry's suggestion of recursion over the dice, here is an R function that will return the full PMF of the sum resulting from rolling a set of fair dice with number of faces of each die given by the vector `d`. The function manages combinatorial explosion by summing the probability by each possible cumulative die roll at each iteration. So the number of rows in the table after iteration $i$ is: $$1-i+\sum_{j=1}^id_j$$ ``` library(data.table) droll <- function(d) { dt <- data.table(v = seq_len(d[1]), p = 1/d[1]) for (x in d[-1]) { dt <- dt[ , .( v = c(outer(v, seq_len(x), "+")), p = rep(p/x, x) ) ][, .(p = sum(p)), v] } dt } # 4d8 + 2d10 with( droll(rep.int(c(8, 10), c(4, 2))), barplot(p, names.arg = v, main = "4d8 + 2d10") ) ``` [](https://i.stack.imgur.com/w6hGZ.png) ``` # 24d6 + 16d8 + 17d10 + 9d12 + 11d20 system.time(dt <- droll(rep.int(c(6, 8, 10, 12, 20), c(24, 16, 17, 9, 11)))) #> user system elapsed #> 0.16 0.01 0.21 with(dt, barplot(p, names.arg = v, main = "24d6 + 16d8 + 17d10 + 9d12 + 11d20")) ``` [](https://i.stack.imgur.com/5Ovz5.png)
null
CC BY-SA 4.0
null
2023-05-24T01:29:22.263
2023-05-24T14:54:25.373
2023-05-24T14:54:25.373
214015
214015
null
616755
1
616764
null
3
137
As specified on the [NIST](https://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm) website, the 3rd assumption for the K-S test is "... the distribution must be fully specified. That is, if location, scale, and shape parameters are estimated from the data, the critical region of the K-S test is no longer valid. It typically must be determined by simulation". With that assumption in place, when would the use of the K-S test be valid? Either you are generating data from a known distribution, in which case you already know whether the distributions are different, or you are using data from an unknown distribution, which means you have to use the data to estimate the location, shape and scale parameters. Those who came up with the K-S test are much smarter than I am, so I am obviously misunderstanding something here. What am I misunderstanding that makes this test usable? Is that assumption really weak and just routinely ignored without much impact? It just appears to me that the assumption is so rigid that the test has no practical use. For context, and the reason I am trying to understand the test better: I am looking at comparing differences in the distribution of travel distances between two groups. I will try to provide appropriate example data soon, but wanted to get the question out there as soon as possible.
How does the assumption that the distribution must be fully specified when using a Kolmogorov-Smirnov practically impact comparison of distributions?
CC BY-SA 4.0
null
2023-05-24T01:35:41.297
2023-05-24T05:49:52.703
null
null
297688
[ "hypothesis-testing", "assumptions", "kolmogorov-smirnov-test" ]
616756
1
617190
null
0
27
From my understanding, a large value of the C parameter in an SVM decreases the number of misclassified instances and narrows the margin. My question is: considering an SVM with a linear kernel and a soft margin, does increasing the value of C always guarantee that the margin does not get bigger, or is there a chance that it still can?
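For anyone who wants to check this empirically, here is a minimal sketch using `e1071` that computes the soft-margin width $2/\lVert w \rVert$ for several values of C on toy data (it illustrates the typical behaviour, not a proof either way): ``` library(e1071) set.seed(1) n <- 100 x <- matrix(rnorm(2 * n), ncol = 2) y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0, 1, -1))  margin_width <- function(C) {   fit <- svm(x, y, kernel = "linear", cost = C, scale = FALSE)   w <- t(fit$coefs) %*% fit$SV   # primal weight vector   2 / sqrt(sum(w^2))             # soft-margin width } sapply(c(0.1, 1, 10, 100), margin_width) ```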
Does C parameter in Linear SVM guarantee the margin does not increase?
CC BY-SA 4.0
null
2023-05-24T01:36:07.010
2023-05-29T07:51:57.003
null
null
388693
[ "machine-learning", "svm" ]
616757
1
616763
null
1
35
I dug through previous simple slope/effect questions and couldn't find what I was looking for, but happy to be pointed to it if it exists. On p. 17 of Jaccard and Turrisi (2003), they offer the following basic example of a regression equation with a single interaction term: $Y = a + \beta_1 X + \beta_2 Z + \beta_3 XZ + \epsilon$ They explain how to interpret the coefficients for X and Z thus (p. 24): > The coefficient for X estimates the effect of X on Y when Z is at a specific value, namely, when Z = 0. The coefficient for Z estimates the effect of Z on Y when X is at a specific value, namely, when X = 0. These are simple slopes (or "simple effects" as J & T call them). This makes sense to me. However, I don't understand how to interpret the non-interaction coefficients in the following case, which adds W and XW as new predictors: $Y = a + \beta_1 X + \beta_2 Z + \beta_3 W + \beta_4 XZ + \beta_5 XW + \epsilon$ When only one interaction term is present, it is straightforward to interpret $\beta_1$ and $\beta_2$. But in a model where X is present in two (or more) interaction terms, how is $\beta_1$ to be interpreted? Is it now the effect of X on Y when both Z and W = 0, or something else? I'm also guessing the interpretation of $\beta_2$ remains unchanged as Z only appears in one interaction. Thanks in advance. Reference Jaccard, J., & Turrisi, R. (2003). Interaction effects in multiple regression (No. 72). Sage.
Interpreting simple slopes in OLS regression with multiple interactions
CC BY-SA 4.0
null
2023-05-24T02:18:25.577
2023-05-24T05:30:59.823
null
null
26625
[ "regression", "interaction", "regression-coefficients", "simple-effects" ]
616759
2
null
616746
0
null
#### Theoretical Underpinnings and Reliability First, it seems you have a clear hypothesis on what you want to do, but you don't have as clear theoretical reasons for including some of your variables (I'm sure you do, but they are not listed here). In order to apply any form of inference to your statistics, you should have some basic understanding of what variables are important to include and which are not so important. Some of your variables clearly fit (network building for example), while others do not seem related to your research question (e.g. cost efficiency). I'm guessing your outcome variable, based on your question, is entrepreneurial opportunity, and given it is a 4-point Likert scale composite, it is not surprising that the data is not normally distributed. Is this the case when the composite is aggregated together (such as a sum)? What distribution does it belong to? With some more information, one can ascertain how this should be modeled. If you do end up aggregating your items together, you should at the very least check reliability of your items and see if they even belong together. Traditional ways are Cronbach's $\alpha$, but if you prefer a factor analysis approach, McDonald's $\omega$ is often superior. In any case, you just want to make sure your items are actually related. #### Control Groups My biggest concern is that you do not have a control group to compare against, so a t-test isn't even on the table. I'm not sure how far you are into the data collection process, but I would consider measuring these things for a group that is "urban" so you do not make wild claims about rural women without having some baseline to compare against. While you can run the analysis as-is and compare to some past data, what conclusions you can draw from such an analysis may be very limited depending on a variety of influences. #### Regression and Example In any case, if you create some composites with each of your variables here, you could simply run a regression and attempt to either normalize your response variable or use an appropriate link function for your regression. Here I have simulated a regression in R with Poisson-distributed (aka right-skewed) data to show how it can be done. First, I simply create some data mimicking at least some of your variables. ``` #### Simulate Data #### library(tidyverse) set.seed(123) group <- factor( rep(1:2, each=500), labels = c("urban","rural") ) network <- rpois(n=1000,lambda = 1) * 2 cost <- rpois(n=1000,lambda = 1) * 2 entrep <- rpois(n=1000,lambda = .5) * 1 + (.5*network)+(.5*cost) #### Create Tibble and Check Response #### tib <- data.frame(network,cost,entrep) %>% as_tibble() hist(entrep) # response is right skewed ``` Visualizing our data, we can see our variables are all right-skewed: ``` #### Visualize Data #### tib %>% gather() %>% ggplot(aes(x=value))+ geom_histogram( binwidth = 2, color = "black", fill = "steelblue" )+ facet_wrap(~key, scales = "free") ``` [](https://i.stack.imgur.com/weTZp.png) Then I fit a regression using a Poisson link function to match our skewed variable and suppress the intercept with `-1` to make interpretation easier since this is technically a categorical regression (with group as our categorical factor): ``` #### Run Regression #### fit <- glm( entrep ~ network + cost + group - 1, data = tib, family = poisson ) summary(fit) ``` The results are shown below: ``` Call: glm(formula = entrep ~ network + cost + group - 1, family = poisson, data = tib) Coefficients: Estimate Std. 
Error z value Pr(>|z|) network 0.166067 0.008632 19.239 < 2e-16 *** cost 0.164832 0.008271 19.928 < 2e-16 *** groupurban 0.137309 0.043286 3.172 0.001513 ** grouprural 0.149783 0.043006 3.483 0.000496 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 2696.36 on 1000 degrees of freedom Residual deviance: 375.24 on 996 degrees of freedom AIC: 2965.8 Number of Fisher Scoring iterations: 4 ``` Because I have suppressed the intercept, each group is compared. We can see that rural group has a slightly higher average entrepreneurial opportunity score than the urban group. We can also determine that the other continuous variables are significant, though we should check their standardized beta coefficients to see the magnitude of the effects: ``` #### Check Betas #### library(lm.beta) lm.beta(fit) ``` We can see that cost has the most substantial impact, yielding a $.16$ standard deviation increase in the response with each standard deviation increase in cost: ``` Call: glm(formula = entrep ~ network + cost + group - 1, family = poisson, data = tib) Standardized Coefficients:: network cost groupurban grouprural 0.15508255 0.16000287 0.02523122 0.02780836 ```
null
CC BY-SA 4.0
null
2023-05-24T04:16:36.217
2023-05-24T04:16:36.217
null
null
345611
null
616760
1
null
null
0
8
When changing to a more sophisticated model, do we ALWAYS need to change the experimental design and/or collect more data? For example, after fitting a multiple linear model to the data, I checked the model fit by plotting the ordinary residuals (the residuals without normalization) vs. the fitted values and found a pattern in the plot. There is good reason to suspect an interaction between the regressors, so I decided to move to a more sophisticated model. In such a case, do I ALWAYS have to change the experimental design and/or collect more data to fit the new model?
Relationship between the model, experimental design and data collection
CC BY-SA 4.0
null
2023-05-24T04:43:14.253
2023-05-24T04:43:14.253
null
null
383728
[ "multiple-regression", "interaction", "experiment-design" ]
616761
1
null
null
1
40
I have a conceptual question that may come off as dumb, but I'm just trying to see if I understand A/B testing properly, as I've only studied it and not worked with it much in practice. Let's say we are running an A/B test on ads on the homepage of some mobile app. This experiment tests some UI change, and we are concerned with CTR. We calculate the sample size required based on CTR baseline rates, MDE = 1%, alpha = 0.05, and power = 0.8. Let's say we get a required sample size of 1M, and our daily active users (DAU) number 10M. So we conclude we need to allocate ~10% of users to this experiment, and we choose to run it for 28 days because we want to test a 21-day retention metric as well. Now our team says that, because of constraints, we can only run this experiment for 10 days. - Does this impact the sample size calculations? - Does the % of DAU we need to allocate change? Intuitively, #2 seems like a yes - meaning if we are going to run the test for a shorter length, we would want to increase the sample size to get the same confidence in our result. But looking at my inputs for calculating these numbers, the answer seems to be no? I.e. the sample size will still be 1M, and we still get 10M DAU? Can someone please point out where I might have faulty logic or a poor understanding of A/B testing here?
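For reference, the kind of sample-size calculation described above can be sketched in R (the 5% baseline CTR is a made-up placeholder): ``` p_baseline <- 0.05   # hypothetical baseline CTR mde <- 0.01          # absolute minimum detectable effect power.prop.test(p1 = p_baseline, p2 = p_baseline + mde,                 sig.level = 0.05, power = 0.8) # note the inputs are rates, effect size, alpha and power only; test duration # never enters this formula, which is the tension the question is getting at ```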
A/B testing and Experiment Length
CC BY-SA 4.0
null
2023-05-24T04:53:07.867
2023-06-02T02:03:45.630
null
null
326432
[ "hypothesis-testing", "statistical-significance", "experiment-design" ]
616762
1
616795
null
1
35
I was reading Yarin Gal's PhD [thesis](https://www.cs.ox.ac.uk/people/yarin.gal/website/thesis/thesis.pdf) regarding the use of dropout as a technique to turn a neural network into a Bayesian neural network. On page 32 of the PDF (page 18 of the thesis) he writes that the probability distribution for the output $y^*$ at a new input point $x^*$ can be obtained by integrating over the space of the parameters, i.e. the weights of the neural network: $$ p(y^*|x^*,D)= \int_w p(y^*|x^*,w) p(w|D) \mathrm{d} w ;$$ and he simply calls this procedure inference. But what is the precise name of this procedure? Some say BNNs rely on [model averaging](https://arxiv.org/pdf/2007.06823.pdf), while other sources say that they marginalize over the weights. So what is the correct definition? I think that BNNs numerically approximate the expected a posteriori estimator, but I am not sure that's correct.
question on the computation of the predictive uncertainty in bayesian neural networks
CC BY-SA 4.0
null
2023-05-24T04:53:15.467
2023-05-24T15:25:39.780
2023-05-24T12:41:25.630
358871
275569
[ "neural-networks", "bayesian", "predictive-models" ]
616763
2
null
616757
2
null
#### Algebraic Expression We can just as easily reshape the linear equation you have below (I changed $a$ to $\beta_0$ for consistency in notation): $$ y = \beta_0 + \beta_1{X} + \beta_2{Z} + \beta_3{W} + \beta_4{XZ} + \beta_5{XW} + \epsilon $$ Into this congruent equation, re-expressed with $X$ as the conditional effect (see Aiken & West, 1991, p.13): $$ y = (\beta_1 + \beta_4Z + \beta_5{W})X + \beta_0 + \beta_2Z + \beta_3W + \epsilon $$ You can then see that the intercept and individual main effects of $\beta_2Z$ and $\beta_3W$ are unchanged (and algebraically, neither is $\beta_1X$). The only parts that are not additive here are the products formed from $\beta_4XZ$ and $\beta_5XW$. We are only estimating the products of $X$ with $W$ and $Z$ explicitly for interactions. So to be explicit, the $XW$ here is an interaction, albeit a simple two-way interaction between $X$ and $W$. I assume here that three-way interactions have not been modeled in this equation, else it would be included as another coefficient. The interpretation for this model is still the same. You set all the $\beta$ coefficients to zero, thereafter a change in one coefficient is conditional on all the rest being equal to zero. To find out what the effect of $X$ is on the outcome, we can just plug this into our earlier re-expressed equation like so, leaving only our $\beta_0$ and $\beta_1$ alone: $$ y = (\beta_1 + 0 + 0)X + \beta_0 + 0 + 0 + \epsilon $$ Which simplifies to: $$ y = \beta_0 + \beta_1{X} + \epsilon $$ #### Simulated Example Using your example above, we can simulate this directly in R, setting all of the mean and standard deviations to their defaults with $n = 1000$. ``` #### Simulate Data #### set.seed(123) n <- 1000 x <- rnorm(n) y <- rnorm(n) z <- rnorm(n) w <- rnorm(n) ``` Then we can fit the data into a regression which explicitly models the main effects of $X$,$W$,and $Z$, along with the interactions with $X$. ``` #### Fit Data #### fit <- lm( y ~ x + z + w + x:z + x:w ) summary(fit) ``` The results can be shown below: ``` Call: lm(formula = y ~ x + z + w + x:z + x:w) Residuals: Min 1Q Median 3Q Max -2.7916 -0.6280 -0.0261 0.6446 3.4334 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.01657 0.03105 -0.534 0.5937 x 0.02560 0.03073 0.833 0.4050 z 0.05043 0.03112 1.620 0.1055 w -0.01367 0.03104 -0.440 0.6598 x:z -0.05401 0.03003 -1.798 0.0724 . x:w -0.05447 0.03035 -1.795 0.0730 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9761 on 994 degrees of freedom Multiple R-squared: 0.009623, Adjusted R-squared: 0.004641 F-statistic: 1.932 on 5 and 994 DF, p-value: 0.08655 ``` Using the example I gave earlier, we can simply set $W$ and $Z$ to zero and $X$ to an arbitrary value of $2$. By creating new data, we simply fill this into our linear equation and use `predict` to see what it should come out as: ``` #### Create New Data #### new.data <- data.frame( z = 0, w = 0, x = 2 ) #### Predict New Data #### pred <- predict(fit, newdata = new.data) ``` The estimated value of the prediction is $0.03462692$. 
If we calculate our coefficients the same way we spoke about earlier, we should get the same result as the `pred` object (I have commented which of our $\beta$ coefficients go where in hash comments next to the R code, as by default R codes the coefficients by their sequential order): ``` #### Check if Predicted Outcome Matches Coefficients #### pred == ( fit$coefficients[1] # beta 0 + (fit$coefficients[2] * 2) # beta 1 + (fit$coefficients[3] * 0) # beta 2 + (fit$coefficients[4] * 0) # beta 3 + (fit$coefficients[5] * 0) # beta 4 + (fit$coefficients[6] * 0) # beta 5 ) ``` By doing so, our prediction comes true: ``` 1 TRUE ``` #### Citation Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Sage Publications.
null
CC BY-SA 4.0
null
2023-05-24T05:25:05.647
2023-05-24T05:30:59.823
2023-05-24T05:30:59.823
345611
345611
null
616764
2
null
616755
4
null
The one-sample K-S test isn't all that useful, partly for the reason you give. It's not useless; sometimes you do care about a precise null distribution. For example, you might want to know if a distribution of $p$-values is $U[0,1]$, or you might be testing a random number generator ([R uses Massart's inequality](https://notstatschat.rbind.io/2018/08/01/testing-probability-distribution-generators/), which is basically the one-sample K-S test). You can fairly easily generate a null distribution by simulation. That's how the Lilliefors test for Normality works: you compute the K-S statistic comparing your data distribution to a Normal with the same mean and variance. The null distribution is generated by taking numbers actually from a Normal distribution and doing the same thing to them. We work with tables of the Lilliefors test rather than doing simulations each time, but that's because the test was invented back when simulations were slow and difficult. You could do the same thing for any other parametric family if you wanted to. Also, you can do a two-sample K-S test, where the null hypothesis is that the two samples are from the same distribution, but without specifying the distribution. This works: similar maths to what gives you the one-sample test gives you the two-sample test. The other limitation with all of these is that they may have low power against the alternatives you're specifically interested in. Unless you're testing a random number generator, you probably have some idea of important or plausible alternatives, and you would prefer a test that is powerful against those. For example, if you're testing for a power-law distribution, you'd probably want to compare to other well-motivated distributions with long tails, such as exponential or log-Normal, rather than to all possible alternative distributions.
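To make the simulation idea concrete, here is a minimal Lilliefors-style sketch (sample size, parameter values, and number of replicates are arbitrary): ``` set.seed(1) x <- rnorm(50, mean = 10, sd = 3)   # pretend these are the observed data stat <- ks.test(x, "pnorm", mean(x), sd(x))$statistic  # null distribution of the statistic when the Normal parameters are re-estimated null_stats <- replicate(2000, {   y <- rnorm(length(x))   ks.test(y, "pnorm", mean(y), sd(y))$statistic }) mean(null_stats >= stat)            # simulation-based p-value ```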
null
CC BY-SA 4.0
null
2023-05-24T05:49:52.703
2023-05-24T05:49:52.703
null
null
249135
null
616765
2
null
518286
5
null
Ito's integral has nothing to do with the area under a curve and no connection with rectangles, random or otherwise. Let me try to share a motivation in the simplest language I can manage. There's really no better justification for Ito's formula than talking about stock prices, so let's suppose we have a stock, United Marshmallow, and an old-fashioned stock ticker. Let's suppose that, at each tick, the price of United Marshmallow goes up by 1 or down by 1. Let's say the stock price starts at 10, and the next six prices are 11, 12, 11, 12, 11, 10. In other words, the fluctuations in prices are: $$+1, +1, -1, +1, -1, -1$$ (I know that under this model, the stock price could go negative, but let's ignore that for now.) Let's say we're a trader and we want to come up with a strategy for trading this stock. At time $t$, all we know is what's happened at that time, so whatever amount of stock we choose to buy (or sell) at time $t$ cannot depend on information after time $t$. Let $W_t$ be the total fluctuation in the stock up to time $t$. We get this by adding up the fluctuations up to that time. So $W_1 = 1$, $W_2 = 2$, $W_3 = 1$, $W_4 = 2$, $W_5 = 1$, $W_6 = 0$. One simple trading strategy is to buy $W_t$ shares at time $t$ and sell them at time $t+1$. The idea here is that, if the stock has gone up a lot, then we should be more inclined to make a bet on it going up again, and if it's gone down a lot, we should be inclined to bet on it going down again. Note that $W_t$ can be negative, which would correspond to selling the stock (short sales are allowed) and then buying it back again. The profit from this strategy is just the sum over the amount we buy at time $t$ multiplied by fluctuation in price from time $t$ to time $t+1$ $$\sum_t W_t (W_{t+1} - W_t)$$ It looks a bit like a Riemann sum, but it's not, because the price can go down, so the "base" of the "rectangle" makes no sense. And also, our $W_t$ has to be strictly on the left-hand side of the interval from $t$ to $t+1$, or else we'd be using future information. Let's look at the cumulative profit we would get if we did this strategy at each possible time point. After time $1$, we bet $W_1 = 1$ dollars, and the next fluctuation is $+1$, so our profit so far is $1 \times 1 = 1$. Next, we bet $W_2 = 2$ dollars, but then the stock goes down, so our "profit" is $2 \times -1 = -2$. The overall profit so far is $1 -2 = -1$. Continuing in this fashion, we get the following table: |time $t$ |2 |3 |4 |5 |6 | |--------|-|-|-|-|-| |cumulative profit by time $t$ |1 |-1 |0 |-2 |-3 | Now let's compare this with an integral. We are doing some sort of "integral" of the function $f(W) = W$, so maybe there's a relationship with the integral of this function? |time $t$ |2 |3 |4 |5 |6 | |--------|-|-|-|-|-| |cumulative profit by time $t$ |1 |-1 |0 |-2 |-3 | |$W_t$ |2 |1 |2 |1 |0 | |$W_t^2/2$ |2 |1/2 |2 |1/2 |0 | Perhaps you notice that they match up exactly if you subtract $t/2$. |time $t$ |2 |3 |4 |5 |6 | |--------|-|-|-|-|-| |cumulative profit by time $t$ |1 |-1 |0 |-2 |-3 | |$W_t$ |2 |1 |2 |1 |0 | |$W_t^2/2$ |2 |1/2 |2 |1/2 |0 | |$W_t^2/2 - t/2$ |1 |-1 |0 |-2 |-3 | We get $\sum^t W_t(W_{t+1} - W_t) = W_t^2/2 - t/2$. You can do the same calculation with any other sequence of $+1$ and $-1$ as the price fluctuations and it will still be true! This is the mathematical fact that underpins Ito calculus. 
If you take some sort of limit, time becomes continuous, $W_t$ becomes Brownian motion, and you get the famous formula $$\int_0^t W(t) dW = W(t)^2/2 - t/2$$ Here, both sides are random variables. But the reason why the formula is true is not because of some application of the central limit theorem. It's because the mathematical fact that we checked in the example above holds for any set of fluctuations we could have chosen. This formula extends to functions of $W(t)$, and you get the chain rule of Ito calculus. To summarise, Ito's integral is really a way to calculate the (random) profit from a strategy which someone would make if they were trading a stock which moved up and down randomly. It's not an attempt to compute the area under any kind of curve, and it doesn't have much to do with the Riemann integral, except for a superficial similarity in the definition, and absolutely nothing to do with the Lebesgue integral, except that Lebesgue integration is required to make the "passage to the limit" part work formally.
null
CC BY-SA 4.0
null
2023-05-24T06:03:58.923
2023-05-24T06:03:58.923
null
null
13818
null
616767
2
null
527202
0
null
The best time to add data is during your modelling phase. This is when you are experimenting and seeking the best metrics (e.g. accuracy). After you have added more data, added more epochs, reduced or added features, done hyperparameter tuning, and produced a satisfactory model (what counts as satisfactory depends on your case), then you can proceed to deployment. The hyperparameter tuning phase comes before deployment, not after it.
null
CC BY-SA 4.0
null
2023-05-24T06:56:42.147
2023-05-24T06:56:42.147
null
null
358853
null
616768
1
616770
null
4
667
I am currently working on my master thesis. I have determined a simulation model for tax returns and to verify this I would like to use true values. I would like to determine the correlation between my simulated and true values. However, the problem is that I have simulated zero values for some years and the true values are >0. Can I determine the correlation in R by means of "cor.test"? In general, what is the most useful way to show the correlation between my simulated and true values? [](https://i.stack.imgur.com/GnoXI.png)
How do I deal with many zero values in terms of correlation?
CC BY-SA 4.0
null
2023-05-24T07:01:02.163
2023-05-24T13:00:43.000
2023-05-24T08:15:01.437
121522
388706
[ "r", "correlation", "error", "zero-inflation" ]
616769
2
null
616725
0
null
The parameters are labeled, so you can use those labels to define a new parameter that is the difference between slopes. Here is an example (where the slopes are indirect effects, but that part isn't relevant for you): [http://www.statmodel.com/discussion/messages/11/24128.html?1602803341](http://www.statmodel.com/discussion/messages/11/24128.html?1602803341)
null
CC BY-SA 4.0
null
2023-05-24T07:52:07.893
2023-05-24T07:52:07.893
null
null
335062
null
616770
2
null
616768
8
null
Zero is a value like any other value to each kind of correlation. Each correlation takes zeros into account in its way: - as implying a deviation from the mean of either variable in the case of Pearson correlation - as ranked suitably in the case of Spearman correlation or Kendall correlation. One commenter suggested a preference for rank correlation without giving a reason. I'd back up here and ask why you think correlation is interesting or useful. Pearson correlation is best thought of as measuring how far the data follow a linear relationship and rank correlation as measuring how far the data follow a monotonic relationship. A presence or even abundance of zeros may or may not be consistent with either idea. It's possible that you seek to use correlation as a measurement of agreement, but neither measure works well for that purpose. Consider that the correlation (any version) between $x$ and $bx$ for $b \gg 0$ is identical to $1$ but such $x$ and $bx$ are very different. The point is strengthened by noting that any $a$ added so that $y = a + bx$ does not affect any correlation. If you want a single measurement of agreement, concordance correlation may help; but it is more likely that you should be looking at the entire pattern of differences, observed MINUS simulated. And, last but certainly not least, a plot is indispensable here.
null
CC BY-SA 4.0
null
2023-05-24T08:04:03.283
2023-05-24T10:57:25.050
2023-05-24T10:57:25.050
22047
22047
null
616772
2
null
616768
9
null
Because you are comparing simulated vs. true values, a correlation between the two is not the best way to evaluate the quality of your simulations. This is easy to illustrate: imagine your model is predicting temperature across a bunch of cities but you accidentally messed up your code and it has added 1000 degrees to every temperature prediction. Adding this 1000 (or 1 million, or any number) does not affect the correlation! You can get a perfect correlation between these crazy simulated temperatures and the true temperatures even though they are terrible predictions. In other words, correlation ignores the offset or bias in the predictions. Here's a simple visualisation of this type of situation: [](https://i.stack.imgur.com/thaLg.png) Instead, a better solution is to compare the true and simulated values using a metric such as Root Mean Square Error (RMSE) or Mean Absolute Error (MAE). These calculate the quantitative error and in the earlier example would tell you that the predictions are off by 1000 degrees. Your plot is informative: it clearly shows the peculiar feature of the model prediction error that you mention, which is that the model is predicting zero across a wide range of true positive values. I would therefore show it alongside any such error metric calculation, ideally with a (partly transparent) 1:1 line to highlight how far each prediction is from the true value.
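As a tiny illustration of such metrics (the numbers are made up): ``` true_vals <- c(0, 120, 340, 80, 0, 560) sim_vals  <- c(0,   0, 310, 95, 20, 500)  rmse <- sqrt(mean((sim_vals - true_vals)^2)) mae  <- mean(abs(sim_vals - true_vals)) c(RMSE = rmse, MAE = mae)   # unlike correlation, both punish a constant offset ```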
null
CC BY-SA 4.0
null
2023-05-24T08:12:00.773
2023-05-24T13:00:43.000
2023-05-24T13:00:43.000
121522
121522
null
616773
1
null
null
0
22
I'm working with a dataset in which each row is an event with a number of categorical properties, most of which are also nominal. Each of these properties has around 5-10 possible values, but some columns can have more than 200. There is a column which denotes whether an event had a positive, negative or neutral outcome, and a column with a certain result code. I have already made a purely statistical analysis of which values have a larger or smaller chance of being positive or negative or of having a certain result code. This works great when looking at a single property, but when combining them there are too many categories, each with only a handful of data points, making statistically significant differences difficult to find. There are two things I want to do with this data. First, I want to see which combinations of parameters have the highest or lowest chance of a positive result. For this, I'm not sure what to do, as there are no numerical columns to use to calculate correlations etc. I could sum over sets where the combinations of parameters are the same (so all events where column A has the same value, column B has the same value, etc.), but I think the sets would become too small, as there is also a lot of missing data. I also want to use machine learning so the outcome of certain combinations of parameters can be estimated, but again, I'm not sure what to use because of the categorical data. I think one-hot encoding would result in too many features (300+ that are mostly 0), and since the features are nominal, a single column with different numbers would create a bias in the data. Any insights?
How to perform statistics and machine learning on categorical, nominal data with large cardinality
CC BY-SA 4.0
null
2023-05-24T08:24:49.310
2023-05-24T08:35:17.567
null
null
388715
[ "machine-learning" ]
616774
1
null
null
0
11
I have a set of documents like a list of memory register information, a list of pinout information and so on, for different types of equipments that are installed in different facilities. Each facility may have 5 equipments of different types. It sums up 60 facilities in total and so far I have created one set of these documents for 20 facilities. The number of variation of these equipments may increase as I move on, but I can say I have a good sample of what might comes ahead. Is there a way to create a machine learning model which I would throw in these already made documents (consider these documents of the same format) to train the model and to improve this model as it goes, so all I have to do from this point on, besides creating new documents for recently discovered documents and retraining the model, is telling the model what type of equipment I want to generate the set of documents and it comes up with what documents I should pick? What type of model should I go for? Is there any paper or someone who has done something similar?
Automatic document generation using AI
CC BY-SA 4.0
null
2023-05-24T08:25:46.293
2023-05-24T08:25:46.293
null
null
351672
[ "machine-learning", "classification" ]
616775
1
null
null
0
13
I noticed that if you have a simple mediation model X -> M -> Y where all three variables are latent, AMOS doesn't complain and I can have a good fit, well-identified model. However, when I replace X with an observed variable, the model becomes unidentified and I am required to add one constraint. Among all the things I've tried, only one has worked, although it doesn't really make sense in practice: setting the mean of X to 0 (and for some reason by the way, setting it to a value other than 0 doesn't work). Any ideas about what I'm missing would be greatly appreciated!
Simple mediation model with latent variables becomes unidentified with observed factor
CC BY-SA 4.0
null
2023-05-24T08:32:38.193
2023-05-24T08:46:02.593
2023-05-24T08:46:02.593
201461
201461
[ "structural-equation-modeling", "latent-variable", "identifiability", "amos" ]
616776
2
null
616773
0
null
One-hot encoding is fine, but you need to use sparse matrices (you only store the positions of non-zero entries, which avoids memory overload). There are many ML libraries that handle this; bag-of-words language models are the typical example (e.g. scikit-learn, glmnet). For combinations, you might look into market basket analysis, which has computationally efficient methods for finding combinations, along with various metrics.
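A minimal sketch of the sparse one-hot encoding idea in R (toy data; the column names and cardinalities are arbitrary): ``` library(Matrix) set.seed(1) d <- data.frame(A = sample(letters[1:5], 1000, replace = TRUE),                 B = factor(sample(1:200, 1000, replace = TRUE)))  # high-cardinality column  X <- sparse.model.matrix(~ . - 1, data = d)   # wide but memory-cheap dim(X)                                        # usable directly by e.g. glmnet ```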
null
CC BY-SA 4.0
null
2023-05-24T08:35:17.567
2023-05-24T08:35:17.567
null
null
27556
null
616777
1
null
null
0
11
A validation report of an analytical method (to determine the assay of a product) is given, and the goal is to determine the overall uncertainty of the method. I started by calculating the standard error of the precision of the method. The precision can usually be decomposed into three parts: the intermediate precision, the repeatability and the reproducibility. In this case, the reproducibility is not given. Let us denote by $\sigma_P$, $\sigma_{ip}$ and $\sigma_r$ the standard error of the precision, the standard error of the intermediate precision and the standard error of the repeatability. My thinking is that, to determine $\sigma_P$, I can use the following relation: $$ \sigma_P = \sqrt{\sigma_{ip}^2 + \sigma_r^2} \ \ (E1). $$ However, I do not see the mathematical justification for that. Furthermore, in the validation report the repeatability is not given; only the intermediate repeatability is accessible, and I do not really know what that is. One last point: we are in phase 3 of a drug product development; in the validation report the precision from phase IIb was calculated at around 0.7%, but the intermediate precision of the current phase (phase 3) is around 0.7% too. They kept the phase IIb result for the precision. My questions are: 1. Is it correct to use equation $(E1)$ to link $\sigma_P$ to the other standard errors? If yes, why? Mathematically it would be similar to supposing that the repeatability $R$ and the intermediate precision $IP$ are normally distributed with standard deviations $\sigma_r$ and $\sigma_{ip}$ respectively, and that we have a relation of the form $P = R + IP$, where $P$ is the precision. 2. Can I take the intermediate repeatability as the repeatability of the process? 3. I find it a bit suspicious that the standard error of the precision is at the same level as the intermediate precision; shouldn't it be noticeably higher (like at least 0.1% more)?
The standard error of precision from the standard error of intermediate precision and repeatability of a analytical chemical method
CC BY-SA 4.0
null
2023-05-24T08:38:09.793
2023-05-24T12:03:27.327
2023-05-24T12:03:27.327
383929
383929
[ "standard-error", "error", "measurement-error", "error-propagation" ]
616780
1
null
null
0
35
I have data with income information provided in grouped, income ranges. However, for my research purpose, I want to estimate a continuous distribution, in other words from the shares of each income-range, I then want to derive a possible underlying continuous distribution. For this purpose, I was thinking to fit a generalized beta distribution (a Singh-Maddala one) and then using the estimated parameters to generate a random income continuous variable. However, I do not exactly know how to implement this approach in R. Suppose I have the following: ``` structure(list(PR_SUELDO = structure(c("3", "3", "3", "2", "3", "3", "1", "2", "1", "4", "2", "1", "4", "1", "3", "3", "3", "1", "2", "2", "2", "1", "1", "1", "1", "", "1", "1", "1", "1", "3", "1", "9", "3", "9", "3", "1", "1", "1", "2", "1", "3", "1", "4", "3", "1", "2", "1", "4", "1"), format.stata = "%2s")), row.names = c(NA, -50L), class = c("tbl_df", "tbl", "data.frame")) ``` where 1 to 8 are the income range categories (1 is 0-699, 2: 700-999, 3: 1000-1499, 4: 1500-1999, 5: 2000-2499, 6: 2500-2999, 7: 3000+). I know about the GB2group package and the fitgroup.sm function, but how shall I proceed to fit and generate the underlying continuous variable? Many thanks for your help
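If the GB2 route turns out to be awkward, one alternative sketch is to treat each category as an interval-censored observation and fit a parametric income distribution by maximum likelihood; below is a rough illustration with `fitdistrplus` and a log-normal as a stand-in for the Singh-Maddala (the category bounds follow the ranges listed above, but the per-category counts are made up): ``` library(fitdistrplus)  # interval bounds for categories 1-7 (NA = open-ended top category); # a lower bound of 1 instead of 0 keeps the log-normal likelihood well-behaved bounds <- data.frame(   left  = c(1, 700, 1000, 1500, 2000, 2500, 3000),   right = c(699, 999, 1499, 1999, 2499, 2999, NA) ) counts <- c(18, 9, 12, 5, 3, 2, 6)          # hypothetical frequencies per category  cens <- bounds[rep(1:7, counts), ] fit  <- fitdistcens(cens, "lnorm") summary(fit)  # draw a continuous income variable from the fitted distribution sim_income <- rlnorm(1000, fit$estimate["meanlog"], fit$estimate["sdlog"]) ```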
Transforming grouped income data into continuous distribution
CC BY-SA 4.0
null
2023-05-24T08:55:37.410
2023-05-25T09:55:14.770
2023-05-25T09:32:43.063
261061
261061
[ "distributions", "gamma-distribution", "fitting" ]
616782
2
null
385686
1
null
A loss function $\mathit{L}:X \to\mathbb{R}^+$ can be motivated as an MLE provided that $\int_X \exp (-b\mathit{L}(x) )dx $ converges for some $b > 0$. That will often be the case, but one can construct counterexamples. Proof: Consider the problem of finding some function $f$ which minimizes the loss $\sum_i \mathit{L}(y_i - f(x_i))$. This will be equivalent to maximizing the conditional log-likelihood $\sum_i \mathscr{L}(y_i - f(x_i))$ provided that $\mathscr{L}$ is an affine transformation of $\mathit{L}$ and that $\mathscr{L}$ is the log of a pdf. In other words, we require that there exists $a \in \mathbb{R},b\in \mathbb{R}^+$ such that $\exp(a - bL(x))$ is a pdf. A function is a pdf if it is non-negative and if it integrates to 1. Non-negativity is guaranteed by the $\exp$ function, so we just require that for some $a,b$: $$ \int_X \exp(a - bL(x)) dx = 1. $$ The constant $a$ can be factored out, so the requirement just becomes that $$ \int_X \exp(- b L(x) ) dx < \infty. \tag{1} $$ I believe that requirement (1) will be failed by the pathological loss function $L(x) = \log(\log(1+|x|))$ and domain $X = \mathbb{R}$: $$ \int_X \exp(- b L(x) ) dx = \int_{-\infty}^{\infty} \exp(- b \log(\log(1+|x|)) ) dx = \int_{-\infty}^{\infty} \frac{1}{(\log(1+|x|))^{b } }dx , $$ which I think diverges for any positive $b$. I'm not sufficiently familiar with different loss functions to know whether there's a 'common' loss function that fails requirement (1), but it shouldn't be too hard to check each one.
null
CC BY-SA 4.0
null
2023-05-24T09:32:04.253
2023-05-28T02:35:14.897
2023-05-28T02:35:14.897
161943
161943
null
616783
1
null
null
0
4
I have a dataset with two columns; one is the total time played by a player on a gaming console, and the other is the time played online, which will always be less than or equal to the total time played by that player. The data look like the following: [](https://i.stack.imgur.com/s2g5V.png) I was asked to compute the mean by dividing the sum of online playtime hours by the sum of total playtime hours, but to compute the standard deviation using the per-player online playtime shares. My concern is: shouldn't there be consistency between the formulas for the mean and the standard deviation, i.e. shouldn't I take the mean of the shares as well? Or do the mean and standard deviation numbers still make sense the way I am computing them?
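For reference, the two candidate summaries look like this on toy numbers (values invented purely to show the difference): ``` online <- c(10, 50, 2) total  <- c(20, 200, 2)  sum(online) / sum(total)   # pooled, hours-weighted share (what was asked for the mean) mean(online / total)       # mean of per-player shares sd(online / total)         # SD of per-player shares (what was asked for the SD) ```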
If we have % share of something, does it make sense to take mean of the absolute nos. and the std deviation of the share values instead
CC BY-SA 4.0
null
2023-05-24T09:34:52.560
2023-05-24T09:34:52.560
null
null
332360
[ "variance", "mean", "standard-deviation" ]
616784
1
null
null
0
9
I just had a quick question about the CIA (conditional independence assumption), conditional unconfoundedness, and selection on observables. Do these three terms mean exactly the same thing, or are there differences between them?
Unconfoundedness vs CIA vs selection on observables
CC BY-SA 4.0
null
2023-05-24T09:48:59.343
2023-05-24T09:48:59.343
null
null
384321
[ "conditional-independence", "selection-bias" ]
616785
1
616791
null
0
46
I am currently writing the research proposal for my thesis in Developmental Psychology. For this, I need to choose the analyses I am going to use for my research questions. In addition, I need to calculate my GPower, but I need to have chosen the analyses beforehand. I can no longer see the wood for the trees, and would like advice on 1) which analyses I need to perform, 2) how to calculate the power with the G*Power programme. My research questions are: - RQ1 Is working alliance conditionally important in explaining the variance in parental involvement among parents from MDIB children in semi-residential settings? - RQ2 Is there a threshold score for working alliance, where working alliance serves as a prerequisite for a successful intervention for improving parental involvement? - RQ3 Is the factor working alliance a moderating factor in the effect of an intervention for improving parental involvement? For all RQ's a pre and post measurement has been performed. I'm tending to ANOVA's, but I'd love some advice. Thank you so much in advance.
What statistical tests do I select for my research questions?
CC BY-SA 4.0
null
2023-05-24T09:50:09.953
2023-06-01T06:45:18.027
2023-06-01T06:45:18.027
121522
388722
[ "gpower" ]
616787
1
null
null
2
60
I'm reading: Clauset, A., Newman, M.E.J., Moore, C., 2004. Finding community structure in very large networks. Phys. Rev. E 70, 066111. [https://doi.org/10.1103/PhysRevE.70.066111](https://doi.org/10.1103/PhysRevE.70.066111) It is written there that in a (what I think is undirected, but I might be wrong) network, > The probability of an edge existing between vertices $v$ and $w$ if connections are made at random but respecting vertex degrees is $(k_v k_w)/2m$ Where $k_v$ is the degree of node $v$, $k_w$ is the degree of node $w$, and $m$ is the number of edges in the graph (the sum of all the elements of the adjacency matrix, divided by 2). I don't understand where this formula comes from. It looks to me like $k_v k_w$ is the number of possible ways in which the two nodes can be connected, but then I don't understand why I have to divide by two times the number of edges in the network. Can you enlighten me on the reasons behind this formula?
Probability that two vertices are connected
CC BY-SA 4.0
null
2023-05-24T10:02:03.507
2023-05-27T14:13:56.557
2023-05-26T11:23:37.117
204068
154990
[ "probability", "graph-theory", "networks", "social-network" ]
616788
2
null
157775
1
null
Since this question was asked, a new method of representing interactions by creating dedicated nodes has been proposed, termed the "IDAG". In my understanding, the example sentence from the question, "asbestos exposure causes a change in the direct causal effect of tobacco smoke exposure on risk of mesothelioma", would be represented as: Asbestos → Δ, where Δ denotes the (change in the) direct effect of Tobacco on Mesothelioma. The limitation of this approach is that you would need to represent all other effects on Tobacco and Mesothelioma separately. For details see: Anton Nilsson and others, A directed acyclic graph for interactions, International Journal of Epidemiology, Volume 50, Issue 2, April 2021, Pages 613–619, [https://doi.org/10.1093/ije/dyaa211](https://doi.org/10.1093/ije/dyaa211)
null
CC BY-SA 4.0
null
2023-05-24T10:17:13.333
2023-05-24T10:17:13.333
null
null
233900
null
616789
1
null
null
1
17
I have a problem statement where I have two datasets: one labeled, where each data point can belong to only one class (say class1 or class2), and another that is unlabeled. For the unlabeled dataset I want to predict the class label, where a particular data point can belong to class1, class2, both, or neither of the classes. How should I approach this problem?
Multilabel classification problem
CC BY-SA 4.0
null
2023-05-24T10:29:02.200
2023-05-24T11:01:01.347
null
null
388724
[ "machine-learning", "multi-class", "multilabel" ]
616790
1
null
null
0
7
I'm conducting regression analysis using R of the ability of several hundred proteins (and other covariates) to predict ventricle volume in a human sample. As is common in GWAS/PWAS research, I plotted the expected vs. observed p values of each protein to check their overall distribution. As the plot below shows, there is significant deflation from the expected distribution of p values. I found that calculating standard error estimates is a potential way around this, which worked for some of my DVs but not for ventricle volume. I wondered a) why calculating robust standard error estimates did not correct the deflation and b) what I can do to correct this problem? [](https://i.stack.imgur.com/nq9Kw.png)
How to correct deflated p value distribution from qqplot in PWAS/GWAS
CC BY-SA 4.0
null
2023-05-24T10:29:31.087
2023-05-24T10:29:31.087
null
null
387196
[ "p-value", "gwas" ]
616791
2
null
616785
-2
null
Without more detail about your data I cannot say specifically, however here are some general recommendations. If your dependent/outcome variable is continuous and normally distributed, then either ANOVA or linear regression is appropriate. If your independent/predictor variables are categorical then generally ANOVA is appropriate, and if they are continuous then regression is appropriate. Since your data is repeated measures then either repeated measures ANOVA or ANCOVA is appropriate, since these analyses can account for shared variance between measurements. For your third RQ, you can simply run additional analyses (ANOVA/regression) with the two variables of interest; if there is a significant interaction between these two variables then moderation is likely present. Regarding G*power, you need to select 'F tests', then choose whichever test is appropriate for you, 'a priori', then change the expected effect size based on similar research performed previously. Then set your desired power value and 'calculate'.
null
CC BY-SA 4.0
null
2023-05-24T10:39:53.783
2023-05-24T10:39:53.783
null
null
387196
null
616792
2
null
616789
0
null
A classical supervised classification problem is similar to yours. In such a setting, you use the data with the known outcomes to train a model. Once you have confidence in such a model, you use it to predict the category (or probability of category membership) in data where the outcome is not known. The standard machine learning classification models like logistic regressions and neural networks do exactly this and provide you the probability of membership in each of two mutually exclusive categories (or $3+$ mutually exclusive categories, depending on what exactly you do). However, you want to allow for membership in both categories or neither category. That makes this a so-called multi-label problem. The [underlying theory](https://stats.stackexchange.com/q/586201/247274) is not much more complicated. You just predict the probability of each category without the two probabilities having to add up to one; that is, the categories are not mutually exclusive. One of the simple models for such a problem is a bivariate probit model, which can be implemented in `R` through the `VGAM::binom2.rho` function, documented [here](https://search.r-project.org/CRAN/refmans/VGAM/html/binom2.rho.html) with references to textbooks and the primary literature for further reading to dive down the rabbit hold. It is typical to use a threshold of $0.5$ to classify the probabilistic predictions as belonging to the category (above the threshold) or not (below the threshold), though this [might not be the ideal threshold for you](https://stats.stackexchange.com/a/390189/247274), and [the raw probabilities are useful](https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email) without you having to use any threshold at all. Beyond generalized linear models like bivariate probit, other standard machine learning models can be adjusted for multi-label problems, too. For instance, the `sklearn` [documentation](https://scikit-learn.org/stable/modules/multiclass.html) discusses multi-label implementations for $k$-nearest neighbors, random forest, and neural network models. I have some skepticism about sticking your data into a multi-label model, however. - By training on data where the categories are mutually exclusive, you are telling the model not to expect membership in both categories. This makes it unlikely to predict a high probability of membership in both categories. Maybe this is correct behavior, but maybe it is not. - If the unlabeled data allow for membership in both categories (or neither category) yet the training data do not, I wonder what else has changed. The unlabeled data might not play by the same rules or even similar rules, meaning that the model you train on the labeled data might not be a good one for the unlabeled data, despite strong (validated) performance on the labeled data. Worse, you have no way to check this, since you literally lack labels on the unlabeled data set, rather than having them but holding them out from the training specifically to check performance on unseen data.
null
CC BY-SA 4.0
null
2023-05-24T10:53:17.057
2023-05-24T11:01:01.347
2023-05-24T11:01:01.347
247274
247274
null
616793
1
null
null
2
48
Firstly, my apologies for the basic nature of this question. I am numerate but stats has always befuddled me. I have had a look through earlier questions. I have installed a red mason bee box. It has 174 tubes of 3 different types - sleeved 87, empty 44 and plugged 43. Currently the occupied tubes number, sleeved 7, empty 12 and plugged 7. The nesting season continues. I also have installed a similar box where there are currently only 2 occupied tubes and instinctively know that this must be insignificant but would like to understand where the transition is so that I can improve my chances next year. I am conscious that early on the bees have a high degree of choice in selecting a specific tube but that as more tubes are taken that choice will become more restricted. I guess that this is a secondary conundrum - when is bees choice so restricted that the preference cannot be effectively determined? I have looked at ANOVA and plugged the data into a tool as 3 groups with 1,1,1,0,0,0,0 etc but frankly even the output is not clear to me. I would be very grateful for advice on testing the data to be clear on the best choice for next year’s box.
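For the counts given above, a simple starting point is a test of whether occupancy rates differ by tube type; a minimal sketch in R (this only uses the numbers quoted, and ignores that the season is still running): ``` occupied <- c(sleeved = 7, empty = 12, plugged = 7) total    <- c(sleeved = 87, empty = 44, plugged = 43)  tab <- rbind(occupied = occupied, vacant = total - occupied) chisq.test(tab)            # does occupancy depend on tube type? prop.test(occupied, total) # same comparison framed as three proportions ```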
Help with determining statistical significance of bees’ decisions
CC BY-SA 4.0
null
2023-05-24T11:04:42.697
2023-05-24T20:34:02.123
2023-05-24T20:34:02.123
11887
388725
[ "hypothesis-testing", "multiple-comparisons", "degrees-of-freedom" ]
616794
2
null
616689
0
null
> Is this an upper bound on the performance of a SVM using any features extracted from the images? I SAY NO, at least not necessarily. From the comments: > If the classifier reaches a certain accuracy by using all the available information (the raw pixel data), is it even possible to improve this by using any feature extraction methods on the data? I SAY YES I can imagine two scenarios. - You overfit to all of the pixels. Reducing the feature space dimension through some feature extraction technique leads to a simpler model with less opportunity to overfit, possibly leading to improved out-of-sample performance. - You use your domain knowledge to extract useful features that an SVM might struggle to figure out on its own, and these features provide improved ability to distinguish between the categories.
null
CC BY-SA 4.0
null
2023-05-24T11:04:56.147
2023-05-24T11:04:56.147
null
null
247274
null
616795
2
null
616762
1
null
I did not know how bayesian neural networks perform this operation prior to this answer, however I think the equation you mention is not specific to Bayesian neural networks but is indeed a generic Bayesian model averaging operation. You can refer to [Fragoso and Neto 2018](https://onlinelibrary.wiley.com/doi/10.1111/insr.12243) for an introduction and review of bayesian model averaging techniques. The equation you state corresponds to equation (6) from this review (albeit with a, more generic, continuous formulation instead of a discrete one), and computes the marginal posterior distribution of a model output across all models. In your case the chosen output is the model's final prediction $y^*$, but you could perform this operation on different outputs (e.g intermediate embedding layers of a neural network) It can indeed be termed inference (in the machine learning sense) since you are making a prediction on unseen data based on a set of previously trained models. Regarding the Bayesian neural network, after a quick glance over the Jospin et al 2022 paper you mentioned I think the two following quotes answers your question: > A BNN is defined slightly differently across the literature, but a commonly agreed definition is that a BNN is a stochastic artificial neural network trained using Bayesian inference. > Stochastic neural networks are a type of ANN built by introducing stochastic components into the network. This is performed by giving the network either a stochastic activation (Figure 3b) or stochastic weights (Figure 3c) to simulate multiple possible models θ with their associated probability distribution p(θ). Thus, BNNs can be considered a special case of ensemble learning. Thus the bayesian averaging performed for bayesian neural networks corresponds to averaging (e.g through MCMC) over the stochastic parameters of your network (the $w$ from your equation corresponds to activation functions or weights values). During training, instead of fixing precise values for your weights, you learn the parameters of the chosen distributions for the activation functions and weights (e.g mean and average), which results in an ensemble of possible neural networks. At inference time you sample this ensemble and perform bayesian averaging.
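As a toy illustration of the averaging step described above (a one-parameter "model" standing in for a network, with fabricated posterior draws): ``` set.seed(1) w_draws <- rnorm(5000, mean = 1.2, sd = 0.3)   # pretend draws from p(w | D) x_star  <- 2  # each draw of w gives one prediction; the collection approximates p(y* | x*, D) y_draws <- w_draws * x_star + rnorm(5000, sd = 0.1) c(predictive_mean = mean(y_draws), predictive_sd = sd(y_draws)) ```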
null
CC BY-SA 4.0
null
2023-05-24T11:25:36.593
2023-05-24T15:25:39.780
2023-05-24T15:25:39.780
358871
358871
null
616796
2
null
138740
1
null
Take the example of a house: we want to build a model to calculate the price of the house based on certain criteria such as location, area (size), age, etc.

Features - all "inputs" to the model

Variables - all Features + Price (the output)

In essence, Features (input variables) + output variable(s) of a model = Variables
null
CC BY-SA 4.0
null
2023-05-24T11:33:38.300
2023-05-24T11:33:38.300
null
null
388728
null
616797
1
null
null
0
32
Proper scoring rule is a concept used for evaluating density forecasts. What would be an equivalent for evaluating point forecasts? E.g. mean squared error seems like a proper metric for evaluating forecasts that target the expected value of the underlying random variable. This is because the forecast that truly minimizes the expected squared error (the population counterpart of the mean squared error) actually is the expected value. Meanwhile, mean absolute error does not seem proper for the same goal. On the other hand, it would seem proper when the target is the median of the underlying random variable. So what term is there to describe the propriety of a metric for evaluating point forecasts?
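For concreteness, here is a small simulation consistent with the claims above: the point forecast minimizing average squared error approaches the sample mean, while the one minimizing average absolute error approaches the sample median (shown only to illustrate what I mean by a metric being "proper" for a given functional):
```
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # skewed, so mean != median

candidates = np.linspace(0.1, 5.0, 491)               # candidate point forecasts
mse = [np.mean((y - c) ** 2) for c in candidates]
mae = [np.mean(np.abs(y - c)) for c in candidates]

print("argmin MSE:", candidates[np.argmin(mse)], " vs sample mean  :", y.mean())
print("argmin MAE:", candidates[np.argmin(mae)], " vs sample median:", np.median(y))
```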
Equivalent of proper scoring rule for point forecasts
CC BY-SA 4.0
null
2023-05-24T11:41:22.340
2023-05-24T11:41:22.340
null
null
53690
[ "terminology", "model-evaluation", "scoring-rules" ]
616798
1
null
null
1
12
I'm developing a GNN for missing-link prediction following this [blog post](https://github.com/pyg-team/pytorch_geometric/discussions/7264) for the PyG library. I'm using almost the same GNN with a different dataset. Although my dataset is similar to the MovieLens example (a bipartite graph with some links and some missing links, with different features), I want to develop a more precise predictor rather than a recommender (or at least a more precise recommender). The following ROC and PR curves represent my current results: [](https://i.stack.imgur.com/sWd4i.png) [](https://i.stack.imgur.com/HT4sM.png) From my newbie evaluation of binary classification tasks (since the missing-link problem is treated here as a binary classification), I need to improve my precision. Is that right? If so, where should I start: changing my GNN model, or changing the dataset to include more positive samples during training? Basically, (I think) I know what I want, which is higher precision, with more true positives and fewer false positives. However, I don't know how to start this optimization. Is there any technique or metric I should use to get a better tip?
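For reference, one simple, model-agnostic check (independent of whether the GNN or the sampling is changed) is to sweep the decision threshold over the predicted link scores and read off the precision/recall trade-off; the arrays below are placeholders for the real labels and scores:
```
import numpy as np
from sklearn.metrics import precision_recall_curve

# Placeholders: ground-truth labels (1 = link exists) and model scores per candidate edge
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.4, 0.7, 0.6, 0.55, 0.2, 0.8, 0.3])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold {t:.2f}: precision {p:.2f}, recall {r:.2f}")
```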
How to start GNN optimization to get higher precision?
CC BY-SA 4.0
null
2023-05-24T11:48:42.280
2023-05-24T11:48:42.280
null
null
103715
[ "binary-data", "roc", "precision-recall", "graph-theory", "curves" ]
616799
1
null
null
-1
28
When I run a logistic regression using sm.Logit (from the statsmodels library), part of the result looks like this: [](https://i.stack.imgur.com/tOkpA.png) The variable y is academic success (average greater than 10). The problem is that I am lost in front of this table (only the beginning is shown) and do not know how to interpret the results. In the coef column, can I interpret only the sign, or the value too? For "absences", what can I say with -0.0459? Also, what does the column "z" mean and what can I say about it? I understood the confidence interval, but the beginning is quite blurry for me. Can you help me please?
How to interpret the results of a logistic regression in python
CC BY-SA 4.0
null
2023-05-24T12:02:59.393
2023-05-24T12:13:34.097
null
null
388731
[ "logistic", "python", "interpretation", "regression-coefficients", "statsmodels" ]
616800
2
null
195246
0
null
There have been a lot of compromises that handwave the relationship. The default polynomial contrasts are far from ideal, although I'm sure some would vehemently argue this. I have found that stairstep contrasts manage to capture a monotonic, ordered variable's effects quite well.
```
ordered_factor <- function(fact_var) {
  # Treat the variable as an ordered factor
  ord_fact <- factor(fact_var, ordered = TRUE)
  categories <- levels(fact_var)
  n_cat <- length(categories)
  # "Stairstep" (cumulative) coding: column j is 1 for all levels above level j,
  # so each coefficient estimates the difference between adjacent levels
  cont <- matrix(0, n_cat, n_cat - 1)
  cont[col(cont) < row(cont)] <- 1
  rownames(cont) <- categories
  colnames(cont) <- paste(categories[2:n_cat], categories[1:(n_cat - 1)], sep = " vs. ")
  contrasts(ord_fact) <- cont
  return(ord_fact)
}
```
This is from [https://aarongullickson.netlify.app/post/better-contrasts-for-ordinal-variables-in-r/](https://aarongullickson.netlify.app/post/better-contrasts-for-ordinal-variables-in-r/)
null
CC BY-SA 4.0
null
2023-05-24T12:09:58.937
2023-05-24T12:09:58.937
null
null
28141
null
616801
2
null
616799
2
null
The logistic regression model is $$ E[y|X_1,X_2,\dots,X_k] = \sigma(\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k) $$ The `coef` column displays the $\beta_0,\beta_1,\dots$ parameters, where $\beta_0$ is described as `const`. On the interpretation of the parameters you can read in the following threads: - interpretation of model coefficient in logistic regression - Regression coefficient interpretation in binary logistic regression - Binary logistic regression: interpretation of regression coefficients The `z` column shows the $Z$ values discussed in: - Logistic Regression Z-value - Z test vs Wald Test in logistic regression - Why use a z test rather than a t test with proportional data? Those are just examples of multiple threads we have, so check other questions tagged as [logistic-regression](/questions/tagged/logistic-regression) for more details.
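As a small self-contained illustration (the data here are synthetic stand-ins, and `result` plays the role of the fitted `sm.Logit` results object from the question), the coefficients and their confidence interval can be exponentiated to obtain odds ratios:
```
import numpy as np
import statsmodels.api as sm

# Tiny synthetic example standing in for the data in the question
rng = np.random.default_rng(0)
absences = rng.poisson(5, size=500)
success = rng.binomial(1, 1 / (1 + np.exp(-(1.0 - 0.1 * absences))))

X = sm.add_constant(absences.astype(float))
result = sm.Logit(success, X).fit(disp=0)

print(result.params)              # log-odds scale, as shown in the summary table
print(np.exp(result.params))      # odds ratios: multiplicative change in odds per unit increase
print(np.exp(result.conf_int()))  # confidence interval on the odds-ratio scale
```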
null
CC BY-SA 4.0
null
2023-05-24T12:13:34.097
2023-05-24T12:13:34.097
null
null
35989
null
616802
1
616803
null
2
62
In an experiment I repeat some task $n$ times and count how often the task completes successfully. This gives the number $m$. Therefore the success ratio is $$ r = \frac{m}{n} $$ To calculate the error I would just use $$ \Delta r = \frac{\Delta m}{n} = \frac{\sqrt{m}}{n} $$ but then when I have, for example, $n=10$ trials that are all successful ($m=10$), I would get a success rate of 100% with an error of 31.6%, meaning the real success rate is larger than 68.3%. But on the other side the failure rate would be 0% with error 0%, implying there could never be an error, which is incorrect. It is also asymmetric. So how do I properly calculate the errors on the success/failure rate so that I can make statements like ``` The success rate is larger than X % The failure rate is smaller than Y % ```
How to calculate the error on the ratio of a counting experiment?
CC BY-SA 4.0
null
2023-05-24T12:33:18.337
2023-05-24T12:50:57.287
null
null
388733
[ "error", "count-data", "ratio" ]
616803
2
null
616802
5
null
One possible approach is via Laplace's Rule of Succession. For $m$ successes and $n-m$ failures, the probability for success for the next trial is $$p=\frac{m+1}{n+2}.$$ This means for $m=10$ and $n=10$ a success rate of $p=11/12$ or $p\simeq0.92$. Hence the probability for a failure is still 0.08. This approach is part of Bayesian statistics. Anyone interested should have a look at Information Processing, Volume 1, by David J. Blower.
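A small numerical sketch (Python); the one-sided bound uses the Beta posterior that underlies the rule of succession (a uniform prior updated with $m$ successes and $n-m$ failures), which is an extra assumption beyond the bare formula above:
```
from scipy.stats import beta

n, m = 10, 10                 # trials, successes
p_next = (m + 1) / (n + 2)    # Laplace's rule of succession
print(f"P(success on next trial) = {p_next:.3f}")

# Under a uniform Beta(1, 1) prior the posterior for the success rate is Beta(m+1, n-m+1),
# which gives one-sided statements of the kind asked for in the question:
lower = beta.ppf(0.05, m + 1, n - m + 1)
print(f"With 95% posterior probability the success rate exceeds {lower:.3f}")
```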
null
CC BY-SA 4.0
null
2023-05-24T12:50:57.287
2023-05-24T12:50:57.287
null
null
382413
null
616804
1
null
null
0
20
I have two types of 3D scan datasets over 200 forest plots. One type of data has higher resolution (more detailed information on the landscape) than the other. I derived 10 forest metrics from these datasets, such as height, volume, crown cover, etc. I also have field data on animal abundance and species richness in those plots. I would like to examine the relationship between animal abundance and species richness and the 3D metrics of the forest plots using mixed effects models (glmer). Models from the single datasets found that some metrics are significant for animal abundance. I also fused the data from the two types of sensors, hoping it could improve the models. But what I found is pretty surprising: some variables that are important in the single-dataset models are not important in the fused models. I would appreciate any thoughts and ideas on why this is happening.
Fused data improve the model performance compared to models from single dataset?
CC BY-SA 4.0
null
2023-05-24T12:53:24.477
2023-05-25T04:35:30.507
2023-05-25T04:35:30.507
266980
266980
[ "lme4-nlme", "model" ]
616805
1
null
null
0
17
Under the generate_weights() command in the {tidysynth} package for executing the synthetic control method in R, users are given a number of optimization options. However, I cannot find documentation on what these optimization options do, as {tidysynth} is no longer supported on CRAN (if anyone knows why, I would like to know that as well). You can still access reference material for {tidysynth} on GitHub, and it has the following code as an example of the optimization options: ``` generate_weights(optimization_window = 1935:1949, margin_ipop = .02, sigf_ipop = 7, bound_ipop = 6) ``` Barring the `optimization_window` option, I am unsure of what the other options are doing, and I cannot find documentation discussing them. What I can find with the [available resources](https://www.rdocumentation.org/packages/tidysynth/versions/0.1.0/topics/generate_weights) refers to the documentation of `ipop`, but such documentation is not provided.
Understanding Optimizer Options Using tidysynth
CC BY-SA 4.0
null
2023-05-24T13:24:25.773
2023-05-24T13:24:25.773
null
null
360805
[ "r", "causality", "synthetic-controls" ]
616806
1
null
null
0
18
I am currently reading [Mixed Effects Models for Complex Data](https://www.routledge.com/Mixed-Effects-Models-for-Complex-Data/Wu/p/book/9780367384913) by Lang Wu. On page 136, the author mentions the Monte Carlo EM algorithm as a way of treating missing covariate in the mixed effect model. Say we have ($\mathbf{y}_i, \mathbf{z}_i, \mathbf{b}_i$) as complete data for $i = 1,..., n$ where $\mathbf{b}_i$ are random effects and $\mathbf{z}_i = (\mathbf{z}_{mis,i}, \mathbf{z}_{obs,i}) $ are the vector of covariates corresponding to unobserved and observed elements. The E-step is an expectation of the log likelihood, ie. \begin{equation} \begin{split} & Q_i(\boldsymbol{\theta}|\boldsymbol{\theta}^{(t)}) = \iint \{\log f(\mathbf{y}_i|\mathbf{z}_i, \mathbf{b}_i, \boldsymbol{\beta}, \sigma^2) + \log f(\mathbf{z}|\boldsymbol{\alpha}) + \log f(\mathbf{b}_i|D)\} \\ & f(\mathbf{z}_{mis,i}, \mathbf{b}_i|\mathbf{z}_{obs,i}, \mathbf{y}_i, \boldsymbol{\theta}^{(t)})d\mathbf{z}_{mis,i}d\mathbf{b}_i \end{split} \end{equation} where $\boldsymbol{\theta} = (\boldsymbol{\alpha}, \boldsymbol{\beta}, \sigma^2, D )$ are parameters associated with the density of Z, the mixed effect model and the random effect. I understand the need to generate samples from $f(\mathbf{z}_{mis,i}, \mathbf{b}_i|\mathbf{z}_{obs,i}, \mathbf{y}_i, \boldsymbol{\theta}^{(t)})$. Book mention that this could be done by rejection sampling. The book mentions on 137 that since \begin{equation} f(\mathbf{z}_{mis,i}|\mathbf{z}_{obs,i}, \mathbf{b}_i, \mathbf{y}_i, \boldsymbol{\theta}^{(t)}) \propto f(\mathbf{z}_i|\boldsymbol{\theta}^{(t)}) f(\mathbf{y}_i|\mathbf{z}_i, \mathbf{b}_i, \boldsymbol{\theta}^{(t)}) \end{equation} and \begin{equation} f( \mathbf{b}_i| \mathbf{z}_{mis,i},\mathbf{z}_{obs,i}, \mathbf{y}_i, \boldsymbol{\theta}^{(t)}) \propto f(\mathbf{b}_i|\boldsymbol{\theta}^{(t)}) f(\mathbf{y}_i|\mathbf{z}_i, \mathbf{b}_i, \boldsymbol{\theta}^{(t)}) \end{equation} then "we only need to generate samples from the right-hand sides which can be accomplished using rejection sampling methods since the density functions on the right-hand sides are all known". My question is regarding the implementation details. My understanding of it is that, we are not actually generating samples from the right-hand side, but we are actually using some proposal distribution which shares the same domain of $\mathbf{z}_{mis,i}, \mathbf{b}_i$ and then using uniform distribution U(0,1) to accept or reject? - Say only one covariate is missing and that $z_{mis,i}$ is binary. Do I sample from a Bernoulli distribution, say $h(z)$? I understand that the target density function has to be bounded by h(z). I am not seeing how $f(\mathbf{z}_i|\boldsymbol{\theta}^{(t)}) f(\mathbf{y}_i|\mathbf{z}_i, \mathbf{b}_i, \boldsymbol{\theta}^{(t)})$ would be bounded by sampling from this proposed Bernoulli distribution - We only accept both $z_{mis,i}, \mathbf{b}_i$ when they are BOTH accepted? - I am trying to figure out the implementation details and would appreciate any advice. Thank you in advance!
Understanding Monte Carlo EM in a mixed effects model context
CC BY-SA 4.0
null
2023-05-24T13:32:00.120
2023-05-24T13:32:00.120
null
null
58910
[ "mixed-model", "monte-carlo", "expectation-maximization" ]
616807
1
null
null
0
40
Let's say I have a univariate (no intercept, for simplicity) regression with normally distributed errors, $$y_i = \beta x_i + e_i, \text{ where }e_i \sim N(0,\sigma^2),$$ And I would like to find MLE estimators of $\beta$ and $\sigma$. Writing down the likelihood, we get $$L(y|x, \beta, \sigma) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi}\sigma} \exp(-\frac{(y_i-\beta x_i)^2}{2\sigma^2})$$ $$\log L(y|x, \beta, \sigma) = - \frac{N}{2}\log 2\pi - N\log \sigma -\sum_{i=1}^N\frac{(y_i-\beta x_i)^2}{2\sigma^2}$$ Taking derivatives w.r.t. $\beta$ and $\sigma$, we can quickly find $$\hat\beta = \frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x_i^2}$$ $$\hat\sigma^2 = \frac{1}{N}\sum_{i=1}^N(y_i-\hat\beta x_i)^2$$ Now, let's say we scale down the regression by $\sqrt\sigma$, and get $$y'_i = \beta x'_i + e'_i, \text{ where }$$ $$e'_i \sim N(0,\sigma),$$ $$ y'_i=y_i/\sqrt\sigma$$ $$ x'_i=x_i/\sqrt\sigma$$ If we rewrite the log-likelihood, we get $$\log L(y'|x', \beta, \sigma) = - \frac{N}{2}\log 2\pi - \frac{N}{2}\log \sigma -\sum_{i=1}^N\frac{(y'_i-\beta x'_i)^2}{2\sigma}$$ Substituting $y'_i$ and $x'_i$ with original $y_i$ and $x_i$, we get $$\log L(y'|x', \beta, \sigma) = - \frac{N}{2}\log 2\pi - \frac{N}{2}\log \sigma -\sum_{i=1}^N\frac{(y_i-\beta x_i)^2}{2\sigma^2}$$ Maximizing this w.r.t. $\beta$ and $\sigma$, we get $$\hat\beta = \frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x_i^2}$$ $$\hat\sigma^2 = \frac{2}{N}\sum_{i=1}^N(y_i-\hat\beta x_i)^2$$ There must be something I am missing, but why do we get twice as large $\hat\sigma^2$ in this case?
MLE estimation of scaled regression
CC BY-SA 4.0
null
2023-05-24T13:45:51.963
2023-05-24T15:44:30.843
2023-05-24T14:48:41.233
388736
388736
[ "regression", "maximum-likelihood" ]
616808
1
null
null
0
22
Long story short, I'm seeing in the literature that linear instrumental variables models are identifiable, even in the presence of unobserved confounders. The unobserved confounding aspect befuddles me, since it is not clear where this insight came from. Briefly, given a linear instrumental variables setup where $X = Z\beta + U\theta + \epsilon, \; \epsilon \sim N(0, \sigma_{\epsilon})$ $Y = X\alpha + U\phi + \delta, \; \delta \sim N(0, \sigma_{\delta})$ where $Z\in \mathbb{R}^z$ are instruments $X \in \mathbb{R}^x$ are the exogenous variables, $Y \in \mathbb{R}^y$ are the endogenous variables and $U \in \mathbb{R}^u$ are the unobserved confounders. In the instrumental variables setup, the average causal effect $\mathbb{E}[Y|do(x)]$ is of interest. If $U$ is observed, I can see how this works out. But it isn't clear to me how an unobserved $U$ gets dropped / marginalized out in the linear setting, and I have not been successful finding the original proof, even though this was hinted in Bowden + Turkington 1984, Pearl 2008 and Rubin + Imbens 2015. Any references or pointers would be appreciated. P.S. This question is similar to what was asked [here](https://stats.stackexchange.com/questions/550939/identifiability-of-multivariate-instrumental-variable-model), but the collider insight is only part of the story, since we know that ACE is not identifiable in a non-parametric setting. P.S.S. This has been originally posted on [mathoverflow](https://mathoverflow.net/questions/447446/instrumental-variable-identifiability-in-the-presence-of-unobserved-confounders), before I realized that this forum was better suited for causal inference questions
Instrumental variable identifiability in a linear setting in the presence of unobserved confounders
CC BY-SA 4.0
null
2023-05-24T13:58:18.117
2023-06-01T09:22:28.343
2023-05-24T14:04:58.563
79569
79569
[ "causality", "instrumental-variables", "identifiability", "2sls", "collider" ]
616809
1
null
null
0
12
Let us consider a regression model $ y = X_1 \beta_1 + X_2 \beta_2 + u $ where $y$ is $n \times 1$, $X_1$ is $n \times p_1$, $X_2$ is $n \times p_2$. Assume that $rank(X_1)=p_1$ and $rank(X_2)=r_2 \le p_2$. By multiplying $M_1 = I_n - X_1(X_1'X_1)^{-1} X_1'$ to the above model, we have $ M_1y = M_1 X_2 \beta_2 + M_1 u $ I don't know how to obtain $rank(M_1 X_2)$. If $M_1$ were positive definite, we have $rank(M_1 X_2)=rank(X_2)$. However, unfortunately, $M_1$ is positive semidefinite. I'm wondering under what condition $rank(M_1 X_2)=rank(X_2)$ holds? I have checked many textbooks, but this is not discussed at all even for the full rank case with $rank(X_2)= p_2$. Any comments would be appreciated!
rank in a partitioned regression
CC BY-SA 4.0
null
2023-05-24T14:00:36.320
2023-05-24T14:00:36.320
null
null
111064
[ "regression", "multiple-regression" ]
616810
1
null
null
1
14
Suppose we have a single random variable $X$ with mean $\mu_X$ and variance $\sigma^2_X$. Then we can approximate the expected value and variance of $f(X)$ using a second-order Taylor expansion ([here](https://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables)), where $f$ is some function. Is a similar result known for the multivariate case? From my research, I'm only able to find the Taylor expansion itself for the multivariate case ([here](https://math.libretexts.org/Bookshelves/Calculus/Supplemental_Modules_(Calculus)/Multivariable_Calculus/3%3A_Topics_in_Partial_Derivatives/Taylor__Polynomials_of_Functions_of_Two_Variables), eqn 6), not the corresponding moment approximations.
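For reference, the univariate approximations being referred to (second order for the mean, first order for the variance, assuming $f$ is sufficiently smooth at $\mu_X$) are $$\mathbb{E}[f(X)] \approx f(\mu_X) + \frac{f''(\mu_X)}{2}\,\sigma_X^2, \qquad \operatorname{Var}[f(X)] \approx \left(f'(\mu_X)\right)^2 \sigma_X^2 .$$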
Expectation and variance of second-order Taylor approximation for the multivariate case
CC BY-SA 4.0
null
2023-05-24T14:20:48.920
2023-05-24T14:20:48.920
null
null
290294
[ "multivariate-analysis", "random-variable", "expected-value", "approximation", "taylor-series" ]
616811
1
617070
null
0
35
I am going through the backdoor criterion and how we get from an expression involving do() to one which doesn't, as below. [](https://i.stack.imgur.com/OKtDr.png) What I don't quite get is how to rewrite this estimand in terms of propensity scores. For example, imagine we denote the propensity score by e(v2). I get that the propensity score acts as a mediator between our v2 variable and drug, but what I don't get is how the statistical estimand on the right-hand side can be rewritten in terms of propensity scores. All I can think of is that P(drug | v2) = e(v2). Is there a way to rewrite this estimand in terms of propensity scores? I am trying to understand the formula in terms of propensity scores.
How can you rewrite the estimand in terms of propensity scores? Dowhy question
CC BY-SA 4.0
null
2023-05-24T14:27:44.243
2023-05-27T15:32:40.100
null
null
250242
[ "causality", "propensity-scores", "causal-diagram" ]
616812
2
null
616787
1
null
Here is my thinking on this problem: If you have $m$ randomly chosen edges in a graph, then you have $2m$ entries in the adjacency matrix. If we focus on the row of this matrix for vertex $w$, there is a $\frac{k_w}{2m}$ probability the non-diagonal entries will be 1. If there are $k_v$ entries in the column for vertex $v$, each of these has the same probability of being in the row for vertex $w$: $$P(v\&w) = k_v · \frac{k_w}{2m}$$ If I've overlooked a dependency or my counting protocol is under or over counting, I hope someone will let me know so this post can be adjusted accordingly.
null
CC BY-SA 4.0
null
2023-05-24T14:32:18.113
2023-05-24T14:32:18.113
null
null
199063
null
616813
1
null
null
0
9
I just did a MANOVA to investigate the effect of my IV on my three DVs. However, now I want to see if my moderator variable has anything to do with it. Can I just add my MV as a covariate? It then becomes a MANCOVA, right? My apologies if this question has been asked; I looked for it but did not find anything that answered my question. I feel like it's so simple, but as I am overthinking it, it confuses me more and more. Please help and save my thesis.
MANOVA for a Moderator variable?
CC BY-SA 4.0
null
2023-05-24T14:40:11.447
2023-05-24T15:02:32.957
2023-05-24T15:02:32.957
388741
388741
[ "hypothesis-testing", "interaction", "spss", "manova", "mancova" ]
616814
1
null
null
1
91
I'm interested in the following inference / filtering problem in a hidden Markov model setting. Suppose we have a simple random walk $x_t\in\mathbb{Z}$ and observations are "images" consisting of iid pixels $y_{n,t}$ of the form $$ y_{n,t} \sim \mathcal{N}(\epsilon\delta_{n,x_t},1) $$ where $\delta_{j,k}$ is a Kronecker delta and $\epsilon$ represents the signal to noise that gives the "visibility" of the trajectory in the image i.e. the size of the spike at $x_n$ relative to the background noise. As usual, filtering corresponds to finding the posterior distribution $p(x_T|y_{1:N,1:T})$ conditional on the "images" or "movie" $y_{1:N,1:T}$. It is straightforward to set up inference by the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm), but I'd like to know if there are any results about this setting, which seems quite natural. Unlike a Kalman filter, the observations are nonlinear. Just for illustration, here's a simple numerical experiment show a random walk trajectory $x_t$, the measurements $y_{n,t}$, and the posterior $p(x_T|y_{1:N,1:T})$. Although the path is barely visible in the image $y_{n,t}$ it is easily inferred [](https://i.stack.imgur.com/Wt5vF.png) My question is: has the inference problem been studied before, and if so, what is known about it?
Inferring a random walk from noisy "images"
CC BY-SA 4.0
null
2023-05-24T15:24:50.010
2023-05-24T18:13:08.430
2023-05-24T18:13:08.430
388738
388738
[ "bayesian", "hidden-markov-model", "kalman-filter", "particle-filter", "filter" ]
616816
1
null
null
0
24
I am facing a problem with the Hausdorff distance and IoU: they stop changing once they reach a specific value, while the loss and Dice metric keep changing. surface_distance also has a problem, since it is always equal to "inf". I need an explanation of why this problem happens, so I can understand the solutions. Note: I am using the MONAI framework with built-in functions to calculate those metrics. I am doing 3D image segmentation using U-Net, and the Hausdorff metric is the function HausdorffDistanceMetric from the MONAI framework, which is explained here: [https://github.com/Project-MONAI/MONAI/blob/dev/monai/metrics/hausdorff_distance.py](https://github.com/Project-MONAI/MONAI/blob/dev/monai/metrics/hausdorff_distance.py) These are three measurements (the values are the same for the following 30 epochs):
```
Testing Test_loss: 0.1801 Test_Metric: 0.6199
hausdorff: 24.8  surface_distance: inf  IoU: 0.4916495382785797
current epoch: 1 current mean dice: 0.6199 best mean dice: 0.6199 at epoch: 1

Testing Test_loss: 0.1544 Test_Metric: 0.6456
hausdorff: 24.8  surface_distance: inf  IoU: 0.4916495382785797
current epoch: 2 current mean dice: 0.6456 best mean dice: 0.6456 at epoch: 2

Testing Test_loss: 0.1111 Test_Metric: 0.6889
hausdorff: 24.8  surface_distance: inf  IoU: 0.4916495382785797
current epoch: 3 current mean dice: 0.6889 best mean dice: 0.6889 at epoch: 3
```
Hausdorff and IoU are stopped changing while dice metric is decreasing
CC BY-SA 4.0
null
2023-05-24T15:47:24.140
2023-05-24T18:16:07.083
2023-05-24T18:16:07.083
380942
380942
[ "machine-learning", "model-evaluation", "distance", "metric", "dice" ]
616819
1
null
null
1
22
Given the following GJR-GARCH(1,1) model $y_t = \sqrt{h_t} \epsilon_t$, where $h_t= \alpha_0+(\alpha_1+\bar{\alpha_1}\mathbb{I}(y_{t-1}<0))y_{t-1}^2+\beta_1h_{t-1}$ with $\alpha_0>0$, $\alpha_1,\beta_1>0$, $\alpha_1+\bar{\alpha_1}>0$, and where $\mathbb{I}(y_{t-1}<0)$ takes value 1 when returns are negative: - under which condition is $y_t$ stationary? - write the log-likelihood function conditioning on $y_1$ and setting $h_1$ equal to the unconditional variance, and compute the gradient at the point $(\alpha_0, \alpha_1 ,\bar{\alpha_1},\beta_1)' = (0.2,0.1,0,0.8)'$ For the first point I said that $y_t$ is stationary when $\phi<1$, where $\phi= (\alpha_1+0.5\bar{\alpha_1}+\beta_1)$. For the second point I computed the log-likelihood function as $\ln L(y_2,\ldots,y_n\mid\alpha_0,\alpha_1,\bar{\alpha_1},\beta_1,y_1)= -\frac{n-1}{2}\log{2\pi}-\frac{1}{2}\sum_{t=2}^{n}\log{h_t}-\frac{1}{2}\sum_{t=2}^{n}\frac{y_t^2}{h_t}$ The gradient should be $ z_t\begin{pmatrix} 1 \\ y_{t-1}^2\\ 0.5y_{t-1}^2\\ h_{t-1} \end{pmatrix}$ $\frac{{\partial \ln L}}{{\partial z_t}}(y_2, \ldots, y_n \mid z_t,y_1)= \frac{1}{2}\sum_{t=2}^n \left(\frac{1}{h_t }z_t\right)\left(\frac{y_t^2}{h_t}-1\right)$ Should I just substitute the given values into the expression for $h_t$ to be done? If someone can tell me whether I got the conditional log-likelihood right and how to compute the gradient at the given point, I would really appreciate it.
gradient of the conditioned log-likelihood from GJR-GARCH model
CC BY-SA 4.0
null
2023-05-24T16:14:07.240
2023-05-25T06:51:59.443
2023-05-25T06:51:59.443
362147
362147
[ "likelihood", "garch", "gradient" ]
616821
2
null
616714
1
null
> (Intercept)=-1.298719 means the average size fish has a exp(-1.298719)= 0.272 odds of an empty stomach Only when `fZone=Rankin`, the reference level. Interpretation holds because you have presented "standard length" to the model as centered to 0 at the average size fish. > fZoneWest=0.122594 the odds of empty stomach in West compared to my ref. level (Rankin) increase by exp(0.122594)=1.130425 Only for the average fish size, at `center_s1=0`. I find it confusing, particularly in more complicated models, to work in the odds scale. I prefer to work in the coefficient scale, where coefficients add, and only exponentiate to odds (or convert to probability) at the end of the calculations. > center_sl:fZoneWest=-0.002926 means for every 1 unit above average in size, the odds of an empty stomach decrease by exp(-0.002926)=0.9970783, compared to my ref. level That's incorrect. The interaction is the extra change beyond what you would predict based on the reference-level coefficients. You have, at the fZone reference level `Rankin`, a `center_sl` coefficient of `-0.038851` for the log-odds difference per unit of `center_sl`. The `center_sl:fZoneWest=-0.002926` is what you add to that coefficient when `fZone=West`. exp(-0.038851-0.002926)=0.959, for the change in odds per unit change in `center_sl` when`fZone=West`. In this type of model, centering a continuous predictor doesn't affect its own coefficients, only the [coefficients of predictors with which it interacts](https://stats.stackexchange.com/q/417029/28500). In your model if you didn't center `s1`, the `sl` coefficient would still be`-0.038851` for the slope at the fZone reference level `Rankin`, but the Intercept and the coefficients for other levels would be those corresponding to an (extrapolated) `s1` value of 0. Shawn Hemelstrand has a [helpful recent answer](https://stats.stackexchange.com/a/616763/28500) that might help you think this through. In that answer, think of $X$ as your original continuous `s1` and the $Z$ and $W$ covariates as the two dummy variables that represent your `fZone`. Then work through what happens to the interpretation of regression coefficients when you replace $X$ with $X-\bar X$ in those equations (equivalent to your converting `s1` to `center_s1`).
null
CC BY-SA 4.0
null
2023-05-24T16:27:11.597
2023-05-24T16:27:11.597
null
null
28500
null
616824
2
null
281609
3
null
I think it makes sense to look at the concept of [Extreme Classification](https://www.microsoft.com/en-us/research/project/extreme-classification/) (XC), where we deal with thousands of labels; ["The Extreme Classification Repository"](http://manikvarma.org/downloads/XC/XMLRepository.html) has a lot of good examples. A key difference from "standard classification" is that when working in these extreme (and often multi-label) classification use cases, we move towards a "metric at top $k$" approach; we compute our metric of interest using the top $k$ labels predicted (usually ranked by predicted scores). Metrics like precision@k and nDCG@k are the obvious choices, and that is because we need a metric that is more "lenient" while still relevant - a bit like going from an FWER to an FDR view in hypothesis testing. Aside from these two, there are also some more specialised metrics like propensity-scored precision at $k$, which are also relevant and potentially helpful as they can be more easily associated with misclassification costs. Another point to mention is what are called [classifier chains](https://en.wikipedia.org/wiki/Classifier_chains). In this methodology we usually build a series of classifiers to predict the presence or absence of a particular class in addition to the classes predicted by the previous classifiers in the chain. This is usually employed in multi-label classification, but it generalises reasonably to XC too, as we partition our output space such that we have a more compact output space to deal with in each "link" of the chain. This plays into the general idea of reducing the output space, e.g. by standard dimensionality reduction techniques (random projections or word embeddings). Read et al. (2021) [Classifier Chains: A Review and Perspectives](https://arxiv.org/abs/2305.16179) is a nice read on this. A final point is that sometimes we need specialised database infrastructure to perform XC, as scalability in real-time settings might be an issue (but this is primarily a data engineering point). Most applications are unsurprisingly associated with online advertising and/or product recommendations; XC problems often overlap with multi-label learning, so their literature also overlaps. A reasonably well-cited source in this niche area of ML is [Parabel: Partitioned Label Trees for Extreme Classification with Application to Dynamic Search Advertising](https://dl.acm.org/doi/abs/10.1145/3178876.3185998) (2018) by Prabhu et al. if you want to read more on this. Liu et al. (2021) [The Emerging Trends of Multi-Label Learning](https://arxiv.org/abs/2011.11197) also seems good and gives a very nice hierarchy too.
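A minimal sketch of the "metric at top $k$" idea (plain NumPy, with toy arrays standing in for one instance's true labels and predicted scores):
```
import numpy as np

def precision_at_k(y_true, scores, k):
    # Fraction of the k highest-scoring labels that are actually relevant
    top_k = np.argsort(scores)[::-1][:k]
    return y_true[top_k].sum() / k

# Toy example: 10 labels, 3 of which are relevant for this instance
y_true = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
scores = np.array([0.1, 0.9, 0.2, 0.05, 0.4, 0.3, 0.15, 0.02, 0.8, 0.25])

for k in (1, 3, 5):
    print(f"P@{k} = {precision_at_k(y_true, scores, k):.2f}")
```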
null
CC BY-SA 4.0
null
2023-05-24T17:34:19.773
2023-05-27T01:20:44.407
2023-05-27T01:20:44.407
11852
11852
null
616825
2
null
174527
0
null
While classical statistics gives you an equation for leverage in a linear model (e.g., the equation given by Agresti (2015), page 59), such an analogue need not exist for the complicated models you are using. The spirit of the leverage given by the reference is that any one observation might have a huge influence on the model fit, meaning that dropping that observation would result in dramatically different predictions. A classical linear regression allows you to calculate the leverage using a closed-form solution. I do not expect this to exist for the complicated models you use, especially if you get into tuning hyperparameters using cross-validation like I suspect you do (or at least as would be common in much machine learning work). Thus, instead of having an equation to calculate the leverage of each observation, you drop the observation, fit the model on all of the other observations, and check by how much the predictions change. I could see a strategy going something like this. - Fit the model to all data and record the predicted values. - Drop an observation, and fit the model on the remaining data. - Compare the predictions of the smaller model to the predictions of the full model. Perhaps calculate the variance, mean absolute deviation, or interquartile range of the changes in predictions, where smaller changes indicate less leverage. - Drop another observation and repeat, then another observation, and so on. - If you store the measures of leverage, you can plot a distribution or look at summary statistics to see how influential the particular observations are. If there is a tight cluster, then no individual observation is particularly influential. If some observations result in large changes, you can drill down further and determine which observation is responsible. (I think this is what you meant by "leave-one-out" and agree with that stance.) The upside to such an approach is that it is a fairly straightforward coding exercise, rather than a gigantic mess of a mathematical derivation. The downside is that you have to fit a model over and over (oh, the joys of computational statistics). REFERENCE Agresti, Alan. Foundations of Linear and Generalized Linear Models. John Wiley & Sons, 2015.
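A rough sketch of the strategy described above (Python/scikit-learn; the estimator and data are placeholders, and for an expensive model you might only drop a subsample of observations rather than loop over all of them):
```
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

base = GradientBoostingRegressor(random_state=0).fit(X, y)
full_preds = base.predict(X)

influence = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i                      # drop observation i
    model_i = GradientBoostingRegressor(random_state=0).fit(X[mask], y[mask])
    preds_i = model_i.predict(X)
    influence.append(np.mean(np.abs(preds_i - full_preds)))  # mean absolute change in predictions

influence = np.array(influence)
print("most influential observations:", np.argsort(influence)[::-1][:5])
```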
null
CC BY-SA 4.0
null
2023-05-24T17:44:41.760
2023-05-24T17:44:41.760
null
null
247274
null
616826
2
null
313471
1
null
As an aside, Mann-Whitney U is NOT a test of central tendency (aka the median). It is a test of stochastic dominance (a random value from one sample is, on average, more likely to be higher than a random value from the other sample). Only in some very specific (and hard to verify) cases does it become a test of the medians: either when the 2 samples are symmetric (and then median = mean and it also becomes a test of means), or when the 2 samples are NOT symmetric but have exactly the same shape (how to prove this? An exercise "left to the student"?).
null
CC BY-SA 4.0
null
2023-05-24T17:50:05.270
2023-05-24T17:50:05.270
null
null
380096
null
616827
1
null
null
1
29
Suppose that $f(y|\boldsymbol{\theta})$ is a distribution belonging to the exponential family (e.g., a normal distribution). I'm wondering if there is a name for the following distribution, which is the normalized, powered version of $f(y|\boldsymbol{\theta})$: $$f^*(y|\boldsymbol{\theta})=\frac{f(y|\boldsymbol{\theta})^\gamma}{\int_{\mathcal{Y}}f(y|\boldsymbol{\theta})^\gamma \, dy}$$ I'm primarily interested in knowing the name of such a distribution because it will help me look further into it. However, if you don't know of its name (or whether a name exists), I appreciate any help that can point me in the right direction. I do know that the numerator can generally be called a power likelihood, but I'm more interested in the normalized version.
Is there a name for a normalized, powered distribution from the exponential family?
CC BY-SA 4.0
null
2023-05-24T18:45:01.110
2023-05-24T18:45:01.110
null
null
257939
[ "distributions", "normalization", "distribution-identification" ]
616828
1
null
null
0
14
I am estimating the difference between quantities of owned homes in particular census tracts between ACS 2021 5 yr estimates and ACS 2016 5 yr estimates. I'd like to know if the quantities of owned homes differ significantly between these two time periods. I am following ACS documentation in the 2018 ACS general handbook, primarily [chapter 7](https://www.census.gov/content/dam/Census/library/publications/2020/acs/acs_general_handbook_2020.pdf) and [chapter 8](https://www.census.gov/content/dam/Census/library/publications/2018/acs/acs_general_handbook_2018_ch08.pdf). Please note, while these link to the 2018 handbook, the formulas I reference match the analogous formulas in the [2020 handbook](https://www.census.gov/content/dam/Census/library/publications/2020/acs/acs_general_handbook_2020.pdf), but are conveniently available chapter by chapter. When I produce an estimate of significance following formula (3) from chapter 7 to determine if the difference of the raw counts from the two time periods is significantly different from 0 at the 90% level, I find that the difference is not significantly different from 0. When I calculate the percent change and associated margin of error (from formulas (7) and (8) in chapter 8), I observe that the CI does not contain 0, so I believe the percent change to be significantly different from 0. My primary question is this: How can the difference in counts be not significant, while the percent change is significant? Is this a case of trying to compare apples and oranges (and therefore significance may disagree) or am I incorrect in my process, causing erroneous disagreement? Math below for reference:
```
Count2016 = 365, MOE2016 = 170, SE2016 = 103.48...
Count2021 = 193, MOE2021 = 104, SE2021 = 63.31...

Percent change CI calculation:
pct_change_est = ((193 - 365) / 365) * 100 = -47.12
pct_change_moe = (1/365) * sqrt(104^2 + (193/365)^2 * 170^2) * 100 = 37.66
```
This implies a 90% CI is made from -47.12 +/- 37.66, which will not cross 0 and is therefore significant.
```
Z-score calculation of raw difference:
abs((193 - 365) / sqrt(63.31^2 + 103.48^2)) = 1.42...
```
As 1.42 < 1.645, the critical value for the 90% level of the normal distribution, I conclude that the raw difference is not significantly different from 0. Thank you for any insight and help you can provide!
Significance disagrees between raw count z-score and percent change CI
CC BY-SA 4.0
null
2023-05-24T19:11:06.520
2023-05-25T12:04:39.223
null
null
388763
[ "statistical-significance", "confidence-interval", "percentage", "census" ]
616829
2
null
616117
0
null
When you draw $m$ samples of $n$ i.i.d. random variables each, you have generated $m\cdot n$ i.i.d. random variables in all. All these $m\cdot n$ variables are still independent of each other; the computer takes care of that. We could just number them from $X_1$ to $X_{mn}$. Now you have stored them row by row in a matrix, so row 1 consists of $X_1,\dots, X_n$, row 2 of $X_{n+1},\dots,X_{2n}$, row 3 of $X_{2n+1},\dots,X_{3n}$, and so on, with row $m$ consisting of $X_{(m-1)n+1},\dots,X_{mn}$. If you look at any column, for example the first column $(X_1, X_{n+1},\dots, X_{(m-1)n +1})$, or the $j$-th, $(X_j, X_{n+j},\dots, X_{(m-1)n +j})$, the variables are still independent of each other. This also holds when the simulated $X_1, \dots, X_{mn}$ have a distribution that is not normal. You only need that they are independent :-) In many texts about linear regression, one writes $$ \mathbf{Y} = \beta_0 + \beta_1 \mathbf{X} + \epsilon $$ and in reality means $$ \begin{aligned} Y_1 &= \beta_0 +\beta_1 X_1 +\epsilon_1\\ &\ \ \vdots\\ Y_n &= \beta_0 +\beta_1 X_n +\epsilon_n. \end{aligned} $$
null
CC BY-SA 4.0
null
2023-05-24T19:19:53.220
2023-05-24T19:31:50.537
2023-05-24T19:31:50.537
237561
237561
null
616830
1
null
null
0
21
I need your help to define the optimal budget allocation using a marketing mix model. I have built a multiplicative model to account for diminishing returns and interactions between my IVs, and I am asked to decide how to allocate a budget that is 30% larger than in the previous quarter (Q2). How could I do this? I would like to find the optimal amount of budget allocated to each of my IVs. I built two different multiplicative models: one for a subset containing data about Q2, and one for a dataset containing data about Q3, in order to derive the different impact of my IVs in the different quarters. Subsequently, I increased the values of the IVs which have a greater impact in Q3 compared to Q2. I use this updated data to predict my new output in order to find the way to maximize my sales. Basically, I am playing around allocating different amounts of budget across 3 IVs (33% 33% 33%; 50% 25% 25%, etc.). Would you have a better solution?
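To make the "playing around" more systematic, one possible sketch (Python; the multiplicative response form and the elasticities below are placeholders, not the fitted model from the question) is to maximize the model's predicted sales over the share vector, subject to the shares summing to one:
```
import numpy as np
from scipy.optimize import minimize

total_budget = 1.30  # 30% more than last quarter's (normalized) budget

# Placeholder elasticities from a multiplicative (log-log) model: sales ~ prod(spend_i ** beta_i)
betas = np.array([0.30, 0.20, 0.10])

def neg_predicted_sales(shares):
    spend = shares * total_budget
    return -np.prod(spend ** betas)   # minimize the negative => maximize predicted sales

x0 = np.repeat(1 / 3, 3)              # start from an equal split
constraints = [{"type": "eq", "fun": lambda s: s.sum() - 1}]
bounds = [(0.01, 1.0)] * 3            # keep every channel strictly positive

res = minimize(neg_predicted_sales, x0, bounds=bounds, constraints=constraints)
print("optimal shares:", np.round(res.x, 3))
```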
Budget allocation using multiplicative regression model
CC BY-SA 4.0
null
2023-05-24T19:27:00.210
2023-05-24T19:27:00.210
null
null
357414
[ "regression", "python", "predictive-models", "marketing" ]
616831
1
null
null
0
10
Assume we have two sets of data $X_1$ and $X_2$ drawn from two different distributions. Is the loss of the empirical risk minimizer trained on $X_1$ and evaluated on $X_2$, $l_{X_2}(f_{X_1})$, the same as the loss of the empirical risk minimizer trained on $X_2$ and evaluated on $X_1$, $l_{X_1}(f_{X_2})$? If not, is there any existing result characterizing this out-of-distribution generalization bound?
Generalization bound on out of distribution data
CC BY-SA 4.0
null
2023-05-24T19:29:28.407
2023-05-24T19:29:28.407
null
null
272776
[ "bounds", "out-of-sample", "pac-learning" ]
616832
1
null
null
0
16
I am currently looking into developing a scale to measure a latent construct. I believe the latent construct is a property of conversation sequences. The scale I am developing has 4 a priori defined factors with about 6 items each. We are looking to recruit ca. 250 participants who will use the scale approximately 5 times for 5 rated objects (conversation sequences) in a single session. The sample of "rateable" objects will be larger than 5 and divided into 5 different categories defined a priori. The objects will be drawn at random such that from each category, 1 object will be drawn (no replacing). While the division into categories ensures some level of comparability between the objects each participant will rate, I have no way of estimating the variance between "rateable objects" occurring within these categories. I assume that the validity of the scale will increase with a larger sample size of "rateable" objects. I also assume that the reliability of the scale will decrease with a larger sample size of "rateable objects" as overlap of rated objects across participants will decrease. My questions are the following: What rationale can I follow to simultaneously maximise validity (by increasing the sample size of rateable objects as much as possible) and reliability (by decreasing the sample size of rateable objects as much as possible)? Is there any literature I can look into? Many thanks in advance!
How to determine sample size of rated objects for scale development
CC BY-SA 4.0
null
2023-05-24T19:35:42.693
2023-05-24T19:56:10.223
2023-05-24T19:56:10.223
388654
388654
[ "probability", "survey", "sample", "scale-parameter" ]
616835
2
null
616740
0
null
Instead of dichotomizing the continuous predictor(s), which is generally a [bad idea](https://stats.stackexchange.com/q/68834/28500), take advantage of the probability structure provided by the binary predictor. Then evaluate your continuous predictor(s) in a continuous probability model; don't jump to reduce that continuous probability to a yes/no decision. Any comparison between predictors should be done on the same cases. You can calculate a [Brier Score](https://stats.stackexchange.com/q/68834/28500), which for a binary outcome can be taken as the mean-square difference between predicted outcome probabilities and observed outcomes. Unless the expensive "traditional" binary predictor is used as the definition of the disease, then presence/absence of the biomarker only provides a probability of presence/absence of the disease. That lets you calculate the Brier score for the binary-predictor model. The continuous probability model will provide continuous probability estimates case by case for a Brier score. You can use the [Akaike Information Criterion](https://en.wikipedia.org/wiki/Akaike_information_criterion#Definition) (AIC) to compare models built on the same data set but with the different biomarker choices. That's typically available in the summary of a logistic regression model. Some question whether it's appropriate for comparisons of non-nested models, but the consensus seems to be that it's OK. See [this page](https://stats.stackexchange.com/q/116935/28500) and its many links. You say that the biomarkers are "used in the diagnosis of a disease," which implies that there are other clinical variables used in diagnosis. A better approach might be to build logistic regression models that include both the biomarker(s) and the clinical variables, and see how much information the biomarker(s) add. [Frank Harrell's post on added value](https://www.fharrell.com/post/addvalue/) explains how to do that, with an extensively worked-through example of a model for significant coronary artery disease. AIC comparisons could be done here, too. Note that there are many pitfalls in developing biomarkers, described in Harrell's post on "[How to Do Bad Biomarker Research](https://www.fharrell.com/post/badb/)". His [Regression Modeling Strategies](https://hbiostat.org/rmsc/) contains much useful advice on regression modeling. In particular for your interests, see the parts of Chapter 2 on allowing for non-linearity of outcome associations with continuous predictors, and Chapter 4 on how to build models without overfitting.
null
CC BY-SA 4.0
null
2023-05-24T20:48:06.973
2023-05-24T20:48:06.973
null
null
28500
null
616836
1
null
null
0
5
The PAC bound I learned considers the following decomposition: $R(f) = R_n(f) + \left(R(f) - R_n(f)\right)$ where $R(f)$ is the test error, $R_n(f)$ is the training error, and $\left(R(f) - R_n(f)\right)$ is defined as the generalization error. However, in practice we often talk about test/validation error with a slightly different definition, where the error is computed with respect to a validation/test set that does not include the training data. I wonder how to derive the corresponding generalization error in this case.
extending PAC bound result to validation set generalization error
CC BY-SA 4.0
null
2023-05-24T21:13:52.517
2023-05-24T21:13:52.517
null
null
272776
[ "pac-learning" ]
616837
1
null
null
1
13
I have generated a time series data set of measurements that are a bit noisy and I want to apply kernel smoothing to the data. My time series data is not regular however, meaning that the time difference between consecutive observations can be variable so I want to account for that within the weighting of the kernel smoothing. I give an example dataset below and apply kernel smoothing with a couple different bandwidth sizes as well as a rolling mean with different window sizes for comparison. My impression was that kernel smoothing worked by taking a weighted average of the adjacent points weighted by how close they are (here in time) to the "focal observation". The bandwidth controls the relative contribution of adjacent points (essentially the weights given to adjacent points given how close they are) where larger bandwidths results in more smoothing (i.e. larger weights to closer observations). In contrast, a rolling mean takes a regular uniform average of observations within a window around the focal observation and so that all observations within the window contribute equally to the mean regardless of how far away they are from the focal observation. However, when I applied this in practice, I don't see this in my results. Is my understanding wrong? Is my code wrong? What I see is that when I apply kernel smoothing with a bandwidth of 1/2 or 1, it doesn't change the raw data, when I apply kernel smoothing with a bandwidth of 2 or 3, the results are equivalent, and when I apply a rolling mean of window size +/- 1 or +/- 2 these are equivalent to a kernel smooth with bandwidths of 2 or 4 respectively. Can anyone explain to me what's going on? Thanks! ``` ################# # Simulate data # ################# # Simulate a time series data set with different underlying processes that # last a random length with variable sampling frequencies library(dplyr) library(ggplot2) library(tidyverse) # Set random seed set.seed(1) # Total # of process bouts n_mode = 5 # Average sampling rate sample_rate = 10 #hz # Durations can be irregularly long durs = rgamma(n_mode, shape = 2.5, rate = .5) # Total length of time collected times = cumsum(durs) # Ending times # Random order of modes order = c(3,1,3,2,1) # Stick in a data frame df = data.frame(modes = order, duration = durs, end_time = times) # Determine starting timestamp of bout df$start_time = c(0, times[1:(length(times)-1)]) # Reorder columns df = df %>% select(modes, duration, start_time, end_time) # For each mode for(i in 1:nrow(df)){ # By each row df_sample = df[i,] # Generate an idealized sequence of data with perfect really fast sampling ts_ideal = seq(df$start_time[i], df$end_time[i], by = 1/10000) # Randomly sample observations so that the total sampling rate ends up # at an average of observations a second time_ss = sample(ts_ideal, size = sample_rate * df_sample$duration, replace = FALSE) # Put in order time_ss = time_ss[order(time_ss)] # Get "time stamp" second time_s = floor(time_ss) # Extract sub-second info ss = time_ss - time_s # Put into time df times_df = data.frame(index = i, mode = df_sample$mode, time_s, ss, time_ss) # If first iteration, generate combined df # Otherwise add to combined df if(i == 1){comb = times_df}else(comb = rbind(comb, times_df)) } # Generate variable by process type (mode) for(i in 1:length(unique(comb$index))){ # subset by index df_index = comb[comb$index == i, ] # given the mode if(unique(df_index$mode) == 1){ # signals df_index$x_raw = rep(0, length(df_index$time_ss))} if(unique(df_index$mode) == 2){ # 
signals df_index$x_raw = sin(2 * pi * df_index$time_ss)} if(unique(df_index$mode) == 3){ # signals df_index$x_raw = rep(1, length(df_index$time_ss))} # Combine if(i == 1){df_comb = df_index}else(df_comb = rbind(df_comb, df_index)) } # Add some error (noise) df_comb$x_raw = df_comb$x_raw + rnorm(n = nrow(df_comb), mean = 0, sd = .05) # Define as factor df_comb$mode = as.factor(df_comb$mode) # Pull out transition times of x for visualization trans_times = df_comb$time_ss[df_comb$index!=lag(df_comb$index)] # Remove NA trans_times = trans_times[!is.na(trans_times)] ############# # Visualize # ############# df_comb %>% ggplot(aes(x = time_ss, y = x_raw, color = mode, group = 1))+ geom_point(aes(color = mode, group = 1))+ geom_line(aes(color = mode, group = 1))+ scale_color_manual(values = c("red", "blue","green"))+ #Color by mode geom_vline(xintercept = trans_times, linetype = "dashed") # Delineate transitions #################### # Kernel smoothing # #################### df_comb$x_ks1_2 = ksmooth(time(df_comb$time_ss), df_comb$x_raw, bandwidth = 0.5)$y # Bandwidth is 1 (does not appear to work)- is same as raw df_comb$x_ks1 = ksmooth(time(df_comb$time_ss), df_comb$x_raw, bandwidth = 1)$y # Bandwidth is 2 (first that appears to work) df_comb$x_ks2 = ksmooth(time(df_comb$time_ss), df_comb$x_raw, bandwidth = 2)$y # Bandwidth is 3 (is same as 2) df_comb$x_ks3 = ksmooth(time(df_comb$time_ss), df_comb$x_raw, bandwidth = 3)$y # Bandwidth is 4 (higher smoothing) df_comb$x_ks4 = ksmooth(time(df_comb$time_ss), df_comb$x_raw, bandwidth = 4)$y ######## # Plot # ######## df_comb %>% pivot_longer(cols = x_raw:x_ks4, names_to = "type", values_to = "x") %>% ggplot(aes(x = time_ss, y = x, group = type, color = type))+ geom_point()+ geom_line()+ scale_color_manual(values = c("x_raw" = "black", "x_ks1_2" = "red", "x_ks1" = "orange", "x_ks2" = "green", "x_ks3" = "blue", "x_ks4" = "purple"))+ #color by bandwidth geom_vline(xintercept = trans_times, linetype = "dashed") # Delineate transitions ############################################################## # Why do different bandwidths produce the same numbers ??!! # ############################################################# # Kernel smoothing bandwidth of 3 same as kernel smoothing bandwidth of 2? sum(df_comb$x_ks2 == df_comb$x_ks3)/nrow(df_comb) # 100% same # Kernel smoothing bandwidth of 1 and kernel smoothing bandwidth of 1/2 same as raw? 
sum(df_comb$x_ks1 == df_comb$x_raw)/nrow(df_comb) # 100% same sum(df_comb$x_ks1_2 == df_comb$x_raw)/nrow(df_comb) # 100% same ################## # Rolling Window # ################## # Quarter second window (+/- 1 observation) setDT(df_comb)[, "x_rm1_4" := frollapply(x_raw, n = (floor(sample_rate/4) + 1), FUN = mean, align = "center", adaptive = F, na.rm = T)] # Half second window (+/- 2 observations) df_comb[, "x_rm1_2" := frollapply(x_raw, n = sample_rate/2, FUN = mean, align = "center", na.rm = T)] ######## # Plot # ######## df_comb %>% pivot_longer(cols = c(x_raw, x_rm1_4, x_rm1_2), names_to = "type", values_to = "x") %>% ggplot(aes(x = time_ss, y = x, group = type, color = type))+ geom_point()+ geom_line()+ scale_color_manual(values = c("x_raw" = "black", "x_rm1_4" = "green", "x_rm1_2" = "purple"))+ #color by smoothing window width geom_vline(xintercept = trans_times, linetype = "dashed") # Delineate transitions ################################################### # Plot both Kernel and Rolling Mean on same Graph # ################################################### df_comb %>% pivot_longer(cols = c(x_raw, x_rm1_4, x_ks2, x_rm1_2, x_ks4 ), names_to = "type", values_to = "x") %>% ggplot(aes(x = time_ss, y = x, group = type, color = type))+ geom_point()+ geom_line()+ scale_color_manual(values = c("x_raw" = "black", "x_rm1_4" = "red", "x_ks2" = "orange", "x_rm1_2" = "green", "x_ks4" = "blue" ))+ #color by smoothing window width geom_vline(xintercept = trans_times, linetype = "dashed") # Delineate transitions # Plot by each mode in case differences are very tiny df_comb %>% filter(index == 1) %>% pivot_longer(cols = c(x_raw, x_rm1_4, x_ks2, x_rm1_2, x_ks4 ), names_to = "type", values_to = "x") %>% ggplot(aes(x = time_ss, y = x, group = type, color = type))+ geom_point()+ geom_line()+ scale_color_manual(values = c("x_raw" = "black", "x_rm1_4" = "red", "x_ks2" = "orange", "x_rm1_2" = "green", "x_ks4" = "blue" )) df_comb %>% filter(index == 2) %>% pivot_longer(cols = c(x_raw, x_rm1_4, x_ks2, x_rm1_2, x_ks4 ), names_to = "type", values_to = "x") %>% ggplot(aes(x = time_ss, y = x, group = type, color = type))+ geom_point()+ geom_line()+ scale_color_manual(values = c("x_raw" = "black", "x_rm1_4" = "red", "x_ks2" = "orange", "x_rm1_2" = "green", "x_ks4" = "blue" )) ########################################################## # Why is kernel smoothing the same as rolling means ??!! # ########################################################## # Round to same number of decimal places so precision isn't the underlying difference df_comb$x_rm1_4 = round(df_comb$x_rm1_4, 12) df_comb$x_rm1_2 = round(df_comb$x_rm1_2, 12) df_comb$x_ks2 = round(df_comb$x_ks2, 12) df_comb$x_ks4 = round(df_comb$x_ks4, 12) # Pull out observations where the rolling mean and kernel smooth is not the same df_comb[df_comb$x_rm1_4!=df_comb$x_ks2,] df_comb[df_comb$x_rm1_2!=df_comb$x_ks4,] # Eyeball the data cbind(df_comb[1:6, "x_rm1_4"], df_comb[1:6, "x_ks2"], df_comb[1:6, "x_rm1_4"] == df_comb[1:6, "x_ks2"]) ``` ```
Kernel Smoothing for Time Series data
CC BY-SA 4.0
null
2023-05-24T22:00:57.397
2023-05-24T22:00:57.397
null
null
354343
[ "r", "time-series", "kernel-smoothing", "smoothing", "moving-window" ]
616838
2
null
616837
0
null
So, it turns out that all I needed to do was to remove the time() function within the kernel smoothing call and use my variable by itself (it's already a numeric representation of time, not a POSIXct-type object), and then it performs as expected. So my code should have been:
```
# THIS WORKS
df_comb$x_ks1_2 = ksmooth(df_comb$time_ss, df_comb$x_raw, bandwidth = 0.5)$y

# THIS DOES NOT
df_comb$x_ks4 = ksmooth(time(df_comb$time_ss), df_comb$x_raw, bandwidth = 4)$y
```
This is because the time() function converted my numeric time into a sequential time series, treating each individual observation as an integer. I think this is why it wouldn't allow me to use an uneven number for my bandwidth or go below 1 for the bandwidth value. Well, if anyone needs a practice data set (I created it to represent accelerometer data with different behavior types, "modes") in which to test out methods and better understand how they work...here you go!
null
CC BY-SA 4.0
null
2023-05-24T22:00:57.397
2023-05-24T22:00:57.397
null
null
354343
null