Dataset columns: Id (string, 1–6 chars); PostTypeId (string, 7 classes); AcceptedAnswerId (string, 1–6 chars); ParentId (string, 1–6 chars); Score (string, 1–4 chars); ViewCount (string, 1–7 chars); Body (string, 0–38.7k chars); Title (string, 15–150 chars); ContentLicense (string, 3 classes); FavoriteCount (string, 3 classes); CreationDate (string, 23 chars); LastActivityDate (string, 23 chars); LastEditDate (string, 23 chars); LastEditorUserId (string, 1–6 chars); OwnerUserId (string, 1–6 chars); Tags (list)
614050
1
614051
null
2
25
The definition of the range has been given as the difference between the largest and the smallest value of the distribution. This, in my perception, tells us the discrepancy between the smallest and largest value and does not measure the dispersion from the central value. So, my question is: how does that help us in finding the dispersion of the observations from the central value? In other words, why do we consider the range a measure of dispersion?
What do we mean by range in descriptive statistics?
CC BY-SA 4.0
null
2023-04-25T05:26:33.540
2023-04-25T07:00:31.870
2023-04-25T07:00:31.870
56940
386483
[ "descriptive-statistics", "dispersion", "range" ]
614051
2
null
614050
2
null
> In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. ([Wikipedia](https://en.wikipedia.org/wiki/Statistical_dispersion))

Note that it does not say "from the central value". Dispersion is about how the data are spread out; obviously, if a distribution has a large range, it is very spread out. Some dispersion statistics, like the variance or the [MAD](https://en.wikipedia.org/wiki/Median_absolute_deviation), measure the spread relative to a central value, but others, like the range or the [IQR](https://en.wikipedia.org/wiki/Interquartile_range), do not.
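As a small numerical illustration of that distinction (a minimal sketch; the sample below is made up):

```
# Sketch: dispersion measures on a made-up sample.
# range() and IQR() never reference a central value; var() and mad() do.
set.seed(1)
x <- rnorm(100, mean = 10, sd = 2)

diff(range(x))  # range: max minus min
IQR(x)          # interquartile range: 75th minus 25th percentile
var(x)          # variance: average squared deviation from the mean
mad(x)          # (scaled) median absolute deviation from the median
```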
null
CC BY-SA 4.0
null
2023-04-25T05:40:43.850
2023-04-25T06:52:21.717
2023-04-25T06:52:21.717
35989
35989
null
614052
2
null
307099
0
null
I just wrote a module for this in Python. Hope it helps. [https://github.com/3zhang/Python-Lasso-ElasticNet-Ridge-Regression-with-Customized-Penalties](https://github.com/3zhang/Python-Lasso-ElasticNet-Ridge-Regression-with-Customized-Penalties)
null
CC BY-SA 4.0
null
2023-04-25T06:22:49.900
2023-04-25T06:22:49.900
null
null
68424
null
614053
2
null
92960
2
null
I wrote a module for this purpose. You can take a look: [https://github.com/3zhang/Python-Lasso-ElasticNet-Ridge-Regression-with-Customized-Penalties](https://github.com/3zhang/Python-Lasso-ElasticNet-Ridge-Regression-with-Customized-Penalties)
null
CC BY-SA 4.0
null
2023-04-25T06:23:55.143
2023-04-25T06:23:55.143
null
null
68424
null
614061
1
null
null
0
26
This is a fixed-effects panel model in Stata. Income group has 4 categories: Low income, Lower middle income, Upper middle income and High income. Below I give the interaction coefficient values from the model. I would like to know the effect of financial services (findex) on carbon emissions. The other control variables are: per capita GDP, consumption expenditure, renewable energy and trade amount. How should I interpret the coefficient value of the interaction?

```
               Co2pc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
          Low income |  -.0165589   .0194985   -0.85   0.396    -.0548202    .0217024
 Lower middle income |    .010723   .0103074    1.04   0.298    -.0095029    .0309488
 Upper middle income |    .042023   .0081153    5.18   0.000     .0260985    .0579475
```

I tried this code:

```
xtreg Co2pc findex GDPpc Cons_exp Energy Trade i.incomegroup##c.findex, fe
```
How to interpret the interaction coefficient
CC BY-SA 4.0
null
2023-04-25T06:51:52.713
2023-05-19T22:39:14.127
2023-05-19T22:39:14.127
11887
387764
[ "interaction", "interpretation", "stata" ]
614062
2
null
130237
1
null
See also this paper on the relation between OR, RR, and HR: Stare, J. & Maucort-Boulch, D. Odds ratio, hazard ratio and relative risk. Advances in Methodology and Statistics 13, 59–67 (2016). Available at [https://mz.mf.uni-lj.si/article/view/159/262](https://mz.mf.uni-lj.si/article/view/159/262)
null
CC BY-SA 4.0
null
2023-04-25T07:26:27.207
2023-04-26T08:09:55.633
2023-04-26T08:09:55.633
386494
386494
null
614063
1
null
null
1
27
I have a question about the Spearman rank correlation coefficient. In this reference ([https://arxiv.org/pdf/2211.16224.pdf](https://arxiv.org/pdf/2211.16224.pdf), Fig. 4) they use the Spearman correlation coefficient instead of the standard correlation coefficient to perform PCA. This is appealing to me because it removes the need for normalization when the variables have different units. But the theory I know about PCA diagonalizes the Pearson correlation matrix. So my question is: Is it common to perform PCA on top of the Spearman rank correlation coefficient? Is there a theory behind this?
Spearman coefficient and PCA
CC BY-SA 4.0
null
2023-04-25T07:40:15.750
2023-04-25T07:40:15.750
null
null
70458
[ "self-study", "spearman-rho" ]
614067
1
null
null
1
22
I am doing latent class analysis but the degrees of freedom are negative. Because of that I should do parameter restrictions. How do I choose the way I restrict my parameters?
Negative degrees of freedom in latent class analysis
CC BY-SA 4.0
null
2023-04-25T08:09:57.487
2023-04-25T08:36:53.787
null
null
386502
[ "restrictions" ]
614069
2
null
614067
0
null
You can reduce the number of latent classes or latent statuses, or add parameter restrictions to reduce the number of parameters being estimated. Since you did not provide any details about the problem you are facing, I am afraid the advice can only be general as well.

With regard to the first option: instead of fitting a model with many latent classes or freely estimated parameters, you should start with a smaller number of classes and gradually increase it. This will help you avoid overfitting and understand how each additional class contributes to the model's performance. To select the optimal number of classes, you can compare model fit statistics (e.g., the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), or the adjusted BIC).

You can identify suitable parameter restrictions in many ways:

- Theoretical considerations: Use your knowledge of the research domain to inform your decisions about which parameters should be restricted. Identify relationships among the variables that are supported by prior research or theory, and use this information to impose constraints on the model.
- Equality constraints: Consider imposing equality constraints on some of the parameters across latent classes. For instance, you can assume that certain item-response probabilities or transition probabilities are equal across different classes. This reduces the number of freely estimated parameters and increases the degrees of freedom.
- Redundant parameters: Inspect the parameter estimates and correlations to identify redundant parameters or parameters that do not provide meaningful information about the latent classes. These parameters can be fixed to a constant value or constrained to be equal across classes.
- Cross-validation: Use cross-validation to test the stability of the parameter estimates and evaluate the model's performance on different subsets of the data. This can help you identify overfitting and choose appropriate parameter restrictions.
- Sensitivity analysis: Perform a sensitivity analysis to assess the impact of different parameter restrictions on model fit and classification accuracy. This can help you identify the most appropriate restrictions and understand the trade-offs between model complexity and fit.
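If you work in R, class enumeration by AIC/BIC can look roughly like the sketch below (assuming the poLCA package and hypothetical categorical manifest variables `item1`–`item4` in a data frame `dat`; adapt to your own software and data):

```
# Sketch: compare latent class models with 1-4 classes by AIC/BIC.
library(poLCA)

f <- cbind(item1, item2, item3, item4) ~ 1   # hypothetical manifest variables
fits <- lapply(1:4, function(k)
  poLCA(f, data = dat, nclass = k, nrep = 10, verbose = FALSE))

# Smaller AIC/BIC is better; also check that each model is identified.
data.frame(classes = 1:4,
           AIC = sapply(fits, `[[`, "aic"),
           BIC = sapply(fits, `[[`, "bic"))
```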
null
CC BY-SA 4.0
null
2023-04-25T08:36:53.787
2023-04-25T08:36:53.787
null
null
53580
null
614073
1
null
null
2
39
I am currently looking at a dataset of Fair Market Rents which are determined at different percentiles over the years - for example, nationally in 1983 they were all set at the 40th percentile, and in 2005 you had some areas set at 40 and some at 50. I would ideally like to try to standardize the data for one particular state in some way to allow for easier comparison (e.g. transform all the 40th and 45th percentiles to the 50th percentile) over time. As reasonably pointed out in [this thread](https://stats.stackexchange.com/questions/535410/mean-and-sd-for-a-normal-distribution-given-value-of-a-percentile), fundamentally having only a single percentile point does not allow estimation of two other distinct values. However, does this extend to if one has a range of percentile values? This doesn't fit with a sampling distribution approach, since all the data is again taken at a single percentile already, though it's a bit difficult to tell from the documentation how often this adjustment is done from raw data (e.g. phone surveys) vs. predictive methods and models (e.g. local CPI adjustment) - though this is likely a moot point given the raw data is not released to my knowledge. I do have access to a wide range of rent values often calculated at that same singular percentile which has its mean and variance in various areas of the country, with the above caveats, which is not a single point. Plus, a 40th percentile value is definitionally part of a normal distribution of rents. Is this ever sufficient information to make a reasonable guess about the overall mean or variance, thus allowing at least a somewhat acceptable standardization to the mean using z-tables? Or would I still just be guessing with only a "single" data point?
Can I reasonably estimate the population mean and standard deviation from a large sample all taken at a single percentile?
CC BY-SA 4.0
null
2023-04-25T09:00:13.523
2023-05-04T02:10:54.127
2023-04-25T09:23:59.603
56940
386510
[ "normal-distribution", "quantiles", "standardization", "point-estimation" ]
614074
5
null
null
0
null
Let $f$ be a probability density function, and let $\mu$ and $\sigma>0$ be two real-valued parameters. The family of pdfs $f(x-\mu)$ indexed by $\mu$ is called the location family with standard pdf $f$, and $\mu$ is called the location parameter of the family. The family of pdfs $(1/\sigma)f(x/\sigma)$ indexed by $\sigma$ is called the scale family with standard pdf $f$, and $\sigma$ is called the scale parameter of the family. The effect of introducing the location parameter $\mu$ is to shift the density $f$ so that its shape is unchanged. On the other hand, the effect of introducing the scale parameter $\sigma$ is to stretch ($\sigma>1$) or to contract ($\sigma<1$) the graph of $f$. The joint introduction of $\mu$ and $\sigma$ leads to the location-scale family: the family of pdfs $(1/\sigma)f((x-\mu)/\sigma)$ indexed by the parameter $(\mu, \sigma)$ is called the location-scale family with standard pdf $f$; $\mu$ is called the location parameter and $\sigma$ is called the scale parameter.

For any random variable $X$ whose pdf belongs to such a family, the distribution of $Y = a + b X$ (with $b > 0$) also belongs to the family. Some examples of distributions belonging to the location-scale family are the:

- normal distribution
- Student's $t$ distribution
- logistic distribution
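As a quick numerical sanity check of the construction (a minimal sketch using the normal family as the standard pdf $f$, with arbitrary $\mu$ and $\sigma$):

```
# Check that (1/sigma) * f((x - mu)/sigma) reproduces the N(mu, sigma^2) density,
# where f is the standard normal pdf.
mu <- 2; sigma <- 3
x <- seq(-10, 14, length.out = 5)

lhs <- dnorm((x - mu) / sigma) / sigma  # (1/sigma) f((x - mu)/sigma)
rhs <- dnorm(x, mean = mu, sd = sigma)  # density of the location-scale member
all.equal(lhs, rhs)                     # TRUE
```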
null
CC BY-SA 4.0
null
2023-04-25T09:03:45.410
2023-04-25T12:36:33.193
2023-04-25T12:36:33.193
56940
56940
null
614075
4
null
null
0
null
A set of probability distributions (e.g., normal, Student's $t$, logistic, etc.) that share a specific functional form. Many of the distributions in the location-scale family are workhorse distributions in statistics, with convenient statistical properties.
null
CC BY-SA 4.0
null
2023-04-25T09:03:45.410
2023-04-25T12:36:05.840
2023-04-25T12:36:05.840
56940
56940
null
614079
1
614082
null
1
56
I'm planning to run a large number of regressions to test for associations between medical conditions. As my dependent variable is always age-dependent, I include age as a covariate. For example, I wish to check whether drinking regularly affects the risk of a heart attack, e.g.: `heart_attack ~ age + drinker_status`

Now there is a possibility that `drinker_status` and `age` have an interaction. Ideally I would inspect and plot the data, but I'm wondering if there is a systematic approach to deciding whether to include this interaction. Based on discussions on CrossValidated and elsewhere, I've come across the following options:

- Run each model with and without the interaction and choose based on AIC.
- Run each model with and without the interaction and choose based on R².
- Run each model with and without the interaction and choose based on the p-value and/or coefficient of the interaction term (this one seems very arbitrary and did not find much support).
- Always use the model with the interaction term so as to include the maximum of available information (this makes for more difficult interpretation of the association, though, as some p-values become non-significant even for very biologically plausible variables).

My aim is not to model the heart-attack risk itself, but simply to test for an association between heart attack and drinker status. In my field it is customary to go almost exclusively by p-value cutoff. What would be a good way to go about this association testing in this context?
Inclusion of interaction term and interpretation
CC BY-SA 4.0
null
2023-04-25T09:46:18.440
2023-04-25T15:03:47.123
null
null
386463
[ "regression", "multiple-regression", "interaction", "predictor", "association-measure" ]
614082
2
null
614079
1
null
What are you interested in? You say that you want to test an association between heart attack and drinker status, but if you are only interested in an association, why adjust for age at all? If you are interested in a causal association, you will need more information: what are all the relevant variables that may lead to confounding, are the causal inference assumptions met, are you fitting the right model, etc.

Regardless, I would raise a 5th option: choose whether or not to add the interaction to the model based on your (and, if applicable/available, supervisors'/experts') knowledge. If you believe there is likely an interaction between drinker_status and age, then you should add it to your model.

The reason for this 5th option is that your choice to include a variable/interaction should not be data driven: the data you collected are a random subset of the total (theoretical supra)population and may therefore contain spurious associations or miss true associations, meaning the model you fit based on those data is not an exact representation of reality. For instance, you might adjust for an association in your data because it was 'statistically significant' in the model, but in truth be adjusting for a variable that happens after the exposure and before the outcome, meaning you are adjusting part of the effect away.
null
CC BY-SA 4.0
null
2023-04-25T10:26:19.227
2023-04-25T10:26:19.227
null
null
385890
null
614083
1
null
null
1
15
In my study, participants were presented with descriptions of several acts of interpersonal betrayal and asked whether they would want to find out about the betrayal if they had been the victim of it, or whether they would prefer to remain deliberately ignorant (variable name: DI). Additionally, I varied the relationship with the perpetrator between experimental conditions (Condition: friend vs. stranger) and, among other things, measured the emotional costs associated with finding out about the betrayal (Costs). I then ran a mixed generalized linear model with random effects for participants and scenarios, DI as the criterion, and Costs and Condition as predictors.

```
mreg = glmer(DI ~ (1|id) + (1|Scenario) + Costs*Condition, family = binomial('logit'), data = dfgDICg)
```

I find a significant interaction between Condition (stranger) and Costs and would like to explore this interaction further. Judging from the plot, it seems as if the relationship between Costs and DI is less pronounced in the stranger condition. Which post-hoc test would be most appropriate in this case? Do you have suggestions on how to implement it in R? It would be great to get some feedback on this! Thank you very much in advance.
Explore interaction glmer
CC BY-SA 4.0
null
2023-04-25T10:28:05.557
2023-04-25T14:53:01.753
null
null
386520
[ "generalized-linear-model", "interaction" ]
614087
2
null
614037
3
null
The documentation (check `?factanal`) says:

> The uniquenesses are technically constrained to lie in [0, 1], but near-zero values are problematical, and the optimization is done with a lower bound of control$lower, default 0.005 (Lawley & Maxwell, 1971, p. 32).

Uniqueness is the variance of the unique factors; in linear-regression terms it is a kind of residual variance, and thus it cannot be zero. Situations with uniqueness equal to zero in factor analysis are known as 'Heywood cases'. Now, increasing `control$lower` to a threshold far from zero implies that the required solution is to be found in a narrower interval than the original [0, 1], at the risk of missing the maximum of the likelihood. Thus, in most practical cases this option should be left at its default unless you know what you are doing.

In any case, this is not at all useful for choosing the number of latent factors. For the latter, there are already [good methods](https://en.wikipedia.org/wiki/Factor_analysis):

- the Kaiser rule, e.g. keep all factors with eigenvalues $\geq 1$;
- the likelihood ratio test, e.g. the minimum number of factors needed in order to not reject $H_0$;
- the scree plot.
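As a rough illustration of the first two methods (a sketch only, assuming a numeric data matrix `X` with enough variables for the requested number of factors):

```
# Kaiser rule: count eigenvalues of the correlation matrix that are >= 1.
ev <- eigen(cor(X), only.values = TRUE)$values
sum(ev >= 1)
plot(ev, type = "b", ylab = "Eigenvalue")   # scree plot

# Likelihood ratio test: smallest k whose factanal() fit is not rejected.
for (k in 1:5) {
  p <- factanal(X, factors = k)$PVAL
  cat(k, "factors: LR test p-value =", round(p, 3), "\n")
}
```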
null
CC BY-SA 4.0
null
2023-04-25T10:43:57.603
2023-04-25T19:40:13.327
2023-04-25T19:40:13.327
56940
56940
null
614088
2
null
24339
0
null
If this is for forecasting purposes, I advise that you first do a forecasting segmentation, as intermittent time series can be categorized into 4 segments (i.e. intermittent, lumpy, smooth, erratic) using the ADI (Average Demand Interval) and CV² (squared coefficient of variation). After doing this, you can then apply different forecasting methods to each segment.

An intermittent time series refers to a set of data points that are observed irregularly or sporadically over time. Unlike a regular time series, where data is collected at a fixed interval, an intermittent time series may have gaps or missing values between observations. For example, if you are tracking the number of customers who visit a store, and you only record the data when someone makes a purchase, you would have an intermittent time series. This is because you are not collecting data at regular intervals but only when there is an event (i.e., a customer makes a purchase).

Intermittent time series can be challenging to analyze and forecast because of the irregularity in the data. However, there are statistical methods and techniques that can be used to analyze intermittent time series and make predictions based on the available data.
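A minimal sketch of this segmentation for a single, made-up demand series (the 1.32 and 0.49 cut-offs are the ones commonly quoted for this classification scheme):

```
# Sketch: classify one demand series by ADI and CV^2.
demand <- c(0, 0, 5, 0, 3, 0, 0, 0, 7, 2, 0, 4)   # made-up series

nz  <- demand[demand > 0]
ADI <- length(demand) / length(nz)        # average interval between demand buckets
CV2 <- (sd(nz) / mean(nz))^2              # squared coefficient of variation of sizes

cls <- if (ADI < 1.32 && CV2 < 0.49) {
  "smooth"
} else if (ADI >= 1.32 && CV2 < 0.49) {
  "intermittent"
} else if (ADI < 1.32 && CV2 >= 0.49) {
  "erratic"
} else {
  "lumpy"
}
c(ADI = ADI, CV2 = CV2)
cls
```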
null
CC BY-SA 4.0
null
2023-04-25T11:03:12.860
2023-04-25T11:03:12.860
null
null
386526
null
614089
1
null
null
0
12
I am trying to apply dimensionality reduction to a multidimensional data set (with numerical features) that has significant outliers. I have managed to identify the outliers with Isolation Forest, but now I'm in a quandary about how to handle them:

- As the features are on different scales, I'm expected to do some normalization as a first step, but with these strong outliers the usual methods like PCA are out of the question, as they're sensitive to outliers.
- Omitting the observations seems suboptimal, as it increases bias; besides, I'm going to work on time series forecasting later on, where each time step is expected to be filled in the given interval, so I cannot leave out observations.
- Maybe I could do clipping/winsorizing on the individual features, but I'm wary of this as it disregards feature correlation.

Is there a good way forward in this situation? Maybe a dimensionality reduction method that is robust to outliers?
How to apply dimensionality reduction to a data set with outliers?
CC BY-SA 4.0
null
2023-04-25T11:13:12.033
2023-04-25T11:13:12.033
null
null
72735
[ "outliers", "dimensionality-reduction", "extreme-value", "data-preprocessing" ]
614090
2
null
97083
0
null
The optimal value function $v_*(s) := \max_\pi v_\pi(s)$ fulfills the Bellman optimality equation $$ v_*(s) = \max_a R(s,a) + \gamma \sum_{s'} P(s'|s,a) v_*(s').$$ Also, we know by Banach's fixed point theorem that any other value function $v_\pi$ that fulfills this equation is equal to $v_*$, which implies that $\pi$ must be an optimal policy as well.

Given the statements in the previous paragraph, we can argue as follows: the policy iteration algorithm stops once the policy improvement step doesn't change $\pi$. That is exactly the case when $v_\pi$ fulfills the Bellman optimality equation. Thus $\pi$ must be equal to an optimal policy once policy iteration stops.
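As a toy illustration of that stopping argument, here is a sketch of policy iteration on a small made-up 2-state, 2-action MDP (not tied to any particular problem); the loop exits exactly when the greedy improvement leaves the policy unchanged:

```
# Sketch: policy iteration on a made-up 2-state, 2-action MDP.
gamma <- 0.9
R <- matrix(c(1, 0,                      # R[s, a]: reward for action a in state s
              0, 2), nrow = 2, byrow = TRUE)
P <- array(c(0.8, 0.3, 0.2, 0.7,         # P[s, s', a = 1]
             0.1, 0.6, 0.9, 0.4),        # P[s, s', a = 2]
           dim = c(2, 2, 2))

pol <- c(1, 1)                           # initial policy: one action index per state
repeat {
  # Policy evaluation: solve (I - gamma * P_pol) v = r_pol exactly.
  P_pol <- t(sapply(1:2, function(s) P[s, , pol[s]]))
  r_pol <- sapply(1:2, function(s) R[s, pol[s]])
  v <- solve(diag(2) - gamma * P_pol, r_pol)

  # Policy improvement: greedy action with respect to the current value function.
  Q <- sapply(1:2, function(a) R[, a] + gamma * P[, , a] %*% v)
  new_pol <- apply(Q, 1, which.max)

  if (all(new_pol == pol)) break         # improvement leaves pol unchanged: stop
  pol <- new_pol
}
pol   # optimal policy
v     # its value function
```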
null
CC BY-SA 4.0
null
2023-04-25T11:17:32.193
2023-04-25T11:17:32.193
null
null
386527
null
614091
1
null
null
1
65
I predict a continuous variable by taking the average of $N$ model predictions. The models are different in terms of their functional form, i.e. a tree model, a neural net, etc. Is the average SHAP value for some input variable across those $N$ models equal to the SHAP value I would get by treating the aggregate model as a single model? Put differently, if I wanted to derive the SHAP value of an input variable for my aggregate (average) model prediction, is taking the average of the individual SHAP values for that respective input variable across models the correct approach?
SHAP values of Ensemble Model
CC BY-SA 4.0
null
2023-04-25T11:35:01.470
2023-04-26T08:44:35.570
null
null
182258
[ "interpretation", "ensemble-learning", "shapley-value" ]
614092
1
null
null
0
43
So I have a model: $\log(y)= \beta_0 + \beta_1 \text{dog} +\beta_2 \text{cat}$. I want to test for heteroskedasticity using a Breusch-Pagan test and the variables cat and dog. Should my formula be `bptest(log(data$y) ~ data$dog + data$cat)` or `bptest(data$y ~ data$dog + data$cat)`?
Breusch-Pagan test in R: which formula is correct?
CC BY-SA 4.0
null
2023-04-25T11:40:39.517
2023-04-25T11:57:07.870
2023-04-25T11:57:07.870
298817
371754
[ "r", "regression", "heteroscedasticity", "breusch-pagan" ]
614094
1
null
null
1
41
I am using GAMs to model the occurrence of a species across 7 sites. I specified Site as a factor, and the response to each variable (smooths) is allowed to vary across sites. See a sample of code below:

```
model <- bam(Y ~ offset(log(offset)) + Site +
               s(X1, by = Site, bs = "tp", m = 1, k = 15) +
               s(X2, by = Site, bs = "tp", m = 1, k = 15) +
               s(X3, by = Site, bs = "tp", m = 1, k = 15) +
               s(X4, by = Site, bs = "tp", m = 1, k = 15) +
               s(X5, by = Site, bs = "tp", m = 1, k = 15),
             family = nb, data = data, method = "fREML",
             cluster = cl, discrete = T, rho = r1,
             AR.start = data$start.event, select = T)
```

The `summary()` function returns significant relationships for most predictors, yet most of the partial dependence plots are completely flat, which puzzles me... On the graph below, each line is a site (as ordered in the model) and each column is a variable (Xi). Some of these flat smooths come out with low p-values in the GAM summary. Am I missing something there?

[](https://i.stack.imgur.com/FPRFd.jpg)

What could be a reason for this?

```
Family: Negative Binomial(24.328)
Link function: log

Formula:
Y ~ offset(log(offset)) + Site + s(X1, by = Site, bs = "tp", m = 1, k = 15) +
    s(X2, by = Site, bs = "tp", m = 1, k = 15) + s(X3, by = Site, bs = "tp", m = 1, k = 15) +
    s(X4, by = Site, bs = "tp", m = 1, k = 15) + s(X5, by = Site, bs = "tp", m = 1, k = 15) +
    s(X6, by = Site, bs = "tp", m = 1, k = 15)

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -5.9005     0.3435 -17.179  < 2e-16 ***
Site2         0.5624     0.6199   0.907   0.3644
Site3         1.7944     2.9218   0.614   0.5392
Site4         1.9610     0.3709   5.287  1.4e-07 ***
Site5         0.4001     3.1490   0.127   0.8989
Site6         2.1811     1.0353   2.107   0.0353 *
Site7        -1.4402     1.7289  -0.833   0.4049
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                   edf Ref.df     F  p-value
s(X1):Site1  7.566e-01     14 0.067 0.226480
s(X1):Site2  1.217e+00     13 0.162 0.104868
s(X1):Site3  7.943e-05     12 0.000 0.393940
s(X1):Site4  5.057e-01     14 0.042 0.270652
s(X1):Site5  1.792e-03     13 0.002  < 2e-16 ***
s(X1):Site6  1.311e+00     14 0.124 0.202924
s(X1):Site7  2.487e-04     13 0.000 0.364268
s(X2):Site1  1.763e-05     12 0.000  < 2e-16 ***
s(X2):Site2  1.077e-04     14 0.000 0.000790 ***
s(X2):Site3  4.084e-05     13 0.000 0.027148 *
s(X2):Site4  1.333e-04     13 0.000 0.000148 ***
s(X2):Site5  2.063e+00     12 0.285 0.065660 .
s(X2):Site6  2.361e-05     14 0.000  < 2e-16 ***
s(X2):Site7  2.261e+00     13 0.588 0.001283 **
s(X3):Site1  2.525e+00     14 0.391 0.052133 .
s(X3):Site2  2.594e-04     14 0.000 0.278446
s(X3):Site3  2.544e-05     14 0.000 4.97e-06 ***
s(X3):Site4  6.631e+00     14 5.045  < 2e-16 ***
s(X3):Site5  4.185e+00     14 1.352 1.56e-05 ***
s(X3):Site6  4.474e-05     14 0.000  < 2e-16 ***
s(X3):Site7  3.923e-01     14 0.041 0.125654
s(X4):Site1  2.877e-05     11 0.000 0.078420 .
s(X4):Site2  1.118e+00     12 0.175 0.036045 *
s(X4):Site3  2.248e-05     10 0.000 3.47e-05 ***
s(X4):Site4  2.635e-05      9 0.000 0.500774
s(X4):Site5  2.885e+00     11 4.209  < 2e-16 ***
s(X4):Site6  4.288e+00     14 1.094 0.000167 ***
s(X4):Site7  1.495e+00     13 0.294 0.017538 *
s(X5):Site1  4.504e+00     14 3.434  < 2e-16 ***
s(X5):Site2  2.961e+00     14 1.050 2.48e-05 ***
s(X5):Site3  3.370e-01     14 0.033 0.228794
s(X5):Site4  4.620e+00     14 1.669 3.52e-06 ***
s(X5):Site5  5.495e-04     14 0.000 0.002088 **
s(X5):Site6  7.250e+00     14 4.226  < 2e-16 ***
s(X5):Site7  4.742e-01     14 0.034 0.179324
s(X6):Site1  8.345e-05     14 0.000  < 2e-16 ***
s(X6):Site2  1.995e+00     14 0.431 0.008962 **
s(X6):Site3  1.810e+00      9 2.581 2.01e-06 ***
s(X6):Site4  3.503e-05      8 0.000 0.003170 **
s(X6):Site5  2.583e+00     10 3.750  < 2e-16 ***
s(X6):Site6  2.980e+00     10 6.812  < 2e-16 ***
s(X6):Site7  5.722e-05     14 0.000 0.008897 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = 0.771   Deviance explained = 75.3%
fREML = 2326.1  Scale est. = 1         n = 1816
```

[](https://i.stack.imgur.com/09hP2.png)
Flat smooth term but significant relationship in GAM
CC BY-SA 4.0
null
2023-04-25T11:43:01.103
2023-04-25T16:41:33.957
2023-04-25T16:41:33.957
247492
247492
[ "generalized-additive-model", "mgcv", "partial-plot" ]
614096
1
614098
null
0
16
I have a dataset on contaminant exposure of small critters, in which there visually appears to be a large increase in egg production over time for treatments exposed to the contaminant. The selected model shows no significant effect. However, most of the high-producing animals died 15 days before the end of the experiment (55 days in total), and if I shorten the dataset in the model and exclude these last 15 days, then I get a significant effect of the contaminant. I am wondering if there is any valid reason to analyze the shortened time period in this manner when subjects are dying/lost.
Can I shorten dataset for time-dependent analysis of factor due to death of subjects?
CC BY-SA 4.0
null
2023-04-25T12:33:14.177
2023-04-25T12:43:42.333
null
null
380763
[ "regression" ]
614098
2
null
614096
1
null
The most important thing here is that you are not HARKing: hypothesizing after results are known. It is natural that we would prefer to publish 'statistically significant' results, but this should be done based on pre-defined analyses. You have seen both your original model and the model excluding the last 15 days, and you now know that the model with 15 days excluded actually gives you a significant effect. It is then easy to try to find reasons why the model with 15 days excluded would actually be the more valid model, but ask yourself the following:

- If both models were statistically significant for the contaminant, would you have cared about the 40-day (15 days excluded) model?
- If the 55-day model was statistically significant but the 40-day model was not, would you have cared whether the 40-day model was valid?
- If both models were not statistically significant, would you have cared whether the 40-day model was valid?

Other than that, whether the model is valid is difficult to judge without the model specifications (what model, how it was fit, etc.).
null
CC BY-SA 4.0
null
2023-04-25T12:43:42.333
2023-04-25T12:43:42.333
null
null
385890
null
614099
1
null
null
0
28
We want to compute a cluster analysis in a not-well-researched field and therefore created about 60 questions. Those questions are supposedly aspects of factors, but we are not yet sure to what extent. Our primary purpose, however, is to analyse the object structure of the sample. Not least because 60 variables are far too many for our sample size, we would like to compute an EFA first and then use the same sample for a cluster analysis using the newly found factors. I thought a lot about the implications of using the same sample twice but can't find a problem here, since the two approaches are not dependent on each other. I think they complement each other well; the EFA leads to a reduction in detail. But maybe I am missing a crucial aspect here. Edit: I rewrote the title to make EFA more prominent.
Is there an issue doing EFA first, followed by a cluster analysis (with the newly found factors) with the same sample?
CC BY-SA 4.0
null
2023-04-25T12:49:03.813
2023-05-30T09:48:23.423
2023-05-30T09:48:23.423
364083
364083
[ "clustering", "methodology", "exploratory-data-analysis" ]
614101
2
null
158618
0
null
I wonder if the MBESS R package can be helpful here? Two functions of potential interest:

- ss.power.reg.coef: sample size for a targeted regression coefficient in MBESS, and
- ss.power.rc: sample size for a targeted regression coefficient in MBESS.

I'm new to this package. Any comments/warnings/advice on the practical usage of these two functions are very welcome.
null
CC BY-SA 4.0
null
2023-04-25T12:54:42.253
2023-04-25T12:54:42.253
null
null
256614
null
614102
1
null
null
0
29
I have a machine learning problem that I solve via nonlinear regression. I have 80 samples in total and try to understand whether it is useful to gather more data. For this, I plot learning curves in the following way:

- I randomly split the data into 60/20 train/validation samples.
- I run the training on different numbers of training samples: 20, 23, 26, ..., 60.
- For all iterations, I test on the same validation dataset (20 samples). So in the end, I get validation error vs. number of training samples.
- I repeat steps 1-3 with different random seeds for the train/validation split 5 times.
- Finally, for each number of training samples, the error is averaged over the 5 splits.

[](https://i.stack.imgur.com/rI3Mb.png)

Now, I need to understand two things:

- The standard deviation over different splits is much higher for the validation error than for the training error. Does that mean the number of validation samples is not enough? Or maybe the data varies too much from split to split?
- Do I still have the potential to decrease the validation error and improve model performance if I gather more data? The plots don't look saturated to me.
Proper conclusions from learning curves
CC BY-SA 4.0
null
2023-04-25T13:29:30.517
2023-04-25T13:38:55.670
2023-04-25T13:38:55.670
383681
383681
[ "machine-learning", "nonlinear-regression", "validation", "training-error" ]
614103
1
null
null
1
14
I have a vector $x = (x_0, \ldots, x_T)$ and given this vector, I would like to sample an index $k$ between $0$ and $T$. The probability of sampling index $k$ is given by a weight $w_k$ that is a function of $x_k$. In other words $$ \mathbb{P}(\mathsf{k} = k \mid x) = w_k, $$ where $w_k = w_k(x_k)$. I will assume these weights sum up to one $\sum_{k=0}^T w_k = 1$. Basically this is a Categorical distribution conditioned on $x$. I would like to write a measure-theoretic expression for the Markov Kernel corresponding to this operation. > Note: I understand that this is a discrete sampling procedure, but I am trying to understand measure theory better. I have a decent understanding of the theory of Markov Kernels, but I am struggling to construct a Markov Kernel given a practical sampling procedure. This is basically for me to see how one would go about writing the expression for the kernel. # Attempt I am looking to write down something (even if pseudo-correctly) as $K(x, dk)$. I understand that $dk$ makes no sense in a discrete setting. Would the kernel be this? $$ K(x, dk) = \sum_{k\in dk} w_k $$ I basically want to go from the definition of a Markov Kernel ([https://en.wikipedia.org/wiki/Markov_kernel](https://en.wikipedia.org/wiki/Markov_kernel)) to a practical expression.
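For what it is worth, one standard way to write such a kernel uses Dirac measures on the finite index set (a sketch of the construction, not the only possible notation):

$$
K(x, A) \;=\; \sum_{k=0}^{T} w_k(x_k)\,\delta_k(A), \qquad A \subseteq \{0,\dots,T\},
$$

so that $K(x,\cdot)$ is a probability measure for each fixed $x$ (the weights sum to one) and $x \mapsto K(x,A)$ is measurable for each fixed $A$. With respect to the counting measure on $\{0,\dots,T\}$, the kernel has density $k \mapsto w_k(x_k)$, which is the rigorous counterpart of the informal "$K(x,dk)$" notation.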
Expression for Markov kernel sampling indices in $\{0, \ldots, T\}$ according to weights depending on another variable
CC BY-SA 4.0
null
2023-04-25T13:51:16.097
2023-04-25T13:51:16.097
null
null
146552
[ "probability", "mathematical-statistics", "conditional-probability", "markov-chain-montecarlo", "measure-theory" ]
614107
1
null
null
0
6
I am currently evaluating a new device compared to a gold-standard device. Each person is measured at the same time with both devices, and I want to know how to investigate the results. My question is how to approach the data analysis. Calculate the difference between the 2 measures and then? I need to be able to show statistical significance. I had assumed a 2-tailed t-test would be used, but got confused. Just some general advice would be appreciated. I'm assuming there's no difference between the devices. Thank you in advance.
Approach to analysing data on 2 medical devices on the same patient
CC BY-SA 4.0
null
2023-04-25T14:22:10.007
2023-04-25T14:22:10.007
null
null
386541
[ "hypothesis-testing", "t-test", "mean", "differences" ]
614109
2
null
614021
1
null
The most useful test for multicollinearity in a generalized linear model, particularly with multi-level categorical predictors like yours, is based on the variance-covariance matrix of the coefficient estimates. If the [vif() function](https://stats.stackexchange.com/q/136004/28500) you invoked is from the R [car package](https://cran.r-project.org/package=car), then you are on the right track. As I understand the `coeftest()` function, however, it does not return the robust variance-covariance matrix. It generates and uses that matrix for significance testing of the individual coefficients, but then discards it. With the `car` `vif()` function, you need to pass to it an object that has the variance-covariance matrix available via `vcov(object)`. This might already be implemented somewhere, but you could generate the robust matrix directly from the `sandwich` package, define a new class for that matrix, and write a `vcov()` function for that class that just returns the matrix. Then submit the newly-classed matrix to `vif()`. Or you could adapt the code of `vif()` as shown in the [page linked above](https://stats.stackexchange.com/a/361793/28500) to work directly on the matrix. All that said, it's not clear what you gain by evaluating multicollinearity at this stage of the analysis. All (non-perfect) multicollinearity does is to increase the variance of the individual coefficient estimates of correlated predictors. If the model is working well, do you really care about that variance inflation? How would you change the direction of your study based on what you found?
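As a rough sketch of the "work directly on the matrix" route described above (assumptions: `fit` is a hypothetical fitted glm, `vcovHC()` from the sandwich package supplies the robust covariance matrix, and the generalized VIF is computed with the Fox-Monette determinant formula that `vif()` uses):

```
# Sketch: generalized VIFs computed directly from a robust coefficient covariance matrix.
library(sandwich)

V   <- vcovHC(fit, type = "HC0")               # robust vcov of the coefficients
asn <- attr(model.matrix(fit), "assign")       # maps columns to model terms

# Drop the intercept and work with the correlation matrix of the coefficients.
keep <- asn != 0
R    <- cov2cor(V[keep, keep, drop = FALSE])
trm  <- asn[keep]

gvif <- sapply(unique(trm), function(j) {
  cols <- which(trm == j)
  det(R[cols, cols, drop = FALSE]) *
    det(R[-cols, -cols, drop = FALSE]) / det(R)
})
names(gvif) <- attr(terms(fit), "term.labels")
gvif
```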
null
CC BY-SA 4.0
null
2023-04-25T14:29:11.943
2023-04-25T14:29:11.943
null
null
28500
null
614110
1
null
null
0
134
Suppose I have the following dataset that shows the faculty composition of different colleges and the total number of dropouts that happened in an academic year. Here is some R code for this problem:

```
set.seed(123)
n <- 100
college_id <- paste0("college_", 1:n)
percent_engineering <- runif(n, 0, 100)
percent_science <- runif(n, 0, 100 - percent_engineering)
percent_liberal_arts <- runif(n, 0, 100 - percent_engineering - percent_science)
percent_other <- 100 - percent_engineering - percent_science - percent_liberal_arts
number_of_dropouts <- sample(0:100, n, replace = TRUE)
number_of_students <- sample(200:1000, n, replace = TRUE)

df <- data.frame(college_id, percent_engineering, percent_science,
                 percent_liberal_arts, percent_other,
                 number_of_dropouts, number_of_students)
df$drop_out_rate <- (df$number_of_dropouts / df$number_of_students) * 100
```

I could be interested in trying to answer the following question: are colleges that tend to be more engineering-based associated with larger dropout rates in general? (E.g., let's say I don't have access to the number of dropouts per faculty per college.) A very basic way to explore this question could be to calculate correlations between faculty compositions and dropout rates:

```
library(ggplot2)

plot_list <- list()
percent_cols <- c("percent_engineering", "percent_science",
                  "percent_liberal_arts", "percent_other")

for (col in percent_cols) {
  # Calculate the Pearson and Spearman correlation coefficients
  pearson_cor <- cor(df[[col]], df$drop_out_rate, method = "pearson")
  spearman_cor <- cor(df[[col]], df$drop_out_rate, method = "spearman")

  # Create the scatter plot with a line of best fit
  p <- ggplot(df, aes_string(x = col, y = "drop_out_rate")) +
    geom_point() +
    geom_smooth(method = "lm") +
    labs(
      x = col,
      y = "Dropout Rate",
      title = paste("Relationship between", col, "and Dropout Rate"),
      subtitle = paste(
        "Pearson correlation =", round(pearson_cor, 2),
        "| Spearman correlation =", round(spearman_cor, 2)
      )
    )

  # Add the plot to the list
  plot_list[[col]] <- p
}

# Assign the plots to individual variables
g1 <- plot_list[["percent_engineering"]]
g2 <- plot_list[["percent_science"]]
g3 <- plot_list[["percent_liberal_arts"]]
g4 <- plot_list[["percent_other"]]

library(gridExtra)
grid.arrange(g1, g2, g3, g4, ncol = 2)
```

[](https://i.stack.imgur.com/TKsFE.png)

At first glance, this approach might be suitable for studying correlations between total dropout rates and the composition of different faculties (e.g. engineering); however, I think there might be some problems with it. One of the first problems that comes to mind is that this approach does not take into account the effect of other variables when calculating correlations. As a humorous example, perhaps science students and engineering students hate each other, and when a college has more than x% science students and y% engineering students, this results in fights, and these fights result in dropouts. Thus, the patterns and trends observed in this analysis between pairs of variables might not necessarily be attributable only to those pairs. As such, I think it would be a better idea to fit a regression model to this data and then find out the "average effect of each increase in faculty percent on dropout rate, adjusted for the other variables". But then, what are the limitations of performing, and drawing conclusions from, this type of correlation analysis? Thanks!
Limitations in Correlation Analysis
CC BY-SA 4.0
null
2023-04-25T14:36:13.530
2023-04-29T18:59:17.867
null
null
77179
[ "correlation" ]
614111
2
null
614083
0
null
If the interaction term is significant, then you already have evidence for a different association between outcome `DI` and `Costs` depending on `Condition`. As you modeled `Costs` as having a strict linear association with `log-odds(DI)`, the magnitude of the interaction coefficient tells you how much the slope of that association changes between the 2 values of `Condition`. Plots of the modeled probability of `DI` as a function of `Costs`, with separate curves for the 2 values of `Condition`, is one good way to illustrate your results. There are several packages for post-modeling analysis that can generate such displays. The [emmeans package](https://cran.r-project.org/package=emmeans), for example, allows you to show results in the log-odds or the probability scale, with standard errors and the possibility for multiple-comparison corrections. It only returns results with the random effects set to 0, but that shouldn't be a problem for your application.
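For the model in the question, a sketch of what this could look like with emmeans (assuming the fitted object is called `mreg`, as in the question; the grid of Costs values below is arbitrary and should span your observed range):

```
# Sketch: compare the slope of Costs between Conditions in the fitted glmer model.
library(emmeans)

trends <- emtrends(mreg, ~ Condition, var = "Costs")  # slope of Costs per Condition (log-odds scale)
trends
pairs(trends)                                         # test the difference in slopes

# Modeled probability of DI over Costs, separately by Condition:
emmip(mreg, Condition ~ Costs, at = list(Costs = seq(-2, 2, 0.5)),
      type = "response", CIs = TRUE)
```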
null
CC BY-SA 4.0
null
2023-04-25T14:53:01.753
2023-04-25T14:53:01.753
null
null
28500
null
614112
1
null
null
0
8
I have purchase panel data on consumers over a period of 1.5 years. Within this period, an intervention of roughly 0.2 years was implemented. The purchase data consist of purchases of products, and these products belong to product groups (10 product groups in total). I am interested in the effect of the intervention on the sales of products in each category (and in comparing categories). Furthermore, I have some variables that interact with the intervention variable, so I can analyze what the effect of these variables is during such an intervention.

As I have panel data for each individual, it is most powerful to estimate this effect at the individual level. However, I am not sure how to determine whether I should use a fixed, a random, or a mixed model. Can someone give me suggestions/advice on how to approach this?
Determine whether to use fixed effect, random effect or mixed effect model?
CC BY-SA 4.0
null
2023-04-25T14:59:27.063
2023-04-25T14:59:27.063
null
null
386543
[ "mixed-model", "fixed-effects-model" ]
614113
1
614748
null
1
56
I am using glmnet for feature selection, given a gaussian dependent variable. Part of my code is like this: ``` lambda_seq <- 10^seq(2, -2, by = -0.1) cv_output <- cv.glmnet(data.matrix(x_vars), y_var, alpha = 1, lambda = lambda_seq, nfolds = 5) ``` How can I use glmnet to select exactly three (continuous) predictor variables (features), from a large set? Three is best for me, given my limited dataset size. I guess I can write a loop and change lambda till I get three predictors. But is there something more efficient and elegant? If not, what is the best strategy for getting the optimal lambda (one that gives exactly three predictors) with a loop? To give some more context, I want to formulate a 3-variable model as part of a leave-one-out cross validation loop. Hence, the steps should be automated.
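For what it is worth, `glmnet` already reports the number of nonzero coefficients along its lambda path in the `df` component, so one possible shortcut (a sketch reusing the question's `x_vars`/`y_var` names) is to pick the largest lambda on the path that yields exactly three predictors:

```
# Sketch: largest lambda on the glmnet path with exactly 3 nonzero coefficients.
library(glmnet)

fit  <- glmnet(data.matrix(x_vars), y_var, alpha = 1)  # default lambda path
idx  <- which(fit$df == 3)[1]    # first (largest) such lambda; NA if no path point has df == 3
lam3 <- fit$lambda[idx]

b <- as.matrix(coef(fit, s = lam3))
setdiff(rownames(b)[b[, 1] != 0], "(Intercept)")       # names of the 3 selected predictors
```

Note that the active set can jump by more than one variable between path points, so there is no guarantee that some lambda gives exactly three nonzero coefficients; in that case a finer lambda grid (or a relaxed target) would be needed.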
Getting glmnet to select exactly a given number of features?
CC BY-SA 4.0
null
2023-04-25T15:01:21.323
2023-05-10T09:45:45.940
2023-04-28T14:54:11.117
182069
182069
[ "r", "feature-selection", "lasso", "glmnet" ]
614115
2
null
614079
0
null
The answer from @rjjanse (+1) covers several important points. There's a fundamental problem with what you seem to be doing, however: > I'm planning to run a large amount of regressions to test for an association between medical conditions. That sounds like a whole series of individual outcome-versus-predictor regressions, controlling for `age` as a covariate. That will get you into trouble in several ways, in particular the [multiple-comparisons problem](https://en.wikipedia.org/wiki/Multiple_comparisons_problem), [omitted-variable bias](https://en.wikipedia.org/wiki/Omitted-variable_bias), and the inability to identify associations with outcome that depend on combinations of predictors beyond `age`. Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/) provides much guidance on proper design. To understand the issues in causal analysis, read [Hernán and Robins](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/).
null
CC BY-SA 4.0
null
2023-04-25T15:03:47.123
2023-04-25T15:03:47.123
null
null
28500
null
614116
1
null
null
0
26
Disclaimer: I am relatively new to time series analysis, so I am not sure if my way of thinking makes sense. I have a time series of a few years, that I can model relatively well (respectable R squared, and normally distributed errors). However, in the last year of the data set, a series of effects happened that I am almost sure of had a large combined effect on the observed variable. Evidence of this is the fact that for this year, the residuals are no longer normally distributed and show a large upward bias. I would like to capture this 'combined effect' in the model, by looking at the residuals. I am wondering whether it is possible to regard the residuals as a sum of a signal (of this combined effect) and noise, and to model it accordingly. Is this a valid method?
How to handle an effect in a time series that enters the series at a later time?
CC BY-SA 4.0
null
2023-04-25T15:16:46.433
2023-04-25T15:16:46.433
null
null
384768
[ "time-series" ]
614117
1
null
null
0
19
Consider a two-dimensional stochastic field with power spectral density function $S_{ff}(k_1,k_2)$ given by: $$S_{ff}(k_1,k_2)=\sigma^2\frac{b_1b_2}{4\pi}\exp{\left(-\left(\frac{b_1k_1}{2}\right)^2-\left(\frac{b_2k_2}{2}\right)^2\right)}$$ $$-k_{1u}\le k_1\le k_{1u},\ -k_{2u}\le k_2\le k_{2u}$$ where $k_{1u}$ and $k_{2u}$ are the upper cut-off numbers. As stated in the literature, the following criterion is used to determine the values of $k_{1u}$ and $k_{2u}$: $$\int_0^{k_{1u}}\int_{-k_{2u}}^{k_{2u}}S_{ff}(k_1,k_2)dk_1dk_2=(1-\varepsilon)\int_0^{\infty}\int_{-\infty}^{\infty}S_{ff}(k_1,k_2)dk_1dk_2$$ where $\varepsilon$ is a very small number ($\varepsilon=0.001$). In my case, with given parameters $b_1$ and $b_2$, I get the following equation: $$\text{erf}\left(\frac{b_1k_{1u}}{2}\right)\text{erf}\left(\frac{b_2k_{2u}}{2}\right)=1-\varepsilon$$ So my question is: how can I solve the last equation for $k_{1u}$ and $k_{2u}$? Assuming $\text{erf}\left(\frac{b_1k_{1u}}{2}\right)=\text{erf}\left(\frac{b_2k_{2u}}{2}\right)$ and solving the last equation this way seems pretty vague... Thanks in advance
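For reference, if one does adopt the symmetric split mentioned at the end of the question (treating the two error-function factors as equal), the equation separates and has a closed form (a sketch under that assumption only):

$$
\text{erf}\!\left(\frac{b_1 k_{1u}}{2}\right)=\text{erf}\!\left(\frac{b_2 k_{2u}}{2}\right)=\sqrt{1-\varepsilon}
\quad\Longrightarrow\quad
k_{iu}=\frac{2}{b_i}\,\text{erf}^{-1}\!\left(\sqrt{1-\varepsilon}\right),\qquad i=1,2.
$$

Without an extra condition of this kind, the single equation only pins down a curve of admissible $(k_{1u},k_{2u})$ pairs, so some convention (such as the symmetric one) is needed to obtain unique values.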
Spectral Representation: Calculation of upper cut-off numbers
CC BY-SA 4.0
null
2023-04-25T15:19:30.640
2023-04-25T15:19:30.640
null
null
null
[ "self-study", "mathematical-statistics", "stochastic-processes", "spectral-analysis" ]
614118
1
null
null
1
90
Given a multivariate regression, how can I test if each element in the coefficient matrix is statistically significant? Would doing a t-test be right? $$\mathbf{Y}=\mathbf{X}\mathbf{B}+\mathbf{E},$$ where $\mathbf{Y}, \mathbf{X}$ and $\mathbf{E} \in R^{n\times m}$ $(n>m)$ and $\mathbf{B}\in R^{m \times m}$. You can treat each column of $\mathbf{Y}$ and $\mathbf{X}$ as a time series. I found [this](http://users.stat.umn.edu/%7Ehelwig/notes/mvlr-Notes.pdf) where it goes through multilinear regression (p.43) and also goes through inferences about the coefficient (p.75). But it considers each row of the coefficient in the test. My question is about testing each element of the coefficient to see if it is significant.
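For reference, under the usual assumption that the rows of $\mathbf{E}$ are i.i.d. $N(\mathbf{0},\boldsymbol{\Sigma})$, each element of the OLS estimate $\hat{\mathbf{B}}=(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{Y}$ is marginally normal, which gives an element-wise t-test (a sketch of the standard result):

$$
\widehat{B}_{ij}\sim N\!\left(B_{ij},\,\Sigma_{jj}\left[(\mathbf{X}^\top\mathbf{X})^{-1}\right]_{ii}\right),
\qquad
t_{ij}=\frac{\widehat{B}_{ij}}{\sqrt{\widehat{\Sigma}_{jj}\left[(\mathbf{X}^\top\mathbf{X})^{-1}\right]_{ii}}}\sim t_{\,n-m}
\ \text{ under } H_0\!: B_{ij}=0,
$$

with $\widehat{\boldsymbol{\Sigma}}=\frac{1}{n-m}(\mathbf{Y}-\mathbf{X}\hat{\mathbf{B}})^\top(\mathbf{Y}-\mathbf{X}\hat{\mathbf{B}})$; this is the same t-test one would obtain by regressing the $j$-th column of $\mathbf{Y}$ on $\mathbf{X}$ alone. One caveat: if the columns really are time series, serial correlation in the errors can invalidate these standard errors.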
Test for multivariate regression coefficients
CC BY-SA 4.0
null
2023-04-25T15:36:48.063
2023-05-01T04:06:22.697
null
null
312007
[ "regression", "multiple-regression", "multivariate-analysis", "multivariate-regression" ]
614119
1
null
null
0
22
In skip-gram we predict the context words. That is, the output layer before applying the softmax function is a number $V$ of words, where $V$ is the dictionary size. But each word is represented as a vector, so we have $V$ vectors in the output layer. And now we want to apply softmax to those vectors to get a vector of dimensionality $V$, where each component represents the probability of a word appearing with an input word. But how do we apply the softmax function to vectors? By definition the softmax function takes a single vector as input, but we have $V$ vectors, each of a chosen dimensionality $N$.
How does softmax work for vectors?
CC BY-SA 4.0
null
2023-04-25T15:55:38.993
2023-04-25T15:56:19.157
2023-04-25T15:56:19.157
386535
386535
[ "word-embeddings", "softmax", "word2vec" ]
614120
1
null
null
0
10
Suppose there are three variables A, B and C, that are thought to have a three-directional relationship. This means A is related to B and C, B also influences A and C, and C affects A and B. I wondered which analytic approaches could be taken to assess this complex relationship? I can think of - multiple regression, f.i., A ~ B + C. - Mediation analysis if we believe that B moderates the influence of A on C (f.i., A and C are behaviours, while B is an endogeneous factor).
Analysing three-directional relationships
CC BY-SA 4.0
null
2023-04-25T15:55:39.017
2023-04-25T15:55:39.017
null
null
277811
[ "correlation", "multiple-regression", "generalized-linear-model", "linear-model" ]
614121
2
null
608435
0
null
It appears that the situation described above has been addressed in Efron and Tibshirani's book, "An Introduction to the Bootstrap" (1994), specifically in Chapter 8 on more complex data structures. The authors consider the two-sample problem as a more complicated data structure than the one-sample problem. In the case of two samples, they recommend constructing each bootstrap replication by independently resampling from each of the two samples and then recomputing the bootstrap test statistic. They further suggest that more complex data structures can be handled by ensuring that the bootstrapping procedure mimics how the original data was generated and that the test statistic is computed from the bootstrap resamples in the same way as the original estimate. In the above case, the four-sample case is a straightforward extension of the one- and two-sample cases, and therefore the procedure outlined above should be valid. As for computing P-values, one approach is to invert the confidence-interval (CI) construction. Specifically, the P-value can be obtained by finding the smallest $\alpha$ such that the $1-\alpha$ CI does not contain the null hypothesis value (0 in the above case). This should be equivalent to the P-value formulas given above.
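A sketch of the CI-inversion idea for a percentile-type bootstrap (assuming `boot_stats` is a hypothetical vector of bootstrap replications of the statistic, with a null value of 0):

```
# Sketch: p-value by inverting percentile bootstrap confidence intervals.
alphas <- seq(0.001, 0.999, by = 0.001)
covers0 <- sapply(alphas, function(a) {
  ci <- quantile(boot_stats, c(a / 2, 1 - a / 2))
  ci[1] <= 0 && 0 <= ci[2]
})
p_inverted <- min(alphas[!covers0])   # smallest alpha whose CI excludes 0 (Inf if never excluded)

# Essentially equivalent shortcut for a two-sided percentile interval:
p_direct <- 2 * min(mean(boot_stats <= 0), mean(boot_stats >= 0))
```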
null
CC BY-SA 4.0
null
2023-04-25T16:10:55.287
2023-04-25T16:10:55.287
null
null
382320
null
614122
1
null
null
0
26
I'm kind of new to statistics, and I'm working on a time predictor related to process events. As an accuracy measurement, I'm using MAE, but my advisor asked me to use a statistical test to validate the statistical significance of the results I got with my model compared with the baseline models. He suggested the F-test and Kruskal-Wallis, but I lack experience with statistical tests. Which one should I use for validation, or should I be using another test? My data looks like this: [](https://i.stack.imgur.com/EwPjL.png) And the distribution is as in the following picture [](https://i.stack.imgur.com/dZYwa.png)
How can I prove statistical significance between model results?
CC BY-SA 4.0
null
2023-04-25T16:17:51.930
2023-04-25T16:19:52.370
2023-04-25T16:19:52.370
386553
386553
[ "hypothesis-testing", "statistical-significance" ]
614123
1
null
null
1
18
I have two samples I am using to evaluate the effect of a treatment. Group $T$ contains 60 treated objects and their scores, and group $C$ contains 60 control objects and their scores. The two groups are paired, so for every object in $T$ there is a corresponding object in $C$ that is identical in every way except for the treatment. Normally I would use a paired [permutation test](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.permutation_test.html) to evaluate whether the treatment results in a significant increase in score: randomly permuting each object in each pair between the two groups gives me a null distribution of my test statistic (average score in the group). However, the catch in this case is that each of my scores also has a known distribution, and I want my test to account for the variance in the scores. My gut tells me that I can still use a permutation test by first sampling scores from their distributions and then repeating the permutation test as normal (effectively getting an expected p-value over scores). However, I don't think I've ever seen this done before and would love to hear if there is a better way to go about this. Below is a concrete example to illustrate the point --- $$T=[X_t, Y_t,...]$$ $$C = [X_c, Y_c, ...]$$ Scores for $X_t = [0.5, 0.51, 0.48, ...]$ Scores for $X_c = [0.48, 0.46, 0.5, ...]$ Scores for $Y_t = [1, 1.2, 0.8, ...]$ Scores for $Y_c = [0.9, 0.95, 0.7, ...]$ Here we have paired objects between $T$ and $C$ which have different means associated with each object and also different levels of variance between pairs. My question is how I should account for the fact that there is variance in the scores for each object in my permutation test. Everything is in silico, so assume I can get arbitrarily many samples from the distributions.
How to evaluate whether a treatment effect is significant given variance in measurement
CC BY-SA 4.0
null
2023-04-25T16:21:29.520
2023-04-25T16:21:29.520
null
null
349988
[ "hypothesis-testing", "simulation", "permutation-test", "permutation" ]
614124
2
null
612589
0
null
There are two different questions at play here:

- How to model the autocorrelation function, $\rho(t)$, for irregularly spaced data
- How to adjust for autocorrelation for irregularly spaced data

You've somewhat answered the second question with the use of `corAR1`, but note that this still assumes the autocorrelation function $\rho(t) = \alpha^t$ for any $t \in \mathbb{R}$ (instead of $t \in \mathbb{Z}$), and can still potentially face model misspecification.

Question 1

Assuming stationarity, I personally like to work with the semi-variogram function, $\gamma(t)$, instead: \begin{align*} \gamma(t) = \frac{1}{2}\mathbb{E}[(R_{t+s} - R_s)^2] \end{align*} where $R(t)$ is the residual at time $t$ within the same `ID`. Letting $\sigma^2 = \text{Var}(R_t)$ be the common variance at all time points (implied by stationarity), we have \begin{align*} \rho(t) = 1 - \frac{\gamma(t)}{\sigma^2} \end{align*} To estimate $\gamma(t)$, we can fit a smoothing curve to the observed half-squared differences between residuals vs the time differences: \begin{align*} \widehat{\gamma}_{i,jj'} = \frac{1}{2}(R_{ij} - R_{ij'})^2 \quad \text{vs} \quad t_{i,jj'} = |\tau_{ij} - \tau_{ij'}| \end{align*} where $\tau_{ij}$ are the real times at which the $j$th measurement on the $i$th individual is observed.

```
library(nlme)
library(npreg)
library(dplyr)     # for the group_by()/summarize() pipeline below
library(ggplot2)

# Half pairwise differences of residuals
fit_conditional = lme(y ~ x1 + x2, random = ~1|ID, data = TimeSeries, method = "ML")
TimeSeries$resid = resid(fit_conditional)

semivariogram.data = TimeSeries %>%
  group_by(ID) %>%
  summarize(gamma = c(dist(resid)^2/2), t = c(dist(tm)))

# Normalize time scale to hours
semivariogram.data$t = semivariogram.data$t/3600

# Residual variance
s2 = var(TimeSeries$resid)

# Smoothing spline
mod.smooth = ss(semivariogram.data$t, semivariogram.data$gamma, nknots = 10)

# Predict smoothed semivariogram
myfit = predict(mod.smooth, x = 1:6663, se = F)
names(myfit) = c("t", "gamma")

# Compute autocorrelation function from this
myfit$rho = 1 - myfit$gamma/s2

ggplot(myfit, aes(x = t, y = rho)) + geom_line() + ylim(-1, 1)
```

[](https://i.stack.imgur.com/D5bDD.png)

The autocorrelation here is essentially zero except near the very end, where variability is introduced due to the fewer pairs of half-squared differences.

Question 2

Besides the `lme` approach, you can also consider a GEE approach, where even if you get the correlation structure wrong, you're guaranteed correct standard errors as long as your mean model is correctly specified. So, we can still adjust for an AR(1) structure, but if it's misspecified, no worries.

```
library(geepack)
mod = geeglm(y ~ x1 + x2, id = ID, data = TimeSeries, corstr = "ar1", waves = tm)
```
null
CC BY-SA 4.0
null
2023-04-25T16:36:13.407
2023-04-25T16:36:13.407
null
null
117159
null
614125
1
null
null
1
18
I would like to estimate the distribution of 1-minute averages for a process. I have sixty-five 1-week averages to work with. I also have the peak 2-minute averages for those sixty-five weeks. Intuitively, I feel there should be some sort of relationship between the variability of the 1-week averages and the variability of the 1-minute averages. I also suspect I should be able to estimate the variability of the process with the differences between the 1-week averages and the week's 2-minute peak. I'm ok assuming that the process values are normally distributed. Data points per minute may vary somewhat, but if needed we could assume there are 60 data points per minute. Any thoughts are appreciated!
Estimating distribution of minute averages from week averages
CC BY-SA 4.0
null
2023-04-25T16:59:07.603
2023-04-25T16:59:07.603
null
null
386555
[ "time-series", "distributions" ]
614126
1
null
null
0
8
I'm trying to specify a regression model. The data span 6 time periods in a patient population. Periods 1-3 are blood glucose samples, and 4-6 are troponin samples (a cardiac damage marker). So, a conditional growth model considering only the time-varying predictor in R would be, I think:

```
troponin ~ glucose*time_period + (time_period|patient)
```

assuming data in long format:

|patient |time_period |glucose |troponin |
|--------|------------|--------|---------|
|1       |1           |4       |NA       |
|1       |2           |3       |NA       |
|1       |3           |2       |NA       |
|1       |4           |NA      |10       |
|1       |5           |NA      |23       |
|1       |6           |NA      |7        |
|2       |1           |5       |10       |

and so on...

So, I've got 2 problems:

- How do I reformat the data to handle the separation of the predictor and outcome in the time dimension?
- How do I specify a model handling time variation (and autocorrelation) in both the predictor and the outcome? Could I do that with a multilevel model?
How to specify a regression model with a time-varying predictor and time-varying outcome?
CC BY-SA 4.0
null
2023-04-25T16:59:33.147
2023-04-25T16:59:33.147
null
null
45752
[ "regression", "repeated-measures", "panel-data", "time-varying-covariate" ]
614127
2
null
613879
1
null
Frank Harrell explains [here](https://stats.stackexchange.com/a/207157/28500) that measures based on ROC curves aren't very sensitive ways to compare different models. Also, unless you have a few tens of thousands of events, you don't have enough cases to use train/validation/test splits reliably, as Harrell explains [here](https://www.fharrell.com/post/split-val/) in general. (The power of survival models depends on the number of events, not the number of cases as does linear regression.) Furthermore, in your generation of `df2` you seem to have ignored an important warning in the [answer to which you link](https://stats.stackexchange.com/a/610284/28500): > Transformation of continous data to categorical means an information loss... From those perspectives, you are not "doing it the right way." One approach would be to take a bootstrap sample (of the same size, but with replacement) from from the full data set, perform the entire modeling process (cross validation to get penalty, etc.) with each of the modules on the bootstrap sample, and compare the 2 models' abilities to predict results on the entire data set. Repeat for a few hundred bootstrap samples, and combine the results.* Finally, if your model doesn't also include outcome-associated clinical variables, it probably won't be very useful. See [this recent report](https://doi.org/10.1016/j.crmeth.2023.100461) on better ways to use LASSO and other approaches for combining genomic data with clinical data for survival analysis: David Wissel, Daniel Rowson, and Valentina Boeva, "Systematic comparison of multi-omics survival models reveals a widespread lack of noise resistance," Cell Reports Methods 3, 100461 (2023). In response to comment The [document you cite](https://static-content.springer.com/esm/art%3A10.1038%2Fs41375-019-0604-8/MediaObjects/41375_2019_604_MOESM1_ESM.pdf) includes a different bootstrap-resampling approach for comparing two "modules." If you have precalculated continuous scores for all samples based on each of your 2 modules, then you can use that approach very simply. Identify the interquartile range for each of your module's scores. Call them something like `IQR1` and `IQR2`. ``` IQR1 <- IQR(module1) IQR2 <- IQR(module2) ``` Set up a null vector to store the results of the bootstrapping. ``` resultStore <- NULL ``` Set a random seed for reproducibility. ``` set.seed(97675) ``` Then do the bootstrapping. Repeat the following a large number of times, say 999. - Take a bootstrap sample from the full data set. That can be done from your dataSet simply via: ``` bootSample <- dataSet[sample.int(nrow(dataSet),replace=TRUE),] ``` - Do your Cox fits on bootSample for each of the modules. ``` bootFit1 <- coxph(Surv(time, status) ~ module1 + otherCovariates, data = bootSample) bootFit2 <- coxph(Surv(time, status) ~ module2 + otherCovariates, data = bootSample) ``` - Get the log-hazard-ratio across the IQR for each module's scores, and their difference: ``` logHR1 <- coef(bootFit1)[[1]] * IQR1 ## module1 is the first coefficient logHR2 <- coef(bootFit2)[[1]] * IQR2 ## module2 is the first coefficient logHRdiff <- logHR2 - logHR1 ``` - Append that difference to the result-storage vector ``` resultStore <- c(resultStore, logHRdiff) ``` After your multiple bootstrap samples, model fits, and calculations, your `resultStore` has an estimate of the distribution of the differences between log-hazard ratios for the two modules if you had applied the modules to new data samples. 
Put them in order: ``` resultStore <- resultStore[order(resultStore)] ``` and if you did 999 bootstrap samples then the 25th and 975th in order give you 95% confidence limits for the difference between the modules. If a difference of 0 is outside those confidence limits, then you have evidence that one of the modules is superior. That seems to be the approach used in the link you provided. With continuous scores, you might be advised to use a regression spline rather than a linear fit in the Cox model, so you'd have to adapt the `logHR` calculations to get the actual logHR between the 25th and 75th percentiles of the corresponding module's scores. There also are potentially [better ways to evaluate the bootstrapping results](https://stats.stackexchange.com/a/357498/28500). And this doesn't deal with potential randomness in how you developed the score to start, which the approach I suggested earlier could address. But it might be good enough for your purpose. --- *You also might try to to compare Akaike Information Criterion (AIC) values on the 2 models as built on the full data set, but that requires some extra care as explained on [this page](https://stats.stackexchange.com/q/25817/28500).
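Pulling the numbered steps together, the whole resampling loop might look like this (same hypothetical names as above; the survival package provides `coxph` and `Surv`):

```r
library(survival)

resultStore <- NULL
set.seed(97675)

for (b in 1:999) {
  # 1. bootstrap sample from the full data set
  bootSample <- dataSet[sample.int(nrow(dataSet), replace = TRUE), ]

  # 2. fit each module's Cox model on the bootstrap sample
  bootFit1 <- coxph(Surv(time, status) ~ module1 + otherCovariates, data = bootSample)
  bootFit2 <- coxph(Surv(time, status) ~ module2 + otherCovariates, data = bootSample)

  # 3. log-hazard ratio across each module's IQR, and their difference
  logHR1 <- coef(bootFit1)[[1]] * IQR1   # module1 is the first coefficient
  logHR2 <- coef(bootFit2)[[1]] * IQR2   # module2 is the first coefficient

  # 4. store the difference
  resultStore <- c(resultStore, logHR2 - logHR1)
}

resultStore <- sort(resultStore)
c(lower = resultStore[25], upper = resultStore[975])   # 95% percentile limits from 999 draws
```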
null
CC BY-SA 4.0
null
2023-04-25T17:52:39.083
2023-04-27T13:11:27.050
2023-04-27T13:11:27.050
28500
28500
null
614129
1
null
null
0
36
Suppose I have a measure $G$ that follows a Dirichlet Process, $$G \sim DP(H_0,\alpha)$$ where $H_0$ is some base measure. Is there a closed form solution for the expected value of $G$?
Expected value (and variance) of a Dirichlet Process
CC BY-SA 4.0
null
2023-04-25T18:32:43.047
2023-04-25T22:09:25.867
2023-04-25T22:09:25.867
385180
385180
[ "nonparametric", "density-estimation", "dirichlet-process" ]
614130
1
null
null
0
51
I know that it is not possible to run a fixed effects probit model, when fixed effects are at the individual level. In other words, it is not possible to estimate $\alpha_i$ for each individual $i$ in the sample. However, is it possible to include fixed effects that are not at the individual level? For example, suppose I am estimating the impact of race on hiring probability, and I want to control for occupation. Can I include occupation fixed effects? I assume so but want to confirm. At what point can probit not handle fixed effects?
Probit with "fixed effects"
CC BY-SA 4.0
null
2023-04-25T18:36:01.770
2023-04-25T20:49:55.033
2023-04-25T20:49:55.033
71679
373485
[ "fixed-effects-model", "probit", "identifiability" ]
614131
1
614134
null
1
16
I am new to time series analysis, but I am well versed in data science and machine learning. One thing that confuses me is how to set up the data for modeling. For example, I want to build an LSTM or regression model, and I have made all my variables stationary (first-order differences). My dependent variable/target is stationary and I can now predict $t+1$, but my goal is to predict the value at $t+8$ using the data at time $t$ with a model $$ \Delta_8 Y_{t+8}\equiv Y_{t+8} - Y_t=\beta_1 X_{1t}+\beta_2 X_{2t}+\dots+\beta_{10} X_{10t} $$ (a direct multi-step forecast). My question is whether it is the right approach to forecast the change in the target variable ($\Delta_8 Y_{t+8}=Y_{t+8} - Y_t$), i.e. the difference between its value now and its value eight hours later. According to time series theory, does targeting such a multi-step difference cause stationarity, unit-root or any other time-series-related problems?
Direct multi-step forecasting an integrated time series from a stationary model
CC BY-SA 4.0
null
2023-04-25T18:58:52.733
2023-04-25T19:22:38.020
2023-04-25T19:22:38.020
53690
279061
[ "time-series", "forecasting", "data-transformation", "differencing" ]
614132
1
null
null
0
26
I have a set of panel data and want to do a fixed-effect regression. My data consists of four years of survey data with (nearly) identical structure. My dependent variable (for each year) is a satisfaction dummy that can be answered with either yes or no. My independent variables are several characteristics people might have (i.e. sex, age, their plan/contract, etc.). These characteristics are all categorical with at most 5 categories. I am supposed to find out which characteristics are most likely to lead to high satisfaction, while dismissing the change in pricing over time. My problem is that all the data is aggregated. All I know is that a certain percentage of all satisfied/dissatisfied customers has a certain plan, is in a certain age group and so on. I cannot infer if a satisfied man is below age 30 for example, as I do not have individual data. Which approach could I take to do a regression with my aggregated data? I feel like the answer to this question is not particularly difficult, but I am a software engineer, not a data scientist, so while I have some formal education in the field, I have almost no practical experience.
Multiple regression with aggregate data
CC BY-SA 4.0
null
2023-04-25T19:00:20.990
2023-04-25T19:00:20.990
null
null
386566
[ "multiple-regression", "multivariate-analysis", "panel-data", "fixed-effects-model", "aggregation" ]
614133
1
null
null
0
13
First of all, I apologize in advance for errors and/or usage of unspecific terms. Let's say I have the median and IQR for some variable for a "test" dataset and a set of "reference" datasets, and I want to know how different or abnormal the distribution of the variable in the test dataset is in comparison/context of the reference datasets. The underlying motivation is to identify variables that are sufficiently different to warrant further investigation of the validity/quality of the variable in the test dataset. Intuitively, the more heterogeneous the variable in the reference datasets, the less worried I would be about the difference between the test and references (and vice versa). Let's say the information I have is as follows: |Dataset |Median |IQR | |-------|------|---| |TEST |9 |4 | |REFERENCE_A |11 |4 | |REFERENCE_B |14 |6 | |REFERENCE_C |12 |2 | |REFERENCE_D |16 |3 | |REFERENCE_E |14 |5 | One test I can imagine is - derived vaguely from ANOVA - finding the ratio between [the variance of difference in medians between the test and reference datasets] and [the within-dataset variance], then using the F-distribution to arrive at a p-value. $Numerator = ((9-11)^2 + (9-14)^2 + (9-12)^2 + (9-16)^2 + (9-14)^2)/(5-1) = 112/4 = 28$ $Denominator = (4^2 + 4^2 + 6^2 + 2^2 + 3^2 + 5^2)/(6-1) = 106/5 = 21.2$ $F-statistic = Numerator / Denominator = 28/21.2$ Using scipy, p-value = scipy.stats.f.sf($numerator/denominator, df_n, df_d$) = scipy.stats.f.sf($28/21.2, 4, 5$) = ~$0.377$ Then, I could set some arbitrary level of p, below which I can flag for further investigation. Please let me know if there's any kernel of merit to this approach, if so, any corrections, and if not, any alternatives. Thanks!
Test for comparing one distribution to a set of distributions from summary statistics
CC BY-SA 4.0
null
2023-04-25T19:09:52.240
2023-04-25T19:09:52.240
null
null
184813
[ "hypothesis-testing", "statistical-significance" ]
614134
2
null
614131
2
null
Stationarity is needed for obtaining well-behaved parameter estimates of the model. Once that is done, you can obtain the forecast $\hat Y_{t+8}$ of $Y_{t+8}$ by $Y_t+\widehat{\Delta_8 Y}_{t+8}$. This step is not invalidated by concerns of unit roots and such.
null
CC BY-SA 4.0
null
2023-04-25T19:22:11.773
2023-04-25T19:22:11.773
null
null
53690
null
614135
2
null
614130
1
null
- This question only matters if the design is actually panel data. It's no different from logistic regression on panel data in this respect. - In short, you can absolutely fit fixed effects to panel data in a probit model. The main objective of the data modeling strategy is to make the responses mutually conditionally independent, so that standard statistical inferential approaches are accurate. - Random effects are only one way of achieving point 2. Fixed effects do not necessarily mean that you adjust for a dummy variable for each subject in an analysis. You may, for instance, adjust for predictive effects such as age, income, employment status, health history, etc. A "random effect" is nothing more than a sum of all the unobserved patient-level variables. - If the panel data contain clusters that are large enough and contain enough variability, you can still adjust for individual-level fixed effects. The problem, of course, is that for a large number of sparse clusters these fixed-effect terms become unstable, and it's desirable to employ an estimator with shrinkage properties, either a REML GLMM (your random effect) or even a ridge estimator (penalized regression with an L2 penalty).
null
CC BY-SA 4.0
null
2023-04-25T19:27:05.993
2023-04-25T19:27:05.993
null
null
8013
null
614136
1
null
null
2
48
One of the things I've struggled with is the issue of effectively summarizing some sort of time series metric (i.e. independent variable) for use in a regression analysis. The most common solution that I've encountered (which may not be the best) is to compute a meaningful average. Although I can get a single number I still have the nagging feeling that it doesn't really capture what I need. Here's an example graph: [](https://i.stack.imgur.com/542Lo.png) Let's assume that there are 3 data points which when plotted over time we can see different types of growth. What I wish to capture/model is that the "flattens" represents the possibility that item sale (or whatever it represents) is eventually going to take a downturn or just stop selling. The "growth" curve is something that won't immediately stop but for the time being (e.g. 1 month) it's a safe bet to assume item popularity. Here are my questions: - How can/should we capture the "shapes" of such time series based independent variables for regression? Does it even matter or is the average the best we can do? - I thought of computing "slopes" (start, end) but as the graph shows (approximately) that there such a computation could lead to similar values for both flattens and growth curves. Is this even a thing? - What could/should I use? Note: These shapes are just examples and the actual time series could be rather fluctuating. What I think I'm interested in is a way to capture a notion of a general trend in some way (if at all). I understand trends could be captured via moving averages, but I don't know how to capture "up/down-ness" or summarize it in a meaningful way. I'm hoping this doesn't become a two-step problem where we first train a model on "learning" these curves and what they represent and then feeding the output into the regression. That would be too complicated for now but I don't even know if that's a valid solution or should I just stick with averages.
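To make this concrete, here is the kind of per-item feature extraction I've been considering (a rough sketch only; `sales_long` with columns `item`, `t`, `value` is a made-up name). The idea is that a curvature term might separate "flattening" from "growing" series even when their overall slopes look similar:

```r
library(dplyr)

# helpers: least-squares slope / curvature of a numeric vector over its index
slope_of <- function(v) coef(lm(v ~ seq_along(v)))[2]
curv_of  <- function(v) coef(lm(v ~ poly(seq_along(v), 2, raw = TRUE)))[3]

shape_features <- sales_long %>%             # 'sales_long' (item, t, value) is hypothetical
  arrange(item, t) %>%
  group_by(item) %>%
  summarise(
    mean_value   = mean(value),
    slope        = slope_of(value),           # overall trend
    curvature    = curv_of(value),            # acceleration vs. flattening
    recent_slope = slope_of(tail(value, 5))   # trend over the last 5 points
  )
```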
How to summarize a time series independent variable for regression?
CC BY-SA 4.0
null
2023-04-25T19:39:17.297
2023-04-26T16:57:21.167
2023-04-25T21:27:27.060
4426
4426
[ "regression" ]
614137
1
null
null
0
57
I am new to time series and would appreciate help in this matter. I have a time series with the following graph as a result of applying the `plot_acf` in Python. Given this, can it be inferred that this series can be predictable via Machine Learning algorithms? I have already tried LSTM and N-HiTS to predict and the MAE is very large compared to the MAE using a baseline algorithm (the baseline MAE is 20% of the MAE using the previously mentioned algorithms) [](https://i.stack.imgur.com/U1h2B.png) I am adding the data that originated this ACF plot, which corresponds to actual sales figures: - 324 - 281 - 691 - 281 - 410 - 346 - 86 - 43 - 389 - 43 - 194 - 22 - 0 - 130 - 65 - 173 - 86 - 281 - 0 - 0 - 65 - 43 - 86 - 65 - 0 - 0 - 130 - 65 - 130 - 173 - 151 - 43 - 0 - 43 - 173 - 108 - 173 - 130 - 65 - 0 - 0 - 0 - 86 - 86 - 0 - 22 - 22 - 22 - 22 - 22 - 43 - 43 - 22 - 65 - 130 - 130 - -86 - 22 - 173 - 43 - 86 - 43 - 86 - 22 - 65 - 0 - 43 - 43 - 0 - 194 - 0 - 0 - 108 - 108 - 130 - 86 - 86 - 22 - 43 - 86 - 22 - 0 - 0 - 43 - 86 - 22 - 86 - 22 - 0 - 108 - 0 - 65 - 108 - 22 - 86 - 43 - 65 - 43 - 0 - 108 - 86 - 0 - 22 - 65 - 151 - 0 - 43 - 86 - 151 - 43 - 43 - 22 - 108 - 0 - 108 - 0 - 43 - 65 - 43 - 108 - 86 - 0 - 151 - 22 - 0 - 108 - 65 - 65 - 22 - 0 - 43 - 22 - 22 - 65 - 43 - 130 - 151 - 108 - 0 - 130 - 151 - 130 - 65 - 0 - 130 - 43 - 0 - 0 - 22 - 43 - 0 - 65 - 108 - 22 - 65 - 0 - 130 - 86 - 86 - 281 - 216 - 22 - 173 - 108 - 173 - 302 - 410 - 86 - 65 - 65 - 22 - 86 - 22 - 194 - 86 - 130 - 86 - 216 - 108 - 173 - 432 - 238 - 151 - 194 - 194 - 324 - 22 - 367 - 324 - 238 - 367 - 410 - 216 - 497 - 259 - 108 - 281 - 281 - 216 - 108 - 259 - 216 - 130 - 65 - 173 - 86 - 65 - 43 - 43 - 86 - 130 - 194 - 108 - 194 - 238 - 108 - 22 - 43 - 65 - 173 - 86 - 151 - 151 - 130 - 22 - 151 - 86 - 281 - 86 - 259 - 65 - 86 - 173 - 65 - 259 - 173 - 108 - 238 - 130 - 151 - 259 - 259 - 151 - 389 - 65 - 259 - 173 - 238 - 108 - 43 - 65 - 173 - 65 - 65 - 216 - 151 - 302 - 86 - 259 - 130 - 86 - 151 - 65 - 238 - 43 - 86 - 130 - 65 - 130 - 259 - 22 - 432 - 173 - 216 - 108 - 130 - 410 - 324 - 86 - 475 - 130 - 410 - 86 - 216 - 151 - 173 - 85 - 302 - 173 - 259 - 281 - 281 - 86 - 173 - 216 - 302 - 22 - 108 - 43 - 108 - 389 - 245 - 43 - 43 - 216 - 22 - 151 - 302 - 259 - 194 - 346 - 43 - 302 - 194 - 151 - 194 - 173 - 43 - 43 - 281 - 367 - 43 - 367 - 43 - 0 - 65 - 86 - 151 - 108 - 281 - 130 - 22 - 86 - 389 - 324 - 216 - 346 - 43 - 216 - 43 - 130 - 497 - 65 - 22 - 43 - 238 - 324 - 0 - 65 - 43 - 86 - 216 - 238 - 43 - 108 - 194 - 108 - 605 - 302 - 130 - 281 - 86 - 86 - 302 - 173 - 216 - 194 - 86 - 259 - 151 - 389 - 194 - 454 - 65 - 130 - 518 - 216 - 43 - 302 - 22 - 151 - 216 - 65 - 302 - 389 - 281 - 410 - 259 - 238 - 151 - 216 - 389 - 194 - 216 - 130 - 194 - 108 - 65 - 475 - 194 - 216 - 43 - 86 - 86 - 130 - 410 - 324 - 173 - 65 - 86 - 151 - 324 - 475 - 281 - 454 - 130 - 259 - 389 - 216 - 86 - 216 - 194 - 130 - 454 - 194 - 108 - 108 - 43 - 86 - 65 - 238 - 65 - 108 - 86 - 65 - 22 - 108 - 216 - 65 - 65 - 346 - 281 - 86 - 302 - 151 - 324 - 194 - 151 - 151 - 108 - 43 - 0 - 0 - 65 - 324 - 281 - 151 - 0 - 0 - 389 - 302 - 432 - 540 - 108 - 346 - 0 - 194 - 238 - 259 - 130 - 238 - 389 - 238 - 518 - 86 - 281 - 0 - 0 - 108 - 43 - 0 - 0 - 0 - 65 - 0 - 0 - 0 - 0 - 0 - 0 - 108 - 194 - 0 - 216 - 238 - 22 - 43 - 259 - 216 - 22 - 410 - 238 - 86 - 151 - 281 - 259 - 151 - 173 - 432 - 43 - 238 - 65 - 389 - 43 - 173 - 324 - 130 - 367 - 194 - 86 - 65 - 65 - 302 - 367 - 367 - 130 - 216 - 454 - 65 - 0 - 432 - 108 - 0 - 216 - 194 - 108 - 281 - 259 - 108 - 259 - 173 - 
238 - 367 - 324 - 259 - 324 - 346 - 346 - 194 - 259 - 151 - 151 - 238 - 65 - 346 - 130 - 562 - 43 - 108 - 173 - 43 - 151 - 65 - 151 - 86 - 43 - 194 - 86 - 22 - 0 - 0 - 65 - 43 - 151 - 43 - 108 - 86 - 0 - 130 - 65 - 65 - 108 - 65 - 22 - 0 - 86 - 0 - 130 - 43 - 108 - 86 - 43 - 108 - 238 - 259 - 324 - 497 - 324 - 346 - 389 - 259 - 324 - 151 - 302 - 389 - 216 - 281 - 216 - 324 - 216 - 194 - 259 - 43 - 108 - 302 - 151 - 22 - 389 - 454 - 194 - 324 - 734 - 0 - 108 - 238 - 22 - 0 - 432 - 108 - 43 - 0 - 22 - 43 - 346 - 43 - 65 - 130 - 86 - 302 - 238 - 475 - 216 - 389 - 281 - 497 - 302 - 65 - 475 - 432 - 259 - 173 - 108 - 216 - 43 - 65 - 173 - 238 - 108 - 86 - 173 - 43 - 173 - 518 - 151 - 281 - 238 - 216 - 151 - 238 - 86 - 173 - 108 - 43 - 108 - 65 - 0 - 259 - 216 - 216 - 22 - 216 - 281 - 259 - 238 - 130 - 86 - 238 - 216 - 173 - 216 - 346 - 648 - 216 - 43 - 302 - 302 - 108 - 259 - 216 - 216 - 194 - 173 - 86 - 194 - 130 - 259 - 194 - 86 - 389 - 65 - 410 Do you recommend using other methods to determine the predictability of the time series? Many thanks for the help provided.
Is my time series predictable?
CC BY-SA 4.0
null
2023-04-25T20:01:02.057
2023-04-26T15:13:18.763
2023-04-26T09:46:07.783
386570
386570
[ "machine-learning", "time-series", "predictive-models", "autocorrelation", "forecastability" ]
614138
1
null
null
0
18
The standard result is that with a uniform prior on $p$ (from 0 to 1) and binomial signals ($h$ H signals and $(n-h)$ L signals from $n$ draws, each H occurring with probability $p$), the posterior mean is $(h+1)/(n+2)$ and the ex-ante probability of observing $h$ H's is $1/(n+1)$. These are beautifully short formulae. Are there papers, books or treatises that consider variations on them for use with undergraduate students? For example, one can entertain a parameter that increases the accuracy of the signal (e.g., x% of the time the signal is pure noise, or the signal is drawn with probability $p/2 + 1/2$, or ...); or a more extreme-loaded prior (a more central prior is easy, because one can start from the posterior after one H and one L signal). All of these are derivable, but it would be good to see a whole lot of variations in one go-to reference place.
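For concreteness, the conjugate Beta$(a,b)$ generalization of both formulas (the standard beta-binomial results, with the uniform prior being $a=b=1$) is $$\mathbb{E}[p\mid h] = \frac{a+h}{a+b+n}, \qquad \Pr(h \text{ H's in } n \text{ draws}) = \binom{n}{h}\,\frac{B(a+h,\,b+n-h)}{B(a,b)},$$ which reduce to $(h+1)/(n+2)$ and $1/(n+1)$ when $a=b=1$. The variations I have in mind would perturb the signal structure itself, not just the prior.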
Variations on Uniform Priors with Binomial Signals?
CC BY-SA 4.0
null
2023-04-25T20:15:37.523
2023-04-25T20:48:55.787
2023-04-25T20:48:55.787
22311
73384
[ "references", "uniform-distribution", "beta-binomial-distribution" ]
614139
1
null
null
0
21
In a comment, one user said that the true guard against overfitting is the adopted priors. But, for example, in Bayesian neural networks we do have priors on the weights, and yet the common advice is still to use a test dataset to check whether there is overfitting. So why is it sometimes said that Bayesian inference methods remove the need for a test dataset?
Why there is no need for a test dataset when using bayesian inference methods?
CC BY-SA 4.0
null
2023-04-25T20:22:24.620
2023-04-25T20:22:24.620
null
null
275569
[ "bayesian", "markov-chain-montecarlo", "overfitting", "bayesian-network" ]
614140
1
null
null
0
18
I'm working with a dataset to train a credit risk model. The dataset already has flags for how it should be split so results can be reproduced, it has a column with two values `DEV`, for development, and `OOT` for out-of-time. But then again I have another column that determines if the register will be used for train or test. What I find confusing is that for the `OOT` sample I have both `train` and `test` rows. Shouldn't the `OOT` rows only be used for validation? Or is it a common practice to split the `OOT` sample into train and test and include the `train` part to train the model?
What is the explanation of splitting an Out-Of-Time Sample into train and test?
CC BY-SA 4.0
null
2023-04-25T20:52:23.130
2023-04-25T20:52:23.130
null
null
157548
[ "train-test-split", "credit-scoring" ]
614141
1
null
null
0
17
In [this](https://stats.stackexchange.com/questions/13617/how-is-the-intercept-computed-in-glmnet) question, it's explained how the intercept is fit under normal linear regression. It is given that it is calculated using $$\beta_0 = \bar{y} - \sum_{j=1}^p \hat{\beta}_j \bar{x}_j$$ How is this done in the logistic case? Surely $\bar{y}$ is not a sensible quantity to use there.
How is the intercept fit in glmnet when doing logistic regression?
CC BY-SA 4.0
null
2023-04-25T21:35:08.953
2023-04-25T21:35:08.953
null
null
290294
[ "r", "logistic", "lasso", "linear", "glmnet" ]
614145
1
null
null
1
16
Consider a scenario where I compare a single column (the independent variable) in a data set to the other columns (the dependent variables), and I identify the outliers in the independent variable. What could I test by running the analysis once with the outliers removed and once with them kept? What measurements could I take to show the effect the outliers have on the relationship between the independent variable and the dependent variables? Although it is generally not a good thing to remove values from the raw data, I am curious about the approaches I could take here.
Measurements for calculating the difference of the Outliers has on a Data Set
CC BY-SA 4.0
null
2023-04-25T22:10:48.977
2023-04-25T22:10:48.977
null
null
386575
[ "r", "regression", "probability", "mathematical-statistics", "dataset" ]
614146
2
null
613905
3
null
Assume that the definition of the AR(1) process includes a specification of $X_0$ (absent which $X_1 = aX_0+ \varepsilon_1$ is difficult to interpret). The simplest choice is $X_0 = 0$ which makes \begin{align} X_1 &= \varepsilon_1\\ X_2 &= \varepsilon_2 + a\varepsilon_1\\ X_3 &= \varepsilon_3 + a\varepsilon_2 + a^2\varepsilon_1\\ &\;\;\vdots \end{align} For the process to be strictly stationary, it is necessary that $X_1$ and $X_2$ be identically distributed. But, since $\varepsilon_t$ is a white noise process, we have that \begin{align} \operatorname{var}(X_1) &= \sigma^2\\ \operatorname{var}(X_2) &= \operatorname{var}(\varepsilon_2) +a^2 \operatorname{var}(\varepsilon_1) + 2a\operatorname{cov}(\varepsilon_2,\varepsilon_1)\\ &= \sigma^2 + a^2 \sigma^2 + 0\\ &= \sigma^2(1 + a^2)\\ &\neq \sigma^2 = \operatorname{var}(X_1) \end{align} and so the process cannot possibly be a strictly stationary process. If $|a| < 1,$ then the variance of $X_n$ converges to $\dfrac{\sigma^2}{1-a^2}$. So, what happens if we set $X_0$ to have variance $\dfrac{\sigma^2}{1-a^2}$ and to be independent of the white noise process? Well, then \begin{align} \operatorname{var}(X_1) &= \dfrac{a^2\sigma^2}{1-a^2}+ \sigma^2\\ &= \dfrac{\sigma^2}{1-a^2}\\ &= \operatorname{var}(X_0). \end{align} Inductively, we get that all the $X_i$ have the same variance $\dfrac{\sigma^2}{1-a^2}$. However, equality of variances is not the same as equality of distributions, and there is no guarantee that the process is stationary even to order $1$, let alone strictly stationary. I will leave it as an exercise for the OP to determine whether this process is a weakly stationary process (also called a wide-sense-stationary process, or, in the time-series literature, a stationary process). Finally, if the white noise process is a Gaussian white noise process (which requires, among other things, that all the random variables are jointly Gaussian), then for $|a|< 1$ and $X_0 \sim N\left(0,\frac{\sigma^2}{1-a^2}\right)$, the process $\left\{X_n\colon n \geq 0\right\}$ is a strictly stationary (Gaussian) process.
null
CC BY-SA 4.0
null
2023-04-25T22:18:29.543
2023-04-25T22:18:29.543
null
null
6633
null
614147
1
null
null
1
10
I've conducted a questionnaire asking participants to rate biscuit samples based on factors such as taste and look. The ratings ranged from strongly like to strongly dislike. What is the best method to analyse/compare the ratings of different samples? For example, compare the appearance ratings of sample 1 with the appearance ratings of sample 2. Based on researching online, I was thinking about using Mood's Median Test and/or Wilcoxon Matched-pairs Signed-rank Test, but I'm uncertain if these are suitable.
How to analyse ordinal data for five food samples
CC BY-SA 4.0
null
2023-04-25T22:19:01.630
2023-04-25T22:19:01.630
null
null
366374
[ "spss", "ordinal-data", "likert", "wilcoxon-signed-rank" ]
614148
1
null
null
3
45
When setting out experiments, I want to make sure treatment groups are as balanced as possible. Instead of using randomization, I've started to use the following process. I first collect some measurements from the potential experimental units I have at my disposal. Then, I generate all the different possible combinations of treatment groups using all these potential experimental units (yes - I know it's computationally intensive). Finally, I see which combination keeps means, variances, and possibly higher-order moments of the measurements I took as consistent as possible across groups. I'm wondering if I need to take into account degrees of freedom somehow. I know the process I described above is something you do before an experiment even begins and isn't really related to the final statistical analysis, but I can't help thinking about degrees of freedom with all these parameters I'm calculating. As an aside, I've decided that, when you're placing n experimental units into each treatment group, that it only makes sense to calculate up to the nth or the (n - 1)th moment, but this idea is still a work in progress. Does anyone have any thoughts? Thank you in advance for your help - I'm very grateful.
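Here is a toy sketch of the search procedure I described, for 8 units split into two groups of 4 (the balance criterion shown is just one of many possibilities):

```r
# score every possible split of 8 units into two groups of 4 by how far apart
# the group means and variances are, then keep the best-balanced split
set.seed(1)
x <- rnorm(8)                          # a baseline measurement on 8 candidate units

splits <- combn(8, 4)                  # every way to pick group A (group B = the rest)
score <- apply(splits, 2, function(idx) {
  a <- x[idx]; b <- x[-idx]
  abs(mean(a) - mean(b)) + abs(var(a) - var(b))   # one possible balance criterion
})

best <- splits[, which.min(score)]
best                                   # indices of the units assigned to group A
```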
Optimizing Treatment Group Assignments
CC BY-SA 4.0
null
2023-04-25T22:31:17.627
2023-05-31T08:31:36.930
2023-04-25T22:44:44.590
315722
315722
[ "experiment-design", "combinatorics", "degrees-of-freedom", "random-allocation" ]
614149
1
null
null
1
41
Let $\mathcal X$ (observation space) and $\mathcal P$ (prediction space) be non-empty sets. Let $P$ be a variadic function (predictive model) mapping tuples of elements of $\mathcal X$ into $\mathcal P$. Let $\ell: \mathcal X \times \mathcal P \rightarrow \mathbb R$ be a loss function. The predictive task is then defined as follows: - $x_0$ is generated by some unknown mechanism, $M$. - We observe $x_0$. - We generate a prediction, $p_1 = P(x_0)$. - $M$ generates $x_1$. - We observe $x_1$ and $\ell(p_1, x_1)$. - We generate a prediction, $p_2 = P(x_0, x_1)$. - We observe $x_2$ and $\ell (p_2, x_2)$. - and so on... with the goal being to minimize some cumulative function of the losses. Given this situation, is there any natural structure (point, probabilistic, etc.) that we can put onto $\mathcal P$ in terms of $\mathcal X$ that doesn't "mislead" us, but will still allow us to make more "holistic" predictions that convey potential uncertainty, etc. than just letting $\mathcal P := \mathcal X$ ? I know that "all models are wrong...", but is there any underlying danger in using a prediction space that "makes" further assumptions about the data? such as letting $\mathcal P$ be a set of probabilistic predictions over $\mathcal X$ when we initially made no assumptions about how $x_0, x_1, \dots$ were generated (e.g. they could be deterministically, adversarially, or otherwise-generated). I'm aware that the study of "prediction of individual sequences" does address some of this, but from what I've seen, there isn't too much on how to choose $\mathcal P$ or otherwise create a model $P$ that doesn't rely on weighting/considering the input of "experts" (other predictive models). An example would be as follows: Let $\mathcal X := \{0, 1\}$. We don't make any assumption on how the $x_0, x_1, \dots \in \mathcal X$ are generated. They could be deterministically, probabilistically, adversarially (say w.r.t to our previous predictions), or otherwise-generated. Is it then a violation of our assumptions (or lack thereof) to use a predictive model that maps into a prediction space $\mathcal P \neq \mathcal X$? such as letting $\mathcal P$ be the set of all Bernoulli distributions and $P$ be the function mapping the tuple of elements in $\mathcal X$ to the empirical PMF of the elements of the tuple (fraction of 0's and fraction of 1's). Hopefully more succinctly: Does making a probabilistic (or other type of) prediction for a data point assume that that data point was/will be generated probabilistically (or otherwise)?
Do "types" of predictions "need" to match assumptions about data?
CC BY-SA 4.0
null
2023-04-25T22:55:12.510
2023-04-26T05:54:17.700
2023-04-26T05:54:17.700
386577
386577
[ "predictive-models", "modeling", "uncertainty", "probabilistic-forecasts" ]
614150
1
null
null
1
21
Say my randomization wasn't very effective, so I have two groups, each with 100 individuals, and the difference in success rates is 3% before the treatment has ever been administered. I want to know the lift in success rate that can be attributed to the treatment exposure. A statistician on my team proposed a regression model where we include $X=\{\text{group}, \text{exposure}\}$, and we include both experiment data as well as pre-experiment data. This should de-couple the bias in the control group, as distinct from the effect of treatment exposure. (However, it would not explain the source of bias, which is obviously of interest, too.) Then the difference in binomial model / logistic regression predictions could be used to determine the lift due to treatment exposure. $$ X=\{1, \text{treatment}, \text{exposure}\} $$ $$ B = [\text{intercept},\quad \theta_{\text{group}},\quad \theta_{\text{exposure}}]$$ $$ Z = \operatorname{Binomial}(n,k \mid \operatorname{sigmoid}(B\cdot X)) $$ $$ \text{lift} = Z(\text{treatment}=1, \text{exposure}=1) - Z(\text{treatment}=1, \text{exposure}=0)$$ I'm curious, is this an established design for measuring lift when experiment data is biased? Does it have a name? And how would you explain why this makes sense to a non-technical audience? I've read that the 'difference-in-differences' method might be what I'm describing, but I'm not certain of this.
How to account for bias in experiment data when quantifying treatment efficacy?
CC BY-SA 4.0
null
2023-04-26T00:24:01.303
2023-04-26T00:24:01.303
null
null
288172
[ "regression", "logistic", "experiment-design", "bias-correction" ]
614151
1
null
null
1
50
I have a classification task with a binary outcome -- however, I need to know the level of confidence in the classification of each individual medical record (previously referred to as a row). The specific classification problem is to ascertain whether someone is pregnant based on medical records (there are approximately 200 available predictors: ICD codes, procedure codes, age, etc.). I will be using a random forest classification algorithm. "Confidence" could be a residual for the row, or the specificity, sensitivity/recall, or F-score for the decision node associated with a row's terminal node. Note that I am using ensemble learning; a neural network and a logistic regression are already included.
Confidence score random forest classification
CC BY-SA 4.0
null
2023-04-26T00:46:53.220
2023-05-28T05:01:42.923
2023-05-28T05:01:42.923
247274
138931
[ "machine-learning", "classification", "random-forest", "residuals" ]
614152
2
null
613961
1
null
I don't know any formal proofs, but it was fairly straightforward to disprove this idea with a simulation - see Stata code below. You can tweak the values for relationships between `dist-> bath`, and `dist->y` without altering `bath->y`, and you will see fairly dramatic changes. ``` clear set obs 10000 gen dist = rnormal() gen bath = .1*dist + rnormal() gen y = .5*dist + .25*bath + rnormal() reg y bath dist reg y dist predict yhat, xb reg yhat bath ```
null
CC BY-SA 4.0
null
2023-04-26T00:55:19.303
2023-04-26T09:58:19.767
2023-04-26T09:58:19.767
149657
106580
null
614153
1
null
null
0
30
I have had to include a squared term in my regression model due to observed non-linearity in the LOWESS plot. In my reading on how to interpret the coefficients on the linear and squared terms, most advice is to centre the variable around its mean. However, my regression model has multiple predictors, with the others entering linearly. I'm unsure whether I should still mean-centre, especially as the examples I've seen generally deal with a bivariate relationship.
Mean centering polynomial regression model
CC BY-SA 4.0
null
2023-04-26T00:57:46.560
2023-04-26T17:18:22.770
null
null
386581
[ "regression", "mixed-model", "multivariate-analysis", "panel-data", "polynomial" ]
614154
2
null
614151
1
null
Random forest inherently has this ability. Contrary to popular belief, a random forest does no classification itself. A random forest outputs values on a continuum. Those values can be compared to a threshold in order to make classifications (below threshold is negative, above is positive), but that is separate from the random forest (even if software typically bundles the two together). For instance, `randomForest::predict.randomForest` in `R` has an argument `type` that returns values on a continuum if you set `type = "prob"`. Importantly, these can be interpreted as probabilities, and those give you your confidence scores. Not only can you claim a classification, but you can claim the probability of that classification. Random forests have the pesky attribute, however, that they tend to lack calibration. That is, when they predict an event to happen with probability $p$, the event does not happen in a proportion $p$ of the time. The `sklearn` documentation has some [nice material](https://scikit-learn.org/stable/auto_examples/calibration/plot_calibration_curve.html#sphx-glr-auto-examples-calibration-plot-calibration-curve-py) to introduce you to the idea behind calibrating probabilities. Two terms worth knowing for your research are "Platt scaling" and "isotonic regression".
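If you end up working in R rather than sklearn, a minimal Platt-scaling sketch might look like this (`X` and `y` are placeholders for your predictors and 0/1 outcome; using out-of-bag scores is a rough substitute for a proper held-out calibration set):

```r
library(randomForest)

# 'X' (predictor matrix/data frame) and 'y' (0/1 outcome) are placeholders
rf <- randomForest(x = X, y = factor(y))

p_oob <- rf$votes[, 2]                 # out-of-bag vote fractions ~ raw RF "probabilities"

# Platt-style recalibration: logistic regression of the outcome on the RF score
platt <- glm(y ~ p_oob, family = binomial)
p_calibrated <- fitted(platt)          # recalibrated probabilities / confidence scores
```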
null
CC BY-SA 4.0
null
2023-04-26T01:21:09.817
2023-04-26T01:21:09.817
null
null
247274
null
614155
2
null
613689
1
null
"Is the p-value large because the correlation coefficient is low?" Basically, yes. A lower sample statistic (r in your example) will always give you a higher p-value, all else being equal. What this p value is telling you is that the probability that you would get a sample correlation of 0.0434 (or higher) if the actual population correlation is zero (i.e., the null hypothesis is correct) is 0.4687. Because this is a relatively high probability - and is certainly greater than the standard cutoff of 0.05 - you don't have evidence that allows you to reject the null hypothesis. Try this (I hope it doesn't confuse you more!): Suppose instead of a probability, you had a certainty: If the population correlation was zero, then the probability of getting a sample correlation of x or lower is 1.0. So, you got a sample correlation of x. I hope you can clearly see that this doesn't give us evidence to reject the antecedent condition (population corr = 0), because we are certain to get this sample correlation when our population correlation is zero. (It also doesn't let us ACCEPT the antecedent. There might be some other underlying circumstances that could also cause a sample correlation of x.) Now, think about a certainty the other way: If the population correlation is zero, the probability of getting a sample correlation of y is 0. If you get a sample correlation of y, then if this were true you would know that the antecedent cannot be true: Because it is impossible to get that sample correlation if the population correlation is really y. So, you can reject the null hypothesis: We would know that the population correlation must not be zero. Now, in real life, we don't have certainty. The probability we get is going to be somewhere between 0 & 1. When the p-value we get is below some arbitrarily chosen low cutoff (often 0.05), we essentially treat it as if the probability were zero. In that case, we can reject the null hypothesis. On the other hand, if the p-value is above that cutoff, then we can't treat it as zero, so we cannot reject our null hypothesis. Your probability value is not below the cutoff, so you cannot reject the hypothesis that the population correlation is zero.
null
CC BY-SA 4.0
null
2023-04-26T01:21:16.683
2023-04-27T12:46:34.780
2023-04-27T12:46:34.780
386576
386576
null
614156
1
null
null
1
11
I am trying to find the best analysis approach for my data. Simply put, I am looking at a treatment for the impacts of drug exposure before birth. My groups are: 1 - no drug, no treatment 2 - drug, no treatment 3 - no drug, treatment 4 - drug, treatment In each group, there are 5-7 pairs of siblings (1 male, 1 female from each mother, so 5-7 mothers per group) - so 10-14 participants in total. However, as they are related, I do not want to treat the siblings as independent from one another, as the mothering style etc. could have an impact on the data I am collecting (although it is not something I am particularly interested in). Additionally, I want to consider whether the drug and treatment are having different impacts in males and females. Thus far I had been using an RM ANOVA in SPSS with the siblings as within-subjects variables, and drug and treatment as between-subjects variables (e.g. drug 1 or 2 [no exposure or exposure] and treatment 1 or 2 [treatment or no treatment]). However, there are some issues with this. For some of my data sets I am missing data from one of a pair of siblings (sometimes multiple times), meaning the data from the other sibling is lost, reducing my power - obviously an issue when I already have small group numbers. For some measures, I have significant p-values for Levene's test of equality of variances and I am struggling to find an appropriate alternative parametric test. My group sizes are unequal. I'm sure there are other issues I am unaware of in this setup. I am very unfamiliar with linear mixed modeling, and no one in my lab uses it, but it has come up a few times as a potentially more appropriate approach. Alternatively, I would also consider analyzing the sexes separately from one another, or treating sex as a within-subjects variable in a univariate ANOVA, but then I would lose the ability to control for sibling effects. So I guess my questions are - Is an RM ANOVA appropriate here, and if not, what would be a more appropriate test? - If I do continue to use RM ANOVAs, what is the appropriate response to a significant Levene's test? - If I were to change to a mixed model, what might that setup look like and what are the appropriate follow-up tests to an interaction? I have ended up doing research in an area requiring stats I am not familiar with (clearly) and have very minimal support available to me, so any help would be greatly appreciated.
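For concreteness, this is the kind of mixed model I think people have been suggesting to me (variable names are made up and I am not at all sure this specification is right):

```r
library(lme4)
library(lmerTest)   # adds p-values to the fixed-effect tests

# 'dat' in long format, one row per offspring, with made-up variable names
m <- lmer(outcome ~ drug * treatment * sex + (1 | mother), data = dat)
anova(m)            # tests of the fixed effects and their interactions
```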
RM-ANOVA vs Mixed Model
CC BY-SA 4.0
null
2023-04-26T02:46:29.367
2023-04-26T02:46:29.367
null
null
386583
[ "mixed-model", "anova", "linear-model", "levenes-test" ]
614157
1
null
null
0
20
I am trying to look at how two different factors A (levels A1, A2, A3, A4) and B (levels B1 and B2), as well as the interaction between the two, influence the time to an event X. As a result I am trying to use a cox proportional hazards model as my data contains censored data (individuals for which event X did not occur in the time of the study period: 1 = Event occured, 0 = Event did not occur). My model is thus as follows: ``` model.all<-coxph(Surv(X, Censor.Y.or.N) ~ A*B, data = data) summary(model.all) ``` ``` Call: coxph(formula = Surv(X, Censor.Y.or.N) ~ A * B, data = data) n= 199, number of events= 119 (2 observations deleted due to missingness) coef exp(coef) se(coef) z Pr(>|z|) AA2 -0.5319 0.5875 0.3929 -1.354 0.17582 AA3 -0.5779 0.5611 0.4104 -1.408 0.15909 AA4 -0.8626 0.4220 0.3701 -2.331 0.01977 * BB2 -1.4654 0.2310 0.4706 -3.114 0.00185 ** AA2:BB2 1.5935 4.9208 0.6509 2.448 0.01436 * AA3:BB2 1.7029 5.4896 0.5898 2.887 0.00389 ** AA4:BB2 1.7132 5.5468 0.5715 2.998 0.00272 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 exp(coef) exp(-coef) lower .95 upper .95 AA2 0.5875 1.7022 0.27197 1.2690 AA3 0.5611 1.7823 0.25100 1.2542 AA4 0.4220 2.3694 0.20432 0.8718 BB2 0.2310 4.3292 0.09184 0.5810 AA2:BB2 4.9208 0.2032 1.37410 17.6218 AA3:BB2 5.4896 0.1822 1.72773 17.4425 AA4:BB2 5.5468 0.1803 1.80962 17.0021 Concordance= 0.609 (se = 0.027 ) Likelihood ratio test= 14.27 on 7 df, p=0.05 Wald test = 12.85 on 7 df, p=0.08 Score (logrank) test = 13.75 on 7 df, p=0.06 ``` So far my interpretations of the main effects are as follows: A (adjusted for B): - At a given instance in time, event X is 0.59 times as likely (41% less likely) in A2 individuals compared to A1 individuals. Time to event X is not significantly longer for A2 compared to A1. - At a given instance in time, event X is 0.56 times as likely (44% less likely) in A3 individuals compared to A1 individuals. Time to event X is not significantly longer for A3 compared to A1. - At a given instance in time, event X is 0.42 times as likely (58% less likely) in A4 individuals compared to A1 individuals. Time to event X is significantly longer for A4 compared to A1. B (adjusted for A): - At a given instance in time, event X is 0.23 times as likely (77% less likely) in B2 individuals compared to B1 individuals. Time to event X is significantly longer for B2 compared to B1. A likelihood ratio test revealed the interaction between A and B to be significant. The problem I am now having is that I am not sure how to interpret the interaction terms in the summary output. Is the following interpretation for, A2:B2 for example, correct? - At a given instance in time, event X is 4.92 times as likely (392% more likely) in individuals that are both A2 and B2 compared to individuals that are A1 and B1. Time to event X is significantly shorter for A2:B2 individuals compared to A1:B1 individuals. Furthermore, if this is the case, is there any way to gain hazard ratios (and their significance levels) for within and between factor comparisons for example: comparing B1 and B2 within each level of factor A; or comparing A1-A2, A1-A3, A2-A3 within each level of factor B. Any help anyone can provide with my query would be greatly appreciated.
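If it helps, this is the kind of follow-up comparison I was imagining (I believe emmeans supports coxph objects, with estimates on the log-hazard scale, but I am not certain this is the right way to do it):

```r
library(emmeans)

emm <- emmeans(model.all, ~ A * B)
contrast(emm, method = "pairwise", by = "B")   # A1 vs A2 vs A3 vs A4 within each level of B
contrast(emm, method = "pairwise", by = "A")   # B1 vs B2 within each level of A
# exponentiating the estimated differences would turn them into hazard ratios
```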
How do I interpret the interaction coefficients in a cox proportional hazards model with two factors?
CC BY-SA 4.0
null
2023-04-26T03:13:51.203
2023-04-26T17:03:26.887
null
null
293416
[ "categorical-data", "survival", "interaction", "cox-model", "hazard" ]
614158
1
null
null
1
28
First of all, this is not a case of Overfitting. The task is to forecast Temp using univariate Single Step forecasting. I have trained the LSTM model with on jena climate dataset(Dataset [https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip](https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip).) Following are steps - Extract Temp as a series - Divide in train(60000), test(5035) and Val(5000) data - Convert one-dimensional time series to a supervised model using lag features For example: Sequence 1 to 8 is converted with a window of 5 as below # [[[1], [2], [3], [4], [5]]] [6] # [[[2], [3], [4], [5], [6]]] [7] # [[[3], [4], [5], [6], [7]]] [8] Code for it is... ``` def df_to_X_y(df, window_size=5): df_as_np = df.to_numpy() X = [] y = [] for i in range(len(df_as_np)-window_size): row = [[a] for a in df_as_np[i:i+window_size]] X.append(row) label = df_as_np[i+window_size] y.append(label) return np.array(X), np.array(y) ``` - Data is fed to LSTM mode by following the config ``` model1 = Sequential() model1.add(InputLayer((24, 1))) model1.add(LSTM(64)) model1.add(Dense(8, 'relu')) model1.add(Dense(1, 'linear')) model1.compile(loss=MeanSquaredError(), optimizer=Adam(learning_rate=0.0001), metrics=[RootMeanSquaredError(),MSE_matrics()]) history1 = model1.fit(X_train1, y_train1, validation_data=(X_val1, y_val1), epochs=10, callbacks=[cp1]) print(model1.evaluate(X_train1, y_train1)) print(model1.evaluate(X_val1, y_val1)) print(model1.evaluate(X_test1, y_test1)) ``` This is going to print RMSE or SSE or any other but train error remains less compared to test error. Experiment 2: I tried different configs for train, test validation split like, train(5000), Val(30000), test(5000) Now in this cal train errors are less compared to the test. Question is: are these errors increasing with the number of samples? if so how we can report train-test-val error and confirm that model is doing well?
RMSE of Training data is lower compared to test dataset
CC BY-SA 4.0
null
2023-04-26T03:38:10.410
2023-04-26T03:38:10.410
null
null
190095
[ "time-series", "forecasting", "error", "lstm", "univariate" ]
614159
1
614160
null
1
43
A simple question that I just want to clarify for understanding: In logistic regression, if we have $n$ input features, will there therefore be $n$ weights used for classification, one for each feature? EDIT: I wanted to edit this so it will show up if others search. It seems that a more common terminology for what I called weights is "coefficients". What I call "bias" would also be a coefficient, just the intercept one.
Do the # of weights (non-intercept coefficients) in logistic regression = # of features
CC BY-SA 4.0
null
2023-04-26T03:42:01.947
2023-04-26T14:18:48.490
2023-04-26T14:18:48.490
357470
357470
[ "regression", "logistic", "classification", "sigmoid-curve" ]
614160
2
null
614159
1
null
In a standard application of logistic regression, you have one weight (or coefficient) per input feature, plus one intercept for the whole model (in ML sometimes referred to as the "bias term"). So the number of weights = number of inputs + 1. Technically an intercept is not needed, but excluding it is rare and usually not a good idea. We can have more features if we include the option of parameter expansion. As a very simple example, if we want to include a quadratic feature for $x$, then the input feature is just $x$, but then we create another term $x^2$ which gets its own weight, so in a sense we have two weights that came from one input feature. Other examples of parameter expansion include interaction effects (features $x, y$ expanded to $x, y, x \times y$) and splines (piecewise continuous polynomials).
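A tiny R illustration of parameter expansion, where two input features turn into five coefficients to estimate:

```r
# 2 inputs (x, y) expand to 5 design-matrix columns, hence 5 weights
d <- data.frame(x = rnorm(5), y = rnorm(5))
colnames(model.matrix(~ x * y + I(x^2), data = d))
#> "(Intercept)" "x" "y" "I(x^2)" "x:y"
```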
null
CC BY-SA 4.0
null
2023-04-26T05:11:50.800
2023-04-26T05:11:50.800
null
null
76981
null
614161
1
null
null
1
31
I have an MA(2) process $$x_t=e_t-0.3e_{t-1}+0.1e_{t-2}$$ and would like to forecast $x_{t+1}$. But I struggle with inverting the MA polynomial, $$(1-0.3B+0.1B^2)^{-1}x_t=e_t,$$ because I can't write it as a linear process: the roots of the characteristic equation are complex. Here I am trying to find the best linear predictor $F_{t+1}$ of $x_{t+1}$.
How to write it as the linear process for MA(2) when the characteristic root is complex?
CC BY-SA 4.0
null
2023-04-26T05:39:25.973
2023-04-26T06:58:33.187
2023-04-26T06:55:49.060
53690
376220
[ "forecasting", "arima", "moving-average" ]
614162
1
614393
null
1
28
After years of uncertainty, I need to ask this question. I have conducted several experiments with X replicates each. I then obtain a dataset where each "observation" is a single experiment and its value is the average±SD of those replicates. Now I need to calculate the average and SD of the whole study. While it is easy for the mean, which is the mean of the averages, I don't know how to compute the final SD so that it includes the original (within-experiment) error. Do I need to use some sort of error-propagation formula?
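If it helps, the identity I believe is relevant (assuming each $s_i$ is the usual SD with an $n_i-1$ denominator, and writing $n_i$, $m_i$ for the size and mean of experiment $i$) is $$s^2_{\text{overall}}=\frac{\sum_i (n_i-1)\,s_i^2+\sum_i n_i\,(m_i-\bar m)^2}{\left(\sum_i n_i\right)-1},\qquad \bar m=\frac{\sum_i n_i m_i}{\sum_i n_i},$$ i.e. the overall sum of squares splits into a within-experiment part and a between-experiment part; but I am not sure whether this is what is meant by propagating the original error.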
Standard deviation of several average±sd
CC BY-SA 4.0
null
2023-04-26T06:58:29.147
2023-04-28T12:12:29.483
null
null
96986
[ "standard-error" ]
614163
2
null
614161
0
null
Hint: - To obtain $x_{t+1}$, increase the time index by 1 for every member of the first equation. - To obtain a forecast that targets the conditional mean of $x_{t+1}$, take the conditional expectation $\mathbb{E}(\cdot|I_t)$ of that equation. ($I_t$ is the information set up to and including time $t$.)
null
CC BY-SA 4.0
null
2023-04-26T06:58:33.187
2023-04-26T06:58:33.187
null
null
53690
null
614164
1
null
null
0
21
Given an unbalanced panel of data points $(X_{i,t})$, I calculate variances in a rolling window (labelled $Y_{i,t}$). I want to calculate the time series mean of cross-sectional averages of this generated variable $Y_{i,t}$. In Fama-MacBeth (1973) language, I estimate an OLS regression of $Y_{i,t}$ on a constant for each time period $t$ and then calculate the mean of the time series of intercepts. I calculate the standard error using Newey-West (1987) correction. Question: How do I incorporate the fact that $Y_{i,t}$ is a generated regressand (similar to Shanken correction for independent variables)?
Error-in-Dependent-Variable in Panel Data
CC BY-SA 4.0
null
2023-04-26T07:01:35.213
2023-04-26T07:01:35.213
null
null
271277
[ "econometrics", "panel-data", "standard-error", "finance", "errors-in-variables" ]
614165
1
614170
null
3
87
Suppose I have a dataset in which 9 patients experienced the post-operative complication and the other 150 patients did not (with information available such as height, smoking, weight, age and disease status). In this case, with 1:1 matching I can have at most 9 patients in each of the control and experimental groups. In general, can a statistical analysis of such a dataset be valid or not?
Is this sample size big enough to analyze with Propensity Score Matching?
CC BY-SA 4.0
null
2023-04-26T07:04:22.503
2023-04-26T08:04:12.987
null
null
386592
[ "regression", "logistic", "propensity-scores", "matching" ]
614166
1
null
null
0
14
I'm doing a Bachelor's thesis on "The Impact of Macroeconomic Factors on Currency Exchange Rates". Narrowing down my methodology section, my professor advised me to analyse about 7 currency pairs to support a good conclusion on the thesis topic. I want to know how I can find out how selected macroeconomic factors impact those 7 currency pairs individually. Will a multivariate regression analysis work in this case, or do I have to analyse each currency pair (dependent variable) separately?
How do I analyse the impact of various macroeconomic factors on 7 currency pairs?
CC BY-SA 4.0
null
2023-04-26T07:27:47.640
2023-04-26T07:27:47.640
null
null
386594
[ "regression", "multiple-regression", "multivariate-analysis", "descriptive-statistics", "macroeconomics" ]
614168
1
null
null
1
72
Edit: After a couple of days of googling, I have found no reference on using gradient descent (GD) methods to solve the MLE problem for a fixed distribution, let alone the case where the parameters may vary over time. This is highly surprising to me. The only material I've found was on GD methods used for MLE computation in supervised learning, e.g. linear or logistic regression. But those methods do not seem to be directly applicable in my situation, as they focus on estimating the distribution of a target given features, whereas below there are no explicit features given. There's definitely a connection, but I am failing to see it. --- Original: In [Auction problem](https://stats.stackexchange.com/questions/613718/auction-problem) I asked about a problem where I am looking to estimate the parameters of a fixed distribution from data. There I have some parametrized distribution given by a CDF $F(\cdot|\theta)$, and at each step I do a draw by choosing $x$, which gives a success with probability $1 - F(x|\theta)$ and a failure with probability $F(x|\theta)$. Let's just say I have fixed $x$ and wait until the first time I get a successful trial. My MLE $\hat\theta$ must maximize the log-likelihood $$ L(x, t|\theta) = \log\left(F^{t-1}(x|\theta)\cdot (1 - F(x|\theta))\right) = (t-1)\log F(x|\theta) + \log (1 - F(x|\theta)) $$ where $t$ is the step at which I get a success. What I thought of is that, instead of solving $\nabla_\theta L= 0$, I could use $\nabla_\theta L$, as in gradient descent (GD) methods, to update my estimate of the parameters by $\theta \to \theta + \nabla_\theta L\cdot s$ where $s$ is some step size. If the distribution is fixed, then after enough iterations I should be able to converge to the optimal $\hat\theta$. I have noticed, however, that in this case I do not have to wait for the first success to make an update. The reason is: in the situation above, on each failure I would have an update of $$ \theta\to\theta + \nabla_\theta \log F(x|\theta) \cdot s $$ and on each success $$ \theta\to\theta + \nabla_\theta \log (1 - F(x|\theta)) \cdot s $$ which, if the success happens after $t$ steps, amounts to exactly the same change of $\theta$ as if we had just updated it in one go after $t$ steps. That's assuming that the learning step $s$ stays constant. I'd like to know more about this procedure. Most likely it is being used, since it allows improving one's knowledge of the best parameters on the go, and one could also change $x$ at each step. Does it have a name, and what are good references on it? Finally, in case the distribution $F$ may change over time, this procedure provides a way to constantly update our estimate of $\theta_t$, for which perhaps some adjustment of the learning rate $s$ may be needed.
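To be concrete, here is a toy version of the update rule I described, with an (arbitrarily) assumed exponential CDF $F(x\mid\theta)=1-e^{-\theta x}$, $\theta>0$, chosen purely for illustration:

```r
set.seed(42)
theta_true <- 2                      # parameter generating the data (unknown in practice)
theta_hat  <- 0.5                    # running estimate
s <- 0.05                            # constant learning rate
x <- 0.3                             # the chosen threshold (kept fixed here)

for (i in 1:5000) {
  success <- runif(1) < exp(-theta_true * x)              # P(success) = 1 - F(x | theta_true)
  grad <- if (success) {
    -x                                                    # d/dtheta log(1 - F(x | theta))
  } else {
    x * exp(-theta_hat * x) / (1 - exp(-theta_hat * x))   # d/dtheta log F(x | theta)
  }
  theta_hat <- theta_hat + s * grad
}
theta_hat   # noisy (constant step size), but should hover near theta_true = 2
```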
Iterative MLE, possibly changing distribution
CC BY-SA 4.0
null
2023-04-26T07:38:37.227
2023-04-28T06:28:59.170
2023-04-28T06:28:59.170
11363
11363
[ "machine-learning", "maximum-likelihood" ]
614170
2
null
614165
4
null
Let's look at both aspects of your question separately: PS matching and sample size calculations. ## Propensity score matching When you do propensity score matching, you are aiming to balance your groups on a set of observed covariates that inform treatment. By matching people who have (largely) similar propensity scores, we can try to achieve this balance between groups. However, you are not limited to matching on a 1:1 ratio. You could for example match the groups 1:10, meaning you could get 9 individuals in the exposure group and 90 in the unexposed group. Mind you that at a certain point matching more individuals becomes statistically redundant though, as you are still limited by the 9 people in the exposed group. One alternative to PS matching could be PS weighting, where you weight individuals up or down based on their propensity score, but do not remove any individuals from the dataset. This can still allow you to balance the groups on covariates. A great paper introducing this concept and helping you choose the PS weight is [Desai & Franklin 2019 BMJ](https://pubmed.ncbi.nlm.nih.gov/31645336/). ## Sample size calculations To determine whether our random sample from the (theoretical supra)population is large enough in regards to random sample variation, there exist formal sample size formulas. A great introduction to sample size calculations is [Noordzij et al. 2010 Nephrol Dial Transplant](https://pubmed.ncbi.nlm.nih.gov/20067907/). With 9 individuals in one group, from experience I would say that your sample size is likely too small to detect any meaningful differences, but with the sample size calculations, you can still detect how many individuals you would need to be able to detect a meaningful difference, so I strongly suggest you try these out. Mind you that for different objectives in medical research, different sample size calculations exist: they differ for different outcomes, different modelling strategies, and different objects (e.g., detecting a difference vs. creating a prediction model). ### Sources In case the links didn't work, here are the citations of the works I mentioned. Both are open-access. - Desai RJ, Franklin JM. Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ. 2019 Oct 23;367:l5657. doi: 10.1136/bmj.l5657. - Noordzij M, Tripepi G, Dekker FW, Zoccali C, Tanck MW, Jager KJ. Sample size calculations: basic principles and common pitfalls. Nephrol Dial Transplant. 2010 May;25(5):1388-93. doi: 10.1093/ndt/gfp732. Epub 2010 Jan 12. Erratum in: Nephrol Dial Transplant. 2010 Oct;25(10):3461-2. PMID: 20067907.
null
CC BY-SA 4.0
null
2023-04-26T08:04:12.987
2023-04-26T08:04:12.987
null
null
385890
null
614171
2
null
614091
0
null
To answer my own question: yes, apparently linearity is one of the attributes of SHAP values. I found the answer in [this GitHub issue thread](https://github.com/slundberg/shap/issues/457). The response I am referring to is the one by the "inventor" of SHAP values for model interpretation, so I guess it's trustworthy.
null
CC BY-SA 4.0
null
2023-04-26T08:44:35.570
2023-04-26T08:44:35.570
null
null
182258
null
614172
1
null
null
0
29
I need some assistance; it's driving me crazy. I have two independent populations, and I sample them at different points to obtain ordinal count data. However, at each point the sample sizes used to estimate the counts are unequal, and I believe that affects the count at that point. What test can I use to adequately compare both populations after sampling each of them about 5-15 times?
What test can I use to compare discrete counts from two independent samples
CC BY-SA 4.0
null
2023-04-26T08:46:33.447
2023-04-26T08:46:33.447
null
null
376660
[ "nonparametric" ]
614173
1
null
null
1
21
Let us assume that a logistic regression model has been fitted to some training data and that there are new test data in which $n$ predictor combinations are identical, say $\vec{x}$. The probability of a certain proportion $\hat{p}$ of positive responses among these $n$ data points can then be computed with the binomial distribution based on $n$ and $p=P(Y=1|\vec{x})$ as predicted by the model, i.e. $$P(\hat{p}=k/n) = {n\choose k}p^k(1-p)^{n-k},$$ from which prediction intervals can be built. The probability $p$ predicted by the model, however, comes with some uncertainty. As I understand it, the approximate distribution of $p$ is that of a logistic transform of a normal distribution whose standard deviation is the standard error reported by the model on the link scale, e.g. in R by `predict.glm(..., se.fit=TRUE)$se.fit`. This distribution is complicated (?) but should in principle be computable with the transformation theorem for probability densities. Can (or should?) this uncertainty be incorporated into the probability of obtaining a specific value of $\hat{p}$, or into the construction of prediction intervals for $\hat{p}$? If yes: how?
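One simple way to fold both sources of uncertainty together is simulation, as in this rough sketch; the values of the linear predictor `eta`, its standard error `se`, and the group size `n` are illustrative and are assumed to come from the fitted model on the link (logit) scale.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)
eta, se, n = 0.3, 0.25, 50                          # illustrative values on the logit scale
p_draws = expit(rng.normal(eta, se, 20_000))        # parameter uncertainty in p
k_draws = rng.binomial(n, p_draws)                  # binomial sampling variation
lo, hi = np.percentile(k_draws / n, [2.5, 97.5])    # approximate 95% interval for p-hat
```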
Utilization of standard errors of logistic regression for generating prediction intervals of binomial response
CC BY-SA 4.0
null
2023-04-26T09:06:17.373
2023-04-26T09:33:19.800
2023-04-26T09:33:19.800
244807
244807
[ "regression", "logistic", "prediction-interval" ]
614174
1
null
null
0
23
I would like to compare two sets of n=100 Pearson correlation coefficients. In particular, I would like to test whether the means of these two sets of correlations differ significantly. After some research online, I found a lot of recommendations for comparing individual pairs of correlation coefficients but only [one](https://stats.stackexchange.com/questions/61456/how-to-test-the-difference-between-2-sets-of-pearson-correlations) thread that seems to discuss the setting I'm interested in, i.e. comparing sets of correlation coefficients. In this thread, [one suggestion](https://stats.stackexchange.com/a/368095) is to run a t-test on the Fisher-Z-transformed correlation coefficients. However, [another suggestion](https://stats.stackexchange.com/a/365804) is to run a permutation test. The second suggestion has not gotten any attention; however, I find it interesting because it seems more flexible than a t-test. Therefore, I would like to use a permutation test. Now my question is: in the setting described above, is it necessary to perform the permutation test on the Fisher-Z-transformed correlations, or can I equivalently run it on the "raw" correlations? In Python code, what I would like to do is this: ``` import numpy as np from tqdm import tqdm def get_permutation_statistics(x, y, n_permutations=50_000): rng = np.random.default_rng() statistics = np.zeros(n_permutations) total = np.hstack((x, y)) for i in tqdm(range(n_permutations)): shuffled = rng.permuted(total) x_p, y_p = np.split(shuffled, 2) assert x_p.shape == y_p.shape mean_diff = np.mean(x_p) - np.mean(y_p) statistics[i] = mean_diff.item() return statistics # observed difference init_mean_diff = np.mean(x) - np.mean(y) # permutation statistics statistics = get_permutation_statistics(x, y) # get p-value (two-tailed) p = np.sum(np.abs(statistics) > np.abs(init_mean_diff)) / len(statistics) ``` where `x` and `y` are two 1-dim. arrays containing the correlation coefficients, either "raw" or Fisher-Z-transformed.
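If the transformed version is wanted, the same function can simply be fed Fisher-Z values, as in this small sketch (assuming `x` and `y` hold the raw correlations, with `get_permutation_statistics` from the snippet above):

```python
x_z, y_z = np.arctanh(x), np.arctanh(y)   # Fisher Z transform; np.tanh inverts it
statistics_z = get_permutation_statistics(x_z, y_z)
```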
Permutation test for comparing sets of correlations
CC BY-SA 4.0
null
2023-04-26T09:13:36.410
2023-04-26T09:21:55.267
2023-04-26T09:21:55.267
384739
384739
[ "correlation", "permutation-test", "fisher-transform" ]
614175
1
null
null
0
9
I am dealing with an infinite-horizon average-cost DP problem with an infinite state space $\{\dots,-2,-1,0,1,2,\dots\}$ and discrete control and noise spaces. I am trying to truncate the state space and find the average cost $J$ and the differential cost $h(x)$ using the Bellman equation. It turns out that when I truncate the state space, the dynamics of the system may take me outside the truncated state space. How can I deal with this case?
Dynamic programming with a truncated state space
CC BY-SA 4.0
null
2023-04-26T09:27:58.583
2023-04-26T09:27:58.583
null
null
386603
[ "dynamic-programming" ]
614176
1
null
null
0
12
I wonder if anyone can advise on how I can calculate a confidence interval for the difference between two quantities that are themselves changes in proportions. To be clearer: I have two groups (say active and control), and in each a certain number out of a total are ill (say) both pre and post a time period. So for the active group I can look at the change in proportions over the period, and the same for the control group. It is possible there is some pairing (to some extent, some people will occur both pre and post); I want to ignore this. I wondered if I could use the group x pre/post interaction term from a model where I specify each person as having a binary endpoint. Alternatively, I could bootstrap, sampling for each of the 4 proportions (using the sample proportions) and then finding the middle 95% of the difference between the proportion changes across the bootstrap samples. Many thanks.
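A rough sketch of the bootstrap idea described above; all counts here are made up for illustration, and a binomial resample is drawn for each of the four observed proportions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = {"act_pre": 120, "act_post": 118, "ctl_pre": 110, "ctl_post": 115}  # totals
k = {"act_pre": 30,  "act_post": 22,  "ctl_pre": 28,  "ctl_post": 27}   # number ill
B = 10_000
boot = np.empty(B)
for b in range(B):
    p = {g: rng.binomial(n[g], k[g] / n[g]) / n[g] for g in n}          # resampled proportions
    boot[b] = (p["act_post"] - p["act_pre"]) - (p["ctl_post"] - p["ctl_pre"])
ci = np.percentile(boot, [2.5, 97.5])   # middle 95% of the bootstrapped difference in changes
```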
confidence interval for the difference between 2 changes in proportions
CC BY-SA 4.0
null
2023-04-26T09:39:30.563
2023-04-26T09:39:30.563
null
null
54520
[ "confidence-interval", "pre-post-comparison" ]
614177
1
null
null
0
56
I have a database called dat1. Each participant (id) had multiple measurements. One of the variables measures the type of stress, called stress_type (acute = 0, chronic = 1), and another variable is the stressfulness level of the stressor (continuous, 0-3). I wanted to examine the effect of stressor type, stressfulness level, and the type*level interaction on depression. I disentangled the stress_type and stress-level variables to get the within- and between-person means, and calculated the within and between interactions as well. mean_stress_type_rec = between-person mean of stress type; mean_stress_level = between-person mean of stress level; cent_stress_type_rec = within-person mean of stress type; cent_stress_level = within-person mean of stress level. I ran the following model: ``` res_intvsnonint_levels_int_nopleas <- lme(dep ~ mean_stress_type_rec * mean_stress_level + cent_stress_type_rec * cent_stress_level, random = ~ cent_stress_type_rec + cent_stress_level + obs_cent1 | id, data=dat1, na.action=na.omit, control=lmeControl(maxIter=1000, msMaxIter=1000, niterEM=1000, sing.tol=1e-20)) ``` I found a very marginal interaction at the between level: ``` Coef. SE 95% CI t-value p Fixed effects Intercept 29.28 4.42 20.61 – 37.96 6.62 <0.001 Time -0.07 0.03 -0.13 – -0.01 -2.43 0.015 Type - between 4.46 6.47 -8.36 – 17.27 0.69 0.492 Type - within 0.41 0.59 -0.74 – 1.56 0.7 0.484 Level - between 2.14 2.6 -3.01 – 7.29 0.82 0.413 Level - within 0.41 0.35 -0.27 – 1.10 1.18 0.237 Type*Level - between -8.21 4.13 -16.40 – -0.03 -1.99 0.048 Type*Level - within 0.6 0.79 -0.96 – 2.16 0.76 0.448 ``` I don't know what I should do next to further examine the between-level interaction. I'd appreciate your help.
Between-person level interaction
CC BY-SA 4.0
null
2023-04-26T09:54:27.197
2023-04-26T14:02:38.400
null
null
386601
[ "r", "mixed-model", "lme4-nlme", "interaction", "multilevel-analysis" ]
614178
1
null
null
1
18
I want to perform a regression and want an objective strategy to decide which variables to include (so something besides theory or expert judgement). I am aware of stepwise backward and forward methods, in which the most significant variable is added or the least significant variable is removed until the model fit no longer significantly improves. I want to make the decision based on BIC, but I do not want to consider all possible combinations of variables (excluding polynomial terms and interactions). If I have $n$ variables, this means I would need to fit $2^n$ regressions, which is too many. I was hoping to find a stepwise approach based on BIC, simply to reduce the number of models I need to check. Does a stepwise algorithm for the BIC criterion exist? I could not find any stepwise approach that does not rely on p-values.
How to determine which variables to include in a regression using BIC, but reduce the number of possibilities by using a stepwise algorithm
CC BY-SA 4.0
null
2023-04-26T09:54:30.367
2023-04-26T11:35:33.113
null
null
219554
[ "regression", "multiple-regression", "feature-selection" ]
614179
1
null
null
0
14
I am doing a project to estimate item parameters from pharmacotherapy exams. Two groups of students from the same university answered the same set of items. From my understanding, the item parameters should be the same because of the invariance property of IRT models. But the estimation results are totally different, and I am wondering why. Is it because the sample size is too small (each group has only 80 students)? [](https://i.stack.imgur.com/j3JOK.png)[](https://i.stack.imgur.com/1W8q0.png)
When different groups of examinees answer the same set of items, should the item parameters be invariant?
CC BY-SA 4.0
null
2023-04-26T10:33:57.343
2023-04-26T10:33:57.343
null
null
384858
[ "mathematical-statistics", "psychometrics", "item-response-theory", "social-science", "mirt" ]
614180
2
null
614177
0
null
In principle*, you can use emmeans' emtrends for this. You need to choose a couple of "testing points" for your other continuous predictor. This is easiest if you standardize your predictors and choose, for instance, the points -1, 0, 1 for predictor 1. This way you will be comparing predictor 2's slopes at predictor 1's levels of mean - 1 SD, mean, and mean + 1 SD, which I find is often a reasonable way to test a continuous-by-continuous interaction. So, first standardize your predictors, then run the model again with the standardized predictors, then: ``` library(emmeans) emtrends(modelname, pairwise ~ levelBetween, var = "levelWithin", at = list(levelBetween = c(-1, 0, 1))) ``` You don't have to standardize, though, if you can find reasonable testing points from the predictor's raw values. *However, I wonder about your stress type variable - can it really be understood as continuous? I understand you need to somehow get its mean value, but I wonder what the resulting variable actually represents, because it was originally categorical (?). But in principle you can use the above with a cont x cont interaction.
null
CC BY-SA 4.0
null
2023-04-26T11:13:16.610
2023-04-26T14:02:38.400
2023-04-26T14:02:38.400
357710
357710
null
614181
1
null
null
0
10
Description: I have 100 products. For each item I have the number of hours it has been used in each quarter over multiple years. |Item |2020_Q1 |2020_Q2 |2020_Q3 |2020_Q4 |2021_Q1 |2021_Q2 |2021_Q3 |2021_Q4 |Sold | |----|-------|-------|-------|-------|-------|-------|-------|-------|----| |A |50 |80 |75 |90 |60 |70 |85 |95 |No | |B |30 |50 |45 |NaN |NaN |NaN |NaN |NaN |Yes | |C |70 |85 |80 |90 |75 |80 |90 |100 |No | |D |20 |70 |40 |45 |30 |NaN |NaN |NaN |Yes | |E |60 |75 |70 |80 |55 |45 |40 |30 |Yes | If an item has been used for 50 hours or less in three consecutive quarters, the item is sold. Question: Would you be able to suggest a smart and elegant way to predict whether an item will be sold in the next quarter (2022_Q1) when a new time series is passed as input? Note: I would be able to achieve my goal if the series all had the same length, but in this case I am totally lost.
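One way to handle the unequal lengths is to build features only from the quarters an item actually has. As a sketch, here is the "three consecutive quarters at 50 hours or less" rule computed on whatever trailing history is observed; the column names come from the table above, while the DataFrame `df` and everything else are illustrative assumptions.

```python
import pandas as pd

quarters = ["2020_Q1", "2020_Q2", "2020_Q3", "2020_Q4",
            "2021_Q1", "2021_Q2", "2021_Q3", "2021_Q4"]

def low_use_flag(row, thresh=50, run=3):
    s = row[quarters].dropna().astype(float)            # only the observed quarters
    return bool((s.rolling(run).max() <= thresh).any()) # any run of 3 quarters all <= 50h

# df["low_use"] = df.apply(low_use_flag, axis=1)        # df: hypothetical DataFrame of items
```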
How to make a prediction if training time series have different lengths?
CC BY-SA 4.0
null
2023-04-26T11:16:43.807
2023-04-26T11:16:43.807
null
null
224726
[ "machine-learning", "time-series", "predictive-models" ]
614182
1
null
null
0
15
I'm doing text-based, double-entry bookkeeping using beancount, and I'm creating a GUI application using node.js to automate some steps in the process. Some example data looks as follows: |Payee #1 |Narration #2 |Date #3 |Account #4 |Other account #5 |Amount #6 | |--------|------------|-------|----------|----------------|---------| |Employer |Salary payment |2023-03-25 |Assets:Bankaccount |Income:Salary |1000.00 | |Real estate company |Apartment rent |2023-04-01 |Expenses:Rent |Assets:Bankaccount |700.00 | |Supermarket |Groceries |2023-04-20 |Expenses:Groceries |Assets:Bankaccount |150.00 | Having roughly 10'000 such items available, I thought about using machine learning or a similar approach to predict some values that I otherwise have to provide manually. Always available are: Payee (1), Narration (2), Date (3), Amount (6). What I would like to predict/make suggestions for: Account (4) and Other account (5). Ideally, the result would be a function predicting those elements. I've looked into TensorFlow, but I believe the challenge here is to convert the strings into meaningful numbers that can in turn be mapped to a range between 0 and 1 (does this even make sense?). The payee can sometimes be similar, but is not always reported exactly the same. The narration will be provided manually by me anyway. The date and amount could easily be converted to numerical values. Account and Other account are categorical data that could easily be mapped to an index; there are roughly 150 different ones. Any idea how to tackle this? Is using an engine like TensorFlow (or similar) even a sensible approach? I think the task at hand is not too complex; I just don't really know where to get started without hand-coding everything from scratch. I had this question on Stack Overflow, where it was closed as off-topic, which is why I'm trying again here.
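As a sketch of one possible starting point (not necessarily TensorFlow): treat "Account" as a classification target and turn the free-text fields into numeric features, e.g. with scikit-learn. The column names come from the table above; the DataFrame `df` and every modelling choice here are illustrative assumptions.

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

# Character n-grams make similar but not identical payee strings look alike.
features = ColumnTransformer([
    ("payee", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), "Payee"),
    ("narration", TfidfVectorizer(), "Narration"),
])
model = Pipeline([("features", features), ("clf", RandomForestClassifier())])
# model.fit(df[["Payee", "Narration"]], df["Account"])   # df: hypothetical DataFrame
```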
Produce automated suggestions on beancount entries
CC BY-SA 4.0
null
2023-04-26T11:20:57.380
2023-04-26T11:20:57.380
null
null
386610
[ "predictive-models", "tensorflow", "finance" ]
614183
1
null
null
1
57
Say I have a time series X from Bernoulli trials with outcomes 0 or 1, where X(n) is the $n$th outcome in the series. The process is driven by some probability of success $\pi$, but this probability may change over time, so $\pi(n)$ is the probability of success of the $n$th trial. What I would like to do is estimate $\pi$ at every point from X, in some maximum-likelihood sense, so we can see its evolution over time. I would of course like $\pi$ to behave nicely as well: $\pi$ may change gradually over time but shouldn't jump erratically. The naive solution to the above would, I guess, have $\pi$ jumping between 0 and 1 constantly without taking the history into account, which is not what we want. Would we need to prescribe some dynamics for $\pi$ itself to accomplish this? My best guess is that I should be using some form of Bayesian inference, with the prior distribution of $\pi(n)$ centered around $\pi(n-1)$ (normally distributed, probably) and updating our belief about the current value from the $n$th outcome, but I'm not sure exactly how to go about it or whether this is the right approach. I would appreciate any ideas. EDIT: I've tried to do this with Bayesian inference, starting with a prior distribution for $\pi$ and updating it with one data point at a time, but I'm not convinced this is exactly what I want. One issue is that it's not quite as responsive to changes in the underlying probability as I would like (judging from simulated data with known probability), and the width of the credible region (indicated by the red line) always decreases with additional data. Ideally I would like it to be able to widen again when the underlying probability looks like it has changed, to represent that we are again in unknown territory. ``` import pandas as pd import numpy as np from scipy.stats import beta ##Bayesian with equal weights. # Simulate data with a step-changing success probability p_true = np.concatenate((0.1 * np.ones(2000), 0.2 * np.ones(2000), 0.3 * np.ones(2000))) data = np.random.binomial(n=1, p=p_true) # Set prior parameters mean = 0.2 alpha_prior, beta_prior = Beta_Dist_shapeparam_from_mean(mean) # Create empty list to store posterior parameters alpha_posterior = [] beta_posterior = [] # Iterate over each trial in the data for i in range(len(data)): # Get outcome of current trial (0 or 1) outcome = data[i] # Compute posterior parameters using current outcome and prior parameters alpha_post = alpha_prior + outcome beta_post = beta_prior + 1 - outcome # Store posterior parameters for future use alpha_posterior.append(alpha_post) beta_posterior.append(beta_post) # Update prior parameters with posterior parameters alpha_prior = alpha_post beta_prior = beta_post # Compute mean and standard deviation of posterior distribution for each trial posterior_mean = np.array(alpha_posterior) / (np.array(alpha_posterior) + np.array(beta_posterior)) posterior_std = np.sqrt(np.array(alpha_posterior) * np.array(beta_posterior) / ((np.array(alpha_posterior) + np.array(beta_posterior)) ** 2 * (np.array(alpha_posterior) + np.array(beta_posterior) + 1))) #Plot the posterior parameters ``` [](https://i.stack.imgur.com/xHMX0.png)
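For reference, one common tweak (not used in the code above, and only one of several possibilities) is a discounted or "forgetting-factor" Beta update, which decays old evidence so the posterior can re-widen when $\pi$ drifts. The factor `lam` and the prior shapes `a0`, `b0` are illustrative choices; the sketch reuses `data` from the snippet above.

```python
import numpy as np

lam, a0, b0 = 0.99, 1.0, 1.0          # forgetting factor and prior shapes (illustrative)
a, b = a0, b0
post_mean = np.empty(len(data))       # `data` as defined in the snippet above
for i, y in enumerate(data):
    a = lam * a + (1 - lam) * a0 + y          # decay toward the prior, then update
    b = lam * b + (1 - lam) * b0 + (1 - y)
    post_mean[i] = a / (a + b)
```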
Estimating the probability of success in a sequence of Bernoulli Trials when the probability is changing over time
CC BY-SA 4.0
null
2023-04-26T11:27:35.883
2023-05-08T11:03:01.890
2023-05-02T11:11:13.217
280338
280338
[ "bayesian", "bernoulli-process" ]
614184
2
null
614178
0
null
BIC is one possible criterion for adding or removing a single variable in stepwise selection. There is the `MASS::stepAIC` function in `R`; setting its penalty argument `k = log(n)`, with `n` the number of observations, makes it use the BIC penalty instead of the AIC penalty. As is discussed in the comments, though, there are major issues with stepwise selection. Stepwise regression seems like it can be [competitive](https://stats.stackexchange.com/questions/594106/how-competitive-is-stepwise-regression-when-it-comes-to-pure-prediction) for pure prediction problems, but its use requires some care, and there are major flaws when you do inference on stepwise-selected variables.
null
CC BY-SA 4.0
null
2023-04-26T11:35:33.113
2023-04-26T11:35:33.113
null
null
247274
null
614185
2
null
614044
2
null
> does it still make sense to calculate the Confidence Interval for what you believe to be the population estimate You already answered this yourself: yes, it makes sense, because of the sources of error/variation in the measurements.
null
CC BY-SA 4.0
null
2023-04-26T11:36:12.927
2023-04-26T11:36:12.927
null
null
164061
null
614186
1
null
null
0
31
The following quote is from a set of lecture notes: > When fitting generalised linear models, the objective function is canonically the log-probability of $Y|X$ (essentially, the log-likelihood with data $Y|X$ and weight parameters $W$). Equivalently, we minimise $L(Y, \hat{Y}) := -\ln P(Y \mid \hat{Y}(X))$, where the conditional distribution $P$ is assumed, and is parameterised by its mean $\hat{Y} = E[Y|X]$. How should I convince myself that $L(Y, \hat{Y}) := -\ln P(Y \mid \hat{Y}(X))$?
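A concrete instance may help (an illustration, not part of the quoted notes): for a Bernoulli response with assumed conditional mean $\hat{Y}$, the assumed distribution is $P(Y \mid \hat{Y}) = \hat{Y}^{Y}(1-\hat{Y})^{1-Y}$, so $$L(Y, \hat{Y}) = -\ln P(Y \mid \hat{Y}) = -\big(Y \ln \hat{Y} + (1-Y)\ln(1-\hat{Y})\big),$$ which is exactly the cross-entropy loss of logistic regression; minimising it over the weights $W$ is the same as maximising the log-likelihood, since $\hat{Y}$ is a function of $X$ and $W$.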
Why is the objective function for GLM equal to $-\ln p(Y | \hat{Y})$?
CC BY-SA 4.0
null
2023-04-26T11:42:57.340
2023-04-26T12:30:20.747
2023-04-26T12:30:20.747
22311
109101
[ "self-study", "generalized-linear-model" ]