Column schema (each record below lists one value per line, in this order):
Id: string (1-6 chars)
PostTypeId: string (7 classes)
AcceptedAnswerId: string (1-6 chars)
ParentId: string (1-6 chars)
Score: string (1-4 chars)
ViewCount: string (1-7 chars)
Body: string (0-38.7k chars)
Title: string (15-150 chars)
ContentLicense: string (3 classes)
FavoriteCount: string (3 classes)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars)
LastEditorUserId: string (1-6 chars)
OwnerUserId: string (1-6 chars)
Tags: list
615885
2
null
562676
1
null
As Nick Cox says in his comment, a plot with few elements (lines/points) is often best labelled directly i.e. with text next to the element on the plot. But if that is not feasible or becomes too crowded, I agree with you that legend elements/markers should correspond to the plot design: use points for points, lines for lines, etc.
null
CC BY-SA 4.0
null
2023-05-15T07:28:48.083
2023-05-15T07:28:48.083
null
null
121522
null
615886
1
null
null
0
11
Does caret automatically create the dummies for Random Forest and Gradient Boosting models, or do I have to create them beforehand with dummyVars()? When I predict with the model, does caret handle the dummies in the prediction data, too? Thanks! :)
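For illustration, a minimal R sketch of the two routes mentioned in the question, using iris as a stand-in data set rather than the asker's data; the tuning settings are arbitrary, and the sketch only shows where dummyVars() fits in rather than being a definitive statement of caret's internals.
```
library(caret)

## (a) Explicit dummy coding with dummyVars(), applied the same way to training and new data
dv <- dummyVars(~ Species + Sepal.Width, data = iris)
head(predict(dv, newdata = iris))   # one indicator column per factor level

## (b) The formula interface of train() expands factors itself (via model.matrix),
##     and predict() on the fitted object applies the same expansion to new data
fit <- train(Sepal.Length ~ Species + Sepal.Width, data = iris, method = "rf",
             trControl = trainControl(method = "cv", number = 3))
predict(fit, newdata = head(iris))
```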
Dummy variables in Caret for Random Forest and Gradient Boosting
CC BY-SA 4.0
null
2023-05-15T07:53:14.893
2023-05-15T07:53:14.893
null
null
381118
[ "r", "categorical-encoding", "caret" ]
615888
1
null
null
1
25
I understand the close analogy between a fuzzy RDD and Instrumental Variables estimation. This is clearest in the Wald estimator. Here is the fuzzy RDD equation: [](https://i.stack.imgur.com/6CuPv.png) And here is the (manual) estimation using IV: [](https://i.stack.imgur.com/isTRV.png) My question is: when implementing an RDD in Stata, using rdrobust, and choosing a linear specification (i.e. a polynomial of degree 1), will this yield the same estimate as doing the estimation manually using IV? How will they differ? How can you ensure they are the same?
Fuzzy RDD and IV
CC BY-SA 4.0
null
2023-05-15T08:30:16.110
2023-05-15T08:30:16.110
null
null
387986
[ "nonparametric", "stata", "instrumental-variables", "regression-discontinuity" ]
615892
1
null
null
1
25
I have some time-to-event data with right-censoring. Details Added: Each subject is followed up until first event (no recurrent events) or loss to follow-up. Let's assume there are no competing risks. Given that rates are constant (and other Poisson assumptions hold), and censoring is non-informative, am I correct in thinking that Poisson regression gives unbiased estimates of the incidence rate ratio in the following model? ``` glm(status ~ trt, offset = log(time), family = poisson(link = "log"), data = df) ``` A few test datasets have yielded very similar results between the Cox HR and Poisson IRR, but I want to understand whether this is general. My logic is that any bias in the estimation of the rates is balanced between both groups, leaving ratios unaffected?
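A hedged R sketch of the kind of test mentioned above ("a few test datasets have yielded very similar results"): it simulates constant-rate event times with non-informative censoring and compares the Poisson IRR with the Cox HR; all parameter values are made up for illustration.
```
library(survival)
set.seed(42)

n   <- 5000
trt <- rbinom(n, 1, 0.5)
event_time <- rexp(n, rate = 0.10 * exp(log(2) * trt))  # true rate ratio = 2
cens_time  <- rexp(n, rate = 0.05)                      # non-informative censoring
time   <- pmin(event_time, cens_time)
status <- as.integer(event_time <= cens_time)

# Poisson model with log(person-time) offset: IRR for treatment
exp(coef(glm(status ~ trt, offset = log(time), family = poisson(link = "log")))["trt"])

# Cox model: HR for treatment (should be close to the IRR under constant rates)
exp(coef(coxph(Surv(time, status) ~ trt))["trt"])
```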
Effects of right-censoring on Poisson IRR estimates
CC BY-SA 4.0
null
2023-05-15T09:03:48.433
2023-05-17T22:32:10.813
2023-05-15T17:35:47.210
304449
304449
[ "survival", "poisson-regression", "censoring" ]
615893
2
null
610116
3
null
I assume that by 'size', you mean something like body length. Your model is inconsistent with the data, and ignores much that is known about such scaling relationships in biology. As a start, it's worth modelling both size and mass on a logarithmic scale, because [these relationships exhibit power laws](https://www.jstor.org/stable/2410002). Secondly, that is a remarkably complex model, given what is known about these allometric relationships. If analysed properly, I would be very surprised if you found that many of those terms had a strong influence on the size-mass relationship. Thirdly, you're also being led astray by your use of stepwise selection methods to do statistical inference. As explained well in [this thread](https://stats.stackexchange.com/a/20856/121522), stepwise selection leads to heavily biased results and there are always better ways of analysing data. Finally, I would also recommend writing your code differently. As written, you have multiple redundant main effects and interactions. `R` interprets this intelligently and filters out the duplicate terms, but it's not good practice and is liable to lead to confusion.
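To make the first point concrete, a minimal R sketch of the log-log (power-law) specification on simulated stand-in data; `length` and `mass` are hypothetical column names, not the asker's variables.
```
set.seed(1)
d <- data.frame(length = runif(200, 5, 50))
d$mass <- exp(-4 + 3 * log(d$length) + rnorm(200, sd = 0.2))  # simulated power-law relationship

fit <- lm(log(mass) ~ log(length), data = d)
summary(fit)  # slope estimates the allometric exponent; intercept is the log of the scaling constant
```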
null
CC BY-SA 4.0
null
2023-05-15T09:15:23.127
2023-05-15T15:07:16.087
2023-05-15T15:07:16.087
121522
121522
null
615894
1
null
null
1
8
So, I am testing a theory proposed in the literature, and I want to do a multigroup analysis on it. One of the HOCs (reflective) in the model has three LOCs (all reflective); when I run the tests on the model I cannot reach compositional invariance (SmartPLS); however, when I remove the LOCs and just make it a first-order reflective construct with all indicators attached to it, the model gives full measurement invariance. My question is: am I allowed to make this removal and change as a way of reaching invariance? Thanks.
Could I remove the LOCs of an HOC to achieve compositional invariance?
CC BY-SA 4.0
null
2023-05-15T09:17:38.673
2023-05-15T09:17:38.673
null
null
387988
[ "structural-equation-modeling", "partial-least-squares", "invariance" ]
615895
2
null
599065
2
null
Answered in comments: > Have you considered a standard scatterplot of (pre, post) observations? – whuber > Note that the plot is simply showing a set of differences as slopes (taking pre-post distance on the x-axis as 1). That visual comparison of slopes may be better done as their raw values (post-value-pre-value). Beyond that, but in a similar vein, a Tukey mean/difference plot may be relevant (medical people tend to call it a Bland-Altman plot) – Glen_b
null
CC BY-SA 4.0
null
2023-05-15T09:47:28.983
2023-05-15T09:47:28.983
null
null
121522
null
615896
2
null
615864
1
null
You are probably good with an ARIMA model, although ARIMA is notorious for not really giving good forecasts. However, I would discourage "rolling your own" model - better to [use a good automatic ARIMA model selection algorithm](https://stats.stackexchange.com/q/595150/1352). This will take care of log-transforming (or using other Box-Cox transformations, and the back-transformations, which are not trivial!). Alternatively, consider exponential smoothing, e.g., `ets()` in the `forecast` package for R. This is still a very good benchmark. This thread contains pointers to general literature about forecasting: [Resources/books for project on forecasting models](https://stats.stackexchange.com/q/559908/1352)
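For concreteness, a small R sketch of the two approaches mentioned above (automatic ARIMA selection with a Box-Cox transformation, and exponential smoothing via ets()), using the built-in AirPassengers series as a stand-in for the asker's data.
```
library(forecast)

fit_arima <- auto.arima(AirPassengers, lambda = "auto")  # Box-Cox lambda chosen automatically
fc_arima  <- forecast(fit_arima, h = 12)                 # forecasts are back-transformed for you

fit_ets <- ets(AirPassengers)
fc_ets  <- forecast(fit_ets, h = 12)

plot(fc_arima)
lines(fc_ets$mean, col = "red")  # compare the two sets of point forecasts
```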
null
CC BY-SA 4.0
null
2023-05-15T09:47:44.283
2023-05-15T09:47:44.283
null
null
1352
null
615897
1
null
null
1
81
I am new to machine learning and I am trying to find a way to predict voltage waveforms into the future. I have seen examples that successfully predict sinusoids or continuous voltage data based on time series (following the same idea). My data set looks like this: [](https://i.stack.imgur.com/FzSNA.png) [](https://i.stack.imgur.com/Ty8vB.png) Each sequence maps one discharge cycle of a specific sensor (padded to 1000 data points). My goal is now to train an LSTM network with this data to predict the future course of a curve at any point in time. The idea is to divide the time series into time windows of 24 h. On the basis of these 24 h, one time step into the future shall be predicted. (Later I would like to append the prediction to the end of the time window as if it had actually happened, and make a new prediction based on this.) X_train.shape (3975, 24, 5) y_train.shape (3975, 1) When I train the network this way, sometimes I get a model that works very well, and sometimes, after training with exactly the same data on the same model and hyperparameters, I get a model that makes worse predictions, e.g. the curve rises instead of falling (completely different, not explainable to me). Is this a problem with my model setup or data preparation, or is it just too little data to train a model properly? My basic question is: is the idea fundamentally wrong for this dataset? I'm grateful for any kind of feedback!
Time series prediction problem formatted correctly for LSTM neural networks?
CC BY-SA 4.0
null
2023-05-15T09:48:28.493
2023-05-15T09:48:28.493
null
null
379800
[ "machine-learning", "time-series", "predictive-models", "lstm", "hyperparameter" ]
615898
1
null
null
1
5
I'm trying to conduct a SIMPER analysis in PRIMER. I want to do the analysis using a factor that is made up of the interaction between two other factors, with one nested in a third. I'm looking at the response of invertebrates after fire across time and at multiple sites. So the factor is timeXsite(impact), where site is nested in impact and impact is either fire or control. The SIMPER pane in PRIMER allows you to select a single factor, but I can't work out how to set timeXsite(impact) as one of those options, as I want the SIMPER results to take into account the nesting of site within impact. Any ideas are welcome. Thanks
How to include a nested factor in a SIMPER test in PRIMER
CC BY-SA 4.0
null
2023-05-15T09:50:00.290
2023-05-15T09:50:00.290
null
null
367731
[ "similarities" ]
615899
1
null
null
1
64
Suppose I use R and the `ggsurvplot()` function to draw three survival curves like this: [](https://i.stack.imgur.com/gCtVK.jpg) The `ggsurvplot()` then returns a single p value. From what I've read, this p value is derived from the score test of the null hypothesis that there are no differences at all between the three survival curves in a Cox model. How can I interpret this practically? Is it fair to say that the green curve, for example, is significantly different from, or in this case superior to, the blue or red one? I have a hard time interpreting this because green and red look similar, but there seems to be a significant difference between blue and the other two curves.
Interpreting single p value for three survival curves
CC BY-SA 4.0
null
2023-05-15T10:19:47.007
2023-05-15T11:48:01.647
null
null
384315
[ "survival" ]
615900
2
null
615804
3
null
A generalized distribution is just more general than what it generalizes. No more, no less. I haven't encountered rules about its usage beyond that self-evident limitation. A distribution can be generalized in different ways, and conversely a distribution may be a particular case of various generalized distributions. Nor is there a rule about whether the word "generalized" appears at all. Thus gamma distributions include exponential distributions, but so do generalized Pareto distributions. Despite its other limitations, I have found the Wikipedia articles on particular distributions generally excellent, and they and the literature generally are replete with comments about particular families of distribution and yet wider families to which they belong. The matter is muddied further by whether particular distributions are limiting cases of a generalized family, and not just special cases. Thus some normal distributions are limiting cases of gamma distributions. There is a mass of small print here. Sometimes it is clearer in one parameterisation rather than another what generalizations are possible and indeed useful or congenial.
null
CC BY-SA 4.0
null
2023-05-15T10:27:26.773
2023-05-15T10:27:26.773
null
null
22047
null
615902
1
615917
null
5
352
I've fitted my data to a glm with a gamma distribution (as this is the most appropriate for the type of data) using the `glm()` function in R. I used the `summary()` function to test and get p values. I've received comments back from a publication submission which wants more details on the specific test that this does. I can't find any where which states what test this performed with this call. One place I saw said that this does a Wald test, however as far as I can tell a Wald test is a parametric test that assumes a normal distribution. Can anyone provide some clarity on the specifics of what statistical test is providing the p values here? EDIT For clarity: ``` data <- data.frame(Group = rep(c("A","B"), each = 100), Variable = c(runif(100, 1, 99), runif(100, 25, 124))) model <- glm(Variable ~ Group, data = data, family = "Gamma") summary(model) ``` returns ``` Call: glm(formula = Variable ~ Group, family = "Gamma", data = data) Deviance Residuals: Min 1Q Median 3Q Max -2.2135 -0.4441 0.0023 0.3474 0.6771 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.0186047 0.0008493 21.905 < 2e-16 *** GroupB -0.0049861 0.0010526 -4.737 4.13e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for Gamma family taken to be 0.2084024) Null deviance: 64.352 on 199 degrees of freedom Residual deviance: 59.505 on 198 degrees of freedom AIC: 1937.6 Number of Fisher Scoring iterations: 5 ``` This returns the expected result that these groups are significantly different, but what test was performed to calculate this. In the manuscript I have written "R (version 4.1.1) base functionality was used for all calculations and statistical analyses...data were fitted to generalised linear models using a gamma distribution with the glm() function, and groups compared by Wald tests against their respective, same age controls using the summary() function." But a Wald test is a parametric test which assumes a normal distribution. Is R perhaps performing a Wald Log-Linear Chi-Square Test instead of a normal Wald test in these cases?
What test does summary() perform on a glm() model using a Gamma distribution in r?
CC BY-SA 4.0
null
2023-05-15T11:02:02.377
2023-05-15T17:26:50.480
2023-05-15T12:59:30.443
270954
270954
[ "r", "generalized-linear-model" ]
615903
2
null
615899
0
null
All that the score (log-rank) test tells you in this situation is that there is evidence of some difference among the 3 survival curves. It doesn't tell you directly which are different. A simple way to evaluate pairwise differences among the survival curves is to set up a Cox regression model with `group` as a 3-level categorical predictor. The "score" test is for such a model, anyway. You then can do pairwise comparisons among the regression coefficients like you would for any set of multi-category regression coefficients. In the data you display you might not find any "significant" pairwise differences despite the overall model `p = 0.046`, which is already barely "significant" under the usual `p < 0.05` criterion. And even if you do find such a difference it might not extend well to new data samples. The power in survival analysis comes from the number of events, not the number of total individuals, and you seem to have fewer than 10 events total. You might need on the order of 20 to 30 events to be able to distinguish 3 groups reliably.
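A hedged R sketch of the approach described above (a Cox model with a 3-level categorical predictor, followed by pairwise contrasts), using the three-arm survival::colon data as a stand-in for the asker's groups.
```
library(survival)
d <- subset(colon, etype == 2)                     # death outcome; rx has 3 treatment arms

fit <- coxph(Surv(time, status) ~ rx, data = d)
summary(fit)                                       # coefficients compare each arm with the reference level

# One simple way to get the remaining pairwise contrast is to change the reference level;
# remember to adjust for multiple comparisons when reporting the pairwise tests.
d$rx2 <- relevel(d$rx, ref = "Lev")
summary(coxph(Surv(time, status) ~ rx2, data = d))
```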
null
CC BY-SA 4.0
null
2023-05-15T11:48:01.647
2023-05-15T11:48:01.647
null
null
28500
null
615904
1
null
null
0
32
I am trying to solve a problem (number 7.7) in the 6th edition of the book 'Statistical Quality Control: A Modern Introduction' by Douglas Montgomery. Here is the statement: A control chart indicates that the current process fraction nonconforming is 0.02. If fifty items are inspected each day, what is the probability of detecting a shift in the fraction nonconforming to 0.04 on the first day after the shift? By the end of the third day following the shift? My approach is the following: I calculated my upper and lower control limits, UCL and LCL respectively, given by $$ UCL = p + 3\sqrt{\frac{p(1-p)}{n}}, $$ $$ LCL = p - 3\sqrt{\frac{p(1-p)}{n}}, $$ where $p$ is the fraction nonconforming and $n$ is the sample size of each 'trial'. So, in my case, I have $p=0.02$ and $n = 50$, leading to $UCL = 0.079$ and $LCL = -0.039$. But since a negative value for $LCL$ does not make sense, it is set to $0$, i.e. $LCL = 0$. So, for the first question, we are looking for this value: $$ \mathbb{P}(\text{detecting a shift of } p \text{ to } 0.04 \mid p = 0.04), $$ which is equivalent to (from my point of view) $$ \mathbb{P}(D \geq UCL \cdot n \mid p = 0.04) - \mathbb{P}(D \leq LCL \cdot n \mid p = 0.04) = 1 - \sum_{k=0}^{0.079 \cdot 50} \binom{50}{k} 0.04^k \cdot 0.96^{50-k} - 0.96^{50} = 0.269, $$ where $D$ is the number of nonconforming items in the sample. This result is not the correct one; the correct answer is $0.278$. Does anyone have an idea?
Probability of detecting a shift in fraction nonconforming control chart
CC BY-SA 4.0
null
2023-05-15T11:50:19.430
2023-05-16T11:24:00.817
2023-05-16T11:24:00.817
383929
383929
[ "quality-control" ]
615905
1
null
null
1
34
I know this has been asked a lot but I have checked everything and still don't understand. To start, I have a dataset of global temperatures averaged over years. There is a trend in the series and I use pmdarima ndiffs to give me the number of differencing. I use pmdarima auto_arima to get the model with the lowest aic value. But when I plot the predictions, I get a straight line with a trend. The model I use is: ``` size = len(avg_temp) cutoff = int(size*0.7) train = avg_temp[:cutoff] test = avg_temp[cutoff:] model = pm.auto_arima(train, start_p=1, start_q=1, max_p=10, max_q=10, seasonal=False, d=1, trace=True, suppress_warnings=True) ``` [](https://i.stack.imgur.com/CGmqB.png) The order of the model obtained by auto_arima is (1, 1, 3). I read in some answers that the forecast of ARIMA is only to the value of q. But I have seen people forecasting bigger ranges of values. I checked for seasonality by using statsmodels seasonal_decompose and there was no seasonality. [](https://i.stack.imgur.com/iN7Cv.png) My questions are: Am I doing something wrong? What can I do to improve it? and if it can't be improved how do I explain it?
auto_arima straight line prediction python
CC BY-SA 4.0
null
2023-05-15T11:59:19.867
2023-05-15T16:00:05.687
2023-05-15T16:00:05.687
35989
387960
[ "python", "forecasting", "arima" ]
615906
1
null
null
0
50
I have repeated measures (two assessments) of an independent variable (binomial/two different groups) but dependent variables (continuous) measured at only one assessment - what analysis approach would be appropriate? I checked whether there are significant differences in those variables between the groups. I also did a logistic regression analysis. But is there another approach which could analyze the relationship between these variables, or whether the dependent variables influence the group membership? Would ANOVA or ANCOVA be appropriate?
Repeated measurement-independent variable only at one measurement
CC BY-SA 4.0
null
2023-05-15T12:11:31.333
2023-05-15T12:11:31.333
null
null
387590
[ "repeated-measures" ]
615907
1
null
null
0
15
I am trying to generate labor standards (the amount of productivity per hour an employee should be accomplishing) based off of historical data volumes. I do not have the start and end time that a specific task was completed or started. I do however have various dimensions related to these tasks. An example would be for collections I know if the account was a high dollar, low dollar, mid dollar account. I know the time stamp that is associated with the completion of the entry (not the start time however). I also know the employee that worked the account. So I can see volume by hour by employee, but what I can’t determine is how much time they spent on a certain account type versus others. I would like to determine a way to apply some sort of regression analysis or other technique to account for mixes in items worked and determine the number of accounts that should be worked per hour based on a typical run rate given the different account type fields (high dollar and financial class namely). I realize this is a very broad question but I’m hoping someone can walk me through a statistical model or technique I could use to help me achieve this. I believe the toughest aspect to account for is the mix in volume per hour. For example, in the below table it is not possible for me to merely take an average of accounts worked per hour of each account type (High, Mid, Low, etc) because each employee would be working different types of accounts each hour. So I'm not able to definitively say that a High dollar account for financial class 8 can have 4 accounts completed per hour because of this mix. |Total per hour |High Dollar |Mid dollar |Low dollar |Ultra low dollar |Total | |--------------|-----------|----------|----------|----------------|-----| |Employee 1 |3 |2 |5 |1 |11 | |Employee 2 |6 |1 |3 |0 |10 | |Employee 3 |0 |0 |2 |12 |14 | |Employee 4 |2 |7 |3 |0 |12 | |Employee 5 |6 |0 |0 |2 |8 | I am posting an example raw dataset below. Any and all help would be greatly appreciated, and if you need further clarification I'm happy to provide it. I recently obtained my masters in data science and this will be my first "real life" stats project, so I'm hoping I can deliver! Thank you all! I have used box and whisker plots to identify outliers, but I'm not sure how to account for mixes in account types without using some sort of regression analysis or technique to determine relationships and mixes better. But I'm really struggling getting started. |Employee_ID |FinClass |Liability Level |Work_Units |Work_Activity_Case_ID |Work Activity Timestamp | |-----------|--------|---------------|----------|---------------------|-----------------------| |2 |12 |H |1 |4750591-89044 |4/11/2023 18:56 | |3 |12 |H |1 |6901244-16522 |4/11/2023 18:53 | |4 |12 |H |1 |1185488-79022 |4/11/2023 18:51 | |5 |8 |T |1 |11734672-61078 |4/11/2023 18:48 | |6 |8 |T |1 |14288106-4729 |4/11/2023 18:40 |
Finding patterns of volume worked - Productivity Mix Correlation
CC BY-SA 4.0
null
2023-05-15T12:13:44.950
2023-05-15T15:59:52.553
2023-05-15T15:59:52.553
387854
387854
[ "regression", "correlation", "multiple-regression", "k-means", "change-point" ]
615910
1
615935
null
1
53
I am currently working on a binary classification problem using imbalanced data. The algorithm that I am using is random forest. The problem is about predicting whether each sales project will meet its target or not. For example, a sales manager could have multiple sales projects running under him. We need ML to predict the likelihood that each project will meet the target agreed at the start of the project. Each project runs on a 3-to-5-year cycle, so every year there is a specific target to be met. Based on the year the project is currently in, we would like to know whether the project will meet its target up to that specific year. If the project is in its 3rd year, we need to find the likelihood of the project meeting its first 3 years' targets (1st, 2nd and 3rd year). So, now my question is about including two columns/features which contain the value of how much target has been achieved/units purchased up to this time point (3rd year), as well as the "target set at the start of the project". Is it okay to include the features "total target achieved/units purchased as of date" and "target set at the start of the project"? Or is it data leakage, or considered biasing the model? We have the target achieved/units purchased as-of-date info for every project, and it is updated frequently based on the purchases made. Every project whose likelihood we are trying to predict will have achieved either 0% of the target, or 10% of the target, or 20% of the target, or will have exceeded the target up to that time point, etc. So, we have this info for all records. And the output_label column is marked as 1 if they exceed the target and marked as 0 if they have not met the target. So, we feed the model the target set (e.g. 1000 units should be bought) for a project and also how much they have achieved as of now (e.g. 200 units bought already), along with other variables. So, do you think this is data leakage or considered biasing the model? Can I use these two features or not? I have the data for these two features at the start of my analysis itself. Meaning, if I am extracting data/building the model today, I can find out yesterday's latest value for "target achieved as of date" and the "target set at the start of the project" (using which the labels are derived). But what if the ML model easily captures the relationship (if target achieved >= target set, high likelihood to meet the target; else low likelihood to meet the target)? So, in this case do we need ML at all in the first place? I am confused. Of course, along with these features, I am trying a few more input variables as well based on historical data. Can you guide me on whether incorporating these two features - target set and target achieved as of date - is okay? Including these features results in better performance of the model: while these two features alone drive the prediction to an f1 of 87% on test data, adding my additional features takes it up to 93% f1 on test data. If I exclude these two features, f1 is about 55-60% for the minority class. But one thing I found was that these two columns are not heavily correlated with each other, nor with the target. So, I am not sure why the prediction performance increases so heavily with these two features. Also, an important point to note is that my output variable is computed using a formula/rule that involves these two features. However, when I validated the performance on the test data, I did not see any signs of overfitting or a drop in performance. But yes, these two features drive the prediction all on their own, contributing around 87% of the f1 score, whereas the other 3-4 predictors add another 5 points. So, am I good to use these features in model building despite their being used to create the rule-based label? I don't let the model know the exact formula/rule. So, what do you think?
Can input variable be leaking data?
CC BY-SA 4.0
null
2023-05-15T12:39:51.840
2023-05-16T12:50:07.623
2023-05-16T12:50:07.623
241460
241460
[ "machine-learning", "mathematical-statistics", "classification", "data-transformation", "data-mining" ]
615911
1
null
null
0
14
I have a Gibbs sampler that updates a system of $n$ variables $(x_1,\ldots,x_n)$ by each of the full conditional distributions. Let's say that I add an $(n+1)$-th step: I also update $v^T(x_1,\ldots,x_n)$, with $v\in R^n$. Since I update following the full conditional of $v^T(x_1,\ldots,x_n)$, either by direct sampling or using a Metropolis step, there is no doubt that the stationary distribution will be preserved. However, does this risk messing with the irreducibility (or any other property I have forgotten) of the chain and making my Gibbs sampler invalid?
Can I add more steps to a Gibbs sampler without hurting the ergodicity of the chain?
CC BY-SA 4.0
null
2023-05-15T12:44:28.903
2023-05-15T12:44:28.903
null
null
388009
[ "markov-chain-montecarlo", "gibbs", "ergodic" ]
615912
1
null
null
0
11
I have a large model in R like: sales = ai + x1 + x2 + x3 + x4 + (x1*x4) + (x2*x4) + (x3*x4), where x4 is a dummy for a certain intervention. I want to analyse the effect of x1, x2, x3 during the intervention and without the intervention. One problem I experience is multicollinearity. I thought about making 2 separate models like: sales = ai + x1 + x2 + x3 + x4 + (x1*x4) + (x2*x4) + (x3*x4), and sales = ai + (x1*x4) + (x2*x4) + (x3*x4), and comparing their coefficients to draw a conclusion about the effect of the intervention. However, I am not sure if this is an appropriate way to handle this. Do you spot any danger in this method, or can you suggest other procedures? Thank you in advance.
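For reference, a hedged R sketch of this kind of specification on simulated stand-in data (all variable names and coefficient values are invented); the formula (x1 + x2 + x3) * x4 expands to the main effects plus the three interactions with the intervention dummy, and centring is shown only as one commonly used way to reduce the collinearity mentioned.
```
set.seed(1)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
x4 <- rbinom(n, 1, 0.5)                               # intervention dummy
sales <- 1 + x1 + 0.5 * x2 + 0.8 * x3 * x4 + rnorm(n)
dat <- data.frame(sales, x1, x2, x3, x4)

fit <- lm(sales ~ (x1 + x2 + x3) * x4, data = dat)    # main effects + x1:x4, x2:x4, x3:x4
summary(fit)

# Centring the continuous predictors before forming the interactions often reduces
# the collinearity between the main effects and the interaction terms:
dat_c <- transform(dat, x1 = x1 - mean(x1), x2 = x2 - mean(x2), x3 = x3 - mean(x3))
fit_c <- lm(sales ~ (x1 + x2 + x3) * x4, data = dat_c)
```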
Interaction effects in modeling with multicollinearity
CC BY-SA 4.0
null
2023-05-15T12:47:54.337
2023-05-15T12:47:54.337
null
null
386543
[ "regression", "linear", "model", "multicollinearity", "intervention-analysis" ]
615913
1
null
null
1
15
I am presented with this problem: given a set of points (the number can vary, and their identity is not fixed across observations) distributed in a bounded 2D space (say $x \in [0,1]$, $y \in [0,1]$), predict a specific numerical value or class. So, each observation is this collection of points. There is no rotational or translational invariance. To make a trivial example, think about the task of predicting the correlation coefficient from a given scatterplot (of course, my problem is way less trivial!). What could be a meaningful encoding of such information? I considered dividing the space into small squares and then performing some density estimation (dividing by the total number of points). This will enforce the same structure to all data points. Is there anything more elegant? Possible alternatives?
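A minimal R sketch of the grid-density encoding proposed in the last paragraph; the grid size and the uniform stand-in points are arbitrary choices for illustration.
```
# Turn a variable-sized set of (x, y) points in [0,1]^2 into a fixed-length feature vector
encode_points <- function(x, y, k = 10) {
  breaks <- seq(0, 1, length.out = k + 1)
  gx <- cut(x, breaks, include.lowest = TRUE)
  gy <- cut(y, breaks, include.lowest = TRUE)
  as.vector(table(gx, gy)) / length(x)   # k * k relative-frequency features
}

set.seed(1)
features <- encode_points(runif(57), runif(57))  # one observation with 57 points
length(features)                                 # always k^2 = 100, regardless of the point count
```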
Best way to encode a variable number of points in a 2D space as features
CC BY-SA 4.0
null
2023-05-15T12:48:49.877
2023-05-15T12:48:49.877
null
null
239947
[ "machine-learning", "predictive-models", "embeddings" ]
615915
1
null
null
0
10
I have been working with sentiment analysis for a paper and my advisor wants me to have a confidence interval for everything. Here is my data: ``` type label count 0 aide memoire negative 102 1 aide memoire positive 128 2 briefing negative 176 3 briefing positive 141 4 draft negative 160 5 draft positive 144 ``` I want to find two things: 1. Find the percentage of negative and positive sentiments for each type of document. Then, find the confidence interval of this percentage. What I have done: Found the percentages: ``` type label count sent_frequency 0 aide memoire negative 102 0.443478 1 aide memoire positive 128 0.556522 2 briefing negative 176 0.555205 3 briefing positive 141 0.444795 4 draft negative 160 0.526316 5 draft positive 144 0.473684 ``` For the confidence interval, I did the following using `proportion_confint`: ``` confidence_intervals = [] for type in data['type'].unique(): for sentiment in data['label'].unique(): lower_ci, upper_ci = proportion_confint(data.loc[(data['type']==type) & (data['label']== sentiment), 'count'].iloc[-1], data.loc[data['type']==type, 'count'].sum(), alpha=0.05, method='wilson') ``` Giving me this: ``` type label lower_ci upper_ci 0 aide memoire negative 0.380726 0.508088 1 aide memoire positive 0.491912 0.619274 2 briefing negative 0.500164 0.608924 3 briefing positive 0.391076 0.499836 4 draft negative 0.470210 0.581765 5 draft positive 0.418235 0.529790 ``` 2. Find the delta values (positive percentage - negative percentage) for a specific type of document over time (there is a column `datetime`). Then, find the confidence interval of this difference. What I have done: I filtered the data only to a selected type of document, like this: ``` datetime label count sent_frequency 0 1956-07-31 positive 4 1.000000 1 1957-07-31 negative 90 0.422535 2 1957-07-31 positive 123 0.577465 3 1958-07-31 negative 100 0.546448 4 1958-07-31 positive 83 0.453552 5 1960-07-31 negative 94 0.362934 6 1960-07-31 positive 165 0.637066 7 1961-07-31 negative 87 0.345238 8 1961-07-31 positive 165 0.654762 9 1962-07-31 negative 202 0.411405 10 1962-07-31 positive 289 0.588595 11 1963-07-31 negative 117 0.455253 12 1963-07-31 positive 140 0.544747 13 1965-07-31 negative 143 0.496528 14 1965-07-31 positive 145 0.503472 15 1956-07-31 negative 0 0.000000 ``` Then I found the delta values (positive percentage - negative percentage) for each `datetime` and used the `t.interval` function to create the ci. ``` freq_pos = data.loc[data['label'] == 'positive', 'sent_frequency'].values freq_neg = data.loc[data['label'] == 'negative', 'sent_frequency'].values difference_arg = data.loc[data['label'] == 'positive', 'sent_frequency'].values - data.loc[data['label'] == 'negative', 'sent_frequency'].values difference_arg = pd.DataFrame(difference_arg) difference_arg.columns = ['difference'] # Calculate the standard error and confidence interval for the difference count_total = count_sent_arg.groupby('type').sum()['count'] se = ((freq_pos * (1 - freq_pos)) / count_total) + ((freq_neg * (1 - freq_neg)) / count_total) se = se.apply(lambda x: x**0.5) difference['ci_lower'], difference['ci_upper'] = difference.apply(lambda x: t.interval(alpha=0.95, df=len(data['datetime'].unique())-1, loc=x['difference'], scale=se[x.name]), axis=1).apply(pd.Series).values.T ``` Question I have done some search, and I found the `proportion_confint` and `t.interval` are different methods. Does it make sense to use two different ways of finding the confidence interval for these two scenarios? 
Is the application of these methods correct? I will have a meeting with my advisor soon and I want to be able to explain the ci methods I used and be sure that they are correctly applied. t.interval: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html) proportion_confint: [https://www.statsmodels.org/devel/generated/statsmodels.stats.proportion.proportion_confint.html](https://www.statsmodels.org/devel/generated/statsmodels.stats.proportion.proportion_confint.html)
Confidence interval for sentiment analysis: proportion_confint vs t.interval difference?
CC BY-SA 4.0
null
2023-05-15T13:11:46.237
2023-05-15T13:14:44.753
2023-05-15T13:14:44.753
387997
387997
[ "python", "confidence-interval", "scipy" ]
615916
1
null
null
0
16
I am new to the subject and I assume that my question is basic... I have two sets of aggregated data on a population (not a sample); both data sets are categorical. Say I have age groups such as 0-17, 18-35, 36-50, 50-65, 65+, and another grouping of relations: lawyer, son of a lawyer, spouse of a lawyer, grandchild of a lawyer, etc. Keep in mind that I only have the aggregated data - so I only know how many are in each group - and I know that the population at hand is the same! Can I tell if there is a correlation between the two sets? How strong is it and in what direction? I think I can get the p-value of the crossing of each pair of fields (say the p-value of the population that is 0-17 and grandchildren), but what would be the next steps? Thank you so much!!!
Basic question crossing two categorical fields on the same population
CC BY-SA 4.0
null
2023-05-15T13:12:05.250
2023-05-15T13:23:24.487
2023-05-15T13:23:24.487
388010
388010
[ "categorical-data" ]
615917
2
null
615902
8
null
> But a Wald test is a parametric test which assumes a normal distribution. Is R perhaps performing a Wald Log-Linear Chi-Square Test instead of a normal Wald test in these cases? The coefficients are maximum likelihood estimates, which are asymptotically normal. The resulting test is indeed a t-test (rather than a z-based Wald test), as noted by EdM. There is a difference between the sampling distribution of the coefficients and the likelihood you're using. Let me demonstrate. Here is some code to generate data from a gamma distribution. The shape and scale of the gamma distribution depend on some variable `group`. ``` library(tidyverse) set.seed(0) phi <- 10 group <- rep(0:1, 500) mu <- exp( 1.5 + group) y <- rgamma(1000, mu^2/phi, mu/phi) tibble(group, y) %>% ggplot(aes(y, fill=factor(group))) + geom_histogram(alpha = 0.5, position = 'identity') ``` [](https://i.stack.imgur.com/WpS2K.png) What I'm going to do is create these data 1000 times, run a regression, and look at the resulting distribution of the coefficients. This is the sampling distribution of the betas. Here they are, stratified by the type of coefficient (intercept or for `group`). You can see they are roughly symmetric and bell shaped. There is much more theory to demonstrate their asymptotic normality, but perhaps this suffices to convince you. [](https://i.stack.imgur.com/1BfGQ.png)
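The simulation loop itself is not shown in the answer; here is a minimal sketch of what it could look like, assuming a log-link gamma GLM consistent with the data-generating code above (a reconstruction for illustration, not the answerer's original code).
```
sim_once <- function(phi = 10) {
  group <- rep(0:1, 500)
  mu <- exp(1.5 + group)
  y  <- rgamma(1000, mu^2 / phi, mu / phi)
  coef(glm(y ~ group, family = Gamma(link = "log")))
}

betas <- t(replicate(1000, sim_once()))
# Approximate sampling distributions of the two coefficients:
hist(betas[, "(Intercept)"], main = "Intercept")
hist(betas[, "group"], main = "group")
```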
null
CC BY-SA 4.0
null
2023-05-15T13:18:37.797
2023-05-15T17:26:50.480
2023-05-15T17:26:50.480
111259
111259
null
615918
1
null
null
2
52
I am currently reading [https://www.ijcai.org/proceedings/2018/0374.pdf](https://www.ijcai.org/proceedings/2018/0374.pdf), a research paper on a Student-t Variational Autoencoder for Robust Density Estimation. In this paper, the authors describe a zero-variance problem in Gaussian VAEs and propose a Student-t VAE to overcome that problem. I have read many online sources, but most of them just describe the uses of Gaussian VAEs and none of them mention this problem. I am new to VAEs and multivariate statistics, so it is a bit hard for me to understand the concept of Student-t VAEs and how it overcomes the shortcomings of Gaussian VAEs. I have read and understood some parts, but some parts are still unclear; I would appreciate it if someone could help. Edit: I understood up to the part about the zero-variance problem, i.e. how Gaussian VAEs become unstable because of the variance term in the denominator of the lower bound we are trying to maximize. But after that they start with the concept of conjugate priors, e.g. the precision parameter having a Gamma distribution as conjugate prior; from there, how they arrive at the t-distribution and how they use it in the VAE model is what I could not understand.
What is a Student-t VAE, and how is it different from a Gaussian VAE?
CC BY-SA 4.0
null
2023-05-15T13:28:41.813
2023-05-15T16:50:56.333
2023-05-15T13:50:29.010
388012
388012
[ "machine-learning", "autoencoders", "variational-bayes" ]
615919
1
null
null
1
24
I have an auto_arima model that works in Python, but I want to improve it using ARCH. I have run an ARCH model on my ARIMA residuals, but I do not know what to do with the results. How do I incorporate the variance, residual variance, mean or volatility into my ARIMA forecast? Do I just do, e.g., ARIMA forecast + variance (or volatility or residual variance or mean)? Please explain; Python code would also help.
Auto_arima and ARCH
CC BY-SA 4.0
null
2023-05-15T13:36:12.957
2023-05-15T13:55:29.033
null
null
388014
[ "time-series", "python", "forecasting", "arima", "garch" ]
615920
1
null
null
1
18
I'm reading [this paper](https://arxiv.org/abs/2305.04148) on arXiv. I ran into a problem with the following information inequality. The author states (if you are interested, it's in C.5 and C.6 of the linked paper) that $$\begin{aligned} \sum_{t=1}^N I\left(X: I_t \mid I_1, \ldots, I_{t-1}\right) \le \sum_{t=1}^N I\left(X: I_t\right) . \end{aligned}\tag{1}$$ where $I_t$ and $X$ are random variables. To understand eq. (1), I frame my problem for this post as follows. Suppose we have a random variable $X$ and random variables $I_1$ and $I_2$. We further assume that $I_1$ and $I_2$ are conditionally independent given $X$, that is, $$p\left( I_1,I_2|X \right) =p\left( I_1|X \right) p\left( I_2|X \right) .$$ Based on the above definitions, can we obtain the following: $$I\left( X:I_2|I_1 \right) \le I\left( X:I_2 \right) \tag{2}?$$ Here, $I\left( X:I_2|I_1 \right) $ stands for [conditional mutual information](https://en.wikipedia.org/wiki/Conditional_mutual_information), defined as $I\left( X:I_2|I_1 \right) =H\left( X|I_1 \right) -H\left( X|I_2I_1 \right) $, where $H(\cdot)$ is the Shannon entropy. I know that if we have a Markov chain $X\rightarrow I_2\rightarrow I_1$, then eq. (2) holds. However, for now I only have that $I_1$ and $I_2$ are conditionally independent given $X$. Based on this, can I say the paper has a wrong deduction from C5 to C6, that is, that eq. (2) need not be satisfied when we only have that $I_1$ and $I_2$ are conditionally independent given $X$?
Inequality about conditional mutual information?
CC BY-SA 4.0
null
2023-05-15T13:54:46.940
2023-05-16T00:16:05.583
2023-05-16T00:16:05.583
336322
336322
[ "conditional-probability", "information-theory", "mutual-information", "probability-inequalities" ]
615921
2
null
615919
0
null
To deal with autocorrelation it is common to use ARMA. To deal with conditional heteroskedasticity it is common to use GARCH. If both patterns are present in your data, you can use ARMA-GARCH. An ARMA(p,q)-GARCH(r,s) model looks like this: \begin{aligned} x_t &= \mu_t + u_t, \\ \mu_t &= c + \varphi_1 x_{t-1} + \dots + \varphi_p x_{t-p} + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q} \quad \text{(cond. mean)}, \\ u_t &= \sigma_t \varepsilon_t, \\ \sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \dots + \alpha_s u_{t-s}^2 + \beta_1 \sigma_{t-1}^2 + \dots + \beta_r \sigma_{t-r}^2 \quad \text{(cond. variance)}, \\ \varepsilon_t &\sim i.i.D(0,1), \end{aligned} where $D$ is some probability distribution with zero mean and unit variance. When you fit an ARMA with constant conditional variance, you effectively replace the cond. variance equation by $\sigma_t^2=\sigma^2$, a constant. When you fit a GARCH with constant conditional mean, you effectively replace the cond. mean equation by $\mu_t=\mu$. If you fit the two separately, you can join them afterwards to produce ARMA-GARCH as specified above. Thus, you would treat the relevant estimates from your ARMA model as estimates of the $\mu_t$ equation and the relevant estimates from the GARCH model as estimates of the $\sigma_t^2$ equation. (I do not code in Python, so I am not providing example code.)
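The answer gives no code; as a sketch only, here is one way the joint ARMA(1,1)-GARCH(1,1) specification above can be fitted in R with the rugarch package (the question asked about Python, and the simulated series is just a placeholder).
```
library(rugarch)
set.seed(1)
x <- arima.sim(list(ar = 0.5, ma = 0.2), n = 500)   # stand-in series

spec <- ugarchspec(mean.model     = list(armaOrder = c(1, 1), include.mean = TRUE),
                   variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   distribution.model = "norm")
fit <- ugarchfit(spec, data = x)
ugarchforecast(fit, n.ahead = 10)   # joint forecasts of the conditional mean and variance
```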
null
CC BY-SA 4.0
null
2023-05-15T13:55:29.033
2023-05-15T13:55:29.033
null
null
53690
null
615923
1
null
null
1
41
Assume there are $n$ subjects and, for $1\leq i\leq n$, subject $i$ is measured at $t_{i1}<\dots<t_{im_i}$ time points. Consider a GLM-type regression model $E[g(y_{ij})|u_i]=\beta\cdot x_{ij}+u_i$, where $u_i$ is a normally distributed random effect for subject $i$ measured at time $t_{ij}$. If $g$ is the identity link, then ignoring $u_i$ will not lead to biased estimation of $\beta$. This is not true in the non-identity link case. Suppose I want to obtain confidence intervals for the $\beta$'s by bootstrap, where the bootstrap has a nested structure in accordance with the random effects $u_i$. - I apply GLM without random effects $u_i$ and apply the bootstrap to obtain confidence intervals for the $\beta$'s. - I apply GLM with random effects $u_i$ and apply the bootstrap to obtain confidence intervals for the $\beta$'s. 2's confidence intervals always have nominal coverage under the assumption that the true model has random effects. However, it is not clear that 1's confidence intervals will always have nominal coverage. $Q1:$ In the linear model case, I expect 1 to have nominal coverage and its median to be unbiased as well. When should I expect the bootstrapped median to be unbiased? And when should I expect the bootstrapped CI to have nominal coverage? $Q2:$ In case 1 is biased, would 1's CI still have coverage close to nominal?
If I apply a wrong model, do I get a correct confidence interval by bootstrapping?
CC BY-SA 4.0
null
2023-05-15T14:01:52.570
2023-05-15T15:02:11.190
2023-05-15T15:02:11.190
79469
79469
[ "regression", "mixed-model", "generalized-linear-model", "bootstrap" ]
615924
1
null
null
1
41
I use a GAM to model interactions between two continuous and one unordered categorical factor with three levels. Depending on significance of the smooth terms I would like to extract and visualize either level-specific smooth predictions if they are significant, or the interaction between the two continuous variables across all levels is, in case this one is signficant. Using the iris dataset here because it is pretty close to my own data, which contains different species and their traits as predictors and matings as reponse (I am trying to model fitness landscapes). I specified the model below in an attempt to exclude the other terms (e.g., using `predict.gam(mod1, newdata, exclude=c( "ti(Sepal.Length,Sepal.Width):Speciessetosa", "ti(Sepal.Length,Sepal.Width):Speciesversicolor", "ti(Sepal.Length,Sepal.Width):Speciesvirginica")`, if this makes sense (see below): ``` ## data data(iris) ## gam mod1 = gam(Petal.Width ~ Sepal.Length * Sepal.Width * Species + s(Sepal.Length, k=3) + s(Sepal.Length, by=Species, k=3) + s(Sepal.Width, k=3) + s(Sepal.Width, by=Species, k=3) + ti(Sepal.Length, Sepal.Width, k=3) + ti(Sepal.Length, Sepal.Width, by=Species, k=3), method = "REML", data=iris ) > anova(mod1) Family: gaussian Link function: identity Formula: Petal.Width ~ Sepal.Length * Sepal.Width * Species + s(Sepal.Length, k = 3) + s(Sepal.Length, by = Species, k = 3) + s(Sepal.Width, k = 3) + s(Sepal.Width, by = Species, k = 3) + ti(Sepal.Length, Sepal.Width, k = 3) + ti(Sepal.Length, Sepal.Width, by = Species, k = 3) Parametric Terms: df F p-value Sepal.Length 1 4.30623 0.039860 Sepal.Width 1 NaN NaN Species 0 NaN NaN Sepal.Length:Sepal.Width 1 4.10032 0.044835 Sepal.Length:Species 2 0.94772 0.390174 Sepal.Width:Species 0 NaN NaN Sepal.Length:Sepal.Width:Species 2 1.02435 0.361785 Approximate significance of smooth terms: edf Ref.df F p-value s(Sepal.Length) 1.000000528612 1.000001005439 11.22252 0.0010463 s(Sepal.Length):Speciessetosa 1.000000232442 1.000000464223 4.81234 0.0299554 s(Sepal.Length):Speciesversicolor 0.000001331420 0.000002637874 0.04797 0.9997172 s(Sepal.Length):Speciesvirginica 1.000000996148 1.000001892404 2.09408 0.1501724 s(Sepal.Width) 1.000007172805 1.000012865152 3.19529 0.0760765 s(Sepal.Width):Speciessetosa 1.000000855121 1.000001708770 0.10222 0.7496783 s(Sepal.Width):Speciesversicolor 1.665759568126 1.888265235757 1.03729 0.2710795 s(Sepal.Width):Speciesvirginica 0.641978333598 0.870807147692 2.25871 0.1630554 ti(Sepal.Length,Sepal.Width) 0.873528400557 0.982977416832 7.70746 0.0067218 ti(Sepal.Length,Sepal.Width):Speciessetosa 0.000001845268 0.000003682513 0.00065 0.5000000 ti(Sepal.Length,Sepal.Width):Speciesversicolor 0.000002545775 0.000005019515 0.03278 0.9996770 ti(Sepal.Length,Sepal.Width):Speciesvirginica 0.000003049801 0.000005392319 0.08124 0.9994726 ``` So, considering these results, I would like to visualize the smooth for `s(Sepal.Length):Speciessetosa` separately from `s(Sepal.Length)`, because the smooth effect is significant, but also `ti(Sepal.Length,Sepal.Width)` because the across-species smooth is significant, unlike the by-species terms. Does this model specification make sense both in the context of i) my research question (do species differ in how their trait interaction-surfaces affect the response var) and ii) to selectively extract smooth surfaces within and across categorical levels? 
Related: - Predicting mean smooth in GAM with smooth-by-random-factor interaction - Predicting with GAM (mgcv) and categorical/factor covariate in R - How do I fix a categorical predictor at a mean value when predicting from a regression model with multiple categorical predictors in R?
Predicting mean smooth in GAM with multiple continuous and categorical "by" variables
CC BY-SA 4.0
null
2023-05-15T14:19:54.870
2023-05-16T07:35:36.170
2023-05-15T16:13:11.960
123056
123056
[ "r", "generalized-additive-model", "mgcv" ]
615927
2
null
615902
8
null
Demetri Pananos makes the critical point about regression models (+1): the coefficient estimates are taken to have underlying multivariate normal distributions, at least in the asymptotic limit of large numbers of observations. That does not necessarily require a normal distribution of data, as the reviewer evidently thinks. In ordinary linear regression that doesn't necessarily even require a normal distribution of residuals, as the law of large numbers can lead to a normal distribution of coefficient estimates with a large enough data set. The appropriate test to perform on the coefficient estimates depends, however, on whether the model is based on an assumed variance or if the model estimates the variance from the data. In ordinary linear regression you estimate both a mean value (as a function of covariates) and an error variance. In that situation you take the extra uncertainty from making dual estimates from the data into account by using a t-test. In a generalized linear model (GLM) you could have either situation, depending on the family. The binomial and Poisson families, for example, have fixed associations between mean values and variances. If the choice of family is correct, there is no extra estimate of variance to make once the mean value has been estimated. As there is no independent estimate of variance, the correct test is a z-test for the probability that the normally-distributed coefficient estimate differs from 0. For a single coefficient that's equivalent to a single-coefficient Wald test.* In the gamma family, however, the model fits both the coefficient estimates and a "dispersion" that defines how much the variance increases with the mean value. That's the same situation as in ordinary linear regression; the separate estimate of dispersion calls for a t-test with degrees of freedom based on the data size, instead of a z-test. The code for `summary.glm` makes that clear, as shown in the following snippet for the situation where `est.disp=TRUE` as for your gamma GLM: ``` >summary.glm ## much code omitted pvalue <- 2 * pt(-abs(tvalue), df.r) coef.table <- cbind(coef.p, s.err, tvalue, pvalue) dimnames(coef.table) <- list(names(coef.p), c(dn, "t value", "Pr(>|t|)")) ## much more code omitted ``` So the simplest way to address the reviewer's concern in your case is to apologize for your error and state that these are actually t-tests on the coefficient estimates. I suspect that if you had called them t-tests to start with there wouldn't have been a complaint, even though t-tests and z-tests and Wald tests are all based on underlying normal distributions of coefficient estimates (and not of "data"). --- *The general form of the [Wald test](https://en.wikipedia.org/wiki/Wald_test) is particularly important when there are multiple parameters to test together. For a single parameter, the Wald statistic is just the square of the z statistic and p-values are identical.
null
CC BY-SA 4.0
null
2023-05-15T14:59:13.150
2023-05-15T14:59:13.150
null
null
28500
null
615929
1
null
null
1
14
I want to conduct a statistical analysis of reaction times (dependent variable) between two groups of people. Suppose the factors are: A, B, group. Is it possible for me to conduct a post hoc test (using the emmeans() function with Tukey adjustments) even though there are no significant interactions between the levels of the factors? And is it meaningful to test pairwise comparisons for each participant group (using the emmeans() function with Tukey adjustments)? --- e.g. (only interactions specified below) A x B p > 0.5 B x group p > 0.5 A x group p > 0.5 A x B x group p > 0.5 --- Thanks in advance!!
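As an illustration only (not a judgment on whether the comparisons are warranted), a hedged R sketch of the kind of emmeans call described, on simulated stand-in data; all column names (RT, A, B, group, subject) are invented.
```
library(lme4)
library(emmeans)

set.seed(1)
dat <- expand.grid(subject = factor(1:40), A = factor(c("a1", "a2")), B = factor(c("b1", "b2")))
dat$group <- factor(ifelse(as.integer(dat$subject) <= 20, "g1", "g2"))  # between-subject factor
dat$RT <- 500 + rnorm(nrow(dat), sd = 50)

fit <- lmer(RT ~ A * B * group + (1 | subject), data = dat)
emmeans(fit, pairwise ~ A | group, adjust = "tukey")   # pairwise comparisons of A within each group
```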
Is it possible to conduct post hoc tests even though there is no significant interaction between the levels of factors in lmer results?
CC BY-SA 4.0
null
2023-05-15T15:07:12.160
2023-05-16T08:21:01.973
null
null
388017
[ "lme4-nlme", "interaction", "post-hoc", "lsmeans" ]
615930
1
null
null
0
12
I have a particular data set where there is strong seasonality, and possibly a trend or non-stationarity. The seasonal decomposition looks as follows:[](https://i.stack.imgur.com/MuEyv.png) I am a bit confused as to what the order should be in which I perform my tests. First, I do believe I should perform an ADF test on the deseasoned data. But then I don't know which version to use, the one that takes into account a trend, or the one that doesn't. Could someone please explain this to me?
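For reference, a short R sketch showing how the two ADF variants mentioned (with and without a deterministic trend) are typically invoked via the urca package; the simulated series is only a placeholder for the deseasoned data, and this is not a recommendation of which variant to use.
```
library(urca)
set.seed(1)
y <- cumsum(rnorm(200))                        # placeholder for the deseasoned series

summary(ur.df(y, type = "drift", lags = 4))    # ADF with an intercept only
summary(ur.df(y, type = "trend", lags = 4))    # ADF with an intercept and a deterministic trend
```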
Which version of ADF to use when a trend is suspected?
CC BY-SA 4.0
null
2023-05-15T15:10:19.990
2023-05-15T15:10:19.990
null
null
384768
[ "time-series", "seasonality", "trend", "augmented-dickey-fuller" ]
615931
1
null
null
0
9
I am performing a survival analysis with three cohorts. In the descriptive analysis, I have used Fisher's exact and chi-squared tests to compare the categorical variables (such as sex), and the Kruskal-Wallis test for continuous variables (such as age), under the non-normality assumption. Is this correct? In cases where the difference is significant, do I need to compare the groups pairwise? Thank you very much :)
Survival, Descriptive, and Group Comparisons (>2 groups)
CC BY-SA 4.0
null
2023-05-15T15:17:35.873
2023-05-15T17:37:36.147
2023-05-15T17:37:36.147
56940
387468
[ "survival", "chi-squared-test", "multiple-comparisons", "fishers-exact-test", "kruskal-wallis-test" ]
615932
1
null
null
4
349
Suppose we have 2 reports. These reports' original values are either all 0 or all a (a>0). Let $x_i$ be independent Laplace(0, b) random variables. For each report, we generate a random noise $x_i$ and add it to the report’s original value. Then we add the value of 2 reports, resulting in a sum $s_0 = x_i + x_j$ or $s_1 = x_k + x_l +2a$ The question is: Suppose we want to be 95% certain that the original value is not 0, what should be the minimum value of a? Side note: We may have more than 2 reports. In a more general case, the question can be formulated as follows: Suppose we have n reports. The original value on these reports are either all 0 or all a (a > 0). Let $x_i$ be independent Laplace(0, b) random variables. For each report, we generate a random noise $x_i$ and add it to the report’s original value. Then we add the value of n reports, resulting in a sum $s_0 = \Sigma_{i=1}^n x_i$ or $s_1 = \Sigma_{i=1}^n x_i + a*n$. The question is: Suppose we want to be 95% certain that the original value is a, what’s the minimum value of n?
Is it Possible to Differentiate Two Integers after Adding Random Variables from Laplace Distribution?
CC BY-SA 4.0
null
2023-05-15T15:20:27.380
2023-05-18T19:11:55.690
2023-05-15T20:12:32.550
388018
388018
[ "hypothesis-testing", "laplace-distribution" ]
615935
2
null
615910
2
null
If I understand this correctly, there is no data leakage. You are just using information that is available at the point at which you want to predict the outcome, and therefore can be used. The concept of information leakage comes into play when using resampling, data set splitting, cross-validation and the like. These techniques use some part of the data in order to predict some other part of the data that in fact you already have, but in order to assess the prediction quality on unseen data properly, you pretend that you don't have it (as you are emulating a real situation in which you wouldn't have that information at the point where you use the other part for prediction). Information leakage means that for some reason you set up things in such a way that at some stage you use information that you shouldn't use, because in a real situation you wouldn't have it. As far as I understand your situation, this is not the case here. > But what if ML model easily captures the relationship (if target achieved >= target set - high likelihood to meet the target else low likelihood to meet the target). So, in this case do we need ML at all in the first place? Why would the fact that ML does a job successfully (and rather easily) be a reason that we don't need it at all in the first place? If your prediction problem indeed is to predict achievement of the target in a situation in which you already have strong information that indicates with large probability what will happen, so be it! (The only thing one could worry about is whether a simpler approach will do the trick already, but as long as it's not a big problem to set up your random forest, you may well use it.)
null
CC BY-SA 4.0
null
2023-05-15T15:41:07.317
2023-05-15T15:48:10.610
2023-05-15T15:48:10.610
247165
247165
null
615936
2
null
615875
0
null
Your argument is correct (+1). The step of deriving the conditional distribution of $Y$ given $X = x$ may be made a little more formal as follows: for $x \geq 2$ and $0 \leq y < x$, \begin{align} & P(Y = y|X = x) = \frac{P(Y = y, X = x)}{P(X = x)} =\frac{\binom{x - 1}{y} \left(\frac{4}{6}\right)^{x - 1 - y}\left(\frac{1}{6}\right)^y\frac{1}{6}}{\left(\frac{5}{6}\right)^{x - 1}\frac{1}{6}} \\ =& \binom{x - 1}{y} \left(\frac{4}{5}\right)^{x - 1 - y}\left(\frac{1}{5}\right)^y. \end{align} Hence $Y | X = x \sim B(x - 1, 1/5)$ (technically this relation only holds for $x \geq 2$; if $x = 1$, then $Y$ is trivially degenerate at $0$). The numerator in the penultimate expression calculates, in a sequence of $x$ independent Bernoulli trials, the probability of observing "6" $y$ times in the first $x - 1$ trials and observing "1" for the first time in the $x$-th trial.
null
CC BY-SA 4.0
null
2023-05-15T15:58:07.293
2023-05-15T15:58:07.293
null
null
20519
null
615937
1
null
null
1
32
I am using [boruta_py](https://github.com/scikit-learn-contrib/boruta_py), Python implementation of the Boruta algorithm, with a random forest estimator. ``` model = RandomForestClassifier(n_estimators=5000, n_jobs=-1) boruta = BorutaPy(model, max_iter=10, verbose=2, random_state=1) ``` And I fit `boruta` like so: ``` boruta.fit(np.array(X_train), np.array(Y_train)) # X_train is a DataFrame # transform input X_train_br = boruta.transform(np.array(X_train)) X_test_br = boruta.transform(np.array(X_test)) # then fit the RF estimator model.fit(X_train_br, Y_train) ``` My input has 240 features, i.e.: ``` >>> X_train.shape (67092, 240) ``` Fitting `boruta` for `max_iter = 10`, but the strange thing is no single feature is classified `Confirmed` or `Rejected`: ``` ... building tree 1000 of 1000 [Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 4.2min finished Iteration: 5 / 10 Confirmed: 0 Tentative: 240 Rejected: 0 .... ``` What am I doing incorrect here?
BorutaPy: All features are classified tentative
CC BY-SA 4.0
null
2023-05-15T16:23:17.687
2023-05-15T16:55:59.247
null
null
261548
[ "machine-learning", "python", "feature-selection", "boruta" ]
615938
2
null
615918
1
null
[Prior](https://stats.stackexchange.com/a/58792/22311), [conjugate](https://stats.stackexchange.com/q/66018/22311), and posterior distributions are concepts from [bayes](/questions/tagged/bayes)ian statistics. A good introduction is [Bayesian Data Analysis by Gelman et al.](http://www.stat.columbia.edu/%7Egelman/book/), currently in its third edition. Loosely, a prior distribution is a mathematical statement about how you believe the parameters of a model, or another quantity of interest, are distributed before you even look at the data. The core concept in this paper is that the authors are trying to prevent the model from estimating variances near 0, because that creates training instability. To ameliorate this, the authors assign a prior probability distribution to the variance, so there is only a very small probability mass on the values of variance that are near 0. This is analogous to penalizing a loss function for certain solutions, so that the model doesn’t estimate the variance to be near 0. The effect is that the model's posterior probability distribution avoids training instability that arises from near-0 variance estimates.
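Illustrative only, since the paper's actual prior may differ: a minimal sketch of how a prior on a variance parameter acts like a penalty. The negative log-density of, say, a log-normal prior on $\sigma^2$ grows without bound as $\sigma^2 \to 0$, so adding it to the training loss discourages near-zero variance estimates.
```python
# A log-normal prior on sigma^2 puts negligible mass near 0, so its negative
# log-density (up to a constant) acts as a penalty against tiny variances.
import numpy as np

def neg_log_prior(sigma2, mu0=0.0, tau=1.0):
    # negative log-density of LogNormal(mu0, tau^2) evaluated at sigma2, up to a constant
    return (np.log(sigma2) - mu0)**2 / (2 * tau**2) + np.log(sigma2)

for s2 in (1e-4, 1e-2, 1.0):
    print(s2, neg_log_prior(s2))  # the penalty grows as sigma2 shrinks toward 0
```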
null
CC BY-SA 4.0
null
2023-05-15T16:25:37.127
2023-05-15T16:50:56.333
2023-05-15T16:50:56.333
22311
22311
null
615940
2
null
615937
0
null
Why would you expect anything to be wrong? You are using default settings and it is a valid result. You could try using more iterations to see if it leads to more useful results. It's better if the algorithm can say “I don't know” rather than forcing an arbitrary choice. But why are you using it in the first place? Random Forest will pick the features it needs on its own, even [collinear features are not a problem](https://stats.stackexchange.com/a/377047/35989).
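If you do try more iterations, something along these lines might be a starting point (the parameter values are only illustrative, not a recommendation; `X_train` and `Y_train` are the objects from the question):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

# a shallower forest so each Boruta iteration is cheaper, but many more iterations
model = RandomForestClassifier(n_jobs=-1, max_depth=5)
boruta = BorutaPy(model, n_estimators='auto', max_iter=100, verbose=2, random_state=1)
boruta.fit(np.array(X_train), np.array(Y_train))

print(boruta.support_)       # confirmed features
print(boruta.support_weak_)  # features still tentative after max_iter
```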
null
CC BY-SA 4.0
null
2023-05-15T16:55:59.247
2023-05-15T16:55:59.247
null
null
35989
null
615941
2
null
615875
1
null
$E[X] = 6$ because in a very long series of rolls arbitrarily close to $1/6$ of all outcomes will be $1,$ whence the mean waiting time to observe a $1$ will be arbitrarily close to $1/(1/6)=6.$ Because the remaining outcomes $2,3,4,5,6$ all play equivalent roles in this problem, if we let $Y_k$ be the number of times a $k$ is observed, $$E[Y_k\mid X = x] = E[Y_6\mid X = x] = E[Y\mid X=x].$$ But every one of the first $x-1$ rolls results in $k\in \{2,3,4,5,6\},$ showing $$X = 1 + Y_2 + Y_3 + Y_4 + Y_5 + Y_6,$$ whence by linearity of conditional expectation (and taking out what is known), $$x = E[X\mid X=x] = E[1\mid X=x] + E[Y_2\mid X = x] + \cdots + E[Y_6\mid X = x].$$ Therefore each of these five equal conditional expectations is one-fifth of their total, $$E[Y\mid X = x] = E[Y_k\mid X = x] = \frac{x-1}{5}.$$ In the same fashion, linearity of (unconditional) expectation asserts $$6 = E[X] = 1 + E[Y_2] + \cdots + E[Y_6] = 1 + 5E[Y],$$ giving $$E[Y] = \frac{6-1}{5} = 1.$$
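A quick Monte Carlo check (illustrative only, not part of the argument) reproduces both $E[Y] = 1$ and $E[Y\mid X = x] = (x-1)/5$:
```python
# Roll a fair die until the first 1; X counts the rolls, Y counts the 6s seen before the 1.
import numpy as np

rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(100_000):
    x = y = 0
    while True:
        x += 1
        roll = rng.integers(1, 7)
        if roll == 1:
            break
        if roll == 6:
            y += 1
    X.append(x)
    Y.append(y)

X, Y = np.array(X), np.array(Y)
print(Y.mean())                        # close to 1
print(Y[X == 10].mean(), (10 - 1) / 5) # both close to 1.8
```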
null
CC BY-SA 4.0
null
2023-05-15T17:13:36.523
2023-05-15T17:13:36.523
null
null
919
null
615943
1
null
null
0
17
Consider a mixture model involving an observed random variable $Y$ taking values in $\mathbb{R}$ with cdf $F$, a mixing random variable $\alpha$ taking values in a set $\mathcal{A}$ with cdf $G$ and a set of factors, i.e. a collection of cdfs $(F(y|\alpha))_{\alpha\in\mathcal{A}}$. Suppose I know $F$ and $G$ and that the two are related through the mixture structure above. How can I characterise the set of plausible factors? In other words, how can I characterise $$\mathcal{F}(F,G)=\left\{(F(y|\alpha))_{\alpha\in\mathcal{A}}\ |\ F(y)=\int F(y|\alpha)dG(\alpha)\right\}$$ Can I say something about the conditional moments of the factors? For instance, given $a\in\mathcal{A}$, what can I say about $$\mathcal{E}(F,G,a)=\{ E[Y|\alpha=a]\ |\ (F(y|a),(\tilde{F}(y|\alpha))_{\alpha\in\mathcal{A}\setminus a})\in \mathcal{F}(F,G)\ \text{ for some }\ (\tilde{F}(y|\alpha))_{\alpha\in\mathcal{A}\setminus a} \}$$ I would also be happy with results that make further assumptions (e.g. finitely many factors, parametric assumptions on the distributions, etc.). Thanks for any help or reference.
Identifying a mixture model given the mixing and the outcome distribution
CC BY-SA 4.0
null
2023-05-15T17:45:23.733
2023-05-16T11:16:23.630
2023-05-16T11:16:23.630
349769
349769
[ "references", "conditional-probability", "conditional-expectation", "mixture-distribution", "identifiability" ]
615944
1
null
null
0
11
- I have a questionnaire made of 10 questions given to doctors from different specialties. - Each doctor evaluates 10 echocardiographic images and gives each image a score of 1-10 based on their own assessment of right heart strain. - 30 complete responses were received (300 values, 10 columns, 30 rows) Task 1: I would like to compare the concordance (accuracy/precision) of these 30 responses with "the control response", which is the correct assessment of right heart strain for the 10 questions in the questionnaire (1 row with 10 values). Task 2: I would also like to assess the responses based on the doctor's specialty (cardiologist and non-cardiologist), year of graduation, and level of confidence in reading echo. Objective: to see whether non-cardiologists are as good as cardiologists in assessing right heart strain using echocardiography. My initial plan is to: - use statistical measures such as the Intraclass Correlation Coefficient (ICC) to measure the agreement between the observers and the control response. - graph the relationship using a Bland-Altman plot, which will show the differences between the doctors' responses and the control response. - to assess the responses based on the doctors' specialty, year of graduation, and level of confidence in reading echo, use descriptive statistics such as means, standard deviations, and frequencies to explore the data. - use inferential statistics such as ANOVA or regression to test for significant differences or associations between the variables. I would like to use R; any advice?
help with choosing statistical test to compare responses to a questionnaire
CC BY-SA 4.0
null
2023-05-15T17:48:24.917
2023-05-15T17:48:24.917
null
null
387849
[ "r", "anova", "survey", "intraclass-correlation", "bland-altman-plot" ]
615948
2
null
595840
0
null
I found your question because I had viewed this same resource for some of my work. To answer your question: > does it still make sense to calculate the Weighted Mean on essentially three measurements? Yes, I think in this case it would. If all three cities have roughly the same population, your simple mean and weighted mean will be similar; but if one city is 10 times the size of the others, knowing that the sample comes from 25% of each city's population means we know they sampled many more people for the larger city. I would want to weight that city's average income much higher than the other two cities'. Otherwise, you'd be discounting the importance of the individuals in the larger city just because they happened to live somewhere with a lot of other people. > Or in such cases, is it better to refrain from calculating any statistics, seeing that any estimate generated in such a context is likely to be inherently flawed? I think this really just depends on the context of your problem. Nobody will get hurt if you type in some numbers in a calculator one way vs the other. If policy decisions will be made from this average, then you should disclose up front which way the average was calculated and why you chose to do so that way, and perhaps some followup recommendations for future studies (i.e., data preparers should also include some measure of variance/standard deviation per city). Finally, this wasn't directly asked in your question, but as far as calculating a weighted standard deviation, I don't think you can do that with the information given. You could calculate one based on the three averages but I doubt it would give you a lot of insight - with just three numbers, it's pretty easy at a glance for someone to understand whether one is much different than the others.
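To make the weighting concrete, here is a tiny sketch with made-up numbers (three city averages, one city much larger than the others):
```python
# Hypothetical example: three city-level average incomes, weighted by city population.
import numpy as np

avg_income = np.array([52_000, 48_000, 61_000])       # made-up city averages
population = np.array([100_000, 90_000, 1_000_000])   # one city is much larger

print(np.mean(avg_income))                         # simple mean of the three averages
print(np.average(avg_income, weights=population))  # weighted mean, dominated by the big city
```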
null
CC BY-SA 4.0
null
2023-05-15T18:14:45.077
2023-05-15T18:14:45.077
null
null
338408
null
615949
1
null
null
0
25
I have a dataset of longitudinal measurements for different sample individuals, with some covariates such as age, sex, time period, etc. The number of measurements taken for each individual varies. I would like to model the variance of the longitudinal measurements, including the covariates in the model with interactions between the covariates. My first instinct is that a hierarchical linear model could be appropriate, as there is a nested structure of correlated measurements within patients, and I am hoping that using a hierarchical linear model will also account for the differing number of observations within each individual (fewer measurements ~ less precise estimate ~ lower weight, something like that.) However, I have no idea where to begin when directly modeling the variance, instead of the mean, much less in the context of a hierarchical model. Any advice or guidance would be greatly appreciated, either resources or code examples (R preferred). (I believe) that non-bayesian methods would be an easier sell, though I am generally open to methodology. Thank you!
Advice for modeling variance in a hierarchical linear model
CC BY-SA 4.0
null
2023-05-15T18:20:03.923
2023-05-15T18:20:03.923
null
null
99476
[ "r", "mixed-model", "variance", "multilevel-analysis", "hierarchical-bayesian" ]
615951
1
null
null
0
38
I've been reading around this and keep coming up short, so I'm hopeful there's an easy test for this. I am measuring the differences (on a Likert scale) between matched (paired) pre- and post-lesson surveys. Two sets of four questions in the survey are grouped around themes (i.e., "knowledge" or "attitude"), so I need to analyze those together to measure changes rather than individually. I may also do smaller subsets of questions (i.e., two variables, hence two-way ANOVA). For non-parametric tests, nothing seems to fit: the Wilcoxon signed-rank test works for comparing single variables among pairs; Friedman works for multiple matched groups but not multiple variables. There doesn't seem to be anything for paired groups and multiple variables. Also, our sample is around 70 and I have read that with a large enough sample size a parametric test like ANOVA will most likely be fine. In that case a MANCOVA with repeated measures would potentially work. But I would prefer to go with the "safer" non-parametric option if there's any question about accuracy. Thank you for any suggestions.
Seeking suggestions for a non-parametric version of two-way ANOVA/multi-variable MANOVA with repeated measures
CC BY-SA 4.0
null
2023-05-15T18:26:58.380
2023-05-17T12:29:04.510
null
null
388037
[ "multivariate-analysis", "nonparametric", "paired-data", "likert" ]
615952
1
null
null
1
13
Is there an implementation of Deming regression that also handles random intercepts and slopes in the sample? We have a situation where we have compelling theoretical reason to believe that one measurement influences another but not the other way around. The data consist of multiple measurements per subject of the first variable (brain vessel diameter) and one measurement per subject of the second variable (cognitive score).
Mixed-level Deming regression?
CC BY-SA 4.0
null
2023-05-15T18:38:03.873
2023-05-15T18:38:03.873
null
null
28141
[ "mixed-model", "total-least-squares" ]
615953
1
null
null
0
19
Is this correct reasoning? Let $x_i$ be a variable in a Bayesian Network and let $\text{MB}(x_i)$ denote its Markov blanket. Let us note that: $$ p(x_i \mid \text{MB}(x_i)) \propto p(x_i, \text{MB}(x_i)). $$ By the definition of Bayesian network (factorisation property): $$ p(x_i, \text{MB}(x_i)) = p(x_i \mid \text{Parents}(x_i)) \prod_{y_j \in \text{Children}(x_i)} p(y_j \mid \text{Parents}(y_j)) \prod_{z_k \in \text{Co-parents}(x_i)} p(z_k \mid \text{MB}(z_k)). $$ Therefore: $$ p(x_i \mid \text{MB}(x_i)) \propto p(x_i \mid \text{Parents}(x_i)) \prod_{y_j \in \text{Children}(x_i)} p(y_j \mid \text{Parents}(y_j)), $$ since $ \prod_{z_k \in \text{Co-parents}(x_i)} p(z_k \mid \text{MB}(z_k))$ becomes constant (it does not depend on $x_i$: the value of each variable $z_k$ is fixed, so $p(z_k \mid \text{MB}(z_k))$ becomes constant).
Markov blanket - probability derivation
CC BY-SA 4.0
null
2023-05-15T18:50:02.533
2023-05-15T19:40:56.400
2023-05-15T19:40:56.400
271278
271278
[ "graphical-model", "bayesian-network", "conditional-independence" ]
615954
1
null
null
1
14
I'm currently working with data on pregnant women. Given that the same woman could have multiple pregnancies over the years, I tend to use GEE to obtain odds ratios for my variables of interest. Now say I am interested in a classification problem with ~ 120 variables (whether the woman will have a preterm birth or not). Of course I cannot do this with GEE. I wanted to explore some models such as XGBoost, SVM or RF, but I am wondering if they can take repeated IDs into consideration. I was tempted to include LR on the list, but it does not seem right to do so given how the data are correlated. That is why I am restricting my list to the three mentioned methods. I did some research but it doesn't seem clear to me. Could someone help me out?
Longitudinal data in ML vs GEE
CC BY-SA 4.0
null
2023-05-15T19:14:21.507
2023-05-15T19:14:21.507
null
null
388039
[ "machine-learning", "generalized-estimating-equations" ]
615955
1
null
null
0
16
When I first studied the representer theorem and the kernel trick from my professor's notes, I thought that the representer theorem implied that we could write any problem in kernelized form (and by that I mean that the data enters the objective function as a kernel matrix $XX^T$), and therefore we could use the kernel trick for any problem to compute the kernel matrix. The notes aren't really clear in that part though, so I'm not sure at all about that. Looking online I think I understood that the representer theorem comes in handy when predicting a new point (correct me if I'm wrong), since: $\hat{y} = \Phi(\hat{x})^T w^* = \Phi(\hat{x})^T\Phi(X)^T\alpha^* = \sum_{i=1}^n K(\hat{x}, x_i)\alpha_i^*$ where X is the design matrix, K is the kernel function, w is the vector of weights and alpha the coefficients coming from the representer theorem. But still I'm not sure about the first point: does the representer theorem also imply that we can write any problem in a kernelized form?
How does the representer theorem relate to the kernel trick?
CC BY-SA 4.0
null
2023-05-15T19:16:42.567
2023-05-15T19:16:42.567
null
null
388030
[ "kernel-trick" ]
615957
1
null
null
1
38
I have used Complex Samples in SPSS (and SUDAAN in SAS, Survey in R) when working with survey data that were collected using a sampling design that was not random. For example, when an oversample was included in the data collection. Complex Samples incorporates the sample design into statistical tests, providing more accurate estimates and standard errors. I am now working with data that were collected from a survey panel and includes post-stratification weights to balance the sample with the population on gender, age, race/ethnicity, region, and education. The weights are not being used to scale estimates to reflect the actual size of the population. Since this is not a design weight, I am wondering how to apply weights when running a statistical test. Is it sufficient to simply "weight cases" and perform standard statistical tests, or should I treat the post-stratification weights the same way I would treat stratification weights if I had oversampled in the design? Very much appreciate any assistance.
Appropriate way to use post-stratification weights when running statistical tests SPSS
CC BY-SA 4.0
null
2023-05-15T19:45:27.343
2023-05-16T10:57:14.037
2023-05-16T03:18:33.887
388042
388042
[ "spss", "survey-sampling", "survey-weights", "poststratification" ]
615958
1
null
null
3
107
The following question is crossposted from [MathStackExchange](https://math.stackexchange.com/questions/4699407/function-proportional-to-the-log-likelihood-for-the-gaussian-distribution) upon recommendation from the MSE community and a lack of responses on my post over there. --- Consider the following problem from a course on statistical inference: > If we generate a sample $x_i$ for $i \in$ {$1 ... n$} from $ p(x_i) = \sum_{k=1}^2 w_kp(x_i| \mu _k, \sigma^2_k)$ where $p(x_i| \mu _k, \sigma ^2_k)$ are Gaussian densities and $w_2 = 1 - w_1$ (where we assume that all the parameters are unknown). Find the log-likelihood function for the sample $x_{1:n}$. The log-likelihood is defined as: $$ l( \theta | x_{1:n}) = \log(p(x_{1:n})) = \log( \Pi _{i=1}^n p(x_i)) = \sum _{i=1}^n \log(p(x_i))$$ Now substituting in for $p(x_i)$ we get: $$ l( \theta | x_{1:n}) = \sum_{i=1}^n \log \Big{(} \sum_{k=1}^2 w_k p(x_i | \mu _k , \sigma^2_k) \Big{)} $$ We can now substitute in the Gaussian densities which gives us: $$ l( \theta | x_{1:n}) = \sum_{i=1}^n \log \Big{[} w_1(2 \pi \sigma ^2 _1)^{-1/2} \exp \Big{(}-\frac{(x_i - \mu _1)^2}{2 \sigma ^2 _1} \Big{)} + w_2 (2 \pi \sigma ^2 _2)^{-1/2} \exp \Big{(}-\frac{(x_i - \mu _2)^2}{2 \sigma ^2 _2} \Big{)} \Big{]} $$ --- > The following solution aligns with my working, however, there is a proportionality relation on the last line that is unclear to me. Why does the final line hold? I can't see a clear and obvious way to get from the point at which I have substituted the Gaussian densities into the sum to it being proportional to the given double summation. I understand that we can exclude any multiplicative constants, however, it seems as though the summation has been taken out of the logarithm at some stage in order to derive a proportionality relation. Although I'm not sure if this holds and if it does, why does it hold? --- Edit: It seems as though this is a mistake as Whuber presents a counterexample in the comments. Here is the second part of the question: > Presumably, this is also incorrect for similar reasons to the above. If this does end up being incorrect, is there any way to salvage this?
Function proportional to the log likelihood for the Gaussian distribution
CC BY-SA 4.0
null
2023-05-15T20:00:28.573
2023-05-23T04:51:07.650
2023-05-23T04:51:07.650
359872
359872
[ "probability", "mixed-model", "normal-distribution", "maximum-likelihood", "likelihood" ]
615959
1
null
null
3
197
In reading about brms, I came across the words "pterms" and "gterms" for "population-level effects" and "group-level effect". I was astonished. No, I was GOBSMACKED! Such sensible choice of terminology in statistics. How were they allowed to get away with it? More to the point, what makes "fixed" effects "fixed" and "random" effects "random"? Why aren't the far more intuitive "population-level" and "group-level" used?
Why are "fixed" and "random" called that?
CC BY-SA 4.0
null
2023-05-15T20:25:21.503
2023-05-15T21:47:11.847
null
null
28141
[ "mixed-model" ]
615960
1
null
null
0
23
I am confused about a simple point. Are the 95% CIs of log odds ratios only symmetric when computed from a logistic regression? What if I got an odds ratio from a Fisher's exact test and converted it into a log odds ratio; does that guarantee symmetry? Sorry for the silly question.
Why are the log odds ratios not symmetric?
CC BY-SA 4.0
null
2023-05-15T21:01:10.690
2023-05-15T21:01:10.690
null
null
133134
[ "logistic", "odds-ratio" ]
615963
2
null
615724
5
null
In an attempt to match your variable names, suppose the (conditional) error covariance matrix $\Psi = \sigma^2 \Sigma^{-1} = \sigma^2V$ in your general linear regression model is known up to a positive constant $\sigma^2$. Then, the (conditional) covariance matrix of the generalized least squares estimator $\hat\beta_{\text{GLS}}$ is given by $$ \mathop{\mathbb{V}}\left(\hat\beta_{\text{GLS}}\,|\,X\right) = \left(X^\top \Psi^{-1} X\right)^{-1} = \sigma^2\left(X^\top \Sigma X\right)^{-1}. $$ To obtain an unbiased estimator of $\mathop{\mathbb{V}}\left(\hat\beta_{\text{GLS}}\,|\,X\right)$ we thus need an unbiased estimator of $\sigma^2$. One such estimator$-$the one that will answer your question$-$is given by $$ \hat\sigma^2= \frac{1}{n-k} \left(y - X\hat\beta_{\text{GLS}}\right)^\top \Sigma\left(y - X\hat\beta_{\text{GLS}}\right), $$ where $n$ is the number of entries of $y$ and $k$ the number of entries of $\hat\beta_{\text{GLS}}$. In your example we have $$ \Psi = \sigma^2 \frac{1}{1-\phi^2} \begin{pmatrix} 1 & \phi & \cdots & \phi^{n-1} \\ \phi & 1 & & \phi^{n-2} \\ \vdots & & \ddots & \vdots \\ \phi^{n-1} & \phi^{n-2} & \cdots & 1 \end{pmatrix} = \sigma^2 V, $$ where $\sigma^2 = 0.5^2$ is the variance of the innovations $u_t$ in your AR(1) error term process $\varepsilon_t = \phi\varepsilon_{t-1} + u_t$, $u_t \overset{\text{i.i.d.}}{\sim} \mathcal N(0, \sigma^2)$ with AR coefficient $\phi = 0.8$. In your code this would correspond to redefining ``` V <- toeplitz(phi^(1:n - 1)) / (1 - phi^2) ``` which doesn't change `betas` (the realized value of $\hat\beta_{\text{GLS}}$) since any scaling factor of $V$ cancels out in the corresponding calculation. With the new `V` (and thus new `Sigma`) you can calculate the realization of $\hat \sigma^2$ via ``` sigma_sq_hat <- as.numeric(1/(n-2) * t(e) %*% Sigma %*% e) ``` and the realized value of $\mathop{\hat{\mathbb{V}}}\left(\hat\beta_{\text{GLS}}\,|\,X\right) = \hat \sigma^2 \left(X^\top \Sigma X\right)^{-1}$, i.e. `vcov(gls1)`, via ``` sigma_sq_hat * solve(XtX) ``` Note that, again due to cancellation, this term would be the same if calculated with your original definition of `V`; but `sigma_sq_hat` would then be scaled by a factor of $100/9$.
null
CC BY-SA 4.0
null
2023-05-15T21:43:19.000
2023-05-15T21:43:19.000
null
null
136579
null
615964
2
null
615959
4
null
Because like many other fields (possibly all other fields), statistics terms were not created at the beginning, then used, but rather have evolved over time and many terms make sense from one perspective, but make less sense from other perspectives. Here are some other common statistical terms/phrases that makes sense from one perspective, but could be confusing from others: analysis of variance, regression, linear models, expected value, bias, variance, standard deviation, standard error, survival analysis, censoring, roc curve, population, statistics (and probably plenty more). And from a mathematical perspective, statisticians hold variables constant and allow constants to vary. When I think of "fixed" and "random" effects (from a frequentist perspective), the "True" values of the "Population" are "fixed" numbers that do not change depending on which random sample I take (the estimates depend on the sample, but the "true" value being estimated does not). But the "random" effects will depend on which random sample of "groups" I select. Even terms like "population level" and "group level" can have understanding problems from a different perspective. If we group the countries of the world into groups, then collect data on the populations within those groups and analyze a specific group of countries, then the populations are the brms groups, and the group is the brms population. And if we have multiple measurements on each person, then the individual is a group.
null
CC BY-SA 4.0
null
2023-05-15T21:47:11.847
2023-05-15T21:47:11.847
null
null
4505
null
615966
2
null
595122
0
null
I agree with you that the ROC AUC can be very misleading - a model with a high AUC can have a large number of false positives, which as you mentioned can be very undesirable in many circumstances. > popular data science libraries (such as scikit) do not implement it: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics Scikit-learn does have an implementation of partial AUC. You calculate the ROC AUC using `sklearn.metrics.roc_auc_score` and specify the desired maximum false positive rate using the `max_fpr` parameter. This will give you the AUC only over a restricted range of false positive rates, between 0 and the maximum you specify. e.g., `partial_auc = roc_auc_score(y_true, y_pred, max_fpr=0.05)`
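A minimal runnable sketch with synthetic data, just to show the call:
```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)            # synthetic binary labels
y_score = y_true * 0.5 + rng.normal(size=1000)    # noisy scores correlated with the labels

print(roc_auc_score(y_true, y_score))                # full AUC
print(roc_auc_score(y_true, y_score, max_fpr=0.05))  # partial AUC up to FPR = 0.05
```
Note that, as far as I understand from the scikit-learn documentation, when `max_fpr` is given the partial AUC is returned in a standardized form (McClish correction), so it still ranges from 0.5 (chance) to 1.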
null
CC BY-SA 4.0
null
2023-05-15T22:02:43.830
2023-05-15T22:02:43.830
null
null
367833
null
615967
2
null
615932
8
null
Upon dividing everything by $b,$ the question concerns working with the sum of $n$ independent Laplace distributions. (A Laplace distribution arises as the difference of two independent Exponential distributions.) The density of this sum can be found by applying the inverse Fourier Transform to its characteristic function. When $X$ has an exponential distribution, by definition its characteristic function (c.f.) is $$\phi_X(t) = E\left[e^{itX}\right] = \int_0^\infty e^{itx}e^{-x}\,\mathrm dx = \frac{1}{1 - it}.$$ Therefore the c.f. of the Laplace distribution is the product $$\phi(t) = \phi_X(t)\,\phi_{-X}(t) = \frac{1}{1-it} \cdot \frac{1}{1 + it} = \frac{1}{1 + t^2}.$$ Consequently, the c.f. of the sum of $n$ iid Laplace variates is $$\hat f_n(t) = \phi(t)^n = \frac{1}{(1+t^2)^n}.$$ Its inverse Fourier transform, normalized to integrate to unity, can be found in terms of the Bessel K function as > $$f_n(x) = \frac{|x|^{n-1/2} K_{n-1/2}(|x|)}{\sqrt{2\pi} 2^{n-1}\Gamma(n)}.$$ Its CDF can be expressed in terms of generalized hypergeometric functions. Because implementations of those in statistical software are uncommon, you can resort to numerical integration. It will work well because the Central Limit Theorem asserts $f_n,$ standardized to unit variance, will be close to a standard Normal distribution for large $n$ and numerical integration of that is straightforward. In fact, $n\ge 2$ already works decently: it's the gray density in this plot (the red density is the standard Normal). [](https://i.stack.imgur.com/I8VpO.png) Finally, you can find quantiles with a root finder to invert the CDF. But even the Normal approximation for $n=2$ yields the 95th percentile with a relative standardized error of less than $0.01.$ Using this Normal approximation (and bearing in mind that the variance of the Laplace distribution is $1+1=2$), the minimum value of $n$ to test the hypothesis $a=0$ compared to the alternative $a=\mu$ with $1-\alpha$ confidence and $\beta$ power will be close to $$n_{\min} \approx 2\left(\frac{Z_{1-\alpha} - Z_{1-\beta}}{\mu}\right)^2$$ where the $Z_{*}$ are Standard Normal quantiles. For instance, with $\mu=1/2$ and 95% confidence, 95% power this formula gives $86.58,$ which we round up to $87.$ The test will decide $a=0$ when the observed sum is less than the $0.95$ quantile of the sum of $87$ iid Laplace variables, equal (exactly) to $21.6883;$ and indeed, the sum of $87$ iid Laplace variables each offset by $1/2$ has a $4.90\%$ chance of exceeding this critical value. That is, the confidence and power are very close to the intended ones for this value of $n.$ --- The following `R` code computes the log density `f`, the cumulative probability function `pf`, and the quantile function `qf` using the straightforward numerical methods described above. The calculation of `f` for arguments near $0$ is fraught because the formula is a division of infinity by infinity; for small arguments it therefore uses an asymptotic approximation for $K_{n-1/2}.$ These functions still only work for $n$ between $1$ and c. $100,$ because above those values the Bessel function tends to overflow even for largish arguments. But for larger $n$ the Normal approximation is excellent. ``` f <- function(x, n, eps = 1e-3) { x <- abs(x) lb <- ifelse(x <= eps, (n - 3/2) * log(2) - (n - 1/2) * log(x) + lgamma(n - 1/2), log(besselK(x, n - 1/2, expon.scaled = TRUE)) - x) (1 - n) * log(2) + (n - 1/2) * log(x) + lb - lgamma(n) - log(2 * pi) / 2 } pf <- Vectorize(function(x, n, eps = 1e-3, ...) 
{ s <- sign(x) x <- -abs(x) v <- integrate(\(x) exp(f(x, n, eps)), -8 * sqrt(2 * n), x, ...)$value ifelse(s >= 0, 1 - v, v) }, "x") qf <- function(q, n, ...) { x0 <- sqrt(n) * min(log(q/2), log((1-q)/2)) uniroot(\(x) pf(x, n, ...) - q, c(x0, -x0))$root } ```
null
CC BY-SA 4.0
null
2023-05-15T22:12:58.063
2023-05-15T22:19:09.960
2023-05-15T22:19:09.960
919
919
null
615968
1
616196
null
2
65
I have $(Y_{n})_{n\in\mathbb{N}}$ as a seq. of positive, independent r.v.'s whose expectation is 1 $\forall n$. I have the canonical filtration $\mathcal{F}_{n}=\sigma(Y_{k},k\leq n)$. I have already shown that $\xi_{n}=\prod_{k=1}^{n}Y_{k}$ is an $\mathcal{F_{n}}$-martingale, and that $(\sqrt{\xi_{n}})_{n\geq 0}$ is a $\mathcal{F_{n}}$-supermartingale. The question proceeds to add an additional condition, $\prod_{k=1}^{\infty}\mathbb{E}(\sqrt{Y_{k}})=0$. My question: How do I use this additional condition to look at the convergence of $(\sqrt{\xi_{n}})_{n\in\mathbb{N}}$ and use this to show whether or not $(\xi_{n})_{n\in\mathbb{N}}$ is a uniformly integrable martingale? My idea: I would like to use a theorem stating that if $(X_{n},\mathcal{F_{n}})_{n\in \mathbb{N}}$ is an $L^{1}(\Omega,\mathcal{F},\mathbb{P})$-bounded martingale, then $\lim_{n\to\infty}X_{n}=Y$ a.s., where Y is some random variable in the same space. However, I'm not totally sure how to leverage the information I know to show this. I then would like to show this martingale is right-closable, and thus uniformly integrable. Any tips on showing this?
Uniformly Integrable Martingale
CC BY-SA 4.0
null
2023-05-15T23:51:33.547
2023-05-18T04:04:08.873
2023-05-18T03:36:49.723
384698
384698
[ "probability", "stochastic-processes", "martingale" ]
615969
2
null
615848
0
null
It doesn't make sense to me that you say that they are independent, but then say that the conditional probability is increasing. If they are independent, then the conditional probability must be constant. What you seem to be getting at is that you're treating $\theta$ as being a random variable, but also a parameter, and knowing about $X_i$ for $i<n$ tells you something about $\theta$ and thus about $X_n$. The probability that a variable with Bernoulli distribution of parameter $\theta$ will be $1$ is the same as the probability that a variable from a unit uniform distribution will be less than $\theta$, and if all of $X_1 ... X_{n-1}$ are less than $\theta$, then that means that $\max_{0<i<n}\{X_i\}<\theta$. If we label $\theta$ as $X_0$, then $\max_{0<i<n}\{X_i\}<\theta$ is the same as $\max_{0 \le i<n}\{X_i\} = X_0$. So now we have $r(n) = P(X_n < X_0 | \max_{0 \le i<n}\{X_i\} = X_0)$. If we take $X_0$ to also be from the unit uniform distribution, I don't think we lose much generality. But it seems to me that there's now a symmetry argument to be made that that is the same as $r(n) = P(X_n < \max_{0 \le i<n}\{X_i\} )$. After all, why should $ P(X_n < X_0 | \max_{0 \le i<n}\{X_i\} = X_0)$ be any different from $ P(X_n < X_0 | \max_{0 \le i<n}\{X_i\} = X_k)$ for some $k$ where $0<k<n$? And if the probability is the same regardless of the condition, then we should be able to drop the condition. But $P(X_n < \max_{0 \le i<n}\{X_i\} )$ is in turn the same as $P(\max_{0 \le i \le n}\{X_i\} \neq X_n)$. So now your question boils down to "Why is it that as we increase the number of variables, the probability that the last one is not the maximum is strictly increasing?" Or, alternatively, "Why is it that as we increase the number of variables, the probability that the last one is the maximum is decreasing?" > i don't know what's the "closed form" of r(i) for a given i If you're treating $\theta$ as being a random variable, I think you need to know its distribution to calculate the closed form of $r(i)$. But the above argument shows that if it's a unit uniform distribution, then $r(i) = 1-\frac 1i$ or $\frac{i-1}i$
null
CC BY-SA 4.0
null
2023-05-16T00:07:50.947
2023-05-16T00:07:50.947
null
null
179204
null
615970
1
null
null
1
6
What I did was compute new indicator variables using the cutoff scores for each of them, then add them together. The cutoffs were: - Mahalanobis: chi-square with df = 2 (number of predictors) at p < .001 = 13.82 - Cook's distance: 4 / (N - K - 1) = 4 / (643 - 2 - 1) = .00625 - Leverage: (2K + 2) / N = (2*2 + 2) / 643 = .00933 For excluding outliers, I want to exclude the cases with 2 or more violations across these (Cook's/Mahalanobis/leverage). Is this the usual practice? Is there a reference for this? https://www.youtube.com/watch?v=0vcUtzPDGrw I'm doing what she is doing here at 19:15. Syntax:
```
DATASET ACTIVATE DataSet1.
EXECUTE.
RECODE MAH_1 (13.82 thru Highest=1) (ELSE=0) INTO out_mah.
EXECUTE.
SORT CASES BY out_mah (A).
SORT CASES BY out_mah (D).
RECODE COO_1 (.00625 thru Highest=1) (ELSE=0) INTO out_cook.
EXECUTE.
SORT CASES BY out_cook (D).
RECODE LEV_1 (.00933 thru Highest=1) (ELSE=0) INTO out_lev.
EXECUTE.
SORT CASES BY out_lev (D).
EXECUTE.
COMPUTE out_tot=out_mah + out_cook + out_lev.
EXECUTE.
SORT CASES BY out_tot (D).
```
Reference for exluding cases when 2 of more of mahalanobis, cooks distance and lev are violated
CC BY-SA 4.0
null
2023-05-16T00:29:40.617
2023-05-16T00:29:40.617
null
null
386907
[ "outliers", "leverage", "cooks-distance", "mahalanobis" ]
615971
1
615983
null
9
773
Suppose I draw two sets of random integers from 1 to 100, each using simple random sampling without replacement. One is of size 50 and the other of size 25. How can I calculate the probability of the smaller set intersecting with the bigger one?
How do I calculate the probability that 2 sets of random integers intersect?
CC BY-SA 4.0
null
2023-05-16T00:38:10.057
2023-05-16T14:18:11.400
2023-05-16T14:18:11.400
919
388050
[ "probability", "distributions" ]
615974
1
null
null
1
45
General question: How do I fit a model to data when the data points have asymmetric error bars? What is the cost function I use to calculate residuals, and from that, how do I calculate confidence intervals/covariances for the fit parameters? More specific details: I have data that looks something like $$ Y \sim B(n, A \sin(2\pi f t + \phi) + O)/n $$ Where $B(n, p)$ is the binomial distribution with $n$ trials and success probability $p$. $A, f, \phi$, and $O$ are considered to be fixed parameters that I am trying to estimate. I sample $Y$ $n$ times for a variety of values of $t$ to get $t/y$ data that I want to curve fit to. I calculate each $y$ data point as the fractional number of successes at each $t$. I use the [Wilson Score Interval](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval) to calculate asymmetric error bars on the $y$ data points for each $t$. This differs from the naive estimation for the error on the mean for a binomial distribution when $p$ is close to 0 or 1 and the number of trials is small. When error bars are symmetric I curve fit this data by calculating residuals for each point, weighting by the error for each point. What do I do for the case of asymmetric error bars? I could weight by the average of the upper and lower error bars? I could weight by the upper error bar when the data point is above the model line and the lower error bar when the data point is below the model line but this would give discontinuous residuals.. What's the statistically sound approach here?
How to fit data with asymmetric error bars?
CC BY-SA 4.0
null
2023-05-16T01:32:41.437
2023-05-16T01:32:41.437
null
null
260369
[ "regression", "residuals", "curve-fitting", "weighted-data" ]
615975
1
null
null
0
9
I'm attempting to plot simple slopes of my neg binomial from PROC GLIMMIX. I've been using the resources from UCLA ([https://stats.oarc.ucla.edu/sas/seminars/analyzing-and-visualizing-interactions/](https://stats.oarc.ucla.edu/sas/seminars/analyzing-and-visualizing-interactions/)) but have hit some snags. My models have 6 mean-centered predictors including 2 of interest, treatment week (weeks 1-8) and affect. Neither main effect is sig but the interaction is (b = .02127, p = .02); in this case, affect is the moderator. I've used the UCLA resource to begin probing and plotting this interaction; however, I find that when I follow these steps, that I am unable to retrieve the predicted values at different levels of the interaction. After storing the interaction, I create estimates based upon the UCLA website (I'm wondering if part of the issue is that I'm applying a basic linear regression equation to a neg bin model...) ``` PROC PLM restore = AGGBXINT; estimate 'pred behavior_aggress, cweek = 3, ccu = 0' intercept 1 cweek 3 ccu 10 cweek*cpticu24 30 / e; RUN; ``` Next, probing this shows me that the simple slope is sig at 1SD below. ``` Title 'Recalling INT file and testing simp slopes'; PROC PLM restore = AGGBXINT; estimate 'minus 1 SD mod (cu)' cweek 1 cweek*cpticu24 -10.88, 'mean mod (cu)' cweek 1 cweek*cpticu24 .00, 'plus 1 SD mod (cu)' cweek 1 cweek*cpticu24 10.88 / e; RUN; ``` The issue follows: ``` Title 'creating dataset to plot interactions'; data timeout; do cpticu24 = -10.88, .00, 10.88; do cweek = 1 to 8 by 0.1; output; end; end; run; Title 'Setting layout of plot'; PROC PLM source = AGGBXINT; score data = timeout out = plotdata predicted = pred lclm = lower uclm = upper; RUN; ``` I get the warning: At least one variable used in the model is not present or has incompatible type in the WORK.TIMEOUT score data set. All computed score variables are set to missing in WORK.PLOTDATA. I've looked all over with little success and can't find a reason why this error is occuring with my variables. This happens when I use centered and non-centered values. I'm likely making a very stupid mistake, so open to all feedback!
PROC GLIMMIX Simple Slopes Neg Bin
CC BY-SA 4.0
null
2023-05-16T01:41:14.137
2023-05-16T01:41:14.137
null
null
351054
[ "data-visualization", "negative-binomial-distribution", "sas" ]
615976
2
null
615971
5
null
This is a simple problem, and it can be generalised easily to sets of any size. For more general application, suppose your initial population has $N$ distinct objects and your samples are denoted as $\mathscr{S}_1$ and $\mathscr{S}_2$, containing $n_1$ and $n_2$ elements respectively. Since each sample is taken without repetition of elements, the first sample will contain $n_1$ distinct elements and there will be $N-n_1$ unsampled elements. The probability of a non-empty intersection is equal to the probability that the second sample contains at least one of the elements of the first sample, which can be found using the [hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution) as follows: $$\begin{align} \mathbb{P}(\text{Samples intersect}) &= \mathbb{P}(\mathscr{S}_1 \cap \mathscr{S}_2 \neq \varnothing) \\[18pt] &= 1-\mathbb{P}(\mathscr{S}_1 \cap \mathscr{S}_2 = \varnothing) \\[18pt] &= 1-\text{Hyper}(0|N, n_1, n_2) \\[12pt] &= 1-\frac{N-n_1 \choose n_2}{N \choose n_2} \\[6pt] &= 1-\frac{(N-n_1)!}{n_2! (N-n_1-n_2)!} \cdot \frac{n_2! (N-n_2)!}{N!} \\[6pt] &= 1-\frac{(N-n_1)! (N-n_2)!}{N! (N-n_1-n_2)!}. \\[6pt] \end{align}$$ In your case you have $N=100$, $n_1=50$ and $n_2=25$, which gives: $$\begin{align} \mathbb{P}(\text{Samples intersect}) &= 1-\frac{(100-50)! (100-25)!}{100! (100-50-25)!} \\[6pt] &= 1-\frac{50! \ 75!}{100! \ 25!} \\[12pt] &= 1-5.212394 \times 10^{-10} \\[18pt] &\approx 1. \\[6pt] \end{align}$$
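For anyone who wants to verify the number, a small illustrative check with `scipy`:
```python
# P(samples intersect) = 1 - P(the second sample avoids all n1 elements of the first).
from scipy.stats import hypergeom

N, n1, n2 = 100, 50, 25
# hypergeom.pmf(k, M, n, N): M = population size, n = "success" elements (first sample), N = draws
p_intersect = 1 - hypergeom.pmf(0, N, n1, n2)
print(p_intersect)  # approximately 1 - 5.2e-10
```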
null
CC BY-SA 4.0
null
2023-05-16T02:17:03.993
2023-05-16T02:17:03.993
null
null
173082
null
615977
1
null
null
0
14
I am studying the Bootstrap method and encountered a question about how to estimate the rejection region. Suppose we have observed a sample $\{(X_i, Y_i)\}_{i=1}^n$ and I aim to investigate the linear correlation between $X_i$ and $Y_i$ without any information about the population. To be specific, I have to perform a hypothesis test with the null hypothesis that $X$ and $Y$ are independent. The rejection region is $W = \{(X, Y) \big| |\rho(X, Y)| > t\}$ where $\rho(X, Y)$ is the Pearson correlation coefficient. However, we don't know the value of the threshold $t$ for the significance level $\alpha = 0.05$ since the population is unknown. Thus I want to estimate $t$ by Bootstrap from the observed sample $\{(X_i, Y_i)\}_{i=1}^n$. The intuition is that the Bootstrap estimates the distribution of a statistic by leveraging resampling and the empirical distribution, so I try to get the empirical distribution by resampling $\{(X_i, Y_i)\}_{i=1}^n$. I came up with the following method. > Resample for $m$ iterations, calculate the Pearson coefficient for each iteration and denote them by $\rho_1, ... \rho_m$. Determine the threshold $t$ by the 95% quantile of $\rho_1, ..., \rho_m$. Do the hypothesis test with the determined $t$. So, does this procedure make sense, and how can I check the validity of the method?
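For concreteness, here is a direct sketch of the procedure as I described it above (only meant to make the question precise, not a claim that the procedure is valid):
```python
# Resample the observed pairs with replacement, compute Pearson's r each time,
# and take the 95% quantile of the resampled |r| values as the threshold t.
import numpy as np

def bootstrap_threshold(x, y, m=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    rhos = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)  # resample pairs with replacement
        rhos.append(abs(np.corrcoef(x[idx], y[idx])[0, 1]))
    return np.quantile(rhos, 1 - alpha)
```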
Estimate the threohold of reject region by Bootstrap
CC BY-SA 4.0
null
2023-05-16T02:57:59.097
2023-05-16T02:57:59.097
null
null
376068
[ "hypothesis-testing", "correlation", "bootstrap", "validity" ]
615978
1
null
null
1
13
I want to do a regression. the y series is provided, so does a dataframe(df) with columns x1,x2,x3...,xn. I would like to have the regression as follows: y=b0+b1x1+b2x2+b3x3+...+bnxn where bi(i=1,2,3...,n) is in (0,1) and the sum of b1+b2+b3+...bn<=1 I was told to use scipy.optimize.minimize, but reading through the guidance([https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize)), I still had no clue how to write the code. Thanks in advance!
regression with constraints on weights python
CC BY-SA 4.0
null
2023-05-16T03:23:03.673
2023-05-16T03:23:03.673
null
null
388055
[ "regression", "python", "optimization" ]
615979
1
null
null
1
9
I have process data as a time series (0 min, 1 min, ..., 999 min). I don't know what the variables mean; they are just labelled X1, X2, ..., X52. Each row is the data at that time. At a certain point the process becomes abnormal, and the data after that point are all abnormal. If the class value is 0 for normal data and 1 for abnormal data, the labels would look like below. 0 0 0 . . . 1 1 1 1 So I want to know when the value changes to 1. I will use change point detection (with the Python 'ruptures' library). In this situation, should I standardize my data? I wonder (1) whether standardization is needed or not, and (2) if standardization downgrades the performance of the model, why is that? (3) If standardization is not needed, is there any advantage to a log transformation of the data? As far as I know, a log transformation is used to make the distribution more similar to a normal distribution (correcting skewness). Is that true? I would appreciate answers to even part of these questions.
Do I need to standardize time series data in change point detection?
CC BY-SA 4.0
null
2023-05-16T03:27:37.387
2023-05-16T03:27:37.387
null
null
388056
[ "machine-learning", "time-series" ]
615980
1
null
null
2
18
I'm using a NN with a sigmoid activation for binary classification, and for the threshold I am using 0.5. So if the output < 0.5, it is classified as 0, and if the output >= 0.5, it is classified as 1. But I'm using a bias too at the same time. Is that OK? Because from what I know, the bias changes the threshold to 0.
Using threshold and bias at the same time in NN
CC BY-SA 4.0
null
2023-05-16T03:57:41.363
2023-05-16T07:57:23.527
2023-05-16T07:57:23.527
204068
388057
[ "neural-networks", "bias", "activation-function" ]
615982
1
null
null
0
6
When I adopt a case-control design for a risk prediction model of CC, how can I measure discrimination and calibration?
Measure discrimination and calibration in a case-control study
CC BY-SA 4.0
null
2023-05-16T04:28:54.590
2023-05-16T04:28:54.590
null
null
388058
[ "predictive-models", "model", "calibration", "risk", "precision" ]
615983
2
null
615971
5
null
A simple intuition for why you should get the hypergeometric distribution, as answered by Ben: the process is the same as having 100 white balls, colouring a random selection of 50 of them red, then putting all 100 in an urn and sampling 25 balls from that urn randomly without repetition. The number of red balls in this process will be hypergeometrically distributed. Without using the formula for the hypergeometric distribution you can compute this more straightforwardly by multiplying the individual probabilities of each single draw, i.e. the probability of drawing a red ball in the first draw, times the probability of drawing a red ball in the second draw, etc. $$P(\text{all red}) = \frac{50}{100}\frac{49}{99}\frac{48}{98} \dots \frac{26}{76} = \frac{50!75!}{25!100!}$$
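The displayed product is easy to evaluate directly (an illustrative check):
```python
# Multiply the 25 single-draw probabilities 50/100 * 49/99 * ... * 26/76.
# With 50 red and 50 white balls this also equals the probability that the second
# sample avoids the first one entirely, so its complement is P(intersection).
p = 1.0
for i in range(25):
    p *= (50 - i) / (100 - i)
print(p)      # approximately 5.2e-10
print(1 - p)  # approximately 1
```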
null
CC BY-SA 4.0
null
2023-05-16T05:07:12.293
2023-05-16T05:39:45.483
2023-05-16T05:39:45.483
164061
164061
null
615984
1
null
null
0
7
Can I use a weighted average of a few market indexes as one of the variables in a time series model? I saw this in a thesis I read recently, but I haven't seen people commonly doing it in the academic literature.
Can I use a weighted average of a few market indexes as one of the variables in a time series model?
CC BY-SA 4.0
null
2023-05-16T05:17:16.297
2023-05-16T05:17:16.297
null
null
361080
[ "time-series", "index-decomposition" ]
615985
2
null
615957
0
null
You should treat the post-stratification weights the same way you would treat stratification weights if you had oversampled in the design. It's not exactly right, but nothing with missing data ever is, and it is quite literally good enough for government work (based on analysis advice for public-use data from post-stratified/raked surveys).
null
CC BY-SA 4.0
null
2023-05-16T05:23:33.630
2023-05-16T05:23:33.630
null
null
249135
null
615986
1
null
null
0
24
A question about semantics really I guess. Say a clinical trial is conducted according to ITT principles and a Full Analysis Set is defined as (for example) subjects that take at least one dose of treatment and have one post-baseline measure. The sample size reported will be those subjects in each treatment group that have taken at least one dose of treatment and have one post-baseline measure, right? But am I correct in thinking that (unless you perform multiple imputation) this is really just an upper limit for your sample size, because there could be varying degrees of missingness on any of the post-baseline treatment effects you wish to estimate, which will in your statistical model automatically result in the dropping of those observations with missing data? e.g. Your FAS could be n = 100, but your 6 month treatment estimate may only be based on the data of 90 subjects (10 subjects dropped out before 6 months) and your 12 month treatment estimate may only be based on the data of 80 subjects (a further 10 subjects dropped out between 6 and 12 months). You don't tend to see these model-specific sample sizes reported in the published results (unless I am missing something)
Intention-To-Treat and reporting of sample size
CC BY-SA 4.0
null
2023-05-16T06:07:38.710
2023-05-16T06:07:38.710
null
null
108644
[ "treatment-effect", "clinical-trials" ]
615987
1
null
null
4
647
If a probability density function (created using kernel density estimation) exhibits peaks (not necessarily the mode), can we infer the presence of clusters or subgroups in the data?
How to interpret peaks in probability density function?
CC BY-SA 4.0
null
2023-05-16T06:29:56.763
2023-05-17T07:58:40.543
null
null
276238
[ "probability", "distributions", "density-function", "kernel-smoothing" ]
615988
2
null
615980
2
null
The bias is added before the activation, i.e. $y=\sigma(wx+b)$, so it is not directly associated with what you choose as the threshold. In addition, instead of thresholding your output, you might also consider threshold-free evaluation measures such as ROC AUC. A starting point might be [this](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models) thread.
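One way to see the relationship concretely: since $\sigma(z) \ge 0.5$ exactly when $z \ge 0$, thresholding the output at 0.5 is the same as thresholding the pre-activation $wx+b$ at 0, so the bias just shifts where the decision boundary sits in $x$. A tiny illustrative sketch:
```python
# Thresholding sigmoid(w*x + b) at 0.5 is equivalent to checking w*x + b >= 0,
# i.e. the bias shifts the decision boundary in x; it does not conflict with the 0.5 cutoff.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, b = 2.0, -1.0
x = np.linspace(-3, 3, 7)
z = w * x + b
print((sigmoid(z) >= 0.5) == (z >= 0))  # elementwise True
```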
null
CC BY-SA 4.0
null
2023-05-16T06:52:59.847
2023-05-16T06:52:59.847
null
null
204068
null
615989
1
null
null
0
16
This is the output of the tool r.regression.multi in QGIS. I am new to this and cannot find good sources on how to draw conclusions from it. Could someone therefore explain how adequately the regressions describe meaningful relationships among the data? X = precipitation and temperature (in the year 2050 under IPCC scenario SSP1), y = bumblebee occurrence (current distribution).
```
n=43621
Rsq=0.148596
Rsqadj=0.148556
RMSE=0.126423
MAE=0.102174
F=3806.322579
b0=0.670497
AIC=-180420.870163
AICc=-180420.870163
BIC=-180394.820282
--
predictor1=rast_645a2c86c33e256
b1=-0.000001
Rsq1=0.000008
F1=0.406184
AIC1=-180422.463954
AICc1=-180422.463954
BIC1=-180415.780660
--
predictor2=rast_645a2c86c359757
b2=-0.028587
Rsq2=0.148058
F2=7585.114031
AIC2=-173429.095957
AICc2=-173429.095957
BIC2=-173422.412663
```
[](https://i.stack.imgur.com/8qfXi.png) (properties table: range from red -0.0772 to blue 0.6045)
Interpreting r.regression.multi
CC BY-SA 4.0
null
2023-05-16T07:02:46.930
2023-05-16T15:41:18.167
2023-05-16T15:41:18.167
388066
388066
[ "regression", "multiple-regression", "linear-model", "interpretation", "regression-coefficients" ]
615990
1
null
null
0
20
I found the following while reading a paper [1] and got confused: > Replacing $k_{ij}$, $k{i.}$, respectively $k_{.j}$, by $k_{i,j}$, $k_{i.}$ and $k_{.j}$ provides us with estimates of entropy and mutual information. This is on page 235, right after equation 2.9. The authors seemed to write about replacing the set of variables with another, but there seems to be a typo. It might not be a big deal but I don't want to miss something important here. Does anyone know what might be going on that paragraph? ### References [1] ON ESTIMATION OF ENTROPY AND MUTUAL INFORMATION OF CONTINUOUS DISTRIBUTIONS [https://core.ac.uk/download/pdf/11473936.pdf](https://core.ac.uk/download/pdf/11473936.pdf)
What do the authors mean by this, regarding estimation of mutual information
CC BY-SA 4.0
null
2023-05-16T07:03:05.483
2023-05-16T07:03:05.483
null
null
388067
[ "entropy", "mutual-information" ]
615991
1
null
null
0
7
Suppose I survey a population asking each individual to recall as many digits of pi as they can. (Assume that they get them right but only to a point. Or, alternatively, we can just take up to their first error, or even just ask them to report how many digits they think they know.) On the one hand we might think that this is simply Bernoulli around, say, 5 digits - some folks will remember 6, some only 3, etc. But on the other hand there is a bias toward lower numbers of digits - everyone will recall at least 1 (3), and as the number of digits rises, exponentially fewer people will get to higher and higher numbers, and eventually you’ll reach zero, suggesting a Poisson or discrete power distribution. What is the appropriate distribution?
What is the correct model for sequence memory?
CC BY-SA 4.0
null
2023-05-16T07:06:24.813
2023-05-16T07:06:24.813
null
null
160479
[ "distributions", "sequence-analysis" ]
615992
1
null
null
0
19
I have a problem related to understanding a coefficient of a regression I have made. Some of the variables in my dataset looks like this: "SKDchange" is the variable of interest as dependent variable, while I struggle to interpret the diff1Equity variable. The regression in on changes-changes form where both the explanatory and dependent variable measures the change from t-1 to t. My regression code looks like this: reg SKDchange diff1Bokførtavkastning diff1Verdijustertavkastning diff1Diff diff1NPVrentegar diff1Duration diff1Innskutt_EK diff1Annen_EK diff1Tilleggsavsetning diff1Kursreguleringsfond diff1Premiefond diff1Equity diff1IG diff1HY diff1HF diff1PE diff1Eiendom i.Year i.id, vce(cluster id Year) Code: - Example generated by -dataex-. For more info, type help dataex clear input float(diff1Equity diff1IG diff1Duration SKDchange) . . . . .04754161 -.09403566 -.1599999 -.3 -.02064644 -.018740425 .54999995 -.14 .010813517 -.01095776 -.25 -.02 -.026318334 -.002717563 .71 -.29 .05093006 -.005377363 -.43 -.12 -.0011152852 -.013379592 .13000004 .07 -.024114784 .04315393 .029999923 .27 .02377115 -.04732417 .48 .16 -.030893333 .03741644 .22999993 .12 .02928274 .02414991 1.65 .3 .001455957 .0004799768 .26000005 .22 .004779475 .00558187 -.27000016 -.07 . . . . -.00512642 .004711266 -.10917584 -.07 -.063273735 .04655176 .26596162 -.18 .02563643 -.02877299 -.355 -.03 -.04105696 .15782078 -.2039706 .19 .009601564 -.010468778 .10676478 -.41 .02057601 .02437022 .11147057 .14 .02281441 -.012794206 -.011764593 .04 -.1235865 .11275683 1.35 .08 -.04838456 -.0012550277 -2.28 -.17 -.0041883523 .01323345 3.05 .43 .012058506 -.015719667 -2.0700002 .09 -.005533226 -.0021816033 1.45 .03 . . . . .06241516 -.05142799 -.19732134 -.04 -.0014952082 -.001770217 .9085715 -.91 .04270349 -.06381753 -.7399999 .52 .09990668 -.11121875 .32000005 -.2 -.02076794 -.02086097 .6300001 .24 .02764053 -.1214763 -1.75 .61 -.04209387 -.03557081 .03099999 -.29 .02711773 -.08156485 .2990001 -.05 -.04882512 .06585504 .28000006 -.12 -.04681846 .05470577 .2000001 .05 .11536504 -.07811742 .20000005 .09 .06964771 .013372925 -.8 -.15 . . . . -.0887506 .031709056 .8700001 -.13 -.08438469 -.4525758 .05999994 -.26 .02664301 -.007119911 -1 -.05 .0045617344 .02329633 0 .06 -.0021377348 .04008004 -1.43 .26 -.01407777 .03055194 -.16999993 -.04 .010314748 -.02585671 -1.76 .16 .02583596 .031915076 -5.960464e-10 .07 -.01983064 -.014577777 2.63 -.23 -.0041912617 .1929425 .13833335 .01 -.08911186 -.2353105 .06904765 .51 .032503515 .04405855 .286619 .01 . . . . .02059753 .00604589 -1.4834615 .33 -.012380703 -.06797267 .5134616 -.41 .016461588 .018263668 -2.8 .19 -.016667238 .001590435 .1 .3 -.024444895 .024264194 .75 .23 .007214402 -.013604298 1.56 -.51 .1661422 -.08117536 -.8400001 .09 .03991343 -.04297239 .10000005 -.21 .0945853 -.1083463 2.384186e-08 -.11 .09561862 -.10853375 -.09999998 -.2 -.3036081 .27654004 .4 .83 .0191042 -.04063289 .4999999 -.18 . . . . .028694145 -.07184994 -.4087499 1.04 -.11852485 .065436386 -.2100001 .83 .06047443 -.08638582 -.3 .08 -.00934424 .008003984 -.19999997 -.12 .05878297 .05886357 .19999996 -.66 -.03458966 .04444094 .2 -.02 -.01792603 -.0598651 -.6999999 .85 .009644688 .03987899 -.3 -.93 .004996592 -.006057756 -2.384186e-08 -.17 .0002828524 -.10505882 .4 .61 -.01864195 -.002772984 -.14 1.31 .031855654 .08679447 1.01 -1.65 . . . . 
.012399666 .06262513 -.19732134 .1 -.22266704 -.05568459 -.07587294 .19 .1560319 .11884157 .022385584 -.44 .03940175 .016805476 -.28460786 -.34 .017667945 -.04138368 -.3033333 .12 -.007013578 .0078530125 -.52000004 .56 -.06908407 -.072072 .6799999 .34 .06103085 .04783031 .2599999 .17 -.02623497 .07248117 -.1000001 .19 .04369584 -.03614791 .57000005 -.1 .02110185 -.04804264 1.0100001 -.05 .0099649 -.0017389223 .11999989 -.15 . . . . .08086738 -.014327994 -.10917584 -.58 -.12395176 -.05033401 .26596162 -.05 -.15088338 .13968238 -.355 .19 .04139739 .20281895 -.2039706 -.15 .035840575 -.16239823 .10676478 -.17 .11057469 .0003425852 .11147057 -.2 -.005472437 -.02754408 -.011764593 .17 .03027286 .05957837 -2.28 .13 end What I get out of the regression is shown in the attachment. What I don't totally is how to interpret the coefficients for diff1duration which is measured in years, and diff1Equity against the dependent variable SKDchange. (I do not understand diff1IG etc. either but the interpretation would be the same as for diff1Equity).The data is in the following scale. A change in the relative allocation to Equity of 1 percentage point will show as a diff1Equity of 0.01. The dependent variable will return 0.25 if the percentage point change in the dependent variable is 25. Any help is greatly appreciated. Is one unit here referring to a 1% increase or a one percentage point? [](https://i.stack.imgur.com/fQ6fL.png)
Interpretation of regression result
CC BY-SA 4.0
null
2023-05-16T07:07:04.777
2023-05-16T07:07:04.777
null
null
388068
[ "regression", "least-squares", "stata" ]
615994
2
null
615987
6
null
It depends on how you define or measure a peak. A local maximum of the KDE pdf could be the result of noise or represent one of many possible epicenters of a cluster. See the following population density map of Manhattan in 1990, where downtown Manhattan probably represents a single cluster, but contains multiple competing peaks. [](https://i.stack.imgur.com/UmW4o.png) Now, call this a hammer in search of a nail, but as an applied topologist I would say that topology can be used to define the extent to which a peak is responsible/participates in a unique subgroup or cluster. [Persistent homology](https://en.wikipedia.org/wiki/Persistent_homology) analyzes the various peaks (maxima and minima) and measures the extent to which they represent a unique topological feature, vs. are part of a larger feature (vast oversimplification here). This works in all dimensions, but here is a visualization of persistence in 1D: the right-hand side shows the derived features, whose coordinates are the birth and death scales of each feature. The point further from the diagonal represents the big isolated peak corresponding, potentially, to a robust subgroup. Another way of understanding persistence, and this can be made precise, is that the persistence of a topological feature, like an apparent cluster, captures the minimum amount of flattening needed to erase it. [](https://i.stack.imgur.com/QZxXa.png) Persistent homology, and applied topology more generally, has been applied extensively to clustering (at least in the literature), and provides, from my point of view, a very natural way of determining whether or not a peak corresponds to a real subgroup or something more transient.
null
CC BY-SA 4.0
null
2023-05-16T07:09:47.967
2023-05-16T07:09:47.967
null
null
366672
null
615995
2
null
599729
1
null
I've used a Network Navigator chart from the Power BI library. I cannot say that I've achieved all of the goals I wanted to; however, this chart is pretty easy to use, and it is also possible to enhance it by transforming your data a little using DAX or Python (to color the first customer in the chain, or to filter to only chains with more than one connection).

null
CC BY-SA 4.0
null
2023-05-16T07:21:41.653
2023-05-16T07:21:41.653
null
null
375942
null
615996
2
null
615924
0
null
The model as you have it doesn't make much sense. - why do you have linear terms and their interactions for length and width with species? The bases for the smooths already include those terms. - why are you setting k so low here - you're only allowing two degrees of freedom per smooth here. (Is this just a limitation of this example data set?) Making statistical decisions in the way you plan, using p values, isn't advisable; you need to consider the potential effect sizes too. So I would suggest that you visualise all smooths. I'm not sure what the problem is, because `plot(mod1)` will plot all the smooths, allowing you to see the average effect of the smooth for sepal length separately from the species-specific smooth of that same covariate. You can use the `select` argument to choose only the terms you want to plot (`select` uses the numeric order of the smooths as shown in the output from `summary()`). `gratia::draw()` will also allow the same plots. The smooth interaction complicates things a bit, as the effect of sepal length is contained in the average smooth and the interaction. The way to proceed here is to prepare a counterfactual and predict from the model at evenly spaced values of sepal length, while holding the sepal width fixed at some representative value. Then you also select which smooths you want in the counterfactual prediction: ``` # packages library("mgcv") library("gratia") ## data data(iris) ## model mod1 <- gam(Petal.Width ~ Species + s(Sepal.Length, k=4) + s(Sepal.Length, by=Species, k=4) + s(Sepal.Width, k=4) + s(Sepal.Width, by=Species, k=4) + ti(Sepal.Length, Sepal.Width, k=4) + ti(Sepal.Length, Sepal.Width, by=Species, k=4), method = "REML", data = iris) ``` Let's generate data for the counterfactual (here I'm fixing Sepal.Width at 2.5, as the median value of the interaction is ~entirely flat) ``` ds <- data_slice(mod1, Sepal.Length = evenly(Sepal.Length), Sepal.Width = 2.5) ``` This produces ``` > ds # A tibble: 100 × 3 Sepal.Length Sepal.Width Species <dbl> <dbl> <fct> 1 4.3 2.5 setosa 2 4.34 2.5 setosa 3 4.37 2.5 setosa 4 4.41 2.5 setosa 5 4.45 2.5 setosa 6 4.48 2.5 setosa 7 4.52 2.5 setosa 8 4.55 2.5 setosa 9 4.59 2.5 setosa 10 4.63 2.5 setosa # ℹ 90 more rows # ℹ Use `print(n = ...)` to see more rows ``` and we see another problem; because the model contains the parametric effects for the individual species mean petal widths, the intercept is for the reference level (with the default contrasts). So you have to either accept this and note that the plot of the smooth effect of Sepal.Length is offset by some vertical amount that would differ if you looked at a different species, or show the smooth for all three species, or change the contrasts such that there is a different interpretation of the intercept (whether that makes any sense depends on the specific contrasts you choose). This is important, because this counterfactual will include the estimated average petal width for the reference species but we are not going to show the species-specific smooth effects as per your request. 
Given that caveat, we proceed by using `predict`, and here it makes more sense to choose which terms to include than to spell out all the terms you want to exclude, which we do with the `terms` argument: ``` sm_want <- c("(Intercept)", "s(Sepal.Width)", "ti(Sepal.Length,Sepal.Width)") fv <- fitted_values(mod1, data = ds, terms = sm_want) ``` which produces ``` > fv # A tibble: 100 × 7 Sepal.Length Sepal.Width Species fitted se lower upper <dbl> <dbl> <fct> <dbl> <dbl> <dbl> <dbl> 1 4.3 2.5 setosa -0.222 0.212 -0.638 0.193 2 4.34 2.5 setosa -0.219 0.209 -0.628 0.191 3 4.37 2.5 setosa -0.215 0.206 -0.619 0.189 4 4.41 2.5 setosa -0.211 0.203 -0.609 0.187 5 4.45 2.5 setosa -0.207 0.201 -0.600 0.186 6 4.48 2.5 setosa -0.203 0.198 -0.591 0.185 7 4.52 2.5 setosa -0.200 0.196 -0.583 0.184 8 4.55 2.5 setosa -0.196 0.193 -0.574 0.183 9 4.59 2.5 setosa -0.192 0.191 -0.566 0.182 10 4.63 2.5 setosa -0.188 0.189 -0.558 0.182 # ℹ 90 more rows # ℹ Use `print(n = ...)` to see more rows ``` (Note that we ignored the `Species` parametric term here, so this counterfactual data set is for `setosa`; we would have to double check that this is the reference level - it is here - but you could add `Species = ref_level(Species)` to the `data_slice()` call above to be explicit about which species we are considering in the counterfactual.) Now we can plot ``` library("ggplot2") fv |> ggplot(aes(x = Sepal.Length, y = fitted)) + geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.2) + geom_line() ``` This produces [](https://i.stack.imgur.com/sCImL.png) The figure shows the smooth effect of sepal length on the response, petal width, at a sepal width of 2.5 for Iris setosa. I hope this answer explains how to plot the smooth you wanted. As for the model specification, I address that at the start: what you have is wrong, as you are including the linear components twice (hence, in part, the `NaN`s). This model (even corrected) doesn't answer your research question; you would need to fit a model without the species-specific effects (surfaces) and then compare the two models with a generalised likelihood ratio test or AIC for a test of the more complex model over the simpler model. Or you can try to estimate the difference between the estimated surfaces for pairs of species. Doing that in general is relatively easy (I have a blog post on the general idea [shown only for univariate smooths] [here](https://fromthebottomoftheheap.net/2017/10/10/difference-splines-i/)). gratia has a function `difference_smooths()` for this, but something isn't working with your specific model right now that I'll need to look at and fix.
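For the model comparison mentioned above, a rough sketch might be the following; the simpler model `mod0_ml` is illustrative only, and models compared on their fixed/smooth structure should be refitted with `method = "ML"` rather than REML.

```
## Sketch only: a model without the species-specific smooths, refitted with ML,
## compared against the full model via an (approximate) likelihood ratio test or AIC.
mod0_ml <- gam(Petal.Width ~ Species +
                 s(Sepal.Length, k = 4) +
                 s(Sepal.Width, k = 4) +
                 ti(Sepal.Length, Sepal.Width, k = 4),
               method = "ML", data = iris)
mod1_ml <- update(mod1, method = "ML")

anova(mod0_ml, mod1_ml, test = "Chisq")  # approximate generalised LRT
AIC(mod0_ml, mod1_ml)                    # or compare by AIC
```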
null
CC BY-SA 4.0
null
2023-05-16T07:29:05.163
2023-05-16T07:35:36.170
2023-05-16T07:35:36.170
1390
1390
null
615997
1
null
null
0
15
I am trying to train a Bayesian neural network using the ELBO loss function. \begin{align*} \mathcal{F}(D, \theta) = KL[q(w|\theta)||P(w)] - \mathbb{E}_{q(w|\theta)}[\log p(D|w)] \end{align*} My question is: if the model gets larger, is it possible for the KL-divergence term to become so large that it dominates the likelihood term? For a large network, $w$ has many components, and their KL-divergence contributions are summed, as the $\log$ is additive. But the likelihood loss does not grow in the same way when we have a larger network. So, when the model gets larger, the KL-divergence term in the ELBO will dominate. I therefore think the model will be trained so that it only regularizes, and does not actually train to increase accuracy. Indeed, in my implementation, my model has 2 Bayesian hidden layers, but it cannot be trained well. When I debug, the cross-entropy loss is up to $200$, but my KL-divergence is up to $30000$. So my network is trained to increase accuracy only at first, and after some epochs its accuracy becomes lower. Is there anything wrong with my reasoning or my code implementation? ``` def forward(self, x): # part of Bayesian Linear layer # Sample w, b w_eps = torch.randn((self.in_features, self.out_features)) b_eps = torch.randn(self.out_features) w_std = torch.exp(0.5 * self.w_logvar) b_std = torch.exp(0.5 * self.b_logvar) w = self.w_mu + w_std * w_eps b = self.b_mu + b_std * b_eps # Evaluate KL-divergence loss self.posterior = (-math.log(math.sqrt(2*math.pi))-torch.log(w_std+1e-8) -(((w-self.w_mu)**2) / (2 * w_std**2+1e-8))).sum() \ + (-math.log(math.sqrt(2*math.pi))-torch.log(b_std+1e-8) -(((b-self.b_mu)**2) / (2 * b_std**2+1e-8))).sum() self.prior = (-math.log(math.sqrt(2*math.pi))-(0.5 * w**2)).sum() \ + (-math.log(math.sqrt(2*math.pi))-(0.5 * b**2)).sum() self.kl_loss = self.posterior - self.prior return torch.mm(x,w) + b class BayesianNet(nn.Module): """ 2 layer Fully connected Network Linear -> ReLU -> Linear -> ReLU -> Linear """ def __init__(self, mode, input_size, hidden_size, output_size): super(BayesianNet, self).__init__() self.mode = mode self.flatten = nn.Flatten() # 2D to 1D array self.hidden1 = BayesianLinear(in_features=input_size, out_features=hidden_size[0], mixture_prior=mixture_prior, prior_init=prior_init) self.hidden2 = BayesianLinear(in_features=hidden_size[0], out_features=hidden_size[1], mixture_prior=mixture_prior, prior_init=prior_init) self.out = BayesianLinear(in_features=hidden_size[1], out_features=output_size, mixture_prior=mixture_prior, prior_init=prior_init) if mode == "Classification": self.last = nn.CrossEntropyLoss(reduction='sum') elif mode == "Regression": self.last = nn.MSELoss(reduction='sum') def forward(self, x): x = self.flatten(x) x = F.relu(self.hidden1(x)) x = F.relu(self.hidden2(x)) x = self.out(x) return x def kl_loss(self): return self.hidden1.kl_loss + self.hidden2.kl_loss + self.out.kl_loss def loss(self, x, y, samples=1): kl_loss = torch.zeros(samples) nll = torch.zeros(samples) for i in range(samples): pred = self.forward(x) kl_loss[i] = self.kl_loss() / 600 # I run MNIST with batch size 100, so there are 600 batches in total nll[i] = self.last(pred, y) return pred, (kl_loss.mean() + nll.mean())/x.shape[0] ```
Too large KL-divergence in training
CC BY-SA 4.0
null
2023-05-16T07:47:53.740
2023-05-16T07:53:03.743
2023-05-16T07:53:03.743
386753
386753
[ "bayesian", "optimization", "bayesian-network", "kullback-leibler" ]
615998
2
null
91238
0
null
As noted in the comments, especially by Sextus Empiricus, fold change is often depicted on a logarithmic scale (specifically log2). It appears to work well in your case because the variances are much more similar when shown on the log scale. I don't see any real evidence in the plots that skewness is a meaningful problem. There's a small handful of extreme values but those don't really call for a change in your visualisation. That said, there are probably nicer ways to plot this (while retaining the general design and the log2 scale). If you have a small number of points, you could just plot the data directly or perhaps use a beeswarm plot instead of using boxplots. If you have a large number, violin plots, ridgeline plots (as Shawn Hemelstrand's answer shows), or density plots would all be good options.
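To sketch what that might look like (assuming a data frame `df` with columns `group` and `fold_change`, and that the ggbeeswarm package is available):

```
library(ggplot2)
library(ggbeeswarm)   # assumed available, for geom_quasirandom()

## df is an assumed data frame with a grouping column and positive fold changes
ggplot(df, aes(x = group, y = fold_change)) +
  geom_boxplot(outlier.shape = NA, width = 0.5) +   # keep the familiar boxplot
  geom_quasirandom(alpha = 0.5, width = 0.2) +      # show the raw points, beeswarm-style
  scale_y_continuous(trans = "log2") +              # log2 scale, as discussed above
  labs(y = "Fold change (log2 scale)")
```

For larger samples, swapping `geom_quasirandom()` for `geom_violin()` keeps the same log2 axis.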
null
CC BY-SA 4.0
null
2023-05-16T07:50:16.907
2023-05-16T07:50:16.907
null
null
121522
null
615999
1
616006
null
1
25
I have longitudinal data with a pre and post condition in which 4 groups were used. The groups were a control group (CG), a control group with high protein intake (CGPR), a training group (IG) and a training group with high protein intake (IGPR). I was wondering if it is possible to fit an interaction term between protein intake and training with lmer(). My model would look like this: `final_model <- lmer(Outcome~Time+Training+ProteinIntake+Training:ProteinIntake +(1|ID), data=data, REML=F)`. My question is whether it is valid to compare CG and CGPR with IG and IGPR for the effect of training AND CG and IG with CGPR and IGPR for the effect of protein intake, while also including the interaction term Training x ProteinIntake. I am not really sure, because I will include the groups in multiple comparisons and I therefore think they will have shared variance. Thank you for any help!
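For clarity, a sketch of how I would derive the two crossed factors from the four groups, assuming a single `Group` variable coded with these level names (the variable name is only illustrative):

```
library(lme4)

## Assumed: a 4-level Group variable coded "CG", "CGPR", "IG", "IGPR"
data$Training      <- factor(ifelse(data$Group %in% c("IG", "IGPR"), "training", "control"))
data$ProteinIntake <- factor(ifelse(data$Group %in% c("CGPR", "IGPR"), "high", "normal"))

## Training * ProteinIntake expands to the main effects plus their interaction
final_model <- lmer(Outcome ~ Time + Training * ProteinIntake + (1 | ID),
                    data = data, REML = FALSE)
```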
Interaction term in a longitudinal study using LME-Model
CC BY-SA 4.0
null
2023-05-16T08:00:03.890
2023-05-16T08:47:10.163
null
null
388073
[ "r", "lme4-nlme", "interaction" ]
616000
2
null
615987
6
null
There are two ways to get peaks - If you are using kernel density estimation, then peaks may arise as artifacts of the method, especially when the kernel bandwidth is small and the sample size is small. - The original distribution truly has peaks. Example in R code [](https://i.stack.imgur.com/35AS9.png) ``` set.seed(1) x = seq(-3,3,0.01) y = rnorm(20,0,1) plot(density(y, bw=0.2), main = "small sample from single mode standard normal distribution \n fitted with too small kernel bandwidth") graphics::rug(y) lines(x,dnorm(x), lty = 2) legend(-2.8,0.55,c("true distribution", "kernel density estimation"), lty = c(2,1)) n = 500 y = rnorm(n,rbinom(n,1,0.5)*2-1,0.5) plot(density(y), main = "large sample from a truly bimodal mixture distribution") graphics::rug(y) lines(x,(dnorm(x,1,0.5)+dnorm(x,-1,0.5))/2, lty = 2) ``` If the peaks in the estimated density have widths similar to the kernel bandwidth, then it is difficult to differentiate between the two situations, and you should gather more data in order to figure out whether the peaks are artifacts or truly reflect the distribution of the population.
null
CC BY-SA 4.0
null
2023-05-16T08:13:14.600
2023-05-17T07:58:40.543
2023-05-17T07:58:40.543
164061
164061
null
616002
2
null
615929
0
null
First, when a model has interaction terms, the "significance" of the difference from 0 of an individual-predictor coefficient or of a lower-level interaction coefficient (in your case with a 3-way interaction, any 2-way interaction coefficient) depends on how the interacting predictors are coded or centered. See [this page](https://stats.stackexchange.com/q/65898/28500), for example. You thus shouldn't worry about their "significance" at all. Second, you don't even need a significant overall model to do valid pairwise comparisons, provided that you correct for the multiple comparisons. See [this page](https://stats.stackexchange.com/q/9751/28500), for example. So it's certainly OK to do pairwise comparisons of scenarios in your model. The danger with multiple interactions, however, is that you end up needing to correct for a lot of multiple comparisons and thus lose power for detecting truly significant differences. Think carefully whether you want to test all pairwise differences, or if there are a few specific comparisons that are of primary interest.
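As a sketch of one common way to get such adjusted pairwise comparisons in R, with `fit`, `A`, `B` and `C` as placeholders for your model and its interacting factors:

```
library(emmeans)

## One estimated marginal mean per "scenario" (combination of the three factors)
emm <- emmeans(fit, ~ A * B * C)

## All pairwise comparisons, adjusted for multiplicity
pairs(emm, adjust = "tukey")
```

If only a few comparisons are of primary interest, specifying them explicitly via `contrast()` keeps the multiplicity correction, and the loss of power, much smaller.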
null
CC BY-SA 4.0
null
2023-05-16T08:21:01.973
2023-05-16T08:21:01.973
null
null
28500
null
616003
1
null
null
1
31
My team has been implementing a survival-analysis approach to loan default prediction for about a year now. However, I believe their method to be a little "brute-force" and I haven't been able to compare it to anything I've read online about survival models. A typical loan dataset would have one row per application and an outcome flag indicating whether the customer defaulted within the first 12 months after application. It is a simple binary classification task that eventually aims to construct the probability of default (or PD) at 12 months. Survival models have some obvious advantages in this use case. The two biggest are that you can use applications that haven't been observed for the full 12 months (i.e. are right-censored) and that there is now a time component, which is useful for more detailed analysis and model monitoring. The way that my team has the dataset set up for "survival modelling" is like so (with a credit score shown to represent the features we have available at the time of application): |Customer |Credit score |Months since application |Late | |--------|------------|------------------------|----| |A |240 |1 |0 | |A |240 |2 |0 | |A |240 |3 |1 | |B |270 |1 |0 | |B |270 |2 |0 | |B |270 |3 |0 | |B |270 |4 |1 | |C |400 |1 |0 | |C |400 |2 |0 | As you can see, we duplicate the information available at the time of application (the credit score) by the number of months since application and include this time as another column in the dataset. The late flag is included only on the month that the customer actually defaults, and no further months are included for this customer. We then build a gradient-boosting classification model directly from this dataset, with no formal survival analysis package or equations being used. This means that we essentially view time in the same way as any other factor in the model, and can plot its effect on the outcome flag in the same way. I have doubts about this approach based mostly on gut feel and the fact I've not seen it done in this way anywhere in my research. Intuition tells me this approach is inefficient and doesn't properly consider the effects of censoring, but unfortunately, I am lacking the deep statistical understanding of survival modelling required to back this up. Can anyone see the faults with this approach, if any? What are the disadvantages of doing it this way, and how should I correctly implement a gradient-boosting model for survival analysis?
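To make the setup concrete, a rough sketch of the expansion, assuming a hypothetical input table `loans` with one row per application (the column names are only illustrative; it reproduces the example table above):

```
## Hypothetical input: one row per application, with observed follow-up and outcome
loans <- data.frame(customer        = c("A", "B", "C"),
                    credit_score    = c(240, 270, 400),
                    months_observed = c(3, 4, 2),
                    defaulted       = c(1, 1, 0))   # 1 = defaulted in the last observed month

## Expand to one row per customer-month (the layout shown in the table above)
expanded <- do.call(rbind, lapply(seq_len(nrow(loans)), function(i) {
  m <- loans$months_observed[i]
  data.frame(customer     = loans$customer[i],
             credit_score = loans$credit_score[i],
             month        = seq_len(m),
             late         = c(rep(0, m - 1), loans$defaulted[i]))
}))
expanded
```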
Why use a formal survival model rather than including time as a factor in my model?
CC BY-SA 4.0
null
2023-05-16T08:32:01.727
2023-05-16T13:43:55.293
null
null
388071
[ "classification", "survival", "boosting" ]
616004
1
616018
null
0
22
I only have access to the following summary data and not to the dataset itself. [](https://i.stack.imgur.com/K1jJv.png) In the example, the statistics on each line come from a univariate Cox regression modelling a cause-specific mortality. My goal is to understand whether a biomarker is associated with all-cause mortality overall, which implies that I will have to combine these models somehow. Possible things to consider: - p-value -> in the example I only show p-values <0.05, but pre-filtering might not be needed; - sample size (n) -> it varies slightly due to missing values (and I cannot use imputation methods because I lack access to the dataset); - effect size -> a given biomarker can be protective for some cause-specific mortalities and harmful for others. Is there a canonical way to deal with this? If not, is there some weighting heuristic? I am just doing exploratory analysis for hypothesis generation. The data come from metabolomics of a cross-sectional cohort, which was then matched to death registries.
Combining cause-specific mortality cox regressions into all-cause mortality
CC BY-SA 4.0
null
2023-05-16T08:38:53.370
2023-05-16T11:11:58.387
2023-05-16T09:59:55.617
388074
388074
[ "cox-model", "biostatistics", "mortality", "combining-p-values" ]
616005
2
null
594190
0
null
I think you can apply both. First, apply winsorizing to both the pre-experiment data and the post-experiment data. Second, use the winsorized pre-experiment data and the winsorized post-experiment data to calculate the theta of CUPED. If you do this, you will end up with a lower variance than winsorizing alone would give.
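A minimal sketch in R, assuming `pre` and `post` are the pre- and post-experiment metric for the same users and that the winsorization limits are arbitrary:

```
## Winsorize both periods at the 1st/99th percentiles (arbitrary choice)
winsorize <- function(x, p = c(0.01, 0.99)) {
  q <- quantile(x, p, na.rm = TRUE)
  pmin(pmax(x, q[1]), q[2])
}
pre_w  <- winsorize(pre)
post_w <- winsorize(post)

## CUPED: theta from the winsorized data, then the variance-reduced metric
theta <- cov(post_w, pre_w) / var(pre_w)
cuped <- post_w - theta * (pre_w - mean(pre_w))

var(post_w); var(cuped)   # cuped should have the smaller variance
```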
null
CC BY-SA 4.0
null
2023-05-16T08:44:30.930
2023-05-16T08:44:30.930
null
null
388076
null
616006
2
null
615999
0
null
Whether to include an interaction term depends on whether you suspect that the effect of training depends on protein intake, and vice-versa. Your design is similar to a classic two-way analysis of variance, which would include an interaction between the treatment groups to allow for that possibility. The comparisons you suggest make sense, but they shouldn't generally be labeled as the effect of either training or protein intake. With the `Training:ProteinIntake` interaction term, there no longer is a single "effect of training" or a single "effect of protein intake," as the effect of training depends on protein intake, and vice-versa. If the interaction is small you might be able to construct such individual estimates for training and protein intake, but explain carefully to your audience what they mean. It's not completely clear what you mean by "shared variance" here, as there are several sources of variance in the data and your model. There's an error variance in the linear model, a variance of intercepts among individuals captured in your random-effect term, and variances and covariances of the coefficient estimates from the model. In that sense there is always a "shared variance" in this type of modeling.
null
CC BY-SA 4.0
null
2023-05-16T08:47:10.163
2023-05-16T08:47:10.163
null
null
28500
null
616008
1
null
null
0
57
I was trying to rationalize the K-Means algorithm and came up with the following thoughts. Suppose we need to compute: $T=\min_x L(x)$ but we struggle because $L$ is complex. Suppose we find $L'$ s.t.: $\min_y L'(x,y)= L(x)$ SIDE NOTE: note that this framework defines a function $y=y(x)$ where the minimum is attained, which may be either easy or difficult to calculate, but in both cases we may get an improved formulation. We have $L'(x,y(x))=L(x)$ and $L'(x,y)\ge L(x)$ if $y \ne y(x)$. If this is the case then we can evaluate $T$ as: $T=\min_{x,y} L'(x,y)$ So we have rewritten the minimum of $L(x)$ as a minimum of an extended function $L'(x,y)$. It may be that: - $L'$ is easier to minimize, e.g. via block coordinate descent, than $L$ is to minimize directly. For instance, it may be easier to minimize $L'$ at fixed $y$ than $L$ directly: in one case we may get an analytical solution, but not in the other. Further, the loss function may be highly non-convex as $L(x)$ but convex as $L'$ at fixed $y$. - The partial derivatives of $L'$, in a differentiable setting, could be easier to evaluate than those of $L$. For example, I think this general framework applies to K-Means, where $x$ are the label assignments and $y$ the cluster centers. My question: - Do the trick and these thoughts make sense? - Are there more general principles underlying this technique? - What are some application examples to develop the intuition further? I hope the question is not too unclear; it is more brainstorming to develop intuition about algorithms...
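To make the K-Means example concrete, a small sketch of block descent on $L'(x,y)=\sum_i \lVert d_i - y_{x_i}\rVert^2$, alternating over the assignments $x$ and the centers $y$ (made-up data; this is essentially Lloyd's algorithm, and it ignores the empty-cluster corner case for brevity):

```
set.seed(1)
d <- matrix(rnorm(200 * 2), ncol = 2)          # made-up data
k <- 3
y <- d[sample(nrow(d), k), , drop = FALSE]     # initial centers

for (iter in 1:20) {
  ## Block 1: minimize L'(x, y) over the assignments x with y fixed
  dists <- as.matrix(dist(rbind(y, d)))[-(1:k), 1:k]
  x <- max.col(-dists)                         # nearest center for each point
  ## Block 2: minimize L'(x, y) over the centers y with x fixed (closed form: the means)
  y <- do.call(rbind, lapply(1:k, function(j) colMeans(d[x == j, , drop = FALSE])))
}

sum((d - y[x, ])^2)                            # L'(x, y) = L(x) at convergence
```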
General technique for loss function minimization
CC BY-SA 4.0
null
2023-05-16T08:54:23.560
2023-05-16T12:33:09.160
2023-05-16T12:33:09.160
70458
70458
[ "self-study", "k-means" ]
616010
1
null
null
0
12
I have a data set: x1, x2, x3, ..., x9 (non-normally distributed). I check it with a box plot and find one outlier, x9. Normally I would stop there, remove x9, and use the new data set. But when I check the new data set x1...x8 with a box plot, I find another outlier, x8. When I check the new data set x1...x7 again, no outlier is detected. My question is: in this case, are both x8 and x9 outliers, or only x9? For any dataset, should I repeat the box-plot test again and again until no outlier is detected?
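To illustrate the repeated procedure I mean (made-up numbers; `boxplot.stats()` applies the usual 1.5 x IQR rule):

```
x <- c(2.1, 2.3, 2.4, 2.5, 2.6, 2.7, 2.9, 4.8, 9.5)   # made-up x1..x9

repeat {
  out <- boxplot.stats(x)$out   # points beyond the 1.5 * IQR whiskers
  if (length(out) == 0) break
  x <- x[!x %in% out]           # drop them and re-check on the reduced data
}
x
```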
Should I check for and remove outliers from a dataset over multiple rounds?
CC BY-SA 4.0
null
2023-05-16T09:09:17.007
2023-05-16T09:36:55.170
2023-05-16T09:36:55.170
362671
388078
[ "standard-deviation", "outliers", "boxplot" ]
616011
1
null
null
0
33
I have a product substitutability matrix, where each cell has a number between 0 and 1 to represent how substitutable (i.e. how similar) the two products are. I then apply hierarchical clustering to this data in order to create a dendrogram. The R code is shown below: ``` # turn subs data into a 'matrix' with product 1 on rows, product 2 on columns and subs score in the body subs_matrix <- reshape(subs_scores, idvar = "p1_name", timevar = "p2_name", direction = "wide") #Removes the ID's as the rownames and replaces with the product 1 name column subs_matrix <- subs_matrix %>% remove_rownames %>% column_to_rownames(var="p1_name") #Removes the column prefix so the row/column names match colnames(subs_matrix) <- gsub("subs_score.", "", colnames(subs_matrix)) #reverses the subs scores so that lower is more substitutable and orders the rows/columns the same way subs_matrix<-1-subs_matrix[rownames(subs_matrix),rownames(subs_matrix)] #put the matrix through the hierarchical clustering method, using ward as the link c1 <- hclust(as.dist(subs_matrix), method = "ward.D2") #ward.D2 #scale the branches of the tree, only changes the aesthetics of the dendrogram for interpretation min_height<-min(c1$height) - 0.01 #Ensure tree has some height at its lowest point max_height<-max(c1$height) c1$height <- (c1$height-min_height)/(max_height-min_height) #plot the tree, cex is the label font size and hang = -1 roots the branches plot(c1,cex = 0.6, hang = -1) ``` I have been using this dendrogram to visually identify clusters, and then clusters within the clusters, in order to understand product groupings. The process of manually identifying the clusters is time-consuming, as I have dozens of matrices I need to do this for, so I'm looking for a method to automatically identify a sensible number of clusters, even if the method is imperfect. I could then use that number as my k value in a function such as `cutree` in R, and re-apply this method to find more clusters within each cluster. The best tool I've found for this seems to be NbClust in R: ``` library(NbClust) diss = as.dist(subs_matrix) nb.fit <- NbClust(diss, min.nc = 2, max.nc = 12, method = 'ward.D2', index = 'all') nb.fit["Best.nc"] ``` This uses 26 different methods to pick the optimal number of clusters, and then picks the modal result. Unfortunately, I usually get this error when running it: ``` Error in solve.default(W) : system is computationally singular: reciprocal condition number = 6.00125e-17 ``` My understanding of the mathematics isn't great, but this apparently arises because the matrix is not invertible, likely because some of the columns of my matrix are correlated. I don't know how this can be fixed, as often I will be comparing some very similar products, which will likely have highly correlated substitutability scores. I also found an automated way to get a number of clusters using the elbow method, but this number always seemed weird, and didn't match what could be seen in the dendrogram. Does anyone have any suggestions for how to get the number of clusters in an automated way? Is it possible I should be using a different clustering methodology altogether, rather than hierarchical clustering? All help is much appreciated. I have code for the hierarchical clustering in both R and Python, so I could use a solution in either language.
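For reference, an elbow-style check based on `cutree()` could look roughly like this (a sketch only, using the within-cluster sum of dissimilarities on the same tree; the exact criterion and range of k are arbitrary choices):

```
## Sketch: within-cluster dissimilarity for k = 2..12 clusters cut from the tree c1
dmat <- as.matrix(as.dist(subs_matrix))
ks   <- 2:12
wss  <- sapply(ks, function(k) {
  cl <- cutree(c1, k = k)
  sum(sapply(unique(cl), function(g) {
    idx <- which(cl == g)
    sum(dmat[idx, idx]) / (2 * length(idx))   # average within-cluster dispersion
  }))
})
plot(ks, wss, type = "b", xlab = "k", ylab = "Within-cluster dissimilarity")
```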
How to automatically choose the optimal number of clusters for a hierarchical clustering dendrogram? (R or Python)
CC BY-SA 4.0
null
2023-05-16T09:13:28.907
2023-05-16T09:13:28.907
null
null
377199
[ "r", "python", "clustering", "hierarchical-clustering" ]
616012
1
null
null
0
27
I am conducting a meta-analysis using the metafor package in R. More specifically, I use a random-effects model applying the rma.mv function. When I run my meta-regression with moderators, I always get a non-significant intercept. As I rarely see this in published meta-analyses, I am wondering whether something is wrong with my model. My effect sizes are standardized mean differences (Cohen's d) and I have continuous as well as categorical moderators. I also tried mean-centering the continuous moderators. This reduces the p-value of the intercept substantially, but it is still not significant. I would appreciate any help regarding this issue.
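For context, the centering looks roughly like this (`yi`, `vi`, `study`, `es_id`, `mod_cont` and `mod_cat` are placeholders for my actual variables, and the random-effects structure is just an example):

```
library(metafor)

## Placeholder names throughout; the nesting in `random` is only illustrative
res <- rma.mv(yi, vi,
              mods   = ~ scale(mod_cont, scale = FALSE) + factor(mod_cat),
              random = ~ 1 | study/es_id,
              data   = dat)
summary(res)
## With this centering, the intercept is the predicted SMD at the mean of mod_cont
## for the reference level of mod_cat.
```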
What does a non-significant intercept mean in a meta-regression using rma.mv ? Does it indicate problems? If so how to solve them?
CC BY-SA 4.0
null
2023-05-16T09:52:25.307
2023-05-16T09:52:25.307
null
null
388083
[ "r", "meta-analysis", "intercept", "metafor" ]
616013
1
null
null
0
36
I'm currently testing the Bayesian inference approach for a model given some data. However, I am running into a problem when estimating $\sigma$ (I am assuming a normal distribution of the data at each time step). I currently treat $\sigma$ in the same way as the other model parameters, i.e. I also estimate it using the MCMC algorithm, and I set the prior for $\sigma$ to a uniform distribution. When I run the MCMC algorithm, I notice that the sampled values of $\sigma$ are always very close to the upper bound, even when I set it to unrealistically high values. To test whether my model is working, I have also used a fitting approach, which gives really good results and a $\sigma=0.5$; but my posterior distribution hovers around 10 when I set the uniform prior to the interval $[0,10]$, and doesn't even come close to $0.5$. Why does this happen and how can I fix it? My only idea at the moment is to simply infer $\sigma$ deterministically given the current set of parameters, but this seems out of place in a Bayesian inference approach. Maybe a better prior distribution would help?
Estimating sigma in Bayesian inference
CC BY-SA 4.0
null
2023-05-16T09:54:19.307
2023-05-16T13:13:21.283
null
null
384491
[ "bayesian", "markov-chain-montecarlo" ]
616015
1
null
null
0
41
Suppose we observe some physical event and we measure multiple time series. For example, we have a time series for a concentration level, a time series for energy consumption, and so on. Now we want to process the time-series output in a machine learning model, but we do not want to use the actual time series. Instead we just want to use single points, such as the final time point. Of course we are going to lose some information, and hence the quality of our model will suffer. But my question is about how much information we actually lose. Can we justify, from a theoretical point of view, this abstraction of taking only single points? Since the time series come from a real physical process, there have to be some underlying dependencies. Hence the single end points cannot occur arbitrarily in every possible combination. More likely they lie in some lower-dimensional subspace, right? What I am most interested in is whether there is some theory or literature about this. Thanks in advance.
Theoretical justification for information loss
CC BY-SA 4.0
null
2023-05-16T10:56:43.827
2023-05-16T11:12:01.780
2023-05-16T11:12:01.780
364027
364027
[ "machine-learning", "time-series", "bayesian", "mathematical-statistics", "joint-distribution" ]
616016
2
null
615957
0
null
When working with survey data that includes post-stratification weights in SPSS, it is important to properly account for the weights in your statistical analysis to obtain accurate estimates and standard errors that reflect the population you are studying. Here's how you can apply post-stratification weights when running statistical tests in SPSS: Load the survey data: Start by loading your survey data into SPSS. Define the survey design: Use the Complex Samples module in SPSS to define the survey design. This includes specifying the sampling units, strata, and clusters that were used in the data collection. Additionally, specify the post-stratification weight variable as the weight variable in the survey design. Specify the statistical test: Choose the appropriate statistical test for your analysis based on the research question. This can include procedures like regression analysis, means comparison, or crosstabulation. Apply the post-stratification weights: In the analysis procedure, you need to instruct SPSS to use the post-stratification weights. You can do this by specifying the weight variable in the respective procedure options or commands. Run the statistical test: Execute the statistical test in SPSS with the specified weight variable. SPSS will incorporate the weights into the analysis, adjusting the estimates and standard errors to account for the post-stratification. Interpret the results: After running the statistical test, interpret the results while considering the adjusted estimates and standard errors obtained through the incorporation of post-stratification weights. These results will provide a more accurate representation of the population under study. By following these steps, you can appropriately apply post-stratification weights when running statistical tests in SPSS. It ensures that your analysis accounts for the complex survey design and provides valid estimates and inferences. Remember to consult the SPSS documentation or additional resources specific to your version of SPSS for detailed instructions on incorporating post-stratification weights in different analysis procedures.
null
CC BY-SA 4.0
null
2023-05-16T10:57:14.037
2023-05-16T10:57:14.037
null
null
388085
null