Columns (one value per line in each record below; null marks an empty field): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
616017
2
null
615669
0
null
This was solved by increasing the learning rate. The model previously got stuck in a local minimum.
null
CC BY-SA 4.0
null
2023-05-16T11:08:08.253
2023-05-16T11:08:08.253
null
null
26012
null
616018
2
null
616004
0
null
You can make an estimate based on the baseline levels of each risk and the risk ratios of each risk. $$\text{All cause risk} = \sum \text{individual risks}$$ With your table you can compute the average individual risks, as you have the number of events among a group of a particular size. You may also compute how those individual risks change as a function of the alanine factor. One problem is that your table does not show the distribution of alanine, which is necessary in order to compute the risks for the different groups. Example: say you have a group of $n_1$ smokers and $n_2$ non-smokers, among whom $y_1$ die of lung cancer, with a risk ratio of $3:1$, and $y_2$ die of diabetes, with a risk ratio of $3:4$. Then the all-cause death ratio depends on the sizes of $n_1$ and $n_2$. For example, say that with the above ratios you observe the deaths ``` Group deaths + cause 20000 total 200 cancers 20000 total 105 diabetes 20000 total 305 total ``` This could be a situation like ``` Group deaths + cause 10000 smokers 150 cancers 10000 non smokers 50 cancers 10000 smokers 45 diabetes 10000 non smokers 60 diabetes 10000 smokers 195 total 10000 non smokers 110 total ``` Or just as well ``` Group deaths + cause 5000 smokers 100 cancers 15000 non smokers 100 cancers 5000 smokers 21 diabetes 15000 non smokers 84 diabetes 5000 smokers 121 total 15000 non smokers 184 total ``` In the first case the all-cause risk ratio is $195/110 \approx 1.77$; in the second case it is $121/(184/3) \approx 1.97$. So depending on the number of smokers and non-smokers in the experiment, the same table corresponds to a different all-cause death risk ratio. The same principle applies to your case with alanine (and it will be even more complicated there, since smokers vs non-smokers is a dichotomous division, whereas alanine probably follows some continuous distribution).
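A minimal R sketch of the arithmetic in the two example splits above (the group sizes and death counts are the illustrative numbers used in the tables):

```r
# All-cause risk ratio for the two hypothetical splits above.
deaths_case1 <- c(smokers = 195, nonsmokers = 110)
n_case1      <- c(smokers = 10000, nonsmokers = 10000)
deaths_case2 <- c(smokers = 121, nonsmokers = 184)
n_case2      <- c(smokers = 5000, nonsmokers = 15000)

risk_ratio <- function(deaths, n) {
  (deaths["smokers"] / n["smokers"]) / (deaths["nonsmokers"] / n["nonsmokers"])
}

risk_ratio(deaths_case1, n_case1)  # ~1.77
risk_ratio(deaths_case2, n_case2)  # ~1.97
```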
null
CC BY-SA 4.0
null
2023-05-16T11:11:58.387
2023-05-16T11:11:58.387
null
null
164061
null
616020
1
null
null
0
18
I am working with surveillance data - I have information on all the people who had outcome Y in a given geographical area in a given time period. |id |outcome_y |age |sex |... | |--|---------|---|---|---| |01 |yes |27 |f |... | |02 |yes |84 |m |... | |... |... |... |... |... | |99 |yes |69 |m |... | Crucially, this table does not include any information on people who did not have outcome Y. Because the geographical area is fixed, I can use (external) population denominators (even age- and sex-stratified) to calculate an overall incidence risk or rate, as well as stratified incidence rates. I am confident that the surveillance data and the denominator data refer to the same population. |age |sex |population | |---|---|----------| |0-4 |f |425 | |0-4 |m |427 | |5-9 |f |538 | |... |... |... | For example, I can calculate the risk in males (of any age) by dividing: - the count of all cases with sex == "m" in my table, - by the total number of males in the external population denominator However, I'd also like to fit a model, so I can adjust for more than one covariate at a time - and get measures of effect like a risk ratio, along with confidence intervals. I've never come across a situation like this before - I'm used to tables that contain information on both cases and non-cases. Is this type of analysis possible, and if so, how could I do that?
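A minimal base-R sketch of the stratified calculation described in the question; the case counts below are invented for illustration, while the population denominators reuse the example numbers from the table above:

```r
# Stratified incidence risk = case count / external population denominator.
cases <- data.frame(age = c("0-4", "0-4", "5-9"),
                    sex = c("f", "m", "f"),
                    n_cases = c(3, 5, 2))          # made-up counts of outcome Y
pop   <- data.frame(age = c("0-4", "0-4", "5-9"),
                    sex = c("f", "m", "f"),
                    population = c(425, 427, 538)) # external denominators

rates <- merge(cases, pop, by = c("age", "sex"))
rates$risk <- rates$n_cases / rates$population
rates
```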
Fitting a model with only aggregate data for people without outcome
CC BY-SA 4.0
null
2023-05-16T11:16:35.983
2023-05-16T11:16:35.983
null
null
354235
[ "regression", "epidemiology" ]
616021
1
null
null
0
57
Given the joint probability distribution of three dependent continuous random variables, I want to find a formula to compute the probability distribution of the product of these 3 random variables. I am currently using a formula but I am not sure it is correct. To find my formula I used this relevant thread, but it is for 2 dependent random variables ([Probability distribution of the product of two dependent random variables](https://stats.stackexchange.com/questions/482248/probability-distribution-of-the-product-of-two-dependent-random-variables)). What is it in the case of 3 dependent random variables? In mathematical terms: let $E$ be a random variable such that $E=ABC$, where $A$, $B$ and $C$ are dependent random variables. Is it correct to use the following formula to compute the pdf of $E$? $f_E(e)=\int^\infty_{-\infty} \int^\infty_{-\infty} f_{A,B,C}(a,b,e/ab) \, \frac{1}{|ab|} \, da \, db$ or in another notation: $P(ABC=e)=\int^\infty_{-\infty} \int^\infty_{-\infty} P(A=a,B=b,C=e/ab) \, \frac{1}{|ab|} \, da \, db$ where $e/ab$ means the value of $e$ divided by $ab$, and $1/|ab|$ means one divided by the absolute value of the product $ab$.
Probability distribution of the product of three dependent continuous random variables
CC BY-SA 4.0
null
2023-05-16T11:18:14.237
2023-05-17T23:20:43.863
2023-05-17T15:05:41.697
374799
374799
[ "probability", "distributions", "random-variable", "joint-distribution", "multivariate-distribution" ]
616022
1
616102
null
2
23
I run the following GAM model in R using the `mgcv` package: ``` my_fit <- mgcv::gam(mpg ~ s(disp, bs = "ts", k = 15), gaussian(), mtcars, method = "REML") ``` As far as I understand the help page of `mgcv::smooth.construct.ts.smooth.spec`, setting `bs = "ts"` shrinks the coefficients of the fit towards zero. Looking at `summary(my_fit)` returns ``` Approximate significance of smooth terms: edf Ref.df F p-value s(disp) 4.251 14 14.23 <2e-16 *** ``` so the effective degrees of freedom are 4.251. Do I understand this correctly: out of the `k = 15` basis functions, about 9 are shrunk to zero ($15 - 1 - 9 = 5 \approx 4.251$)? However, `my_fit$coefficients` returns 15 coefficients for `s(disp)`, and most of them are quite far from zero. I would have expected either only about 5 coefficients, or 15 coefficients with about 10 close to zero. Where do I see the shrinkage? Or is there no shrinkage going on?
GAM Parameter Estimates and Shrinkage
CC BY-SA 4.0
null
2023-05-16T11:21:26.663
2023-05-17T09:56:16.080
2023-05-17T09:56:16.080
359647
359647
[ "generalized-additive-model", "mgcv" ]
616023
2
null
615677
1
null
So, you won't want to only use the available visits as you describe. Most work transforms the data into monotone missingness, as it is easier to work with. To do this, you would censor someone when they miss a visit for all subsequent visits. Depending on the number of intermittent missing visits, this can result in a substantial amount of missing data. So whether to use this approach depends on how many observations will be available at the final time point. A more recent proposed alternative is IPW for nonmonotonic missing / censoring. These weights are described in [this paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5860553/) and a more technical discussion in [this paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6051732/). I know someone who is doing simulation work on this estimator but it is not published yet (can follow up once it is). In that work, they find that the UMLE works fine in most settings (i.e., you don't have to implement the Bayesian version).
null
CC BY-SA 4.0
null
2023-05-16T12:04:08.803
2023-05-16T12:04:08.803
null
null
247479
null
616024
1
null
null
0
4
I'm facing a mathematical challenge related to selectivity freedom under overlapping conditions. To make it more relatable, let's consider an analogy. Imagine a farmer who has just completed his apple harvest. The harvested apples can be either green or red. Additionally, some apples, irrespective of their color, may have worms. Now, the farmer wants to select a group of apples from his harvest that satisfies certain conditions related to their color (green or red) and whether or not they have worms. Until now, I've been able to calculate the exclusion of apples considering only one characteristic at a time - color or worm-status. For instance, if the farmer has a basket comprising 50 green apples and 50 red apples, and he needs a minimum of 75% green apples in his selection, the exclusion is calculated as follows: Selectivity freedom = (Green Min Ratio * Green Ratio in Basket) + (Residual Ratio * Any Apple in Basket Ratio) = 0.75 * 0.5 + 0.25 * 1 = 0.625 So, the exclusion is 1 - 0.625 = 0.375 or 37.5% However, now we have a more intricate situation. The farmer's basket has 40 green, worm-free apples, 10 green apples with worms, 20 red, worm-free apples, and 30 red apples with worms. The challenge is to determine the exclusion when the conditions overlap, like needing a minimum of 75% for green apples and 50% for worm-free apples. Currently, I'm trying to address this by transforming the problem into a mathematical optimization scenario. I'm using linear programming to maximize the number of apples selected while satisfying all conditions, so as to minimize the exclusion. Here's a brief overview of my approach: x1 = Green and worm minimum x2 = Green and free-worm minimum x3 = Red and worm minimum x4 = Red and free-worm minimum Green_worm_pct = 0.1 (for this example) Green_free_worm_pct = 0.4 (for this example) Red_worm_pct = 0.3 (for this example) Red_free_worm_pct = 0.2 (for this example) Objective function: Green_worm_pct * x1 + Green_free_worm_pct * x2 + Red_worm_pct * x3 + Red_free_worm_pct * x4 Constraints: x1 + x2 >= Green_min x3 + x4 >= Red_min x1 + x3 >= free_worm_min x2 + x4 >= worm_min x1 + x2 + x3 + x4 = 1 Bounds: All the variables (0,1) The problem is then to find the X[i]s that maximize the sum of X[i] for all apples 'i', while satisfying the constraints. This corresponds to maximizing the number of apples selected while satisfying all conditions. I would appreciate any thoughts, suggestions or alternative methods to address this issue. Thanks in advance!
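For what it's worth, the linear program written out above can be transcribed more or less verbatim with the `lpSolve` package; the right-hand-side minima below (Green_min = 0.75, free_worm_min = 0.5, the other two set to 0) are illustrative assumptions, and the constraint rows simply mirror the list in the question:

```r
# Transcription of the LP as stated above, using lpSolve.
library(lpSolve)

obj <- c(0.1, 0.4, 0.3, 0.2)   # Green_worm_pct, Green_free_worm_pct,
                               # Red_worm_pct, Red_free_worm_pct

con <- rbind(c(1, 1, 0, 0),    # x1 + x2 >= Green_min
             c(0, 0, 1, 1),    # x3 + x4 >= Red_min
             c(1, 0, 1, 0),    # x1 + x3 >= free_worm_min
             c(0, 1, 0, 1),    # x2 + x4 >= worm_min
             c(1, 1, 1, 1))    # x1 + x2 + x3 + x4 = 1
dir <- c(">=", ">=", ">=", ">=", "=")
rhs <- c(0.75, 0, 0.5, 0, 1)   # assumed: Green_min = 0.75, free_worm_min = 0.5

sol <- lp("max", obj, con, dir, rhs)
sol$solution   # optimal x1..x4
sol$objval     # value of the objective at the optimum
```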
Selectivity freedom with overlapping conditions
CC BY-SA 4.0
null
2023-05-16T12:20:13.890
2023-05-16T12:20:13.890
null
null
388092
[ "feature-selection", "model-selection", "degrees-of-freedom", "overlapping-data", "linear-programming" ]
616025
1
null
null
0
57
I have continuous data for x (for example "time") and y with several groups. I would like to show/test that the slope of groupC changes over time (is positive and rises) while groups A and B do not. Below is an MRE. I was thinking about using a GAM, but could use some inspiration on how to formulate this in R. Please note that my real data are a lot more variable than the example, in case that is relevant, so it's really about using some sort of smoother and then using this as a basis for assessing differences in slope. ``` library(tidyverse) set.seed(1) data <- rbind(data.frame(x=seq(1, 100), y=rnorm(100, 1, 5), group="A"), data.frame(x=seq(1, 100), y=rnorm(100, 3, 5), group="B"), data.frame(x=seq(1, 100), y=jitter(seq(1, 10, length=100)^3, 1000)/20, group="C")) data %>% ggplot(aes(x=x, y=y, color=group)) + geom_smooth() + # any smoother is fine, loess is just an example for viz geom_point() ``` [](https://i.stack.imgur.com/7XPB6.png)
Assessing differences in slope of a loess (or any smoother) fit
CC BY-SA 4.0
null
2023-05-16T12:52:21.600
2023-05-18T16:17:56.610
2023-05-16T13:02:52.800
191429
191429
[ "smoothing", "loess" ]
616026
1
null
null
0
27
I have conducted a survey with several scale questions ranked 1-10 on a sample of n=18. I administered the survey before and after applying a specific treatment. However, I'm unsure about how to analyze the data and determine the minimum sample required to detect a significant change between the survey results before and after the treatment. After considering the circumstances, I believe performing a Wilcoxon signed-rank test for paired samples would be appropriate, since the data are not normally distributed and the same sample was used for both surveys. I plan to utilize the R functions wilcox.test() and n.wilcox.ord() for this purpose. Do I have to perform a test for each question? Do I have to take into account the survey median values to calculate the effect size? Could you please provide me with an example and a plot to visualize the results of the two surveys? Thank you very much! :) EI_Stats
Assessing the Impact of a Specific Treatment on Survey Results: A Wilcoxon Signed-Rank Test Approach on Paired Samples
CC BY-SA 4.0
null
2023-05-16T12:55:50.747
2023-05-17T09:05:53.720
2023-05-17T09:05:53.720
387468
387468
[ "nonparametric", "survey", "paired-data", "non-independent", "wilcoxon-signed-rank" ]
616028
1
null
null
0
26
I am new to statistics and I was hoping someone could guide me on what statistical tests I should complete in R on the following dataset. I am trying to investigate the relationship between births and the issuing of a specific drug to treat postnatal depression. The data set was created as follows: - I obtained the number of births in Northern Ireland in 2021 grouped by Month of birth and local council area (LCA). - I also obtained a dataset that contained a list of all the times a particular anti-depressant (used to treat postnatal depression) was issued by doctors in Northern Ireland in 2021. - I grouped and filtered the prescriptions by Month and Local Council Area and created a new file This resulted in a dataset that contained: - Month of 2021 - Local Council Area (LCA) - Number of prescriptions of this anti-depressant per council area, per month - Number of live births in a council area per month Below is a sample of the table: |Month.of.Birth.in.2021 |LCA |Number.Births |Number.Prescriptions | |----------------------|---|-------------|--------------------| |January |Antrim and Newtownabbey |135 |4300 | |January |Armagh City, Banbridge and Craigavon |215 |7571 | |January |Belfast |333 |16866 | |January |Ards and North Down |118 |4423 | |February |Antrim and Newtownabbey |105 |4118 | |February |Armagh City, Banbridge and Craigavon |205 |7295 | |February |Belfast |301 |16251 | Full dataset link is here: [https://drive.google.com/file/d/1Qvm34S76g2oyXuf44PY2-xnvxvHg9Z7T/view?usp=sharing](https://drive.google.com/file/d/1Qvm34S76g2oyXuf44PY2-xnvxvHg9Z7T/view?usp=sharing) I have completed many visualisations of the data but I am unsure of the following: - which statistical test to use - why this test should be used - what hypothesis to test with this statistical test I have looked at numerous videos and blogs, but am unsure how to apply this knowledge to this particular dataset. Any help or guidance would be greatly appreciated. I have tried a t-test (one- and two-sample) and an F-test to compare two variances; however, I am unsure what the results mean for the data that I am using.
Appropriate statistical tests (hypotheses) for categorical and numeric values on a specific dataset in R
CC BY-SA 4.0
null
2023-05-16T12:54:38.320
2023-05-16T13:21:45.060
2023-05-16T13:21:45.060
69508
388094
[ "r", "anova", "t-test" ]
616029
2
null
616013
3
null
A completely flat inverse-Gamma, i.e. letting the shape and scale tend to zero, will often lead to the same problems in practice. Gelman's 2006 paper on prior distributions for variance parameters is a good place to start ([link](http://www.stat.columbia.edu/%7Egelman/research/published/taumain.pdf)). It illustrates some of the issues with uninformative priors, and suggests some better solutions.
null
CC BY-SA 4.0
null
2023-05-16T13:13:21.283
2023-05-16T13:13:21.283
null
null
238285
null
616030
2
null
615951
0
null
Okay, maybe I'm misunderstanding how your data are arranged, but here's my attempt at an answer. This is based on the understanding that your data are arranged something like this: Survey 1 has a set of Likert responses, with 4 questions for each of 2 categories (knowledge and attitude), and is administered before the intervention; Survey 2 asks the same questions afterwards. You're interested in the individual's change in response, and you can control for this because you have individual identity as one of your variables? In this case, rather than attempt to run everything at once in one analysis, I would break this out into the individual questions I was trying to address and use a specific test to answer each question, e.g.: (1) Does knowledge improve after the intervention? Use a paired t-test of the average response to the knowledge questions before and after the intervention. The same idea applies for attitude. (2) Does knowledge show a greater improvement than attitude? For each student, calculate the difference between their knowledge scores in survey 1 and survey 2, then calculate the difference between their attitude scores in survey 1 and survey 2. Then perform a paired t-test to determine whether knowledge or attitude showed the greater gains while controlling for individual identity. (3) Is an increase in knowledge correlated with an increase in attitude? Same setup as above, but you're running a Spearman rank correlation (or Wilcoxon) instead, where delta knowledge is on the y-axis, delta attitude is on the x-axis, and each point is an individual's average score. This is just my opinion, but that's because I don't see the utility of lumping all of these analyses together into one assessment that's more difficult to understand. Also, don't commit yourself to a non-parametric test just because it's "safer". Run the tests for normality so that you meet the assumptions of whatever test you're doing and choose the appropriate test (parametric/non-parametric) on that basis.
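To make the three comparisons concrete, here is a minimal base-R sketch with simulated per-student average scores standing in for the real survey data:

```r
# Simulated stand-in data: one row per student, average score per category.
set.seed(1)
d <- data.frame(know_pre  = rnorm(30, 5.0, 1),
                know_post = rnorm(30, 5.8, 1),
                att_pre   = rnorm(30, 4.5, 1),
                att_post  = rnorm(30, 4.9, 1))

# (1) Does knowledge improve after the intervention?
t.test(d$know_post, d$know_pre, paired = TRUE)

# (2) Does knowledge improve more than attitude? Compare the gain scores.
d$d_know <- d$know_post - d$know_pre
d$d_att  <- d$att_post  - d$att_pre
t.test(d$d_know, d$d_att, paired = TRUE)

# (3) Is the gain in knowledge correlated with the gain in attitude?
cor.test(d$d_att, d$d_know, method = "spearman")
```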
null
CC BY-SA 4.0
null
2023-05-16T13:33:24.880
2023-05-16T13:33:24.880
null
null
176966
null
616031
1
null
null
2
44
I am currently working on a binary classification problem with an imbalanced dataset (n=3419 and 69:31). However, based on the business expertise of the users, they have generated a rule-based label based on two features - f1 and f2. Let's assume the label was created using the dummy formula - if (2f1+10f2/f2)*100>60%, then 1 else 0 Now, I extracted 4 more new features on top of f1 and f2, and we are trying to identify the other characteristics or patterns that are not uncovered through the rule-based label. So, my feature set includes 6 features - f1,f2,f3,f4,f5 and f6. So instead of passing f1 and f2 directly (as they were used in the rule-based label), I created a new feature f7=f1/f2 and passed it as input to the model, and it worked fine. Just to make sure that there is no bias or overfitting due to these features, I did the below assessments of my model a) train test split validation b) 15-fold CV (with random train test split) c) time-test split validation d) 15-fold CV (with time test split) In all these experiments, I didn't see any signs of overfitting, as there was no drastic drop in performance metrics. But yes, f7 ranks as the most important feature, solely driving the performance to an f1-score of 87%, whereas all the other features such as f3,f4,f5 and f6 take the f1-score up to 93% for test data. If I exclude `f7` completely, my f1-score drops to 63%. My question is mainly on a) whether it is okay to include `feature 7 (f7)` even though its components `f1 and f2` were used in the formula for the output label b) apart from the above assessment approaches, is there any other validation that I can do to make sure that including this feature is indeed valuable and not bias/overfitting c) can I use `f1` and `f2` directly in the model rather than computing their ratio and storing it as `f7`? holdout metric with just f7 (=f1/f2) f1-score in holdout - 85.47% mcc - 80.28% avg_precision score - 94.2% balanced_accuracy - 90.8% accuracy - 92.13 with features f7,f3,f4,f5,f6 f1 - 89.43% mcc - 85.6% avg precision score - 96.1% balanced_accuracy - 92.62% accuracy - 94.4% 15 fold CV metric F1-score with just f7 (=f1/f2) Mean CV Score: 0.8716225330191808 with features f7,f3,f4,f5,f6 Mean CV Score: 0.9025171992337288 update to model performance based on comments model 1 only with f1,f2,f7 15 fold CV F1-score - 87.87% hold out metric accuracy - 93% balanced accuracy - 91.7% f1-score - 87.2% MCC - 82.4% avg precision score - 94.39% model 2 with features from f1 till f7 15 fold CV F1-Score: 88.99% holdout metric accuracy - 94.1% balanced accuracy - 92.4% f1-score - 89% MCC - 85.0% avg precision score - 95.85% feature importance plot [](https://i.stack.imgur.com/XVkuN.png)
Can variables used for rule based labeling be treated as input features?
CC BY-SA 4.0
null
2023-05-16T13:43:55.060
2023-05-16T15:42:43.750
2023-05-16T15:42:43.750
241460
241460
[ "machine-learning", "classification", "feature-selection", "data-mining", "feature-engineering" ]
616032
2
null
616003
2
null
For each time period, you show one row of data for each individual at risk of the event during that period, with covariate values in place for the individual during that period and an indicator of whether the event occurred. That's a standard "long form" or "person-period" data format used for what's called "discrete-time survival analysis." That allows you to convert survival analysis into a binomial regression when you have a limited number of time periods, as in your case. Right censoring of event times is handled just as it is in continuous time: an individual's covariate values are included in the analysis while the individual is at risk, but aren't included thereafter. You are free to model the `time` aspect however you wish, as categorical or with an underlying continuous function. From that perspective, using gradient-boosting instead of a standard binomial regression is as appropriate here as it is for any binary outcome. It's best to use a continuous measure rather than yes/no classification accuracy to guide the model, but otherwise there's no inherent problem. [This page](https://stats.stackexchange.com/q/57191/28500), among others on this site, outlines the approach and includes some links to references. There's a 2016 text by Tutz and Schmid, [Modeling Discrete Time-to-Event Data](https://www.springer.com/us/book/9783319281568), that goes into much detail, although the reading is rough in some places.
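As a concrete illustration of the person-period format and the binomial regression it enables, here is a small simulated sketch (the hazard model and covariate are made up):

```r
# Simulate person-period ("long form") data: one row per individual per
# period at risk; right censoring just means no rows after the last
# observed period.
set.seed(1)
n <- 50; K <- 4                      # 50 individuals, 4 discrete periods
x <- rnorm(n)                        # a made-up covariate
pp <- do.call(rbind, lapply(seq_len(n), function(i) {
  h    <- plogis(-2 + 0.8 * x[i])    # per-period event probability
  ev   <- rbinom(K, 1, h)
  last <- if (any(ev == 1)) which.max(ev) else K   # censored at K if no event
  data.frame(id = i, time = seq_len(last), x = x[i],
             event = c(rep(0, last - 1), ev[last]))
}))

# Discrete-time survival analysis as a binomial regression; `time` is
# modelled as categorical here, but a continuous function would also work.
fit <- glm(event ~ factor(time) + x, family = binomial, data = pp)
summary(fit)
```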
null
CC BY-SA 4.0
null
2023-05-16T13:43:55.293
2023-05-16T13:43:55.293
null
null
28500
null
616033
1
null
null
0
54
Let $\{ x^{(1)}, \ldots, x^{(M)}\}$ be $M$ samples from a $n$-dimensional multivariate Gaussian distribution $\pi_{X} = \mathcal{N}(\mu, \Sigma)$. We recall the definition of the squared Mahalanobis distance $\delta_{X}$: $\delta_{X}(x) = (x - \mu)^\top \Sigma^{-1}(x - \mu)$ I am interested in computing the sample squared Mahalanobis distance $\hat{\delta}_{X}$: $\hat{\delta}_{X}(x) = (x - \hat{\mu})^\top \hat{\Sigma}^{-1}(x - \hat{\mu}),$ where $\hat{\mu}$ and $\hat{\Sigma}$ are the sample mean and covariance. I am typically working in the low-data regime where $M < n$. Independently of the dimension $n$, I observe the collapse of the sample squared Mahalanobis distance when $M < n$, i.e. all the samples have approximately the same Mahalanobis distance. [](https://i.stack.imgur.com/mApQk.png) First graph: Evolution of the mean of the squared Mahalanobis distance for $M$ samples from $\mathcal{N}(0_n, I_n)$ [](https://i.stack.imgur.com/U81Gm.png) Second graph: Evolution of the standard deviation of the squared Mahalanobis distance for $M$ samples from $\mathcal{N}(0_n, I_n)$ In this case, the sample covariance is not full rank. How can I regularize the sample (squared) Mahalanobis distance? Thank you for your help,
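For reference, the collapse is easy to reproduce. Since the sample covariance is singular when $M < n$, the sketch below uses the Moore-Penrose pseudo-inverse (`MASS::ginv`) in place of the ordinary inverse, which is an assumption about how the distances in the plots were computed:

```r
# Reproduce the collapse: M samples from N(0, I_n) with M < n, sample
# Mahalanobis distances computed with a pseudo-inverse of the singular
# sample covariance.
library(MASS)   # for ginv()
set.seed(1)
n <- 100; M <- 30
X <- matrix(rnorm(M * n), nrow = M)

mu_hat     <- colMeans(X)
Sigma_pinv <- ginv(cov(X))
d2 <- apply(X, 1, function(x) drop(t(x - mu_hat) %*% Sigma_pinv %*% (x - mu_hat)))

summary(d2)   # essentially constant across the M samples
sd(d2)        # numerically ~ 0
```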
Collapse of sampled Mahalanobis distance
CC BY-SA 4.0
null
2023-05-16T14:03:37.750
2023-05-16T20:39:55.703
null
null
372023
[ "normal-distribution", "covariance-matrix", "matrix-inverse" ]
616034
1
null
null
0
3
I have been trying to find a way to compare different regression models where I have transformed the response variable. I have found sMAPE and NRMSE as two potential evaluation metrics. My concern is how to normalize them. If one distribution is right-skewed, normalizing by the range wouldn't be comparable to another distribution which is symmetric. At the same time, a strategy would be to employ quantiles, but I feel the same problem might arise with that. So my question is: what strategies can one employ to normalize an evaluation metric so that it is robust to different response distributions and extreme values? I have fitted 3 different models, based on four different ways of transforming the response variable.
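For concreteness, these are common ways the two metrics get written down; conventions vary, and the `norm` argument below just makes the normalisation choice explicit, since that choice is exactly what is in question:

```r
# sMAPE and NRMSE with an explicit, swappable normalisation choice.
smape <- function(y, yhat) 100 * mean(2 * abs(y - yhat) / (abs(y) + abs(yhat)))

nrmse <- function(y, yhat, norm = c("range", "sd", "mean", "iqr")) {
  norm  <- match.arg(norm)
  rmse  <- sqrt(mean((y - yhat)^2))
  denom <- switch(norm,
                  range = diff(range(y)),
                  sd    = sd(y),
                  mean  = mean(y),
                  iqr   = IQR(y))
  rmse / denom
}
```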
Standardize evaluation metrics for various distributions; NRMSE & sMAPE
CC BY-SA 4.0
null
2023-05-16T14:08:50.043
2023-05-16T14:08:50.043
null
null
320876
[ "model-evaluation", "model-comparison" ]
616035
1
null
null
0
18
Consider the product of two Normal distributions (prior-hyperprior): $$ p(\sigma^2) \propto \mathcal{N}(\theta; 0, s^2 \tau^2 \sigma^2)\mathcal{N}(\sigma^2; 0, c^2 \gamma^2) $$ This is a scale mixture of normals with a normal mixing density for the variance term $\sigma^2$. More often, we find a solution when the normal hyperprior is assigned to the mean, not the variance. Assuming that the means are zero and omitting the fact that normals are not the usual hyperpriors for variance hyperparameters, is there a closed-form solution for the mentioned product? Thanks in advance!
Normal-Normal mixture model for variance hyperparameter
CC BY-SA 4.0
null
2023-05-16T14:21:17.427
2023-05-16T14:21:17.427
null
null
277327
[ "normal-distribution", "inference", "hierarchical-bayesian", "hyperparameter" ]
616036
2
null
616025
2
null
Any continuous function on a compact space can be uniformly approximated as closely as desired by a polynomial. ([Stone-Weierstrass theorem](https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem)) In this case, fit a quadratic to each group and note that only C has significant linear and quadratic coefficients. ``` library(nlme) fm <- lmList(y ~ x | group, data = data, pool = FALSE) fm0 <- lmList(y ~ 1 | group, data = data, pool = FALSE) Map(anova, fm, fm0) ``` and here is a plot of the points and fitted lines showing it fits well: ``` library(ggplot2) ggplot(data, aes(x, y, col = group)) + geom_point() + geom_smooth(formula = y ~ poly(x, 2, raw = TRUE), method = "lm") ``` (continued after graph) [](https://i.stack.imgur.com/Ig9tM.png) We can alternatively use a gam. In the case of the data in the question it does not really make any difference, but if the data in the question are not representative of your actual problem it may be useful. ``` library(mgcv) Map(\(g) summary(gam(y ~ x, data = data, subset = group == g)), unique(data$group)) ``` ## Update Have added an ANOVA comparing the lm models and added a gam alternative. Note that both nlme and mgcv come with R so they do not have to be separately installed.
null
CC BY-SA 4.0
null
2023-05-16T14:34:52.020
2023-05-18T16:17:56.610
2023-05-18T16:17:56.610
4704
4704
null
616037
1
null
null
0
13
Is there a way or a specific command I can use in Stata for more insights and information after I run the "estat classification" command and classification tables for my probit model? I know how to interpret the "sensitivity and specificity" results obtained from my estat classification command, but what additional command in Stata helps me further identify which observations were correctly (or falsely) predicted by my model? Note: I read that perhaps "residuals" can help? But I have never worked with residuals before, so I am not sure how to get them, or how to interpret them in a probit regression anyway. Thank you very much, Ruba
Probit "Postestimation", in Stata: How do i specify/utilize results of my probit "Classification tables ("estat classification" command)?
CC BY-SA 4.0
null
2023-05-16T14:45:00.900
2023-05-16T14:53:06.827
2023-05-16T14:53:06.827
388101
388101
[ "classification", "stata", "probit", "sensitivity-specificity" ]
616038
2
null
616033
1
null
What you observe there has been proved [in this arxiv-preprint by Pires & Branco](https://arxiv.org/abs/1902.04679). It may also be published elsewhere but I haven't seen it. Note that there is a problem here, which is that the (sample) Mahalanobis distance is intentionally defined to be affine equivariant; it gives all directions in n-dimensional space the same importance for standardisation, and will standardise in any direction according to the variance of observations along that direction. This means that the distance between any two observations is assessed relative to the overall variance/covariance structure ("Covstructure") of the point cloud. What is shown in the preprint is that if $n\ge M-1$, information in the data regarding the Covstructure is so scarce that every observation "on average" determines one dimension worth of variance/covariance entries, and in consequence every observation has the same Mahalanobis distance from every other observation (no observation can be shown as in any direction "outlying" with respect to others, as each of them points into an idiosyncratic direction and none is in any sense "between" others, so to say - note that I'm assuming data in "general position", i.e., spanning an as high-dimensional hyperplane as possible). This is a consequence of affine equivariance, which could be seen as the very essence of Mahalanobis distance. Arguably, if you penalise it, it will morph into something that is really quite different, and the name Mahalanobis distance may no longer be justified. The resulting distance will have to focus on some directions in $n$-dimensional space over some others, when it comes to standardising against the variation of the overall point cloud, which runs counter to the basic idea of Mahalanobis. For example, Euclidean distance with standardised variables is not affine equivariant and will use the variance along the main coordinate axes for standardisation. Variation in other directions due to correlation will be ignored. If you want to penalise in such a way that standardisation is mainly governed by the main coordinate axes, chances are you'll get something of a mix between the Mahalanobis-distance (which is a constant) and standardised Euclidean, which presumably amounts to being more or less equivalent to standardised Euclidean (depending on how exactly you use it). Note also that even Euclidean distance (with and without standardisation) has characteristics in high-dimensional settings that seem undesirable to many, see [Hennig, C.: Minkowski distances and standardisation for clustering and classification on high dimensional data. In: Imaizumi, Tadashi, Nakayama, Atsuho, Yokoyama, Satoru (Eds.) β€œAdvanced Studies in Behaviormetrics and Data Science. Essays in Honor of Akinori Okada”, Springer Singapore (2020), p. 103-118.](https://arxiv.org/abs/1911.13272)
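To make the "penalised" alternative discussed above concrete, one common choice is to shrink the singular sample covariance towards its diagonal before inverting; as argued, the resulting distance behaves like a compromise between the (constant) sample Mahalanobis distance and standardised Euclidean distance. The shrinkage weight below is an arbitrary illustrative value, not a recommendation:

```r
# Shrinkage-regularised "Mahalanobis-like" distance: the covariance is
# pulled towards its diagonal so that it becomes invertible, at the cost
# of affine equivariance (lambda = 0.5 is an arbitrary choice here).
set.seed(1)
n <- 100; M <- 30
X <- matrix(rnorm(M * n), nrow = M)

S      <- cov(X)
lambda <- 0.5
S_reg  <- (1 - lambda) * S + lambda * diag(diag(S))

mu_hat <- colMeans(X)
d2_reg <- apply(X, 1, function(x) drop(t(x - mu_hat) %*% solve(S_reg) %*% (x - mu_hat)))
sd(d2_reg)   # no longer (numerically) zero: points are differentiated again
```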
null
CC BY-SA 4.0
null
2023-05-16T15:08:49.800
2023-05-16T20:39:55.703
2023-05-16T20:39:55.703
247165
247165
null
616040
2
null
103730
0
null
There is now an R package called SAEforest that provides the command MERFranger: [https://cran.r-project.org/web/packages/SAEforest/index.html](https://cran.r-project.org/web/packages/SAEforest/index.html) The focus of the package is not precisely on the MERF. However, it employs the same syntax as lme4, which may be more intuitive for people who are used to that package. Also, the ranger object can easily be extracted and treated separately.
null
CC BY-SA 4.0
null
2023-05-16T15:27:57.533
2023-05-16T15:27:57.533
null
null
297627
null
616041
2
null
616031
1
null
Your decision tree is rather simple. Just ask yourself one question: > When I get new data for which I have to make a prediction where I truly do not know the outcome, will I be able to calculate that f7 feature? If you will not be able to calculate that feature, then you should not be using it. If you will be able to calculate that feature, then do feel free to use it, especially since it seems to lead to strong performance. It sounds like you would not be able to calculate the `f7` feature until you know the outcome. Consequently, you have to observe the outcome before you can predict it, and at that point, you have a perfect predictor in that you can predict the observed outcome, even if this is a bit like predicting yesterday’s stock prices by looking them up in The Wall Street Journal.
null
CC BY-SA 4.0
null
2023-05-16T15:42:41.017
2023-05-16T15:42:41.017
null
null
247274
null
616042
1
null
null
0
24
Let $X$ be an independent variable and $Y$ the dependent variable. Suppose we have the relationship $Y = f(X) + \epsilon$ for some unknown function $f(x)$ and some noise $\epsilon \sim N(0,1)$. If $f(x)$ is highly nonlinear, then we know that the performance of linear regression is going to be poor. However it might be possible that we can perform a nonlinear transformation $X\mapsto g(X)$ on $X$ such that the relationship becomes linear in the transformed variable. For example, if $f(x)=2^x$ then we can let $g(x)=\log_2 x$. The question I have is, for any given function $f(x)$, does there exist a function $g(x)$ such that $f(g(x)) = kx$ for some constant $k$? (i.e. $g(x)$ is “almost a right inverse of $f$”) Are there any criteria for such a function to exist? (for example $f$ being bijective is a sufficient condition) I think this is the same as $f$ having a right inverse because if we have $f(g(x)) = kx$, then we can replace $x$ with $x/k$ and we have $f(g(x/k))=x$ and hence $f^{-1}(x)=g(x/k)$. More generally, given a set of samples $(X_1,Y_1),\cdots, (X_n,Y_n)$, how do we know if we should do a nonlinear transformation (and if so, what kind of nonlinear transformation to do)? I tried to Google it but couldn't find much useful information. Any help/book recommendations/references are appreciated, thanks in advance.
Nonlinear transformation in simple linear regression and almost inverse function
CC BY-SA 4.0
null
2023-05-16T15:52:08.203
2023-05-16T15:52:08.203
null
null
388105
[ "regression", "linear-model", "data-transformation" ]
616043
1
null
null
0
19
I am currently writing my thesis and have to submit in 1 week, but am struggling to solve this issue on my own. I would therefore genuinely appreciate your input! The research question I'm trying to tackle within my thesis is: Does competition between development assistance donor countries affect democratization in African recipient countries? I use panel data for 49 African countries from 2000-2013 and employ time- and unit-fixed effects, which leads me to the following equation: `Polity_it = α_t + α_i + β*Treatment_it + x + e_it` where Polity_it is the level-of-democracy outcome for unit i at time t, α_t stands for the time fixed effect, α_i is a unit fixed effect and Treatment_it is the treatment dummy variable indicating whether the unit has participated in the treatment at time t. β is the parameter of interest, hence, the average effect of participating in the treatment. x is a placeholder for covariates and e_it is the error term. I have two questions: - I assume that by adding unit and time fixed effects, the interaction term between the treatment and the time variable is being subsumed and the binary Treatment_it dummy can therefore be used to capture the effect of treatment directly. Is that correct, or do I still have to include an interaction term in the equation? - I use R for my statistical analysis. For the OLS regression, I use the 'lm' command. Since treatment anticipation is likely in the case of my research, the NEPT assumption may be violated (no treatment effect on the outcome in the pre-treatment period). To test for this statistically, I want to run two additional analyses with the same equation, however, with an anticipation horizon included (one year and two years, respectively). How can I correctly incorporate anticipation in the equation shown above on the one side, and in R on the other side? (the current R codeline is: `lm(Polity_V2 ~ Treatment + Polity_1999 + ODA2 + Oil + loggdp + logpop + pubcorr + conflict + factor(Year) + factor(ID), data = Paneldatenstruktur_aktuell)`) If the "original" analysis reveals a significantly positive effect and the analysis including anticipation an even stronger positive effect, how should that be interpreted? Does that support the robustness of my findings because it reveals the same trends, or not, because the effect strength increases (yet, in the same direction)? Thank you all in advance!! Your help is much appreciated!
How can I add an anticipation horizon to my OLS regression?
CC BY-SA 4.0
null
2023-05-16T15:55:01.077
2023-05-16T15:55:01.077
null
null
388106
[ "r", "least-squares", "panel-data", "fixed-effects-model", "lm" ]
616045
2
null
58230
0
null
Overall, degrees of freedom appear when we evaluate some objective within a mathematical system that may or may not have constraints. That is why the simplest answer usually is: "variables - independent parameters". However, I would argue that many students studying statistics, data science, machine learning, psychology (etc.) would probably not be satisfied with this answer, because there are a lot of things happening behind the "n-1" idea, although it does come simply from a mathematical restriction (for example: we know the mean and want to estimate a statistic like the sample variance). I believe a good way to illustrate all this and empower the audience would be to first explain Dimension, Basis, and Subspaces in an easy way. After all, many things in data science and statistics can be viewed within that domain. Then, move to the idea of restriction within a space. Then, the degrees of freedom idea would arise. Maybe illustrate the concept with the geometry of a Linear Regression, the vector of residuals, and the estimated variance. This would probably be a good way to start. I don't have much time now, but I'll get back to it when I can.
null
CC BY-SA 4.0
null
2023-05-16T16:10:24.520
2023-05-19T21:42:14.993
2023-05-19T21:42:14.993
282988
282988
null
616046
1
null
null
0
66
Players A and B participate in a match where the probability that A will win each point is $p$, for B it is $1-p$, and a player wins when he reaches $11$ points by a margin of $\ge2$. The outcome of the match is specified by $P(y|p, A_{wins})$. If we know that A wins, his score is specified by B's score; he has necessarily scored $\max(11, y + 2)$ points. In the case of $y\ge10$ we have $P(A_{wins} \cap y|p) = \binom{10 + 10}{10}p^{10}(1-p)^{10} \cdot[2p(1-p)]^{y-10}\cdot p^2$ The elements represent, respectively: - probability of reaching (10, 10) - probability of reaching y after (10, 10) - probability of A winning two times in a row I would like to change the constant $p$ assumption and draw $p$ from a beta distribution. The first part can be rewritten as a [beta-binomial](https://en.wikipedia.org/wiki/Beta-binomial_distribution) function: $$ P(A_{wins} \cap y|\alpha, \beta) =\binom{10+10}{10}\frac{B(10+\alpha, 10+\beta)}{B(\alpha, \beta)} \cdot \space _{...} \cdot \space _{...} $$ The second term contains an infinite series. Is it possible to express this also in terms of $\alpha$ and $\beta$? Edit After the comment of @whuber I realised that during the second phase player A and player B are alternately winning a point and hence, $p = \alpha \space/\space (\alpha + \beta)$ and $(1 - p) = \beta \space/\space (\alpha + \beta) $. Therefore, we can write $$ [2p(1-p)]^{y-10} = (2\frac{\alpha}{\alpha + \beta}\frac{\beta}{\alpha + \beta})^{y-10} = (\frac{2\alpha\beta}{(\alpha + \beta)(\alpha + \beta)})^{y-10}$$ Next, $$ \sum^{z=\infty}_{z=0}r^{z}$$ where $z=y-10$ and $r=\frac{2\alpha\beta}{(\alpha + \beta)^2}$ Because $r<1$, an infinite geometric series reduction applies and we can write $$ \sum^{z=\infty}_{z=0}r^{z} = \frac{1}{1 - r} = \frac{1}{1 - \frac{2\alpha\beta}{(\alpha + \beta)^2}}$$ We can multiply the result with the third term and obtain $$ = \frac{(\frac{\alpha}{\alpha+\beta})^2}{1 - \frac{2\alpha\beta}{(\alpha + \beta)^2}} = \frac{\alpha^2}{\alpha^2 + \beta^2}$$ Is the above derivation correct?
Express infinite series in terms of the shape parameters of a beta distribution
CC BY-SA 4.0
null
2023-05-16T16:15:21.177
2023-05-17T09:19:58.607
2023-05-17T09:19:58.607
233132
233132
[ "probability", "beta-binomial-distribution" ]
616047
1
null
null
3
34
I am analyzing an astronaut dataset to try and determine if a high number of hours flown in space (>1000) by an astronaut is related to premature death (which I defined as dying before the age of 75). As one can see from the table I attached, Premature Death and Flight Hours < 1000 corresponds to astronauts who died before reaching age 75 who had less than 1000 flight hours. Where I have a question is: Column B (Alive or Died Beyond Age 75) corresponds to both astronauts who died after age 75, AND former astronauts who are still alive and made it past 75 already. My reasoning for this is that even if these astronauts died tomorrow, they would still count as a statistic for having died beyond age 75. The chi-square statistic for this dataset is 1.1846. The p-value is .28. Hence, we fail to reject the null hypothesis of no association between more than 1000 flight hours and premature death. My question is: Does including astronauts who are still alive but have already lived beyond 75 bias or adulterate the analysis at all? My reason for thinking this might be the case is that former astronauts who are currently alive, but below age 75, may indeed die before age 75. So it's almost like using a specific subset of the data that's "finished" while discarding the rest. Or do you think this is a fair analysis for the data given (especially seeing as how small the dataset is)? I will of course make it clear in documentation that astronauts who are alive are used in the dataset for the living beyond 75 group. [](https://i.stack.imgur.com/WGOlk.png)
Question about using the chi-square test on an astronaut dataset to determine if over 1000 hours in space and early mortality are related
CC BY-SA 4.0
null
2023-05-16T16:23:48.673
2023-05-16T17:07:54.633
2023-05-16T17:07:54.633
254766
254766
[ "hypothesis-testing", "statistical-significance", "p-value", "chi-squared-test", "descriptive-statistics" ]
616048
1
null
null
1
14
I was trying to run a multilevel model in R using lmer and my model failed to converge with the default optimizer. Then I used bobyqa instead of the default optimizer and my model converged. Now I need to run the (same) model in Stata and I need some help. Does anybody know how to modify the "optimizer" (not sure if that is what it is called) in xtmixed in Stata? I read the Stata manual and tried all the options, but it did not work; my model failed to converge with the default settings (similar to R). Any thoughts or suggestions would be very much appreciated. Thank you!
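For reference, the R side described above looks like this (shown on lme4's built-in `sleepstudy` data, since the original model isn't given):

```r
# Switching the lme4 optimizer to bobyqa via lmerControl(), illustrated on
# the sleepstudy example that ships with lme4.
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy,
            control = lmerControl(optimizer = "bobyqa"))
summary(fit)
```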
What is the equivalent of the lmer optimizer (R) in xtmixed (Stata)?
CC BY-SA 4.0
null
2023-05-16T16:41:54.620
2023-05-16T16:41:54.620
null
null
388110
[ "stata" ]
616049
1
null
null
1
8
For simplicity I am asking this question for a binary classification problem. In the non-robust case, for a dataset $(x_i, f(x_i))$, $i=1,\dots N$ with $x_i$ drawn iid from a distribution $D$, we would like to minimize the error probability $\mathbb{P}_{X\sim D}[h(X)\neq f(X)]$ for some hypothesis $h$ from a hypothesis set. In the robust case, we would like to minimize $\mathbb{P}_{X\sim D}[\exists z \text{ such that } |z|<\epsilon \text{ and } h(X+z)\neq f(X+z)]$ for a robustness parameter $\epsilon$. This seems related to the problem of reducing overfitting by introducing regularization, since when we regularize well small changes in the input result in small changes in the output (for example in a linear classifier). Are there any concrete relations between these two problems?
What are the relations between overfitting and susceptibility to adversarial perturbations in classification?
CC BY-SA 4.0
null
2023-05-16T17:03:01.627
2023-05-16T23:17:58.397
2023-05-16T23:17:58.397
388115
388115
[ "regularization", "overfitting", "adversarial-example" ]
616051
1
null
null
0
10
Suppose I wish to fit a fairly complicated hierarchical model, say twelve nodes in a Bayesian network. I have an initial set of data I can fit to this model, this data has an ordering, meaning data that is more recent is more important. I then have a second set of data which I must predict but I can predict sequentially. Initially I wanted to use belief propagation but I'm winding up with intractable integrals. So now my plan is to use MCMC on all the initial data (since I imagine that even in belief propagation the order of the updates would not affect the posterior) and then update my priors at each new data point running an MCMC simulation on each. Does this sound like a reasonable thing to do? Is there any fundamental difference in how this would perform if I used belief propagation instead (If the integrals could be done)?
MCMC on a clump of data and then updated with single data point
CC BY-SA 4.0
null
2023-05-16T17:04:37.733
2023-05-16T17:04:37.733
null
null
388116
[ "markov-chain-montecarlo", "belief-propagation" ]
616052
1
null
null
-1
70
I'm trying to understand the impact of 2 different binary interventions on the completion rate of around 10 courses. The courses all have different baseline completion rates. Intervention one was turned on for all courses at the same time. Intervention 2 was applied to roughly half of the 10 courses at a later date. I was hoping to do an A/B test but couldn't, so now I am trying to model the impact. The dependent variable is "Completed" (binary yes/no). So I have a bunch of rows that look like `[PersonID | Intervention1 | Intervention2 | CourseIndicator1 | --- | CourseIndicator10 | Completed]` All variables are binary. There is also a course category label that could be made into another binary indicator variable (4 different categories), and I could potentially add basic demographic data for the user. A couple of thoughts I had on approaches: - Does a mixed effects logistic regression make sense here, for repeated measurements among the same course? There are likely not many repeated measurements among the same person, but it is technically possible. In R I believe this would be something like: ``` glmer(completed ~ Intervention1 + Intervention2 + (1|CourseIndicator1) + ... + (1|CourseIndicator9), data = model_data, family = binomial) ``` - There are also a few pairs of courses where the subject matter is more similar than others, so I thought I could also do a number of pairwise comparison mixed effects models. I suppose this would be semi-equivalent to adding a random effect from a subject matter indicator variable? Am I on the right path or is there something completely different I should consider? What is the best way to assess statistical significance after fitting the model? Thank you!
Mixed Effects or other Model for Binary Interventions and Binary Outcome
CC BY-SA 4.0
null
2023-05-16T17:17:59.463
2023-05-26T12:18:55.323
null
null
159323
[ "hypothesis-testing", "mixed-model", "binary-data" ]
616053
1
null
null
1
24
To help outline my question I will start with how this would normally be undertaken. Say this is an animal study looking at the effects of a drug on withdrawal. The outcome in this case will be time spent asleep, with the expectation that the drug improves sleep in animals undergoing withdrawal. There would be four groups: - No drug 'healthy' group - Withdrawal + no treatment group - Withdrawal + treatment group - Withdrawal + sham treatment group Ideally, you would want to see that the treatment causes improved sleep compared to untreated animals, and that treated animals are no different from the no-drug animals. Here the analysis could simply be a one-way ANOVA. With animals you can, however, take baseline measurements before the experiment, replacing the no-drug group. This seems preferable design-wise for several reasons, like reducing the total number of animals. But the question becomes how one would analyze the data, because the baseline is now within-group while the other comparisons would be between-group. Presumably, you could just do this descriptively (and maybe I'm overthinking this), but I was curious if any formal statistics would be appropriate.
Appropriate analysis for comparing treatment that is between groups to baseline collected within (multiple) groups
CC BY-SA 4.0
null
2023-05-16T17:28:08.323
2023-06-01T05:08:42.037
2023-06-01T05:08:42.037
121522
388114
[ "anova", "experiment-design", "treatment-effect", "baseline" ]
616054
1
null
null
0
13
Typically, we assume independent samples when performing ROC analysis. But for image data, e.g. in a segmentation problem, pixels in a neighborhood come from the same image and are spatially correlated. How does this affect the interpretation of, for example, AUC? How can we take into account these correlations and any subsequent biases to the AUC estimate?
Accounting for spatial correlations in ROC analysis for image segmentation
CC BY-SA 4.0
null
2023-05-16T17:49:28.767
2023-05-16T17:49:28.767
null
null
387850
[ "autocorrelation", "bias", "roc", "image-segmentation" ]
616055
1
null
null
1
34
Edit with graph: I am struggling a bit conceptually to make sense of a result I get when applying a linear mixed model to my reaction time data. I have a 2x2 within subjects design. When I plot the data by means of an interaction plot, one of the two lines is above the other, with non-overlapping confidence intervals. However, when I apply a linear mixed-model, which looks like this: ``` model26 = lme(log(RT_times) ~ location*task, ~1+location*task|participant,data= data,method='REML',weights = varComb(varIdent(form=~1|location*task)),control =list(msMaxIter = 1000, msMaxEval = 1000)) ``` I don't find any significant main effect. This is the output: ``` Linear mixed-effects model fit by REML Data: data_sac Random effects: Formula: ~1 + task * condition | pp Structure: General positive-definite, Log-Cholesky parametrization StdDev Corr (Intercept) 0.2479765 (Intr) tskndf cndtnv taskundef 0.1391700 -0.708 conditionvalid 0.1722409 -0.672 0.651 taskundef:conditionvalid 0.1848967 0.652 -0.627 -0.990 Residual 0.2490666 Combination of variance functions: Structure: Different standard deviations per stratum Formula: ~1 | condition * task Parameter estimates: invalid*def valid*def invalid*undef valid*undef 1.0000000 0.8943147 0.8514028 0.8917650 Fixed effects: log(latency) ~ condition * task Correlation: (Intr) cndtnv tskndf conditionvalid -0.680 taskundef -0.688 0.646 conditionvalid:taskundef 0.628 -0.938 -0.673 Standardized Within-Group Residuals: Min Q1 Med Q3 Max -7.10755334 -0.40245682 0.02502696 0.51551241 4.18246501 Number of Observations: 5209 Number of Groups: 56 ``` [](https://i.stack.imgur.com/i5JUI.png) To the contrary, the p-value for task is about 0.7. I find this very strange, as for another dataset with a comparable graph, I do instead get significant results. Now, I do get that the computation of the 95% CIs and the linear mixed model are different, so they might lead to different results, but I don't get how they can be SO different. There does not seem to be anything wrong with my data, I even removed outliers etc, so it is difficult for me to grasp what is going on. [](https://i.stack.imgur.com/37FoK.png) Hope the question is clear now. Many thanks for any insight you might provide!
Non-significant difference between conditions from LME model when confidence intervals are clearly non-overlapping
CC BY-SA 4.0
null
2023-05-16T17:49:57.987
2023-05-18T13:28:14.080
2023-05-17T12:10:38.283
388120
388120
[ "r", "statistical-significance", "confidence-interval", "lme4-nlme", "interaction" ]
616056
1
616057
null
2
20
I have one question about the evaluation metrics of classification models. I see many people report the precision and recall values for their classification models. Do they choose a threshold to convert predicted probability to predicted class and then calculate the precision and recall? If so, how do they choose the threshold? If we compare the AUC value across different models built for one dataset by different people, it's very direct and comparable. However, the precision and recall will vary based on the chosen threshold. Isn't this too arbitrary? If two people build classification models for one dataset and they both report their own precision and recall values, we won't know whose model is better, since they may use different thresholds.
Precision and recall reported in classification model
CC BY-SA 4.0
null
2023-05-16T18:07:13.777
2023-05-16T18:33:50.803
null
null
131488
[ "classification", "model-evaluation", "precision-recall", "precision" ]
616057
2
null
616056
2
null
You have identified a problem, yes. When you evaluate or compare models on their raw outputs (e.g., using log loss, Brier score, or AUC), you are comparing the models. When you evaluate models on precision, recall, or accuracy, you are comparing the models along with a decision rule (threshold), instead of comparing the models themselves. This is among the reasons why statisticians see [drawbacks](https://stats.stackexchange.com/q/603663/247274) to these threshold-based rules. The most common decision rule is to classify according to the category given the highest probability. Even when the threshold is tuned, values like precision and recall make sense, so a model that does better on both of those does have some kind of advantage over a competitor that does worse on both, since you are evaluating the entire pipeline of model along with decision rule. Nonetheless, the raw outputs can be quite useful, and as is discussed in the link above, statisticians often advocate for the evaluation of those raw outputs in order to make optimal decisions. Stephan Kolassa’s answer to my question [here](https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email) gets into why and links to additional useful material.
null
CC BY-SA 4.0
null
2023-05-16T18:33:50.803
2023-05-16T18:33:50.803
null
null
247274
null
616058
1
null
null
0
28
I have data where all individuals have an observation at time 1, but then only half of the individuals have an observation at time 2, and I want to compare the means of times 2 and 1. Is a standard t-test the way to go here, despite the outcomes obviously being not independent due to half of the individuals contributing data to both times, or is there a more efficient approach that handles this correlation and also uses the fact that we only have 1 observation for half of the individuals?
Half paired t-test?
CC BY-SA 4.0
null
2023-05-16T18:56:58.493
2023-05-16T18:56:58.493
null
null
362564
[ "correlation", "t-test", "paired-data" ]
616059
2
null
615727
4
null
I think that you can solve this pretty easily using some simple methods which are commonly used in [survival](/questions/tagged/survival) or time-to-event analysis. For each kit that we have not received, we know that a duration $d$ has elapsed, and we want to know the probability that the total duration will be between $d$ and $T=d + k$ for some $k>0$. In other words, we've waited $d$ days to receive it so far, and we want to know the probability that we will have to wait $d+1, d+2, d + 3.17, \dots, d + 59.9, d+60$ days. We can do this if we can estimate the probability distribution of $t$ the total time elapsed between sending the kit out and when the kit is returned. For simplicity, we can assume this is an absolutely continuous random variable. [From probability theory](https://stats.stackexchange.com/questions/416193/conditional-survival-probability-up-to-time-t-given-t-s) we know that $$p=\mathbb P(t < T | t > d) = \frac{F(T)-F(d)}{1 - F(d)} $$ for $F$ the CDF. So we just need to estimate a suitable distribution $F,$ whence we can compute the probability of receiving each kit on each day between $k=1$ and $k=60$ given the $d$ that we know for each kit. So you need to estimate $F$. If you wish to use a parametric distribution, my recommendation for $F$ is to choose a distribution that has support only for positive values (because you're measuring duration). Some suggestions: - Exponential - Gamma - Weibull Of course, there are lots of other parametric options. You could also use a non-parametric estimate of $F$, such as the ECDF; see [empirical-cumulative-distr-fn](/questions/tagged/empirical-cumulative-distr-fn). You'll have to decide what to do about the [censoring](/questions/tagged/censoring) that is present in your data. - The simplest option is to estimate $F$ using only the returned kits. This might work well enough if the number of non-returned kits is very small. - However, the simplest option will bias the estimates to be smaller than if you had complete data; estimating the distribution in a way that accounts for the censoring effect for non-returned kits seems more prudent. You can find a worked example in A. Clifford Cohen (1965) "[Maximum Likelihood Estimation in the Weibull Distribution Based On Complete and On Censored Samples](https://www.stat.cmu.edu/technometrics/59-69/VOL-07-04/v0704579.pdf)", Technometrics, 7:4, 579-588, DOI: 10.1080/00401706.1965.10490300 I would not expect the data about states to change any aspect about this -- my assumption is that all states would have essentially the same distribution $F$. If you find that is not the case, then the simplest thing is to build 50 models, one for each state. While this is simple, these models might wildly differ, especially for the states where data are scarce. A [hierarchical-bayesian](/questions/tagged/hierarchical-bayesian) model might ameliorate that. After we have $F$, then estimating the number of kits we receive each day is trivial. If we assume that all kits are independent, then the expected number of kits received on a given day is the sum of the probabilities of receiving the kits (each kit $i$ is a Bernoulli trial with probability $p_i$). The easiest way to get an interval estimate is to do a Monte Carlo simulation, but there are probably better ways.
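A sketch of the parametric route described above, assuming the `survival` package and a Weibull fit that treats not-yet-returned kits as right-censored; the data below are simulated stand-ins, and the parameter conversion follows `survreg`'s location-scale parameterisation:

```r
# Fit a Weibull to return times with right censoring, then compute
# P(t < d + k | t > d) from the fitted CDF. All data below are simulated.
library(survival)
set.seed(1)
true_t   <- rweibull(500, shape = 1.5, scale = 20)  # latent return times
followup <- runif(500, 5, 40)                       # days each kit has been out
status   <- as.numeric(true_t <= followup)          # 1 = returned, 0 = still out
obs_time <- pmin(true_t, followup)

fit      <- survreg(Surv(obs_time, status) ~ 1, dist = "weibull")
wb_shape <- 1 / fit$scale        # survreg scale -> Weibull shape
wb_scale <- exp(coef(fit))       # exp(intercept) -> Weibull scale

cdf       <- function(t) pweibull(t, shape = wb_shape, scale = wb_scale)
cond_prob <- function(d, k) (cdf(d + k) - cdf(d)) / (1 - cdf(d))

cond_prob(d = 10, k = 1:5)  # P(returned within the next 1..5 days | out 10 days)
```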
null
CC BY-SA 4.0
null
2023-05-16T18:58:40.750
2023-05-19T17:14:42.980
2023-05-19T17:14:42.980
22311
22311
null
616060
1
616071
null
3
117
I am having difficulties with the following problem: Assuming $X$ and $Y$ follow a bivariate normal distribution with $\mu = 0$ and $\Sigma=\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$ and $U=X^2+Y^2$, I want to prove that the inequality $P(U>a)\leq \exp\left(-\frac{a}{2(1+|\rho|)}\right)$ holds for all $a$. So far, I tried to solve this using the Chernoff bound $P(X\geq a)\leq e^{-ta}\cdot\mathbb{E}\left[e^{tX}\right]$ but couldn't find or work out a closed form expression for the MGF $M_U(t)$. Maybe one way to go about this would be to use a generalised Chi-squared random variable ([https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables](https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables)), which does not have a closed form MGF, but a CF that could be used to derive the MGF. However I could not find any solution this way. Would be very glad if someone could even just guide me in the right direction!
Upper bound for sum of dependent normal variables
CC BY-SA 4.0
null
2023-05-16T19:02:43.163
2023-05-17T21:01:24.290
2023-05-17T21:01:24.290
20519
388123
[ "probability", "normal-distribution", "joint-distribution", "probability-inequalities", "bivariate" ]
616061
1
null
null
0
9
I have performed an exogeneity check on my control variables to assess their validity before incorporating them into my regression analysis. Upon conducting the analysis, I observed a significant correlation (p-value < 0.01) between three of my control variables and the independent variable. I assume that means there is covariance; should I exclude them as control variables? More generally, how do I decide whether a control variable should be included or not?
Exogeneity Check of Control Variables? Covariance
CC BY-SA 4.0
null
2023-05-16T19:44:53.100
2023-05-16T19:46:13.093
2023-05-16T19:46:13.093
388131
388131
[ "econometrics", "panel-data", "covariance", "controlling-for-a-variable", "exogeneity" ]
616062
1
null
null
0
20
(Cross-posted [here](https://stackoverflow.com/questions/76257206/repeated-measures-variance-analysis-in-r)) I have some data that look like this: ``` df <- data.frame(hormone = c(65.1, 32.8, 29.6, 42.3), medication = c('A', 'A', 'B', 'B'), lab = c('One', 'Two', 'One', 'Two')) ``` (it's actually much larger; this is just an idea of the format) I need to analyze the variance of data with Medication A vs. Medication B. Normally I would just do a variance test, but I need to control for the different labs, and I do not know how. What test allows me to compare the variance among hormone levels based on medication while adjusting for lab?
Repeated measures variance analysis in R
CC BY-SA 4.0
null
2023-05-16T20:10:42.163
2023-05-18T01:21:12.367
2023-05-17T13:57:25.697
254436
254436
[ "r", "variance", "repeated-measures" ]
616063
1
616093
null
1
18
Suppose that I have a model $f$ that is trained to predict hourly sales of 100 stores. Each day I retrain the model, and I have 2400 data points each day. Naively, I can split the data into something like hour 0 - hour 22 as the training set and hour 23 as the validation set. However, this does not make too much sense cause the model has not yet seen the pattern on hour 23; how do we expect it to perform well on hour 23? However, this seems like the only reasonable approach to validate the model.
validation for time series models
CC BY-SA 4.0
null
2023-05-16T20:10:59.250
2023-05-17T06:24:06.283
2023-05-17T06:23:37.187
1352
291544
[ "regression", "time-series", "forecasting", "cross-validation", "validation" ]
616064
1
null
null
0
12
How should I approach a univariate timeseries with multiple rows of data per timestamp? Using stocks as an example, I am attempting to identify a time series model, or suitable alternative, that identifies macro relationships between independent variables and stock prices over time. The issue is that I have multiple rows of data per timestamp. I do not believe that VARMAX is appropriate, because I am not attempting to identify the effect of one stock on another. I also do not believe that ARIMAX is appropriate because I have multiple linear equations per timestamp, one for each stock. I am basically attempting to perform linear regression to obtain coefficients for the independent variables, with a time component added. I have considered the following three options, but I'm not sure if they are reasonable approaches: - Converting the timestamp into additional features such as "year" and "month", and running non-timeseries models such as xgboost. - Performing regression on each timestamp separately to collapse the matrix of stocks into a single row of coefficients for the timestamp, and then modeling the coefficients and stock prices together in a multivariate VAR model. - Averaging the data across the stocks for an individual timestamp, and then running an ARIMAX model. My concern with this approach being that I'll lose valuable information regarding interactions between features as two features move together or move apart over time. I want the model to learn information at a macro level regardless of stock, to then apply a prediction at the micro level of a single stock.
What is an appropriate timeseries model to identify macro trends for multiple rows of data per timestamp
CC BY-SA 4.0
null
2023-05-16T20:16:52.850
2023-05-16T20:16:52.850
null
null
388127
[ "machine-learning", "time-series" ]
616065
1
null
null
0
4
My question is simply why the energy function of an RBM is given by: $$E = -\sum_{i,j} w_{ij} \, v_i \, h_j -\sum_i \alpha_i \, v_i - \sum_i \beta_i \, h_i$$ when the energy of the Boltzmann machine is given by: $$ E = -\Biggl(\sum_{i<j}w_{ij}s_is_j + \sum_ib_is_i\Biggr) $$ Is the first derived from the second when the interactions of the nodes are restricted to be only with nodes of a different layer? I searched but I couldn't find an answer; the formula for the energy of the RBM is always given in that form.
Energy function of a RBM
CC BY-SA 4.0
null
2023-05-16T20:27:07.783
2023-05-16T20:27:07.783
null
null
326306
[ "restricted-boltzmann-machine" ]
616066
1
616133
null
0
28
I have performed the following experiment, but I am not sure what statistical analysis to perform. The aim is to test whether a drug is lethal to a fish species. For this, I have 3 tanks with 10 fish in each tank. Each tank receives a treatment (control, low drug concentration or high drug concentration). The variable measured is the % mortality of each group (or tank) at 3, 6 and 9 days. I have 3 tanks per treatment, so I have 3 samples. I originally wanted to run an ANOVA (2 factors: treatment and time), but I doubt the independence of the data, as I am measuring mortality on the same population group (or tank) repeatedly. Therefore, I am considering either a two-factor (treatment and time) repeated measures ANOVA or a Kaplan-Meier survival analysis. What do you think? Any advice would be of great help to me. Thank you very much!
ANOVA or survival analysis in this experiment?
CC BY-SA 4.0
null
2023-05-16T20:28:35.313
2023-05-17T12:36:53.180
null
null
360640
[ "anova", "survival", "kaplan-meier" ]
616067
1
null
null
1
36
The aim of my project is to find the association between a group of independent variables and a categorical dependent variable (with three levels). For variable selection, my supervisor suggested employing an ANOVA test for each variable separately across the levels of the categorical variable. (He also provided this link as a reference: [https://padhokshaja.medium.com/using-anova-and-multinomial-logistic-regression-to-predict-human-activity-cd2101a5e8bf](https://padhokshaja.medium.com/using-anova-and-multinomial-logistic-regression-to-predict-human-activity-cd2101a5e8bf)) But I was wondering whether the ANOVA test can really be employed before running the logistic regression, since, as I remember, ANOVA requires a continuous response variable. So, before conducting a multiple logistic regression, should I perform univariate logistic regressions instead? Thanks so much for your help.
Using ANOVA for variable selection in Multinomial Logistic Regression
CC BY-SA 4.0
null
2023-05-16T20:34:38.350
2023-05-19T23:55:42.637
2023-05-17T07:48:09.993
56940
383609
[ "logistic", "multiple-regression", "generalized-linear-model" ]
616068
1
616228
null
1
49
In the image and per the code at the bottom of this post, I plot survival curves for the `lung` dataset from the survival package using a fitted exponential distribution model (plot red line), using the K-M nonparametric model (plot blue lines), and run/show simulations using the exponential model (plot light blue lines) with the mean of the simulations shown as the black line. Exponential doesn't provide the best fit for `lung` data but I'm trying to better understand modeling with exponential and extreme value distributions. The model fit takes the form $log(T)$∼$β_0 + W$ where $β_0$ is the `fit$coef` and $W$ represents a standard minimum extreme value distribution (exponential in this case). In the example presented herein I only model $W$; in related posts I address the modeling of $β_0$ such as in [How to appropriately model the uncertainty of the exponential distribution model when running survival simulations?](https://stats.stackexchange.com/questions/615657/how-to-appropriately-model-the-uncertainty-of-the-exponential-distribution-model). The simulations herein are, for a change, from the perspective of individuals, modeled by generating random intercepts for the exponential distribution in the line of code `simPaths <- sapply(1:simNbr,function(i) 1-pexp(time,rate=1/rexp(1,rate = 1/exp(fit$icoef))))`. My question is why does the mean of the simulations shown in the black line differ so widely differ from the fitted base exponential model shown in the red line? When I simulated only the $β_0$ uncertainty in related posts, the simulations formed a band around the fitted base distribution similar in width more or less to the 95% CI around the Kaplan-Meier curve (the dashed blue lines). What am I doing wrong? [](https://i.stack.imgur.com/hccmv.png) Code: ``` library(survival) simNbr <- 25 time <- seq(0, 1000, by = 1) fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "exponential") # Compute exponential survival function for the base fitted model survival <- 1 - pexp(time, rate = 1/exp(fit$coef)) # Compute the survival curves for each simulation simPaths <- sapply(1:simNbr,function(i) 1-pexp(time,rate=1/rexp(1,rate = 1/exp(fit$icoef)))) plot(time,survival,type="n",xlab="Time",ylab="Survival Probability",main="Lung Data Survival Plot") # Plot simulations plotSims <- data.frame(time = time, do.call(cbind, lapply(1:simNbr,function(i) { lines(time, simPaths[, i], col = "lightblue", lty = "solid", lwd = 0.25) return(curve) } ) ) ) # Add average of simulations avgSurvival <- apply(simPaths, 1, mean) lines(time, avgSurvival, col = "black", lwd = 3) # Add Kaplan-Meier survival curve for the lung data lines(survfit(Surv(time, status) ~ 1, data = lung), col = "blue", lwd = 1) # Plot the base fitted survival curve using exponential lines(cbind(time, survival), type = "l", col = "red", lwd = 3) legend("topright", legend = c("Fitted exponential model","K-M & confidence intervals","Simulations", "Simulation mean"), col = c("red", "blue", "lightblue", "black"),lwd = c(3, 1, 0.25, 3),lty = c(1, 1, 1, 1), bty = "n") ```
Correctly simulating an extreme value distribution for survival analysis?
CC BY-SA 4.0
null
2023-05-16T20:38:31.960
2023-05-19T10:40:29.287
2023-05-19T10:40:29.287
378347
378347
[ "r", "survival", "simulation", "exponential-distribution", "extreme-value" ]
616069
1
null
null
3
90
Suppose you have a small dataset (perhaps 1000 labels), and you are using cross-validation to train different models and to choose the best one (according to their cross-validation scores). It seems that at some point, this process (model selection) will begin to overfit. Is there any way to estimate the optimal number of models you should try, before this model selection process becomes counterproductive?
When does model selection begin to overfit?
CC BY-SA 4.0
null
2023-05-16T20:47:20.970
2023-06-03T01:54:12.240
2023-05-18T18:56:20.380
35791
35791
[ "machine-learning", "cross-validation", "model-selection", "overfitting" ]
616070
1
null
null
0
18
I built a neural network model in R using the Keras package, and for some reason the mean of the predicted values of the neural network doesn't match the mean of the original values. For every other type of model I've built, this has never been the case. Am I doing something wrong, or is this something that can legitimately happen?
Should the mean of NN's predicted values match the mean of the original values?
CC BY-SA 4.0
null
2023-05-16T21:30:18.433
2023-05-16T21:30:18.433
null
null
310875
[ "r", "neural-networks", "artificial-intelligence" ]
616071
2
null
616060
6
null
We can first evaluate the probability $P[U > a]$ by noting $X^2 + Y^2 \overset{d}{=} \xi^2 + \eta^2 + 2\rho\xi\eta$, where $\xi, \eta$ i.i.d. $\sim N(0, 1)$. Therefore, for $a > 0$: \begin{align} & P[X^2 + Y^2 > a] = P[\xi^2 + \eta^2 + 2\rho\xi\eta > a] \\ =& \iint\limits_{[(x, y): x^2 + y^2 + 2\rho xy > a]}\frac{1}{2\pi}\exp\left(-\frac{1}{2}(x^2 + y^2)\right)dxdy. \tag{1} \end{align} To evaluate this double integral, apply the polar coordinates transformation $x = r\cos\theta, y = r\sin\theta$ with $r > 0, \theta \in [0, 2\pi)$. The integral $(1)$ then becomes \begin{align} \iint\limits_{[(r, \theta): (1 + \rho\sin(2\theta))r^2 > a]}\frac{1}{2\pi}r\exp\left(-\frac{1}{2}r^2\right)drd\theta. \tag{2} \end{align} Note the region $[(r, \theta): (1 + \rho\sin(2\theta))r^2 > a]$ is contained in the region $[(r, \theta): (1 + |\rho|)r^2 > a]$ and the integrand is positive, hence the integral $(2)$ is bounded above by \begin{align} \iint\limits_{[(r, \theta): (1 + |\rho|)r^2 > a]}\frac{1}{2\pi}r\exp\left(-\frac{1}{2}r^2\right)drd\theta = \int_{\sqrt{\frac{a}{1 + |\rho|}}}^\infty e^{-r^2/2}rdr = \exp\left(-\frac{a}{2(1 + |\rho|)}\right). \end{align} This completes the proof. --- To see why $X^2 + Y^2 \overset{d}{=} \xi^2 + \eta^2 + 2\rho\xi\eta$, note that if $\begin{bmatrix} X \\ Y \end{bmatrix} \sim N_2(0, \Sigma)$, then $\begin{bmatrix} X \\ Y \end{bmatrix} \overset{d}{=} C\begin{bmatrix} \xi \\ \eta \end{bmatrix}$, where $\begin{bmatrix} \xi \\ \eta \end{bmatrix} \sim N_2(0, I_{(2)})$ and $C = \Sigma^{1/2}$ is the [square root matrix](https://en.wikipedia.org/wiki/Square_root_of_a_matrix#Positive_semidefinite_matrices) of $\Sigma$. It then follows that \begin{align} X^2 + Y^2 &= \begin{bmatrix} X & Y \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix} \\ & \overset{d}{=} \begin{bmatrix} \xi & \eta \end{bmatrix}C'C\begin{bmatrix} \xi \\ \eta \end{bmatrix} = \begin{bmatrix} \xi & \eta \end{bmatrix}\Sigma\begin{bmatrix} \xi \\ \eta \end{bmatrix} \\ &= \xi^2 + \eta^2 + 2\rho\xi\eta. \end{align}
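As a quick numerical sanity check of the bound (this is an addition for illustration, not part of the proof; the particular values of $\rho$ and $a$ are arbitrary), one can compare a Monte Carlo estimate of $P[X^2 + Y^2 > a]$ with the right-hand side, for example in R:

```
set.seed(1)
rho <- 0.6; a <- 6; n <- 1e6

# Simulate (X, Y) bivariate normal with unit variances and correlation rho
x <- rnorm(n)
y <- rho * x + sqrt(1 - rho^2) * rnorm(n)

mean(x^2 + y^2 > a)               # Monte Carlo estimate of P[U > a]
exp(-a / (2 * (1 + abs(rho))))    # claimed upper bound
```

The empirical probability should come out below the bound, as expected from the argument above.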
null
CC BY-SA 4.0
null
2023-05-16T21:55:01.730
2023-05-17T13:52:21.667
2023-05-17T13:52:21.667
20519
20519
null
616072
2
null
615932
2
null
If $x_i\sim\text{Laplace(0, b)}$ and $$y=\sum_{i=1}^nx_i$$ then $y$ is a mixture of double (two-sided) gamma distributions: $$f(y)=\sum_{i=1}^nw_i\frac{b^i|y|^{i-1}}{2\Gamma(i)}e^{-b|y|}$$ with $$w_{i\in2...n}(n)=\binom{2n-i-1}{n-i}2^{i-2n+1}$$ and $w_1=w_2$. I'm not up for showing the derivation, but it could be done by partial fraction expansion of the characteristic function of the sum of $n$ iid Laplace random variates. A quick check in R: ``` set.seed(1044174532) fcoeff <- function(n) { # function to get coefficients of a gamma mixture with the same distribution # as the sum of n Laplace(0, 1) random variates C <- numeric(n) k <- (2*n - 3):(n - 1) # C[2:n] <- choose(k, (n - 2):0)/2^k C[2:n] <- exp(lgamma(k + 1) - lgamma((n - 1):1) - lgamma(n) - k*log(2)) C[1] <- C[2] C } n <- 5L # number of Laplace r.v. to sum s <- 1e5L # number of samples # sum of iid Laplace(0, 1) r.v. as a gamma mixture x <- rgamma(s, sample(n, s, 1, fcoeff(n)))*sample(c(-1, 1), s, 1) # sum of iid Laplace(0, 1) r.v. directly y <- rowSums(matrix(rexp(s*n)*sample(c(-1, 1), s*n, 1), s, n)) ks.test(x, y) #> #> Asymptotic two-sample Kolmogorov-Smirnov test #> #> data: x and y #> D = 0.00194, p-value = 0.9918 #> alternative hypothesis: two-sided plot(ecdf(x), col = "blue") plot(ecdf(y), col = "orange", add = TRUE) ``` [](https://i.stack.imgur.com/19m7k.png) From this, get the CDF and quantile functions for the distribution of $y$: ``` pnlaplace <- function(q, b, n) { 0.5 + sign(q)*sum(pgamma(abs(q), 1:n, b)*fcoeff(n))/2 } qnlaplace <- function(p, b, n) { if (p < 0.5) { uniroot(\(a) pnlaplace(a, b, n) - p, c(n*log((1 - p)/2), 0))$root } else { uniroot(\(a) pnlaplace(a, b, n) - p, c(0, -n*log((1 - p)/2)))$root } } ``` --- To determine the minimum value of $a$ for $n=2$ (two reports), first specify $b$ and the desired confidence and power for the hypothesis test: ``` b <- 1 n <- 2L alpha <- 0.05 # type I error beta <- 0.1 # type II error ``` Next, find $x_0$ (the value of $s_0$ that will give the desired confidence level) and $x_1$ (the value of $s_1 - x_0$ that will give the desired power). 
$a$ is the average of $x_0$ and $x_1$: ``` # value of s0 to reject null hypothesis with 1 - alpha confidence (x0 <- qnlaplace(1 - alpha, b, n)) #> [1] 3.27181 # value of s1 - x0 needed for a test with 1 - beta power (x1 <- qnlaplace(1 - beta, b, n)) #> [1] 2.397277 (a <- (x0 + x1)/n) #> [1] 2.834543 ``` Check the type I and type II errors with a simulation: ``` mean(rowSums(matrix(rexp(1e6*n)*sample(c(-1, 1), 1e6*n, 1), 1e6, n)) > x0) #> [1] 0.050136 mean(rowSums(matrix(rexp(1e6*n)*sample(c(-1, 1), 1e6*n, 1), 1e6, n)) + n*a < x0) #> [1] 0.100319 ``` Similarly determine the minimum value of $n$ to achieve the desired confidence and power given $a$: ``` a <- 1 # find the lowest value of n that satisfies the specified errors (alpha, beta) # first find the lower and upper bound on n to pass to the solver n <- 2L while (1 - pnlaplace(qnlaplace(1 - alpha, b, n) - a*n, b, n) < 1 - beta) n <- n*2L (n <- ssanv::uniroot.integer(\(n) pnlaplace(qnlaplace(1 - alpha, b, n) - a*n, b, n) - beta, c(n/2L, n), step.power = log2(n) - 2)$root) #> [1] 17 ``` Again, check the errors with a quick simulation: ``` # value of s0 to reject null hypothesis with 1 - alpha confidence (x0 <- qnlaplace(1 - alpha, b, n)) #> [1] 9.574182 mean(rowSums(matrix(rexp(1e6*n)*sample(c(-1, 1), 1e6*n, 1), 1e6, n)) > x0) #> [1] 0.050118 mean(rowSums(matrix(rexp(1e6*n)*sample(c(-1, 1), 1e6*n, 1), 1e6, n)) + n*a < x0) #> [1] 0.099337 ``` --- These functions are capable of working with large $n$, but because of the use of nested solvers, the process of finding the minimum $n$ becomes noticeably slower as $n$ gets large: ``` a <- 0.01 system.time({ n <- 2L while (1 - pnlaplace(qnlaplace(1 - alpha, b, n) - a*n, b, n) < 1 - beta) n <- n*2L n <- ssanv::uniroot.integer(\(n) pnlaplace(qnlaplace(1 - alpha, b, n) - a*n, b, n) - beta, c(n/2L, n), step.power = log2(n) - 2)$root }) #> user system elapsed #> 32.29 0.47 32.80 n #> [1] 171277 ```
null
CC BY-SA 4.0
null
2023-05-16T21:57:01.597
2023-05-18T19:11:55.690
2023-05-18T19:11:55.690
214015
214015
null
616073
1
null
null
3
24
## Scenario Consider the following scenario in which you have some data $\mathcal D = \{(X_i,Y_i)\}_{i = 1,2,\dots, N}$ and a candidate model $M$ (for instance, for the conditional distribution $\text {Pr}(Y \vert X) = f_M(X,Y)$). You know the functional form of $f_M$ from previous studies, or perhaps you have selected it from an independent dataset $\mathcal D '$. Your present goal is to fit $f_M$ to the new dataset and perform some inference (say on the parameters that define $f_M$). You fit $f_M$ to the new dataset $\mathcal D$ and, after some model checking (e.g. checking residuals), you find out that your data violates some basic assumption of the original model $M$ - perhaps the $XY$ distribution has changed, or perhaps $M$ was not so well-specified to begin with. Moreover, the violation also suggests a new model $M'$ (functional form $f_{M'}\neq f_M$), that seems to perform much better on $\mathcal D$. At this point, you would like to fit the new $f_{M'}$ to the dataset $\mathcal D$ and perform inference within this improved model. To make things concrete: imagine you are fitting Ohm's law $V = RI$ on experimental data collected on Mars, and your model checks show that, on Mars, $V = RI + \kappa I ^4$. You now want to provide confidence intervals for $R$ and $\kappa$. --- ## The issue The improved model $M^\prime$ is a random object that depends on the data $\mathcal D$, which in general invalidates any naive inference that treats $M'$ as if it was fixed in advance. This is the usual problem of Selective Inference. However, there is a complication here with respect to the usual setting of Selective Inference: the concrete improvement $M \to M'$ was suggested by the data, but there was no a-priori fixed family of possible models - the space of possible choices $\mathcal M $ has an unknown (and somewhat hard to define) cardinality. --- ## Questions Does any formalism exist, which is able to deal with complex and ill-defined situations of this kind (if only putting loose bounds for some special cases)? Otherwise, what best practices would you recommend when reporting this kind of results?
Is valid inference possible after a model improvement suggested by model checking?
CC BY-SA 4.0
null
2023-05-16T22:19:06.040
2023-05-16T22:19:06.040
null
null
27275
[ "inference", "model-selection", "selectiveinference" ]
616074
1
null
null
1
22
I have a dataset where each row corresponds to a country and contains independent variables such as "per capita income" and "mean education status." Additionally, there are two variables of interest: "number of restaurants" and "number of restaurants with Michelin star." Our goal is to predict the ratio of "number of restaurants with Michelin star" to the total "number of restaurants" using the independent variables "per capita income" and "mean education status." However, it's important to note that the number of restaurants varies significantly among countries, ranging from as low as 3 to over 100. This raises the question of whether we should consider weighted regression based on the number of restaurants. Given that both the numerator and denominator are count variables, what regression model should I use?
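One commonly used option for this kind of outcome, added here purely as an illustration (the `countries` data frame and its column names are hypothetical, and this is not a claim about the single correct model), is a binomial GLM that treats the Michelin-starred restaurants as successes out of the total number of restaurants; this automatically gives countries with more restaurants more weight, which addresses the concern about varying denominators.

```
# Binomial GLM for "Michelin-starred restaurants out of all restaurants" per country
fit <- glm(cbind(michelin, restaurants - michelin) ~ income + education,
           family = binomial, data = countries)
summary(fit)
```

If the counts turn out to be overdispersed, a quasi-binomial or beta-binomial variant is a common refinement.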
Regression Model for a Ratio Outcome with Count Variables as Numerator and Denominator
CC BY-SA 4.0
null
2023-05-16T23:06:47.770
2023-05-16T23:06:47.770
null
null
380702
[ "regression", "generalized-linear-model", "count-data", "weighted-regression", "ratio" ]
616076
1
null
null
2
41
Let's say I already have a simple rule that tells me which variables determine the output label, and the relationship between the variables and the label is a rather simple and deterministic one. Can I still go ahead and build an ML model, for the reasons below? a) Automation b) Ranking the outcomes by their predicted probability rather than returning a binary yes or no from the rule-based system. What do you think? Is it worthwhile to use a simple random forest algorithm to do the above task? Our users want a likelihood rather than a hard yes or no. Every user has a different opinion on how to rank the outcomes, and it would be close to impossible to bring the users to a conclusion on how to rank them given their busy schedules. So, as an objective measure, can't we use ML to get probabilistic predictions even though the label is generated using a simple rule-based system?
Using ML on top of a simple rule-based system to obtain likelihoods
CC BY-SA 4.0
null
2023-05-16T23:53:59.167
2023-05-17T02:24:17.930
2023-05-17T02:24:17.930
241460
241460
[ "machine-learning", "probability", "classification", "random-forest", "data-mining" ]
616077
1
null
null
3
32
Summary: If we have an unbiased MLE $\widehat{\sigma_1}$ of an exponential distribution parameter, and the confidence intervals for its estimates are given by the $\chi^2$ distribution; and we find another unbiased estimator $\widehat{\sigma_2}$ with known relative efficiency to the first; do we have an expression for confidence intervals on the second estimator? My understanding of relative efficiency (RE) on i.i.d. samples is that RE = $\frac{\mathrm{Var}(\widehat{\sigma_1})}{\mathrm{Var}(\widehat{\sigma_2})} = \frac{n_1}{n_2}$ – meaning that the relative efficiency expresses the relative sample size of the second estimator needed to match the variance of the first. I.e., we should get the same confidence in $\widehat{\sigma_2}$ with $RE \times n_1$ as we have estimating $\widehat{\sigma_1}$ with $n_1$ samples. However, monte carlo simulations on a specific example show this is not the case! --- $Z \sim \mathrm{Rayleigh}(\sigma)$. The following is an unbiased MLE for $\sigma$ (where cG is a bias correction term [defined here](https://en.wikipedia.org/wiki/Rayleigh_distribution#Parameter_estimation)): $$\hat{\sigma}=c_G(n) \sqrt{\frac{\sum^n z_i^2}{2(n-1)}},$$ and since $\sum^n z_i^2 \sim \chi_{2(n-1)}^2,$ we can get confidence bounds on $\hat{\sigma}$ using the quantile function of the $\chi^2$ distribution. I have verified all of this with simulations. Now I have found another unbiased estimator $\widehat{\sigma^\ast}$ based on order statistics $z_{(m)}$. For example: When $n=10$ then $\sum z^2 \approx az^2_{(6)}+bz^2_{(9)}$ for particular coefficients of a and b. Simulation confirms what [literature](https://www.jstor.org/stable/1266424) says about the efficiency of this estimator: E.g., for $n=10$ the relative efficiency RE = $\frac{\mathrm{Var}(\hat{\sigma})}{\mathrm{Var}(\widehat{\sigma^\ast})} \approx$ 87%, which is confirmed by verifying $\frac{\mathrm{Var}(\hat{\sigma}, n=9)}{\mathrm{Var}(\widehat{\sigma^\ast}, n=10)} \approx 1$. Given all of the above, is there any expression for the confidence bounds on $\widehat{\sigma^\ast}$ for $n=10$? Based on the definition of RE, I expected that the confidence intervals on $\widehat{\sigma^\ast}$ should be given by $\chi_{2(87\% n-1)}^2$. However when I simulate this I find that the 95% confidence level is less than the true parameter 2.1% of the time, so too pessimistic. (The simulation on the optimal $\hat{\sigma}$ shows 95% level less than true parameter 5.0% of the time, as expected.) Thinking that the problem might be that I'm looking at the square root, I tried $\chi_{2(\sqrt{87\%} n-1)}^2$, but that gave 95% confidence level less than true parameter 3.8% of the time, so it's still too pessimistic. Meanwhile, using the same confidence formula as for $\hat{\sigma}$ gives 95% confidence level less than true parameter 6.1% of the time, so too optimistic.
Adjusting confidence interval of estimator by efficiency
CC BY-SA 4.0
null
2023-05-16T23:55:06.980
2023-05-23T17:33:53.390
2023-05-23T17:33:53.390
34792
34792
[ "confidence-interval", "estimators", "efficiency", "rayleigh-distribution" ]
616078
2
null
616076
1
null
There isn’t a probability of falling into a particular category. There are hard rules that yield the deterministic (synthetic) labels. Your job is to convey this to your customer. If you must, perhaps you can call the probabilities $0\%$ and $100\%$. This is legitimate, as those probabilities will correspond to events never and always happening, respectively. If you want to use a simpler rule or one that does not require you to measure data that are expensive to measure, then that is a different story, but you do not appear to have that.
null
CC BY-SA 4.0
null
2023-05-17T00:05:33.763
2023-05-17T00:05:33.763
null
null
247274
null
616079
1
null
null
3
71
If I understand correctly, estimating an unconditional/unadjusted treatment effect in principle means estimating a marginal effect, and vice versa. If so, I wonder whether the average treatment effect (ATE) at the population level is only a marginal effect estimate, or whether we can also estimate a conditional ATE by conditioning on covariates. I have searched a variety of resources but am still not sure about this, so I would really appreciate some clarity. I look forward to receiving your support.
Can we estimate the ATE (average treatment effect at the population level) using either marginal or conditional models?
CC BY-SA 4.0
null
2023-05-17T00:34:35.883
2023-05-19T19:46:20.777
null
null
332276
[ "marginal-effect", "conditional" ]
616080
1
616115
null
3
71
I'm a little lost on how to show how $X_{t}=\Phi(\frac{W_{t}}{\sqrt{T-t}})$ $0\leq t\leq T$, where $W_{t}$ is the usual Brownian Motion, is a Uniformly Integrable Martingale? My goal is to try and show $\mathbb{E}(X_{t}\mid \mathcal{F}_{s})=X_{s}$ for $s\leq t$, but my thought ends fairly quickly trying something like $\mathbb{E}(\Phi(\frac{W_{t}}{\sqrt{T-t}})\mid\mathcal{F}_{s})=\mathbb{E}(\Phi(\frac{W_{t}-W_{s}+W_{s}}{\sqrt{T-t}})\mid\mathcal{F}_{s})=\mathbb{E}(\Phi(\frac{W_{t}-W_{s}}{\sqrt{T-t}}+\frac{W_{s}}{\sqrt{T-t}})\mid\mathcal{F}_{s})$ Obviously I want to show the last term simplifies to $\Phi(\frac{W_{s}}{\sqrt{T-s}})=X_{s}$, but how should I proceed from the line above?
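A hint that may help from this point (added for clarity; it was not part of the original question): conditional on $\mathcal{F}_{s}$, write $W_{t}-W_{s}=\sqrt{t-s}\,Z$ with $Z\sim N(0,1)$ independent of $\mathcal{F}_{s}$, and use the standard identity $\mathbb{E}[\Phi(a+bZ)]=\Phi\left(\frac{a}{\sqrt{1+b^{2}}}\right)$ for constants $a,b$. Taking $a=\frac{W_{s}}{\sqrt{T-t}}$ and $b=\sqrt{\frac{t-s}{T-t}}$ gives $$\mathbb{E}\left(\Phi\left(\frac{W_{t}}{\sqrt{T-t}}\right)\mid\mathcal{F}_{s}\right)=\Phi\left(\frac{W_{s}/\sqrt{T-t}}{\sqrt{1+\frac{t-s}{T-t}}}\right)=\Phi\left(\frac{W_{s}}{\sqrt{T-s}}\right)=X_{s},$$ and uniform integrability is immediate because $0\leq X_{t}\leq 1$ for all $t$.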
Distribution Function of Standard Normal is a U.I. Martingale?
CC BY-SA 4.0
null
2023-05-17T01:18:07.080
2023-05-17T10:37:30.977
2023-05-17T02:24:48.087
8336
384698
[ "probability", "stochastic-processes", "martingale" ]
616081
2
null
7129
1
null
The goal is to make it easy for the reader to understand the important differences. Too many digits obscures the meaningful difference between values in a table. Too few leaves out important information. Here's a great discussion: [https://newmr.org/blog/how-many-significant-digits-should-you-display-in-your-presentation/](https://newmr.org/blog/how-many-significant-digits-should-you-display-in-your-presentation/) and here's a much more detailed analysis: [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4483789/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4483789/)
null
CC BY-SA 4.0
null
2023-05-17T01:52:02.650
2023-05-17T01:52:02.650
null
null
388139
null
616082
1
null
null
1
22
My question is about using UMAP as a dimensional reduction technique before HDBSCAN clustering. I have a dataset of ~5000 observations each with ~20 descriptors. According to HDBSCAN guidelines, HDBSCAN clusters decently on less than 50 dimensions, so in theory I should be able to cluster directly on the raw data. However, when I do so, HDBSCAN classifies about a quarter of observations as noise and the density-based cluster validation (DBCV) implementation in the HDBSCAN API gives a validity index of ~0.2. In contrast, clustering on a 2-dimensional UMAP embedding yields a validity index of ~0.8, which is much better. Additionally, all observations are clustered (e.g. there is no noise). I've experimented with UMAP embedding into intermediate dimensions and different HDBSCAN hyperparameters, such as 10-dimensions or 5-dimensions, but it seems that clustering on 2-dimensions yields a slightly better DBCV score than anything else. It just seems a little suspicious to me that there is absolutely no noise when clustering on the 2D UMAP embedding, although the clustering does pass the eye-test. Additionally, a consistent trend is that the lower the minimum cluster size hyperparameter for HDBSCAN (e.g., min_clust_size < 40), the higher the DBCV score, which sort of makes sense given that smaller min cluster sizes allow for more precise clustering, but it seems undesirable for data exploration to have 20 different clusters with only 10 members each. Does UMAP artificially eliminate noise? Is there a better metric than DBCV to evaluate cluster quality?
HDBSCAN on UMAP output
CC BY-SA 4.0
null
2023-05-17T01:59:10.817
2023-05-17T01:59:10.817
null
null
388133
[ "noise", "dbscan", "umap" ]
616083
1
null
null
2
36
I want to re-calculate the last column of Table 3 of [Attention is All You Need](https://arxiv.org/abs/1706.03762), i.e. the number of parameters in the models, but the numbers from my calculation do not match.
|Model |Params from Table 3 ($\times 10^6$) |My Calculation |
|-----|-----------------------------------|--------------|
|base |65 |63014912 |
|B (1) |58 |55937024 |
|B (2) |60 |58296320 |
|big |213 |214110208 |
My calculations are as follows. Number of parameters in each multi-head attention layer: $$ N_{att} = N(W^O) + (N(W_i^Q) + N(W_i^K) + N(W_i^V)) \times h $$ $$ = h \times d_v \times d_{model} + (d_{model} \times d_k + d_{model} \times d_k + d_{model} \times d_v) \times h $$ where $N(.)$ is the size (number of elements) of a matrix. Number of parameters in each Feed Forward Network: $$ N_{FFN} = 2 \times d_{model} \times d_{ff} + d_{model} + d_{ff} $$ Therefore, the total number of parameters is: $$ 2 N \times N_{FFN} + (N + 2 N) N_{att} + N_{voc} \times d_{model} $$ where $N_{voc}$ (the size of the vocabulary) is $37000$ according to section 5.1. Substituting all the numbers in Table 3 into the above formula, I get $63014912$ for the base model. If I let $N_{voc}$ be $40000$, then the numbers for the base and (A) ~ (E) models seem to match those in the table, but I get $217182208$ for the big model, while $213M$ is given in the table.
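For reference, here is a small R snippet (added for illustration) that simply evaluates the formulas above for the base configuration ($N=6$, $d_{model}=512$, $d_{ff}=2048$, $h=8$, $d_k=d_v=64$, $N_{voc}=37000$); it reproduces the $63014912$ figure in the "My Calculation" column.

```
# Base Transformer configuration from Table 3 of the paper
n_layers <- 6; d_model <- 512; d_ff <- 2048; h <- 8; d_k <- 64; d_v <- 64; n_voc <- 37000

# Parameters per multi-head attention layer and per feed-forward network
n_att <- h * d_v * d_model + (d_model * d_k + d_model * d_k + d_model * d_v) * h
n_ffn <- 2 * d_model * d_ff + d_model + d_ff

# 2N feed-forward blocks, 3N attention blocks, plus the shared embedding matrix
total <- 2 * n_layers * n_ffn + 3 * n_layers * n_att + n_voc * d_model
total   # 63014912
```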
Attention is All You Need: How to calculate the number of parameters of the models?
CC BY-SA 4.0
null
2023-05-17T02:13:16.447
2023-05-18T05:55:09.070
2023-05-18T05:55:09.070
388140
388140
[ "neural-networks", "natural-language", "llm" ]
616084
2
null
615987
0
null
There are a variety of clustering methods based on probability density estimation. Most famous among density-based clustering methods are DBSCAN (and HDBSCAN), which connect points within certain distance thresholds. There are also level-set clustering methods (such as DeBaCL) which estimate a tree of clusters based on varying the threshold $t$ for level set $L_t := \{x: \hat{p}(x) > t \}$. Furthermore, there are mode-based clustering methods (such as the mean-shift algorithm) which identify modes and use these to define clusters, partitioning the space into regions where the gradients of the KDE density lead to each given mode. The mean-shift algorithm, interestingly, can be used to identify not only modes, but also ridges, which is useful for identifying cosmic filaments from astronomical data. In fact, while mixture model clustering methods (eg KMeans, Gaussian Mixture Models) are more popular, density-based clustering methods are arguably statistically superior. Mixture models typically suffer from non-identifiability, and the distribution of the maximum-likelihood estimator is often hard to analyze. Meanwhile, the kernel density estimator has excellent and well-established statistical properties, so clustering methods based on it often carry stronger statistical guarantees. See Ryan Tibshirani's lecture notes on clustering for more on this topic: [https://www.stat.cmu.edu/~ryantibs/statml/lectures/clustering.pdf](https://www.stat.cmu.edu/%7Eryantibs/statml/lectures/clustering.pdf).
null
CC BY-SA 4.0
null
2023-05-17T02:37:28.633
2023-05-17T02:37:28.633
null
null
388141
null
616085
1
null
null
0
44
I have a sample of only 142 numbers from a distribution of 3852 numbers ranging from 0 to 53, but it is censored below 35 (The values ​​exist, but I don't have access.), so I have only the values in the right tail from 35 to 53. Is it possible to estimate what kind of distribution this is, with just these 142 numbers? And also calculate the mean, standard deviation (or other parameters) of this curve? Using Scipy, R, Matlab, or any method? ``` sample= [53,53,51,49,49,49,48,48,48,47,47,47,47,47,47,46,46,46,46,46,45,44,44,44,43,43,43,43,43,43,43,43,43,43,43,43,42,42,42,42,42,42,42,42,42,42,41,41,41,41,41,41,41,41,41,41,41,41,40,40,40,40,40,40,40,40,40,40,40,40,39,39,39,39,39,39,39,39,39,39,39,39,38,38,38,38,38,38,38,38,38,38,38,38,38,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,36,36,36,36,36,36,36,36,36,36,36,36,36,36,36,35,35,35,35,35,35,35,35,35,35,35,35,35,35,35,35] ```
Estimating a censored distribution curve
CC BY-SA 4.0
null
2023-05-17T02:53:54.367
2023-05-17T18:45:46.710
2023-05-17T18:45:46.710
388145
388145
[ "distributions", "curve-fitting", "truncation" ]
616086
2
null
616085
0
null
You can use the `fitdistrplus` package in R to fit some distributions. Here is an example. I start by plotting the data with a histogram. By inspection of the shape, and seeing that it's integer, I guess that it could be described by a Poisson distribution lower-truncated at 34. I then fit this using the `fitdist` function plus the general truncated Poisson distribution function in the `extraDistr` package. Finally, plotting it shows that the truncated Poisson fits the existing data well. ``` library(fitdistrplus) library(extraDistr) sample <- c(53,53,51,49,49,49,48,48,48,47,47,47,47,47,47,46,46,46,46,46,45,44,44,44 ,43,43,43,43,43,43,43,43,43,43,43,43,42,42,42,42,42,42,42,42,42,42,41,41 ,41,41,41,41,41,41,41,41,41,41,40,40,40,40,40,40,40,40,40,40,40,40,39,39 ,39,39,39,39,39,39,39,39,39,39,38,38,38,38,38,38,38,38,38,38,38,38,38,37 ,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,36,36,36,36,36,36,36,36,36 ,36,36,36,36,36,36,35,35,35,35,35,35,35,35,35,35,35,35,35,35,35,35) hist(sample, breaks = seq(-0.5, max(sample) + 0.5, by = 1)) fd <- fitdist( data = sample , distr = "tpois" , fix.arg = list(a = 34) , start = list(lambda = 35) # guess by inspection of histogram , discrete = TRUE ) summary(fd) plot(fd) ``` This returns a $\lambda \approx 36$, which means that if (and this is a big if, since there is no evidence to back this up) the data truly comes from a Poisson distribution, but all the data below 35 has been truncated, then from standard knowledge about the Poisson distribution, the mean and standard deviation of the ultimate generating distribution are both $\approx 36$.
null
CC BY-SA 4.0
null
2023-05-17T03:12:56.993
2023-05-17T03:12:56.993
null
null
369002
null
616088
1
null
null
1
9
I'm motivated here by a problem for robust Bayesian analysis. Let $l(Y|X)$ be the likelihood and let $\{p_\xi(X)\}$ be a parameterized family of prior distributions where $\xi$ denotes the hyperparameters. For example, they could be Gaussian distributions where the hyperparameters are different choices for the mean and covariance. For some choice of $\xi$ the unnormalized posterior is $q(X|Y) = l(Y|X)\cdot p_\xi(X)$. I want to see how the choice of $\xi$ affects the expectation $\mathbb{E}_q(f)$ for some function $f$. Now, suppose I am able to reasonably sample from each posterior with MCMC. But, I can't hope to do this for all the choices of $\xi$ that I'm interested in. So, instead I want to estimate the expectation using importance sampling for my range of targets. There are a lot of schemes for doing this. I am wondering if something basic could work, like running MCMC using a prior from the same family that is wide and covers all the other priors I am looking at.
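For what it is worth, the reweighting scheme described in the last paragraph can be sketched as follows in R (the Gaussian priors, the hyperparameter values, and the stand-in `draws` are assumptions for illustration). Because the target $q_\xi(X|Y) \propto l(Y|X)\,p_\xi(X)$ and the wide-prior posterior share the same likelihood, the likelihood cancels in the importance ratio, leaving only the prior ratio $p_\xi(x_i)/p_0(x_i)$; a self-normalized estimator then gives $\mathbb{E}_q(f)$, and the effective sample size indicates whether the wide-prior samples cover the narrower prior adequately.

```
# Self-normalized importance sampling across prior hyperparameters.
# draws: MCMC samples of X from the posterior under the wide prior p0.
reweight_mean <- function(draws, f, log_p_xi, log_p0) {
  lw  <- log_p_xi(draws) - log_p0(draws)   # likelihood cancels in the ratio
  w   <- exp(lw - max(lw))                 # stabilise before normalising
  ess <- sum(w)^2 / sum(w^2)               # effective sample size diagnostic
  list(estimate = sum(w * f(draws)) / sum(w), ess = ess)
}

# Hypothetical example with Gaussian priors on a scalar X
draws <- rnorm(5000, mean = 0.3, sd = 1)   # stand-in for real MCMC output
reweight_mean(draws, f = function(x) x,
              log_p_xi = function(x) dnorm(x, mean = 1, sd = 0.5, log = TRUE),
              log_p0   = function(x) dnorm(x, mean = 0, sd = 3,   log = TRUE))
```

A very small effective sample size for some $\xi$ would signal that the wide proposal is not adequate there and that a dedicated MCMC run is warranted.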
Importance sampling for a parameterized family of distributions using a wide distribution from the same family
CC BY-SA 4.0
null
2023-05-17T04:40:01.860
2023-05-17T04:40:01.860
null
null
388148
[ "bayesian", "markov-chain-montecarlo", "monte-carlo", "prior", "importance-sampling" ]
616089
1
null
null
1
11
I was trying to use masked image modeling in huggingface and I saw [ViTForMaskedImageModeling](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTForMaskedImageModeling) in their documentation but I did not understand how it reconstructs the original image `loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction` also, it doesn't reconstruct the original image correctly. It gives me noise. ``` url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k") model = ViTForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k") num_patches = (model.config.image_size // model.config.patch_size) ** 2 pixel_values = image_processor(images=image, return_tensors="pt").pixel_values # create random boolean mask of shape (batch_size, num_patches) bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool() outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction reconstructed_pixel_values = reconstructed_pixel_values.detach().numpy() reconstructed_pixel_values = np.transpose(reconstructed_pixel_values[0], (1, 2, 0)) plt.imshow(reconstructed_pixel_values) plt.show() ``` [](https://i.stack.imgur.com/VtsB3.jpg)
Why does ViTForMaskedImageModeling not reconstruct the original image correctly?
CC BY-SA 4.0
null
2023-05-17T04:45:11.890
2023-05-17T04:45:11.890
null
null
116480
[ "neural-networks", "python", "computer-vision", "transformers" ]
616090
1
null
null
0
20
I have 10 regions with facilities (a different number of facilities in each region) that test people for the presence of condition A. Some of these facilities were also included in a pilot project: there, a person who has condition A (a positive result) can additionally be tested for the presence of condition B. I want to calculate the coverage of condition B testing in each region, i.e. what percentage of people with condition A were also tested for condition B at the facilities included in the pilot project, in each region and in each given month (the ratio of the number of people tested for condition B to all people tested), and then compare the coverage in each region across different periods. As the total number of facilities in each region is different, and the number of facilities included in the pilot project is also different, how can I calculate a weighted percentage of condition B testing coverage, i.e. take the number of facilities into account when calculating the percentage? Demo table:
|Facility name |Condition A result |Included in the pilot project |Condition B result |Date tested for condition B |
|-------------|------------------|-----------------------------|------------------|---------------------------|
|Xy |Positive |1 |Positive |01/04/2023 |
|Np |Positive |0 | | |
Weighted average (percentage) depending on the number of facilities in each given region
CC BY-SA 4.0
null
2023-05-17T05:33:41.610
2023-05-23T13:33:26.423
null
null
375942
[ "percentage", "weights", "ratio", "weighted-mean" ]
616091
1
null
null
1
24
I understand that you would consider a multilevel or hierarchical linear mixed effects model when your data are nested, with multiple levels of grouping. In that case, I assume each observation belongs to a single group within the nested structure. However, what if a portion of the observations belong to multiple groups, even though the data are in general nested with specific levels? Can I still use a mixed effects model to control for the unobserved effects from the different levels? Or can I only add two dummy variables as fixed effects for the two different levels and ignore the impact of the higher level?
Multilevel model for nested data where observations can belong to multiple groups
CC BY-SA 4.0
null
2023-05-17T05:38:51.593
2023-05-17T06:32:45.573
2023-05-17T05:48:36.423
3277
305735
[ "multilevel-analysis", "cluster-sample" ]
616092
1
null
null
2
53
For example, suppose my experimental design requires the subjects to participate in two identical sessions at two different time points (factor 1). In each session, the subjects go through all three conditions (factor 2) and the score for each condition is measured. I am aware that if I want to account for both factors and their interaction, the code should be something like `aov(score ~ condition*timepoint + Error(subject/(condition*timepoint)))`. However, what if I am not interested in the effect of time, and the only reason the participants are required to complete 2 sessions is to collect more data points? (Let's assume performance won't improve over time.) In this scenario, should I ignore the time factor and just do `aov(score ~ condition + Error(subject/condition))`, which seems to be equivalent to having another group of participants, or should I do `aov(score ~ condition + Error(subject/(condition*timepoint)))`, which seems to account for time as a within-subject factor even though I am not treating it as an IV? Thank you for your help, I am really confused.
How to conduct a repeated measures ANOVA when there are two within-subject factors but I am only interested in the effect of one factor?
CC BY-SA 4.0
null
2023-05-17T05:50:39.157
2023-05-17T05:50:39.157
null
null
388153
[ "r", "mixed-model", "anova", "repeated-measures", "experiment-design" ]
616093
2
null
616063
0
null
This is (almost) precisely the correct approach. > this does not make too much sense cause the model has not yet seen the pattern on hour 23 Indeed. However, it has seen sales in hour 22 and in hour 0 at the very beginning of the day. I hope you are [not dummy-coding the hour, but using smooth transformations like Fourier terms](https://stats.stackexchange.com/a/478175/1352), so the model will likely be able to interpolate the cycles. > how do we expect it to perform well on hour 23? By using Fourier terms, as above. Alternatively, collect more data. Especially since you are re-fitting the model every day, you apparently have multiple days' worth of data - so just use all of them in a seasonal model. Then you will actually have observed hour 23 from a previous day. In this setup, it may make sense to use a full day of 24 hours for your evaluation, or run rolling origins for 24 different 1-hour-ahead forecasts. (Or anything in between, like 4 different 6-hour-ahead forecasts.) Note that hourly data often exhibits [multiple-seasonalities](/questions/tagged/multiple-seasonalities), i.e., the pattern in each day will likely differ between weekdays and weekends. [Here are some possible models that can deal with this.](https://dfep.netlify.app/sec-multiple-seasaonality.html) We have resources on forecasting here: [Resources/books for project on forecasting models](https://stats.stackexchange.com/q/559908/1352)
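As a minimal illustration of the Fourier-term idea with the forecast package (the `hourly_sales` vector and the choice of `K` are placeholders, not something specified in the question): encode the daily and weekly cycles as smooth regressors and fit a regression with ARIMA errors, so the model can interpolate across hours near the end of the training window.

```
library(forecast)

# hourly_sales: an assumed numeric vector of hourly sales for one store
y    <- msts(hourly_sales, seasonal.periods = c(24, 168))   # daily + weekly cycles
xreg <- fourier(y, K = c(6, 10))                            # smooth seasonal terms

fit <- auto.arima(y, xreg = xreg, seasonal = FALSE)
fc  <- forecast(fit, xreg = fourier(y, K = c(6, 10), h = 24))  # next 24 hours
```

The same Fourier columns can of course be fed as features to any other learner if an ARIMA error structure is not wanted.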
null
CC BY-SA 4.0
null
2023-05-17T06:24:06.283
2023-05-17T06:24:06.283
null
null
1352
null
616094
2
null
616091
0
null
Ok, I think you want a model that has two sets of non-nested levels: a random physician effect and a random hospital effect. Whether it's a hierarchical model or not depends on your definition (and, I suppose, software), but it's a reasonably straightforward mixed model. For example, in R's lme4 you could have a model specified by ``` y ~ x1 + x2 + x3 + x4 + (1|physician) + (1|hospital) ``` where `x1`-`x4` are predictors that could be at the patient, physician, or hospital level and `physician` and `hospital` distinguish the physicians and hospitals. You could also describe the same model as a Bayesian linear or generalised linear model with shrinkage priors on the physician and hospital factors and flat or weakly informative priors on the coefficients of $x_1\dots x_4$; you could fit it with Stan or JAGS, and probably with one of the more user-friendly interfaces like `rstanarm` or `brms`. I don't know whether any of the software that describes mixed models hierarchically and uses maximum likelihood (eg `HLM`) will fit these non-nested models, but someone else might comment.
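For completeness, a minimal sketch of the Bayesian version mentioned above using `brms` (the `patients` data frame and variable names are placeholders); `brms` accepts the same crossed random-effects formula syntax as lme4 and puts priors on the group-level standard deviations.

```
library(brms)

# Crossed (non-nested) random intercepts for physician and hospital
fit <- brm(y ~ x1 + x2 + x3 + x4 + (1 | physician) + (1 | hospital),
           data = patients, family = gaussian())
summary(fit)
```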
null
CC BY-SA 4.0
null
2023-05-17T06:32:45.573
2023-05-17T06:32:45.573
null
null
249135
null
616095
1
null
null
1
14
Assuming I have ROC-AUC for multiple subsets of samples, is there a way to produce a "weighted" average of AUC or something that will represent/approximate the AUC for the complete population of the samples?
Combination of AUC for different subset of samples
CC BY-SA 4.0
null
2023-05-17T06:36:20.670
2023-05-17T06:57:43.850
2023-05-17T06:57:43.850
211655
211655
[ "roc", "auc" ]
616096
1
616097
null
5
329
I know that when validating we are interested in knowing how the model performs in real-world scenarios, so we want the class ratios during validation/test to be the original ones. Say, however, that we are performing some kind of parameter search/optimization. If we are comparing different possible configurations, I guess we should never use validation loss to compare the models, since if this loss is not weighted, the minority class has little "representation" and therefore we could be choosing a model that is performing good on the majority class but not so good on the minority one. We should instead use a metric that considers both classes equally. Is that right? I believe this reasoning would not only apply to comparing models but also for lr schedulers that take into account validation metrics. Torch's ReduceLROnPlateau uses a validation metric to adjust learning rate. In their examples they use validation loss but for the same reason I just stated, I believe this might not be the best idea when we have data imbalance. I know there are posts that somehow answer this but I have not found any that argues about model comparison or lr scheduling using validation loss.
When dealing with data imbalance, shouldn't we avoid comparing models based on validation loss, or at least weight it?
CC BY-SA 4.0
null
2023-05-17T07:11:44.373
2023-05-17T07:28:31.673
null
null
386354
[ "optimization", "loss-functions", "unbalanced-classes", "validation", "model-comparison" ]
616097
2
null
616096
7
null
You should use a loss that accurately reflects the "real world loss" you are trying to minimize by using your model (in the context of subsequent decisions). Then the "problem" disappears, or more precisely, never is a problem. Suppose you have a rare disease, with an incidence of one in a hundred, but which is fatal. If you use a loss that does not account for the difference in consequences or costs, like [accuracy](https://stats.stackexchange.com/q/312780/1352), your model will be tempted to label all instances as negative. However, once you do include a much larger loss if an instance is incorrectly labeled "healthy" (a false negative) than for a false positive etc., you are actually comparing apples to apples, and the rarity of the target class in the validation sample is outweighed by the severity of the costs we incur on these cases by misclassifying them. (Of course, your dataset needs to be large enough so you actually do have some instances of the target class in the validation sample.) You may find this thread interesting: [Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?](https://stats.stackexchange.com/q/357466/1352)
null
CC BY-SA 4.0
null
2023-05-17T07:20:17.970
2023-05-17T07:20:17.970
null
null
1352
null
616098
2
null
616096
4
null
Answering you with a question: if not validation loss then what? Certainly, the training metrics won't be any better here. The desirable scenario is that your validation set resembles the real-world data that you will see in prediction time. In such a case, if the real-world data is equally imbalanced, the performance on the validation set would be similar to the prediction time. If the proportion of the minority group in the validation set is different than in the real-world data, you can use weighted loss, as you noticed. However there are many myths and ways of handling imbalanced data, so you can read in more detail about them in other questions tagged as [unbalanced-classes](/questions/tagged/unbalanced-classes). Finally, I'm not sure if this is what you mean, but a completely different problem is if what you aim for is having a model that is equally good for all the groups. This is a slightly different problem, since as [Simpson's paradox](https://en.wikipedia.org/wiki/Simpson%27s_paradox) shows, such a model does not need to necessarily work best for all the data, so you would need to decide if you care more about overall performance or the within-group performances. Again, this could be achieved by picking a loss function that reflects the problem you are trying to solve.
null
CC BY-SA 4.0
null
2023-05-17T07:28:31.673
2023-05-17T07:28:31.673
null
null
35989
null
616100
2
null
616069
1
null
Model selection can begin to overfit when too many models are tried and they become too complex for the small dataset being used. A warning sign is when the chosen models begin to fit the noise rather than the real pattern. It is hard to say in advance exactly how many models can safely be compared. However, an idea analogous to "[early stopping](https://en.wikipedia.org/wiki/Early_stopping)" can be used in conjunction with monitoring validation scores: stop trying new models once the validation scores stop improving. Beyond that, always aim for the simplest solution. Overfitting is more likely to occur with datasets of fewer than 1000 labels, so using simpler models is recommended; Occam's razor is a sensible safeguard here. It's not a perfect science, after all. It's a matter of trying things out, gaining insight, and making adjustments as you go.
null
CC BY-SA 4.0
null
2023-05-17T07:53:29.237
2023-05-17T14:42:51.340
2023-05-17T14:42:51.340
35791
387520
null
616101
1
null
null
1
19
I am a beginner in time series forecasting using ML, and I am encountering a strange phenomenon. I have air quality data, in which I have information on various pollutants. The goal is to predict the AIR QUALITY INDEX (AQI). There are about 10 pollutants and various weather features. The AQI feature (ground truth) has been created from the dataset itself (ignore why this is so), but note that two features, PM10 and PM2.5, are highly correlated with the target AQI once it is created (0.8 & 0.81 respectively). Also note that this means that AQI at time (t) is being created from pollutant & weather information at time (t). For 1-step forecasting, I have shifted the AQI column 1 step vertically, so now it becomes a problem where the ML model must learn to predict 1-day-ahead AQI using data available at time t. I used a random forest for forecasting. Also note that I created additional features such as lags of AQI by 1 day and 2 days, and rolling 30-day and 7-day averages. I have not used the rolling mean or standard deviation of PM2.5 & PM10. The strange phenomenon: the forecasts are lagging. [](https://i.stack.imgur.com/Rymxn.png) I know this may be due to the underlying process being a random walk, but I highly doubt it. The actual AQI for 6 years looks like that. Any ideas why this might be happening?
Need help in random forest time series forecast
CC BY-SA 4.0
null
2023-05-17T07:56:22.813
2023-05-17T07:56:22.813
null
null
388164
[ "machine-learning", "time-series", "forecasting", "random-forest" ]
616102
2
null
616022
2
null
There are 14 basis functions here, not 15; one is removed when the identifiability (sum-to-zero) constraint is applied to the basis. All of the weights (coefficients) for the basis functions are shrunk to some extent if the smooth is penalized. As there are 14 basis functions there will always be 14 coefficients associated with this smooth regardless of the amount of shrinkage. If the smoothing parameter, $\lambda$, is sufficiently high, those coefficients will be shrunk to be effectively zero. However, there is no reason to presume that 9 or 10 of the coefficients (in your case) will be all shrunk to effective zero. The penalty is controlling the wiggliness of the estimated smooth and that can counterintuitively require non-zero values of all the coefficients to achieve a fit that uses 4 to 5 effective degrees of freedom (EDF). The point that the help page is making is that the estimated function (the page uses the word "term") can be shrunk towards a constant function at large $\lambda$ values rather than towards a linear function. So, there is certainly shrinkage going on; the smooth in your model uses ~4 EDF when it could have used 14 EDF: ``` library("mgcv") library("gratia") m_pen <- gam(mpg ~ s(disp, bs = "ts", k = 15), data = mtcars, family = gaussian(), method = "REML") m_unpen <- gam(mpg ~ s(disp, bs = "ts", k = 15, fx = TRUE), data = mtcars, family = gaussian(), method = "REML") cs <- compare_smooths(m_pen, m_unpen) draw(cs) ``` [](https://i.stack.imgur.com/lnD8a.png) Clearly the estimated smooth has been shrunk away from the unpenalised fit. However, because of the parameterisation being used for the smooth and the wigglines penalty it's not easy to see where the shrinkage has taken place in terms of the model coefficients. In an alternate parametrisation, the so-called natural parameterization, the EDF of the individual basis functions that comprise a smooth are on a scale that demonstrates the shrinkage, but that requires you to change (reparameterise) the basis functions. If we draw the basis functions weighted by the model coefficients, for both models, you'll get some idea of where the shrinkage has happened: ``` bs_pen <- m_pen |> basis() |> draw() bs_unpen <- m_unpen |> basis() |> draw() library("patchwork") bs_pen + bs_unpen + plot_layout(ncol = 2) ``` [](https://i.stack.imgur.com/Xamp0.png) It's clear here that many of the basis functions have been shrunk towards zero functions. Many of the coefficients are close to zero here ``` > coef(m_pen) (Intercept) s(disp).1 s(disp).2 s(disp).3 s(disp).4 s(disp).5 20.09062500 -7.78783043 -0.12857499 -1.83287539 -0.17948960 -0.05298061 s(disp).6 s(disp).7 s(disp).8 s(disp).9 s(disp).10 s(disp).11 0.66157030 0.40009721 0.42537796 0.17876507 0.45666967 0.32813318 s(disp).12 s(disp).13 s(disp).14 -0.43256068 -2.75386484 -14.07700587 ``` indicating low weights for certain functions.
null
CC BY-SA 4.0
null
2023-05-17T08:13:46.987
2023-05-17T08:13:46.987
null
null
1390
null
616103
1
null
null
0
48
I'm trying to implement a Neural Collaborative Filtering recommender system using Keras; the dataset I'm using is movielens-small. Whatever I do to the hyperparameters or the network, during training the training loss (MAE) decreases nicely, but the validation loss (which always starts lower than the training loss?) stays in place or rises slightly. [](https://i.stack.imgur.com/20Rdy.png) [](https://i.stack.imgur.com/4PUo0.png) In a few example implementations of this recommender system the validation loss looks similar: [https://keras.io/examples/structured_data/collaborative_filtering_movielens/](https://keras.io/examples/structured_data/collaborative_filtering_movielens/) [](https://i.stack.imgur.com/FIkXa.png) What I'm asking is how it is possible, and whether it makes sense, that the validation loss is lower than the training loss at epoch 1 and doesn't decrease any more after that. Here is the code of my network:

```
# https://files.grouplens.org/datasets/movielens/ml-latest-small.zip
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Embedding, Flatten, Input, Dot, Dropout, Dense, BatchNormalization, Concatenate
from tensorflow.keras.optimizers import Adam
from tensorflow import keras

ratings_df = pd.read_csv('./ml-latest-small/ratings.csv', header=0, names=['user_id', 'movie_id', 'rating', 'timestamp'])

# Changing id of movies to 0...n
fixed_movie_id_list = list(ratings_df["movie_id"])
old_to_new_id_dict = dict()
new_index = 0
for index, movie_id in enumerate(fixed_movie_id_list):
    if old_to_new_id_dict.get(movie_id) == None:
        old_to_new_id_dict[movie_id] = new_index
        fixed_movie_id_list[index] = new_index
        new_index += 1
    else:
        fixed_movie_id_list[index] = old_to_new_id_dict[movie_id]

ratings_df["old_movie_id"] = ratings_df["movie_id"]
ratings_df["movie_id"] = fixed_movie_id_list
ratings_df["user_id"] = ratings_df["user_id"].apply(lambda x: x-1)
ratings_df = ratings_df.reset_index(drop = True)
ratings_df["rating"] = MinMaxScaler(feature_range=(0,1)).fit_transform(ratings_df[["rating"]])

train, test = train_test_split(ratings_df, test_size=0.2, stratify=ratings_df['user_id'], random_state=1)

users_len = len(ratings_df.user_id.unique())
movies_len = len(ratings_df.movie_id.unique())

movie_embedding = 50
user_embedding = 50

input_movie = Input(shape=[1], name='input-movie')
input_user = Input(shape=[1], name='input-user')

mf_movie_embedding = Embedding(input_dim = movies_len + 1, output_dim = movie_embedding, name='mf_movie_embedding')(input_movie)
mf_user_embedding = Embedding(input_dim = users_len + 1, output_dim = user_embedding, name='mf_user_embedding')(input_user)
mf_movie_flatten = Flatten(name='mf_movie_flatten')(mf_movie_embedding)
mf_user_flatten = Flatten(name='mf_user_flatten')(mf_user_embedding)
mf_output = Dot(axes=1)([mf_movie_flatten, mf_user_flatten])

mlp_movie_embedding = Embedding(input_dim = movies_len + 1, output_dim = movie_embedding, name='mlp_movie_embedding')(input_movie)
mlp_user_embedding = Embedding(input_dim = users_len + 1, output_dim = user_embedding, name='mlp_user_embedding')(input_user)
mlp_movie_flatten = Flatten(name='mlp_movie_flatten')(mlp_movie_embedding)
mlp_user_flatten = Flatten(name='mlp_user_flatten')(mlp_user_embedding)
mlp_concatenate = Concatenate(axis=1)([mlp_movie_flatten, mlp_user_flatten])
mlp_concatenate_dropout = Dropout(0.2)(mlp_concatenate)
mlp_dense_1 = Dense(32, activation='relu', name='mlp_dense_1')(mlp_concatenate_dropout)
mlp_batch_norm_1 = BatchNormalization(name='mlp_batch_norm_1')(mlp_dense_1)
mlp_dropout_1 = Dropout(0.2)(mlp_batch_norm_1)
mlp_dense_2 = Dense(16, activation='relu', name='mlp_dense_2')(mlp_dropout_1)
mlp_batch_norm_2 = BatchNormalization(name='mlp_batch_norm_2')(mlp_dense_2)
mlp_dropout_2 = Dropout(0.2)(mlp_batch_norm_2)
mlp_output = Dense(8, activation='relu', name='mlp_output')(mlp_dropout_2)

mf_mlp_concat = Concatenate(axis=1)([mf_output, mlp_output])
output = Dense(1, name='output', activation='relu')(mf_mlp_concat)

NeuCF_model = Model([input_user, input_movie], output)
NeuCF_model.compile(optimizer=Adam(), loss='mean_absolute_error')
history = NeuCF_model.fit([train.user_id, train.movie_id], train.rating, epochs=10, validation_data=[[test.user_id, test.movie_id], test.rating])

import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('NeuCF_model MAE loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
Is it ok to have low validation loss from the first epoch?
CC BY-SA 4.0
null
2023-05-17T08:40:05.280
2023-05-18T10:35:22.830
2023-05-17T13:58:39.763
388162
388162
[ "neural-networks", "python", "keras", "recommender-system" ]
616104
1
616116
null
5
193
I was wondering if there were some good references on communicating the results of data/statistical analysis to stakeholders or laypeople. I'm not necessarily looking for [references related to data visualization](https://stats.stackexchange.com/questions/261752/whats-a-good-book-or-reference-for-data-visualization), even if they may be useful. In fact, I'd be particularly interested in something about the use of language when presenting statistical results, in particular regarding the balance between expressing a statement correctly and making it accessible to laypeople.
Good references on communicating the results of a statistical analysis to laypeople or non-expert stakeholders?
CC BY-SA 4.0
null
2023-05-17T08:42:13.523
2023-05-17T14:42:02.817
2023-05-17T14:42:02.817
44269
164936
[ "references", "reporting", "communication" ]
616105
2
null
292291
0
null
Other alternatives to regularization:

- Using ensemble methods.
- Oversampling and data augmentation.
- Combining the variables into new ones, for example with PCA (a short sketch follows this list).
- Adding random noise at every step of the optimization.
- Smoothing the data.
- Dropout: typically used with NNs, but it could also be applied to the covariates.
- Standardization, which usually improves the results.
- Using a Bayesian prior, which is equivalent to regularization.
- Replacing categorical fixed effects (with many levels) with random effects, because this reduces the effective number of parameters.
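A minimal sketch of the PCA idea only, on simulated data (a real analysis would need to choose the number of components more carefully):

```r
# Replace many correlated covariates with a few principal components, then fit
# the model on the reduced set; this caps the number of fitted parameters.
set.seed(1)
n <- 100
X <- matrix(rnorm(n * 10), nrow = n)      # 10 covariates
X[, 2] <- X[, 1] + rnorm(n, sd = 0.1)     # make two of them nearly collinear
y <- X[, 1] - X[, 3] + rnorm(n)
pcs <- prcomp(X, scale. = TRUE)           # principal components of the covariates
X_red <- pcs$x[, 1:3]                     # keep only the first 3 components
summary(lm(y ~ X_red))                    # fewer parameters than lm(y ~ X)
```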
null
CC BY-SA 4.0
null
2023-05-17T08:52:41.380
2023-05-17T08:52:41.380
null
null
23802
null
616106
1
null
null
0
5
This might be a basic question, but I was wondering if it's possible to create a score that tells how confident we can be in a certain dataset - maybe a score that tells how likely we are to estimate the mean of the data with good accuracy. Assuming the dataset is normally distributed, can we use the number of rows, the number of features (and maybe even the data itself) to assign it a score (ranging from 0 to 1) that tells how confident we are in the performance of an ML algorithm (e.g. LR) trained on this data?
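To make the question more concrete, one ingredient I imagine such a score could build on is how precisely the mean can be estimated from the sample size alone - a sketch, assuming roughly normal data:

```r
# Half-width of a 95% t-interval for the mean, given the sample sd s and the
# number of rows n; it shrinks roughly like 1/sqrt(n).
ci_half_width <- function(s, n, level = 0.95) {
  qt(1 - (1 - level) / 2, df = n - 1) * s / sqrt(n)
}
ci_half_width(s = 1, n = c(30, 300, 3000))
```

Could something like this be combined with the number of features into a single 0-1 score?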
Confidence calculation in the performance of an ML algorithm knowing the sample size and number of features?
CC BY-SA 4.0
null
2023-05-17T08:56:06.497
2023-05-17T08:56:06.497
null
null
243855
[ "statistical-significance", "confidence-interval", "estimation" ]
616107
1
null
null
1
16
Context: I'm training a classifier on some fraud data. Only a chunk of the data is labeled (~2000 samples), so I'm trying a self-training approach. What I'm doing for now is iteratively training a model, labeling the unlabeled samples, and feeding the samples where the model is most sure about the label into the training set for a new model. It worked for me: it improved my model's performance on a holdout set. My questions: Is there a better way, or other things to try? I only found literature using this approach in a deep learning context, and I've been wondering if there is work on this for tabular data. I was also wondering if there is a way to inject noise into the data or the model (I'm using CatBoost), as is usually done in deep learning (image augmentation, dropout, ...); in deep learning this helps the new model be different from the old model and enforces invariances in the decision function.
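To make the current approach concrete, this is roughly the loop I'm using, sketched with a plain logistic regression standing in for CatBoost (`labelled` and `unlabelled` are hypothetical data frames; `y` is the 0/1 label, absent from `unlabelled`):

```r
# Confidence-threshold self-training: pseudo-label only the cases the current
# model is very sure about, then refit on the enlarged training set.
self_train <- function(labelled, unlabelled, threshold = 0.95, rounds = 3) {
  fit <- NULL
  for (r in seq_len(rounds)) {
    fit <- glm(y ~ ., data = labelled, family = binomial())
    p <- predict(fit, newdata = unlabelled, type = "response")
    confident <- p > threshold | p < 1 - threshold
    if (!any(confident)) break
    pseudo <- unlabelled[confident, , drop = FALSE]
    pseudo$y <- as.integer(p[confident] > 0.5)   # pseudo-labels
    labelled <- rbind(labelled, pseudo)
    unlabelled <- unlabelled[!confident, , drop = FALSE]
  }
  fit
}
```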
Is there a better way to self-train on tabular data?
CC BY-SA 4.0
null
2023-05-17T08:57:36.423
2023-05-17T08:58:05.953
2023-05-17T08:58:05.953
388168
388168
[ "machine-learning", "neural-networks", "catboost", "self-supervised-learning" ]
616108
1
616122
null
3
119
Wikipedia says that for given numbers $\{x_i\}_{i=1}^{n}$ drawn from a half-normal distribution, the variance of that distribution can be estimated by the sample variance $\hat\sigma^2 = \frac{1}{n} \sum_{i=1}^{n}{x_{i}^{2}}$. The bias-corrected estimator is written as $\hat{\sigma}_{\text{mle}}^{*}=\hat{\sigma}_{\text{mle}}-\hat{b},$ where $b\equiv \operatorname{E}\big[\hat{\sigma}_{\mathrm{mle}}-\sigma\big]=-\frac{\sigma}{4n}$. How can I derive the expression for the bias correction, and how can it be calculated for given numbers $x_i$? Is it simply $\hat\sigma^{\ast} = \hat\sigma \left(1+\frac{1}{4n}\right)$?
How to estimate bias-corrected variance of a half-normal distribution?
CC BY-SA 4.0
null
2023-05-17T09:03:24.590
2023-05-17T13:09:56.830
null
null
382894
[ "normal-distribution", "variance", "bias-correction" ]
616109
1
null
null
0
20
I'm working in an unfamiliar Bayesian context here, so apologies if my terminology isn't entirely correct! Imagine I'm trying to predict the performance of players on a five-a-side football team. The team has seven players, and in any given game only five play. My outcome variable is the goal difference in the match, and the input data are the players who played, plus other controls. I can use dummy variables to create a binary variable for each player and use them as predictors in the model - but what I really want to do is take advantage of partial pooling to more efficiently estimate the effect of each player. I'd know how to do this if there were only one player per match - then I'd just be creating a multilevel model with 'player' as a categorical variable - but I don't know how (or if) I can do this when there are multiple players per game. Does anyone have any thoughts about this, or references where I can read more?
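To illustrate the kind of structure I'm imagining (not working code for my actual data - `matches`, `home` and the `player1`-`player5` columns are made up), would a multi-membership random effect, e.g. via `mm()` in brms, be the right tool here?

```r
library(brms)
# One row per match; player1..player5 identify who was on the pitch.
fit <- brm(
  goal_diff ~ home + (1 | mm(player1, player2, player3, player4, player5)),
  data = matches,
  family = gaussian()
)
summary(fit)
```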
How to structure a multi-level model for a five-a-side football problem
CC BY-SA 4.0
null
2023-05-17T09:19:36.560
2023-05-17T09:19:36.560
null
null
341061
[ "hierarchical-bayesian" ]
616110
1
null
null
0
8
My experiment is based on quantifying levels of vitamin B12 in a sample of patients selected retrospectively based on the values B12 can take (under our criteria):

- normal
- low levels: below the physiological or normal reference values, but not low enough to be considered deficiency
- deficiency (which is associated with poor nutrition, gastric disease, ...). This data is excluded.

We are interested in the low levels. Our hypothesis is that mutations in 10 genes can be associated with these physiologically low levels. The low levels are recorded as a maximum and a minimum value of B12. Each gene can take 3 possible categories:

- wt: absence of mutation
- het: heterozygous (one mutated copy)
- hom: homozygous (fully mutated)

Hypotheses:

- I would like to test something like a cumulative mutational burden, i.e. whether multiple het or hom states across multiple genes can be additive. I don't know how to approach this, because there can be many combinations, and maybe combination 1 has a stronger association with low levels than combination 2. Any ideas on how to carry this out? I don't really know whether the mutations act additively or in opposite directions. A concrete sketch of what I mean is below.
- I'm not sure whether some clustering or dimension-reduction approach can be helpful here, like PCA, Euclidean distance or linear discriminant analysis. Any advice is welcome.

My idea is to summarise the levels of B12 for each gene and look for possible differences in a boxplot, and perhaps look for some association (or correlation), but I have doubts. Any idea or pointer to a previous experiment like this would be tremendously welcome. Thank you so much!

[](https://i.stack.imgur.com/Pyfd3.png)
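For the first hypothesis, the concrete starting point I had in mind is below (column names are invented; wt/het/hom coded as 0/1/2 and summed into a single burden score). Does this make sense as a first pass?

```r
# df is a hypothetical data frame with columns gene1..gene10 (values "wt",
# "het", "hom") plus b12_min and b12_max for each patient.
geno_cols <- paste0("gene", 1:10)
code <- c(wt = 0, het = 1, hom = 2)
df$burden <- rowSums(sapply(df[geno_cols], function(g) code[as.character(g)]))
df$b12_mid <- (df$b12_min + df$b12_max) / 2     # midpoint of the recorded range
summary(lm(b12_mid ~ burden, data = df))        # simple additive-burden model
boxplot(b12_mid ~ burden, data = df)
```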
Calculating the influence of multiple mutations on the quantitative levels of a protein
CC BY-SA 4.0
null
2023-05-17T09:21:02.787
2023-05-17T09:21:02.787
null
null
339186
[ "r", "biostatistics", "permutation-test", "genetics" ]
616112
1
null
null
1
23
Do we need to distinguish which variable is independent and which is dependent when doing a univariate analysis of variance to test whether the means are equal? According to my understanding, the ANOVA test can be performed when the group variable is deemed the independent variable. My colleague believes that as long as we have one continuous and one categorical variable, we can conduct an ANOVA to determine whether the means across the levels of the group/categorical variable are statistically distinct. From my perspective, however, we must ensure that the group/categorical variable is the independent variable. For example, if one variable is cholesterol and the other is BMI level, then since clinically cholesterol can affect BMI levels, ANOVA cannot be conducted here and simple logistic regression should be employed instead. Please give me some suggestions.
In analysis of variance, should we distinguish between independent and dependent variables?
CC BY-SA 4.0
null
2023-05-17T09:56:30.977
2023-05-17T11:35:34.260
2023-05-17T11:22:51.423
22047
383609
[ "anova" ]
616113
1
null
null
0
24
I have two numeric values (death rates), each without a confidence interval, and I want to calculate the CI of the ratio between the two values. I was using the MOVERR method developed by Donner & Zou, but this is only applicable to values that come with confidence intervals. [https://rdrr.io/cran/pairwiseCI/man/MOVERR.html](https://rdrr.io/cran/pairwiseCI/man/MOVERR.html) (This is the information on the function and package) [https://pubmed.ncbi.nlm.nih.gov/20826501/](https://pubmed.ncbi.nlm.nih.gov/20826501/) (This is the link to the article) Is there a way to calculate the ratio's CI from two values without CIs, using `R`? An example would be: the annual average mortality rate of virus A = 2.2, the annual average mortality rate of virus B = 12.2, and I want to calculate the ratio of A to B with a 95% confidence interval. Thank you.
Confidence interval for ratio between two values without confidence intervals
CC BY-SA 4.0
null
2023-05-17T10:04:34.630
2023-05-18T02:36:31.633
2023-05-18T02:36:31.633
388172
388172
[ "r", "confidence-interval", "ratio" ]
616115
2
null
616080
5
null
Let $f(t,x) := \Phi\left(\frac{x}{\sqrt{T-t}}\right) = \Phi\circ g(t,x)$. By applying the chain rule we get
$$\frac{\partial f}{\partial x}(t,x) = \frac{1}{\sqrt{T-t}}\phi\left(\frac{x}{\sqrt{T-t}}\right) $$
$$\frac{\partial^2 f}{\partial x^2}(t,x) = \frac{1}{T-t}\phi'\left(\frac{x}{\sqrt{T-t}}\right) =\frac{-x}{(T-t)^{3/2}}\phi\left(\frac{x}{\sqrt{T-t}}\right) $$
$$ \frac{\partial f}{\partial t}(t,x) = \frac{x}{2(T-t)^{3/2}}\phi\left(\frac{x}{\sqrt{T-t}}\right)$$
where $\phi$ denotes the standard normal PDF, [whose derivative is known](https://math.stackexchange.com/a/461154/857384). Now your process $X_t$ is given by $X_t = f(t,W_t)$, hence by [ItΓ΄'s lemma](https://en.wikipedia.org/wiki/It%C3%B4%27s_lemma#Mathematical_formulation_of_It%C3%B4%27s_lemma), we have that
$$\begin{align*} dX_t = df(t,W_t) &= \left(\frac{\partial f}{\partial t} + 0 + \frac 1 2\frac{\partial^2 f}{\partial x^2} \right) dt + \frac{\partial f}{\partial x} dW_t\\ &=0 + \frac{\partial f}{\partial x} dW_t\end{align*} $$
Or, in other words, $X_t$ can be written for all $0\le t< T$ as
$$ X_t = X_0 + \int_0^t \frac{\partial f}{\partial x}(s,W_s)\ dW_s $$
Now we have the following
> Theorem: if $(C(\omega,t))_t$ is adapted to the natural filtration of $(W_t)$, is progressively measurable, and satisfies $\mathbb{E}\left(\int_0^{T^*} C(\omega,s)^2\,ds\right) < \infty$, then the process $Z_t := \int_0^t C(\omega,s)dW_s$ is a square-integrable martingale on $[0,T^*]$

(see [this](https://almostsuremath.com/2009/12/06/martingales-and-elementary-integrals/) and [this blog post](https://almostsuremath.com/2010/03/25/preservation-of-the-local-martingale-property/) by George Lowther and links within for a proof and additional references). As the assumptions of the theorem are satisfied in our case, it follows that $X_t$ is indeed a martingale on $[0,T^*]$ for all $T^*< T$ (note that we cannot include $T$ in the interval, since the integral explodes as $t$ approaches $T$, making the theorem's assumption fail). The uniform integrability follows immediately from the square integrability of $(X_t)$; indeed, for all $t\in[0,T^*]$ and any measurable set $A$, we have by Cauchy-Schwarz:
$$\mathbb E[|X_t|\mathbf 1_A]\le \|X_t\|_2 \sqrt{\mathbb P(A)} \le \sup_{0\le t\le T^*}\|X_t\|_2 \sqrt{\mathbb P(A)} $$
which, since the first factor is bounded, uniformly goes to $0$ as $\mathbb P(A)$ goes to zero.
null
CC BY-SA 4.0
null
2023-05-17T10:15:59.373
2023-05-17T10:37:30.977
2023-05-17T10:37:30.977
305654
305654
null
616116
2
null
616104
6
null
I picked up lots of useful tips from The Art of Statistics by [David Spiegelhalter](https://www.statslab.cam.ac.uk/%7Edavid/). I think he does an exceptional job of communicating some very abstract concepts without flattening the nuance in the process.
null
CC BY-SA 4.0
null
2023-05-17T10:19:54.797
2023-05-17T10:19:54.797
null
null
238285
null
616117
1
null
null
0
63
Suppose $\mathbf X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$. After drawing $n$ samples, we repeat the sampling process $m$ times and store the data in an $m\times n$ matrix. Is the data in a specific column (not row) then also normally distributed? My simulation in `R` shows that it is, but I'm not sure whether there's a proof of that. Also, what about other distributions, say chi-square, exponential, etc.? More context: my question actually comes from the assumption of the simple linear regression model: \begin{align} Y=\beta_0 + \beta_1\cdot X + \epsilon \end{align} where $\epsilon$ is a normal random variable (the random error). But I also see the same model written as: \begin{align} Y_i = \beta_0 + \beta_1\cdot X_i + \epsilon_i \end{align} where $\epsilon_i$ is also normally distributed, and $(X_i, Y_i)$ is a specific pair available in the data set. I suspect that the two models' assumptions are equivalent.
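This is essentially the simulation I ran, included here as a sketch (the mean and standard deviation are arbitrary):

```r
set.seed(1)
m <- 2000; n <- 10
samples <- matrix(rnorm(m * n, mean = 3, sd = 2), nrow = m, ncol = n)
col1 <- samples[, 1]        # the same column across the m repetitions
c(mean(col1), sd(col1))     # close to (3, 2)
shapiro.test(col1)          # no evidence against normality
qqnorm(col1); qqline(col1)
```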
Distribution of specific column of a random variable after repeated sampling
CC BY-SA 4.0
null
2023-05-17T10:25:42.650
2023-05-24T19:31:50.537
2023-05-19T13:17:26.743
5176
383728
[ "distributions", "multivariate-normal-distribution" ]
616120
1
null
null
0
11
I have a case where I want to feed a network with polylines of data. The problem is that the input can be any number of polylines and the polylines can consist of any number of points. If we instead convert the polylines to vectors, the requirements might be easier to formulate:

- Network with a variable number of input vectors.
- All vectors have a variable length (independently of each other).
- The input is NOT sequential.
- For one set of inputs, only one output is requested (i.e. not very similar to NLP/translation).

Any suggestions? I know that I can provide the network with a zero-padded 2D vector, sized by the maximum number of polylines and the maximum number of points per polyline, but I would prefer not to since these can vary a lot. I have previously converted the polylines to images of fixed size with success, but I am looking for an alternative way to do it without the loss of precision that comes from converting polylines to an image.
Input non-sequential data of arbitrary size to network
CC BY-SA 4.0
null
2023-05-17T10:48:21.077
2023-05-17T10:48:21.077
null
null
370680
[ "machine-learning", "neural-networks", "natural-language" ]
616121
2
null
616117
1
null
For the time being, let $X = (X_1,\ldots, X_p)^\top \sim N_p(\mu, \Sigma)$ with $\mu = (\mu_1,\ldots,\mu_p)^\top$, and let $a$ be a $p\times1$ vector. Then, for $Y = a^\top X$ it holds that $$E(Y) = E(a^\top X) = a^\top \mu$$ $$\text{var}(Y) = a^\top \Sigma a$$ and \begin{align*} Y \sim N(a^\top \mu, a^\top \Sigma a).\tag{*} \end{align*} Now take $a = (1,0,\ldots,0)^\top$ and you are done. Property $(*)$ can be proved using the characteristic function of $X$; a sketch follows.
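In outline, the argument is the standard one: using $\varphi_X(u)=\exp\left(i u^\top\mu-\tfrac12 u^\top\Sigma u\right)$, for $Y=a^\top X$ and $t\in\mathbb R$,
$$
\varphi_Y(t)=E\left[e^{\,i t\, a^\top X}\right]=\varphi_X(t a)=\exp\left(i t\, a^\top\mu-\tfrac12 t^2\, a^\top\Sigma a\right),
$$
which is the characteristic function of $N(a^\top\mu,\, a^\top\Sigma a)$; the claim $(*)$ then follows from the uniqueness theorem for characteristic functions.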
null
CC BY-SA 4.0
null
2023-05-17T10:51:50.853
2023-05-17T10:51:50.853
null
null
56940
null
616122
2
null
616108
6
null
If $Y_1, \ldots, Y_n \sim \mathcal{N}(0, \sigma^2)$ and $X_i=|Y_i|$ for $i=1,\ldots,n$, we say that $X_1,\ldots,X_n$ is a random sample from a half-normal distribution with scale parameter $\sigma$. Note that $\sigma^2$ is not the variance of the half-normal, but rather that of the underlying normal. As you point out, the MLE of $\sigma^2$ is $\hat{\sigma}^2=\frac1n \sum_{i=1}^{n}Y_i^2=\frac1n \sum_{i=1}^{n}X_i^2$. By the invariance property of MLEs, it follows that $\hat{\sigma}= \sqrt{\frac1n \sum_{i=1}^{n}X_i^2}$. Notice that $\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_n$, so the quantity $\frac{\sqrt{n}\hat\sigma}{\sigma}$ follows a [chi distribution](https://en.wikipedia.org/wiki/Chi_distribution) on $n$ degrees of freedom. From the properties of this distribution, we have $$ \mathbb{E}(\hat{\sigma})= \underbrace{\left(\sqrt\frac2n \frac{\Gamma\left(\frac{n+1}2\right)}{\Gamma\left(\frac{n}2\right)} \right)}_{:=k_n}\sigma $$ We deduce that $\hat\sigma^*=\hat\sigma/k_n$ is an unbiased estimator of $\sigma$. It also holds that $$ k_n = 1-\frac1{4(n+1)} + O(n^{-2}) $$ (see [here](https://en.wikipedia.org/wiki/Chi_distribution#Large_n_approximation)), which gives you a simple approximation for the bias-corrected MLE when $n$ is large, but this is now only asymptotically unbiased.
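If you want to check $k_n$ numerically, a quick simulation (with arbitrary choices of $n$ and $\sigma$) reproduces it:

```r
# Monte Carlo check of the exact correction factor k_n for the half-normal MLE.
set.seed(1)
n <- 10; sigma <- 2
k_n <- sqrt(2 / n) * gamma((n + 1) / 2) / gamma(n / 2)   # ~0.975 for n = 10
sigma_hat <- replicate(1e5, sqrt(mean(rnorm(n, 0, sigma)^2)))
mean(sigma_hat) / sigma      # close to k_n
mean(sigma_hat / k_n)        # close to sigma, i.e. unbiased after correction
```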
null
CC BY-SA 4.0
null
2023-05-17T10:54:22.370
2023-05-17T13:09:56.830
2023-05-17T13:09:56.830
238285
238285
null
616123
1
null
null
4
409
I have a machine learning classification problem where 0.05% of the population (N = 100k) is of the positive class. It is important that I don't misclassify these positives (aka I want to minimize the number of false negatives). I want to say with 95% confidence that my machine-learning model does not misclassify any positives. How large should my sample size be?
Determining the sample size of a very unbalanced machine learning problem
CC BY-SA 4.0
null
2023-05-17T11:20:12.563
2023-05-17T11:34:18.110
null
null
388178
[ "machine-learning", "confidence-interval", "unbalanced-classes" ]
616124
2
null
616123
10
null
- Classify everything as positive. You now are 100% sure you do not misclassify any positives. Problem solved.
- I would thus recommend you be a little more detailed on the costs of misclassification, where you will likely need to keep subsequent decisions made on the basis of the classification in mind. The beginning of this answer may be helpful.
- "Does not misclassify any positives" is an extremely hard standard to reach. Real life always contains edge cases and Black Swans. "Zero defects" sounds good on paper, but in practice it will mean that your sample will need to encompass the entire population, because you can never be sure that one of the data points you did not sample will throw off your model.
- Once you have a good notion of the costs of misclassifications, you can start simulating with different sample sizes and recording the resulting costs your model causes. Beyond some sample size, these costs will hopefully stay more or less flat. Weigh this sample size against the costs of collecting and processing data. There is likely no closed formula that tells you the required sample size, because it depends hugely on your data - if you have a predictor that reliably tells you which class an instance belongs to, you need no sample size at all. Conversely, your problem may be so noisy that the cost you want to reach is simply not achievable: [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352)
null
CC BY-SA 4.0
null
2023-05-17T11:29:37.010
2023-05-17T11:29:37.010
null
null
1352
null
616125
2
null
616123
3
null
This is like a test for the probability parameter of a Bernoulli distribution. It is a bit tricky when you test the null hypothesis that the parameter is equal to zero. In that case, given the hypothesis is true, then the only outcome with non-zero probability is zero cases. The p-value is either 0 or 1, independent of the sample size. So you are testing effectively a [degenerate distribution](https://en.m.wikipedia.org/wiki/Degenerate_distribution). You can however compute a 95%-confidence interval like the [rule of three](https://en.m.wikipedia.org/wiki/Rule_of_three_(statistics)) and demand that the upper boundary, $3/n$, should be below some level.
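For example, a rough sketch of how the rule of three turns into a required number of positives:

```r
# With zero misclassified positives among n sampled positives, the 95% upper
# bound on the misclassification rate is roughly 3/n; to claim "below p" you
# therefore need about n >= 3/p positives, all classified correctly.
p_target <- c(0.05, 0.01, 0.001)
ceiling(3 / p_target)   # 60, 300, 3000
```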
null
CC BY-SA 4.0
null
2023-05-17T11:34:18.110
2023-05-17T11:34:18.110
null
null
164061
null
616126
2
null
616112
1
null
Analysis of variance compares means and variability about those means of an outcome or response variable, given information on one or more other variables, which are often, but not necessarily, categorical. Whether you call that outcome dependent and the other variable(s) independent is a matter of taste. For other reasons, many statistical people find those terms overloaded and try to use more evocative terms. (It's not trivial that many beginners get muddled on which is which.) How you -- or whether you can -- interpret the results in terms of causation, process, mechanism or whatever else is a key question, but I call it a different question. Even in simple exercises the chain of causation and process can be hard to think about. If I compare say males and females on mathematics scores, that doesn't mean that I think or am hypothesising that anatomy or physiology determines or even influences mathematics scores. In the first instance I am checking to see if there are different means in statistical terms. If I find that there are different means, then I stop describing and have to wonder what else lies behind it, in this case quite possibly social or cultural or educational factors. If the example seems contentious, that is the point. We may need very detailed discussion of what is behind any pattern and it is usually the more difficult part of the analysis. Crudely, both you and your colleague have good points, but your ways of looking at analysis of variance are just different.
null
CC BY-SA 4.0
null
2023-05-17T11:35:34.260
2023-05-17T11:35:34.260
null
null
22047
null
616127
1
null
null
0
12
For simplicity, let's say that I regress Y on X using OLS in three different datasets, which all have different numbers of observations (1200, 1500 and 1900 respectively). In all three datasets, I standardized both X and Y (and also possible control variables), so the coefficients I am getting are standardized. I heard that one advantage of standardized coefficients is that they allow you to compare coefficients between datasets. But I am not 100% sure of this, which is why I am asking here. Question 1: if I observe that the coefficient of X is 0.3 in dataset 2 and 0.8 in dataset 3, can I safely say that the correlation between X and Y is stronger in dataset 3 than in dataset 2? Question 2: can this difference in coefficients be entirely due to the differences in sample size, or does standardization help mitigate the effect of sample size? For example, let's say that Y is "university ranking" (with each dataset being a different university ranking) and X is "GDP per capita". Taking the example from question 1 again, can I say that ranking 3 is more correlated with GDP per capita on average than ranking 2, or is there a high chance that this difference in coefficients can be attributed to the difference in sample sizes?
Can I safely compare standardized coefficients between datasets with different numbers of observations?
CC BY-SA 4.0
null
2023-05-17T11:51:49.187
2023-05-17T11:51:49.187
null
null
382870
[ "interpretation", "standardization" ]
616128
1
616129
null
1
18
Given this network:

```
library(igraph)
set.seed(5)
g <- igraph::graph_from_atlas(100)
E(g)$weight <- round(runif(ecount(g), 0, 10))
#g <- as.directed(g, "acyclic")
plot(g, layout=igraph::layout_nicely(g), edge.label=E(g)$weight)
```

[](https://i.stack.imgur.com/ng53n.png)

I compute the diameter:

```
dim <- igraph::diameter(g, weights = NULL)
dim
dimp <- igraph::get_diameter(g)
dimp
```

The diameter equals 20, and the path of the diameter is 2->1->4->6->5. The diameter is the length of the longest geodesic. I don't understand why the diameter path is not 2->1->4->5. It looks longer to me, as its length is 9+7+7=23.
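For reference, this is how I looked at the weighted shortest-path distances, in case I am misreading what `diameter()` reports:

```r
# All-pairs weighted shortest-path distances; the reported diameter should be
# the largest entry of this matrix.
d <- igraph::distances(g, weights = E(g)$weight)
max(d)       # should match diameter(g), i.e. 20 here
d[2, 5]      # the cheapest route from vertex 2 to vertex 5
```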
igraph gives me a diameter, but I think the diameter is another one
CC BY-SA 4.0
null
2023-05-17T12:01:53.997
2023-05-17T12:09:53.833
null
null
154990
[ "graph-theory", "networks", "social-network", "igraph" ]