Columns (in the order the field values appear in each record below): Id (string), PostTypeId (string, 7 classes), AcceptedAnswerId (string, nullable), ParentId (string, nullable), Score (string), ViewCount (string, nullable), Body (string), Title (string, nullable), ContentLicense (string, 3 classes), FavoriteCount (string, 3 classes), CreationDate (string), LastActivityDate (string), LastEditDate (string, nullable), LastEditorUserId (string, nullable), OwnerUserId (string, nullable), Tags (list).
616602
1
616603
null
9
372
My understanding is that the formula for a confidence interval when using a t-test is as follows: $\bar x \pm t_{n-1,\alpha / 2} \frac{S_d}{\sqrt{n}}$ where $S_d$ is the standard deviation of the data. Let me demonstrate an example of a paired t-test in R.
```
> a <- c(1,2,3,2,3,1,2,3,1)
> b <- c(2,1,3,1,2,3,2,1,1)
> t.test(a, b, paired = TRUE, alternative = "two.sided")

	Paired t-test

data:  a and b
t = 0.5547, df = 8, p-value = 0.5943
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.7016018  1.1460462
sample estimates:
mean of the differences 
              0.2222222 
```
The calculation of the mean, I understand. Since I am doing a paired t-test, I understand that my data are the differences between a and b, and I'm essentially performing a one-sample t-test against zero. I verify the mean as follows:
```
> mean(a-b)
[1] 0.2222222
```
As for the confidence interval, I just need the standard deviation, which I calculate as follows:
```
> sd(a-b)
[1] 1.20185
```
Using the confidence interval formula for the t-distribution and plugging in the t-statistic found above and a sample size of n = 9, I get the following:
```
> mean(a-b) + c(-1,1)*(0.5547)*(sd(a-b))/sqrt(9)
[1] 7.861113e-08 4.444444e-01
```
So as you can see, the CI I get, to 3 decimal places, is (0.000, 0.444). That's nowhere close to what the t.test function gave me, which is (-0.702, 1.146). What did I get wrong here? Why are my results different?
Why doesn't R output for a paired t-test match the formula for a confidence interval in the t-distribution?
CC BY-SA 4.0
null
2023-05-22T16:27:27.703
2023-05-23T16:13:50.623
null
null
347818
[ "r", "confidence-interval", "t-test", "paired-data", "t-confidence-interval" ]
616603
2
null
616602
13
null
0.5547 is not the $t$ you are looking for. The $t$ you want to use in the equation is the critical value for the $t$-distribution with 8 degrees of freedom (n - 1).
```
> qt(0.975, 8)
[1] 2.306004
```
So
```
> mean(a-b) + c(-1,1)*(qt(0.975, 8))*(sd(a-b))/sqrt(9)
[1] -0.7016018  1.1460462
```
which matches the output from `t.test()`. It's nice to check these calculations by hand sometimes to make sure you know where all the numbers are coming from.
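As an additional cross-check (using the `a` and `b` vectors from the question), running the one-sample test directly on the differences reproduces the same interval:
```
> t.test(a - b)$conf.int   # same interval: -0.7016018  1.1460462
```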
null
CC BY-SA 4.0
null
2023-05-22T16:45:30.037
2023-05-23T16:13:50.623
2023-05-23T16:13:50.623
597
597
null
616604
1
null
null
5
60
The median-of-means estimator is often given as an alternative way to, given a sequence of IID random variables $X_1,...,X_N$, estimate the expectation value $\mathbb{E}[X]$ (see e.g. [these pdf notes by Yen-Chi Chen](http://faculty.washington.edu/yenchic/short_note/note_MoM.pdf), or [these slides by Gabor Lugosi](http://www.ub.edu/focm2017/slides/Lugosi.pdf)). The basic idea is that, instead of computing $\frac1 N\sum_{i=1}^N X_i$, we divide the observations into $K$ subsamples, compute the mean within each subsample, and then compute the median of the means. As discussed in the above notes, or also in ([math.ST:1509.05845](https://arxiv.org/abs/1509.05845)), the median-of-means estimator gives finite-sample exponential concentrations guarantees. It is also my understanding (though I'm less certain about this) that median-of-means only provides advantages for distributions with heavy tails. In particular, whenever the distribution is sub-Gaussian (and thus, in particular, whenever it's bounded), we have the same guarantees with just the standard mean. Assuming the above is correct, what are explicit examples that demonstrate the possible advantages of median-of-means? These, I believe, would be examples of distributions which give a distribution for the median-of-means that is "sharper" than the one for the standard mean. --- For reference, if I try to apply it, as a toy example, to a uniform distribution in $[-1,1]$, and compare the distribution of mean vs median-of-means for different $K$s, I get the following: [](https://i.stack.imgur.com/5DWQl.png) which shows the mean clearly always outperforming median-of-means. This is done using $N=20$ and $10^4$ realizations to obtain smoother (smoothed) histograms. Mathematica snippet used to generate it is: ``` With[{numSamples = 20, realizations = 10000, ks = {2, 4, 5}}, Table[ With[{data = RandomVariate[UniformDistribution[{-1, 1}], numSamples]}, { Mean @ data, Sequence @@ Table[Mean /@ Partition[data, k] // Median, {k, ks}] } ], realizations ] // SmoothHistogram[Evaluate @ Transpose @ #, PlotLegends -> {"mean", Sequence @@ Table["median-of-means, K=" <> ToString @ k,{k, ks}]}, PlotRange -> All, GridLines -> Automatic ]& ] ``` I figured that trying this with uniform distributions over a wider interval $[-M,M]$ one might start to see the difference, as these would be "closer" to having a "heavy tail" (I'm not entirely sure it makes sense, but it at least seemed sensible), but I always get the mean more concentrated around the average than median-of-means.
When is the median-of-means estimator better than the standard mean?
CC BY-SA 4.0
null
2023-05-22T16:49:53.580
2023-05-24T09:20:14.537
2023-05-24T09:20:14.537
82418
82418
[ "mean", "random-variable", "estimators", "median" ]
616605
2
null
608596
1
null
I think it is important to recognize again that the so-called coupling layers are split into two parts. One part passes directly to the next layer without any modification (i.e. $\pmb x_{1:d}$). That's why you can just write $\pmb x_{1:d}$ instead of $\pmb y_{1:d}$ in the inverse formulation:
$$\begin{cases} \mathbf{x}_{1:d} &= \mathbf{y}_{1:d} \\ \mathbf{x}_{d+1:D} &= (\mathbf{y}_{d+1:D} - t(\mathbf{y}_{1:d})) \odot \exp(-s(\mathbf{y}_{1:d})) \end{cases}$$
which is equivalent to
$$\begin{cases} \mathbf{x}_{1:d} &= \mathbf{y}_{1:d} \\ \mathbf{x}_{d+1:D} \odot \exp(s(\mathbf{y}_{1:d})) + t(\mathbf{y}_{1:d}) &= \mathbf{y}_{d+1:D} \end{cases}$$
As you can see, we recover the forward formulation of $\mathbf{y}_{d+1:D}$, with $\mathbf{y}_{1:d}$ standing in for $\mathbf{x}_{1:d}$ (which is the same thing).
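For reference (writing out the equation the answer alludes to, in the same notation), the forward formulation being recovered is
$$\begin{cases} \mathbf{y}_{1:d} &= \mathbf{x}_{1:d} \\ \mathbf{y}_{d+1:D} &= \mathbf{x}_{d+1:D} \odot \exp(s(\mathbf{x}_{1:d})) + t(\mathbf{x}_{1:d}), \end{cases}$$
so substituting $\mathbf{x}_{1:d} = \mathbf{y}_{1:d}$ into the second line gives exactly the rearranged inverse shown above.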
null
CC BY-SA 4.0
null
2023-05-22T17:50:46.887
2023-05-22T17:50:46.887
null
null
388580
null
616607
1
null
null
0
10
I am fitting a simple normal distribution in Stan through R. The distribution depends on two parameters, $\mu$ and $\sigma$. Here is the sample code:
```
library(rstan)

n <- 100
mu <- 4
sigma <- 2
Y <- rnorm(100, mu, sigma)

stan_code <- "
data {
  int N;
  real y[N];
}
parameters {
  real mu;
  real sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ cauchy(0, 10);
  y ~ normal(mu, sigma);
}
"

stan_input <- list(N = n, y = Y)
stan_model <- stan_model(model_code = stan_code)
fit <- sampling(stan_model, data = stan_input)
results <- extract(fit)

mean(results$mu)     # [1] 3.888839
mean(results$sigma)  # [1] 1.792918
```
This all makes sense and there are no major issues here. The problem comes next. Let's assume I would like to do some sampling from the final distribution, given the posterior distributions of the parameters. The question I have is: are the posteriors correlated, or is independence assumed here? If I want them to be correlated in any way, do I have to explicitly write the Stan code to take the correlation into account? Thank you, Marco
Parameters Correlation in STAN
CC BY-SA 4.0
null
2023-05-22T18:37:24.207
2023-05-22T18:37:24.207
null
null
260499
[ "correlation", "normal-distribution", "fitting", "stan" ]
616608
1
null
null
0
15
I want to run a logistic regression on my dataset in R. I want to test the probability of which direction a fish is facing depending on my variables. I am considering using a weighted logistic regression since my data come from a complex survey design where there could be a possible lack of independence. My concerns come from nets located near another (possibly recapturing individuals) or some nets may catch more fish due to schooling behavior (species). I am searching for advice on how to weigh these observations. Dependent variable: Direction fish is facing in net (1: out; 0: in) Independent variables: Month, Year, Depth, Time Period, Site (all variables are categorical) ``` > summary(ALE) ``` ``` Sample_ID Year Month Site DepthCat TimePeriodCalc Species Length:993 2011:887 August :141 3S:228 Deep :549 Morning:591 ALE:993 Class :character 2013:106 July :623 4S:765 Mid :426 Day :112 STS: 0 Mode :character June :120 Shallow: 18 Evening:290 YEP: 0 May : 87 September: 22 Direction 0:284 1:709 ```
Weighted logistic regression for complex survey design
CC BY-SA 4.0
null
2023-05-22T18:37:42.290
2023-05-22T18:37:42.290
null
null
388583
[ "logistic", "survey-weights" ]
616610
1
null
null
2
35
I have two sets of translated text. The first set contains poor translations, and the second set includes perfect translations. I asked three evaluators to rate each of the translations. My objective here is twofold. First, I aim to demonstrate agreement among the evaluators using an Intraclass Correlation Coefficient (ICC) measurement. Second, I intend to use a T-test to highlight significant differences between the two sets of translations. I am familiar with how to perform ICC and T-tests. However, I'm uncertain about conducting a T-test when multiple raters are involved. My current thought process is to take the average rating from the three evaluators for each translation, and then use these average ratings to perform the T-test. The example below illustrates my thought process.

Group A - poor translations
Translation 1: Rater 1: 2, Rater 2: 3, Rater 3: 2
Translation 2: Rater 1: 3, Rater 2: 4, Rater 3: 3
Translation 3: Rater 1: 2, Rater 2: 2, Rater 3: 3

Group B - perfect translations
Translation 1: Rater 1: 8, Rater 2: 7, Rater 3: 9
Translation 2: Rater 1: 9, Rater 2: 8, Rater 3: 8
Translation 3: Rater 1: 7, Rater 2: 8, Rater 3: 8

First, we compute the average rating for each translation in both groups:

Group A
Translation 1: Average = (2+3+2)/3 = 2.33
Translation 2: Average = (3+4+3)/3 = 3.33
Translation 3: Average = (2+2+3)/3 = 2.33

Group B
Translation 1: Average = (8+7+9)/3 = 8
Translation 2: Average = (9+8+8)/3 = 8.33
Translation 3: Average = (7+8+8)/3 = 7.67

Now, I use the t-test to compare the means of the two groups.
```
from scipy import stats

# These are your means for each Translation
groupA_means = [2.33, 3.33, 2.33]
groupB_means = [8, 8.33, 7.67]

t_stat, p_val = stats.ttest_ind(groupA_means, groupB_means)
print('t-statistic:', t_stat)
print('p-value:', p_val)
```
T test when there are different raters? I am not sure if my procedure is correct
CC BY-SA 4.0
null
2023-05-22T19:34:45.663
2023-05-22T23:12:04.567
2023-05-22T19:54:13.370
388585
388585
[ "t-test", "cross-validation", "intraclass-correlation" ]
616611
2
null
616547
2
null
This is a tough problem. If you want to do 1:1 matching, this will inherently be slow. The matching would take place one treated unit at a time, and it would need to search through 10 million control units 1 million times. No optimal matching method, like the ones you mentioned in your question, will be able to handle such a large dataset. `bigmatch` works by shrinking the distance matrix by imposing the strictest caliper it can before the matching becomes infeasible; I have found this still takes a very long time and often doesn't work at all because the algorithm to find the caliper is slow.

Nearest-neighbor matching will be faster, but it too will take a very long time. There are ways you can speed it up, though. You can perform matching within strata of other variables, which is equivalent to exact matching on those variables. For example, if you had a "region" variable, you could do matching within each region, and you could also do exact matching within each sex-region, or within each sex-race-region, etc. The more variables you can exactly match on, the better your balance will be on those variables and the faster the matching will be.

If you are not tied to 1:1 matching, there are other methods you can use to balance covariates. One is subclassification, in which you divide the sample into strata based on the propensity score (and optionally any other variable). Another is weighting, in which you estimate a weight for each unit based on the propensity score. Both of these methods require fitting a model for the propensity score, but there are regression and machine learning methods that can accommodate such large datasets.

One final option is generalized full matching, which is an extremely fast form of optimal subclassification. It was designed to work with massive datasets like yours and can complete in seconds. It is available in `MatchIt` by setting `method = "quick"` or in the `quickmatch` package.
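For illustration, here is a minimal sketch of the two suggestions above (generalized full matching, and nearest-neighbor matching done exactly within strata); `dat`, `treat`, `age`, `sex` and `region` are placeholder names, not from the original question:
```
library(MatchIt)

# Generalized full matching (very fast, designed for large samples)
m_quick <- matchit(treat ~ age + sex + region, data = dat, method = "quick")

# 1:1 nearest-neighbor matching on the propensity score, exact within sex and region
m_nn <- matchit(treat ~ age + sex + region, data = dat,
                method = "nearest", distance = "glm", exact = ~ sex + region)

summary(m_quick)
```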
null
CC BY-SA 4.0
null
2023-05-22T19:49:50.153
2023-05-22T19:49:50.153
null
null
116195
null
616612
1
616716
null
0
22
I study coups d'etat. I would like to understand the relationship between leader/country characteristics and the likelihood of coup attempts. I have a leader-country-year dataset with one entry for every year that a given leader in a given country was in power. I also have data on leader and country characteristics. Some of these variables do not vary over time (e.g., was the leader elected) and some do (e.g., whether the country is at war with another country). I've made a mock dataset below to illustrate what my data look like. |leader_id |country_id |years_since_entering |elected |war |coup_attempt | |---------|----------|--------------------|-------|---|------------| |1 |1 |0 |1 |0 |0 | |1 |1 |1 |1 |1 |0 | |1 |1 |2 |1 |1 |1 | |1 |1 |3 |1 |1 |0 | |1 |1 |4 |1 |1 |0 | |1 |1 |5 |1 |1 |1 | |--------- |-------- |------------------- |-------- |--- |--- |--- | |2 |1 |0 |0 |1 |0 | |2 |1 |1 |0 |1 |0 | |2 |1 |2 |0 |0 |0 | |2 |1 |3 |0 |0 |0 | I would like to use a Cox PH model to understand the effect that these variables have on a leader's survival time until a coup attempt is made. So, the Cox PH "event" is coup_attempt. The covariates are elected and war. Some details I want to note: - There can be multiple coup attempts in one leader's tenure. (leader_id == 1 in the mock dataset experiences two coup attempts) - The same country can have multiple leaders, though they are never in the leadership position at the same time. (country_id == 1 in the mock dataset has two leaders) I'm planning to use the survival package in R and run a model like this: ``` library(survival) fit <- coxph(Surv(start, stop, coup_attempt) ~ elected + war + strata(country_id) + cluster(country_id), data = df) ``` Some questions I have: (I will explain the strata and cluster choices in the second question) - I know that I need to reshape my data to have start and stop intervals to use with the coxph function. Can I make all of the start and stop intervals in my coxph dataset the same length? Specifically, can I make them one year intervals so that they capture all of the changes in the war variable? The dataset would look like this: |leader_id |country_id |years_since_entering |elected |war |coup_attempt |start |stop | |---------|----------|--------------------|-------|---|------------|-----|----| |1 |1 |0 |1 |0 |0 |0 |1 | |1 |1 |1 |1 |1 |0 |1 |2 | |1 |1 |2 |1 |1 |1 |2 |3 | |1 |1 |3 |1 |1 |0 |3 |4 | |1 |1 |4 |1 |1 |0 |4 |5 | |1 |1 |5 |1 |1 |1 |5 |6 | |--------- |-------- |------------------- |-------- |--- |--- |--- |--- | |2 |1 |0 |0 |1 |0 |0 |1 | |2 |1 |1 |0 |1 |0 |1 |2 | |2 |1 |2 |0 |0 |0 |2 |3 | |2 |1 |3 |0 |0 |0 |3 |4 | - I included strata(country_id) + cluster(country_id) in the coxph function to account for the correlation between leaders of the same country. There might be a country specific effect (e.g., Iraq has a higher baseline likelihood of coups than Canada), which I aim to capture with strata(country_id). There may also be correlation in the observations of leaders of the same country (e.g., leaders from Iraq are correlated), which I aim to capture with the cluster(country_id). Does this use of strata and cluster make sense? Or would you recommend another way to address these issues? - A leader can experience multiple coup attempts. How do I account for the fact that a single unit can have multiple "events" in the Cox PH? - What, if anything, should I do about data censoring? My data end in 2019, but that doesn't mean that all leaders leave office or stop experiencing coup attempts in 2019. 
Any help with any of these questions would be very much appreciated. Thank you in advance!
Multiple events in Cox PH with time varying data
CC BY-SA 4.0
null
2023-05-22T20:12:15.343
2023-05-23T17:30:34.087
null
null
327612
[ "r", "survival", "cox-model" ]
616613
1
null
null
0
17
I have some data with a continuous outcome that is measured among 4 categorical variables: treatment group, gender, collection date, and sub_type. Gender is M/F, collection date can just be Mon/Wed, and sub_type is one of A, B, C, D. There are 10 treatment groups and one control. Treatment groups 1-5 were tested on sub_type A, while treatment groups 6-10 were tested on sub_type B. All treatment groups were tested on both Mon and Wed, and all treatment groups were tested on Males and females. The control was tested on all sub types A, B, C, and D. I am mainly interested in comparing each treatment group mean to control (expressed as a difference with 95% CI), within the other variables groups (e.g. Males on Monday in sub type A). I have simply set the data up as a linear model with each covariate added as an interaction term. ``` mod <- lm(value ~ group*study_day*gender*sub_type, data = dd) ``` and then determined the contrasts between each treatment group and control using `emmeans` ``` res <- emmeans(mod , ~ group | study_day*gender*sub_type, specs = trt.vs.ctrlk ~ group, fac.reduce = function(coefs) apply(coefs, 2, mean), by = c("study_day", "gender", "sub_type") ``` I obtain contrasts between each group and placebo, which is what I want. However, given that contrasts for cases where a group was not tested on a sub type (e.g. group 1 with subtype D), the contrasts are reported as "nonEst". I don't actually want this contrast, but I am unsure whether this means I am just setting up the model wrong in the first place. To me, adding the variables simply as non-interaction covariates `lm(out ~ group + gender + study_day...` doesn't necessarily make sense either. I should note that originally I tested these all as individual pairwise t-tests, but thought that I should be using emmeans to consider the degrees of freedom from the overall model. I thought of including the additional covariates as random effects, but I thought that since I am only interested in the specific levels of these covariates (e.g. A, B, C, D subtypes), including these as random effects was not necessary. Below is an excerpt from the emmeans results. [](https://i.stack.imgur.com/9TPB7.png) Below is the code used to set up the dummy data (there was probably a simpler way to do this): ``` library(tidyr) set.seed(10473) gender <- c("M", "F") study_day <- c("Mon", "Wed") sub_type <- c("A", "B", "C", "D") group <- c(1:10, "PBO") dat <- expand.grid(gender, study_day, sub_type, group,stringsAsFactors = TRUE) names(dat) <- c("gender", "study_day", "sub_type", "group") dat <- dat %>% mutate(n = 5) %>% uncount(n) %>% mutate(out = rnorm(n = nrow(.), mean = 2, sd = 1)) %>% mutate(out = case_when( group %in% c(1:5) & sub_type %in% c("C", "D") ~ NA_real_, group %in% c(6:10) & sub_type %in% c("A", "B") ~ NA_real_, TRUE ~ out )) %>% filter(!is.na(out)) %>% mutate() ```
How to determine contrasts in combinations of categorical variables with emmeans
CC BY-SA 4.0
null
2023-05-22T20:19:15.750
2023-05-31T20:48:24.410
null
null
374241
[ "r", "mixed-model", "anova", "lsmeans" ]
616614
1
null
null
0
10
I know that typically, one has a feature matrix of n samples by m features. Let's say I have a matrix X in this format. If I was going to perform hierarchical clustering on the samples, I know I would want to standardize each n-length vector of features so that the distances between each feature are comparable. If I was going to perform hierarchical clustering on the m features instead (to eventually do feature selection), would I still standardize the n-length vectors of features or would I standardize the m-length vectors of each sample? If I applied the same algorithm to cluster the features, it would be the transposed matrix of X, so I'm not sure which direction to standardize on (or rather, if I should standardize before or after transposing the matrix X). Thank you!
Standardize agglomerative feature clustering across samples or features?
CC BY-SA 4.0
null
2023-05-22T20:34:30.937
2023-05-22T20:34:30.937
null
null
388587
[ "feature-selection", "scikit-learn", "standardization", "hierarchical-clustering" ]
616616
2
null
616516
1
null
As I read your question, you are puzzled by the fact that some one-sided tests use the upper tail, while others use the lower tail of a distribution to determine whether the null hypothesis is rejected. The reason is that one always puts the rejection region where one would expect results to fall more frequently when the null hypothesis is wrong than when it is true. For the chi-square test, large values of the test statistic occur more often when the null hypothesis is wrong, therefore one uses the right-tail quantile. When you run a one-sided test to compare the means of two normal populations, the test statistic is something like $T = (\bar{X}_1 - \bar{X}_2)/\sqrt{S^2/n}$. This statistic has a t-distribution if the null hypothesis is true, and t-distributions are symmetric: the left quantile is just the negative of the right quantile. Many statistics books only tabulate one quantile, for example the left, and put a " - " in front of it if they need the other one (the right quantile). In reality, you are using either the left or the right quantile, depending on the alternative of the one-sided test.
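As a small numerical illustration of the two cases (my own example, with arbitrary degrees of freedom and a 5% one-sided level):
```
# Chi-square: only the upper tail is used, reject for large values of the statistic
qchisq(0.95, df = 5)   # 11.07050

# t: symmetric, so the lower critical value is just minus the upper one
qt(0.95, df = 10)      #  1.812461
qt(0.05, df = 10)      # -1.812461
```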
null
CC BY-SA 4.0
null
2023-05-22T20:47:06.870
2023-05-22T20:47:06.870
null
null
237561
null
616617
1
null
null
0
20
So far I've become fairly familiar with the concept of GARCH, but I'm still confused about how to go about the implementation, especially since I've seen multiple sources using different approaches: - Should I: a. find an ARIMA model using auto.arima, b. check the residuals for heteroscedasticity, c. fit a GARCH model to the residuals? (And here I have no idea how to select the order of the GARCH except by manually going over different models one by one.) Or - Should I: a. test for an ARCH effect in the raw data itself, b. use rugarch to fit the mean and variance equations simultaneously? And here, do I do the fitting on the raw data itself, or is it possible to use the residuals of the mean equation in case I have previously estimated it? (Again, regardless of the data used, I'm unable to find a way to select the order of the variance equation without having to go over multiple models one by one.) --- Finally, what's the difference between the two approaches, and which one yields more accurate and efficient parameters? P.S. Running auto.arima on my data gave me an ARIMA(2,0,2) model. Basically I'm unsure how to proceed next.
Estimating and fitting a GARCH model
CC BY-SA 4.0
null
2023-05-22T20:51:20.667
2023-05-22T21:11:09.157
2023-05-22T21:11:09.157
388586
388586
[ "regression", "time-series", "forecasting", "arima", "garch" ]
616618
1
null
null
0
48
I have the following data and I want to compute the GINI and Accuracy for model validation purposes. But I tried to calculate the GINI and Accuracy using Python code, but it seems incorrect. I would like to compute the AUC, GINI and Accuracy by calculating the cumulative no of borrowers, cumulative no of goods, and cumulative no of bads. Because I want to implement this in Microsoft excel and Python, hence trying to calculate but no success Below are the codes: ``` # code 1 import matplotlib.pyplot as plt from sklearn.metrics import roc_curve, auc data = { "Decile": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "No. Borrowers": [100, 300, 200, 300, 600, 200, 700, 800, 900, 1000], "Good Borrowers": [80, 160, 140, 220, 500, 1000, 560, 640, 1500, 800], "Bad Borrowers": [20, 140, 60, 80, 1000 ,1000 ,1400 ,1600 ,7500 ,200] } good_borrowers = data['Good Borrowers'] bad_borrowers = data['Bad Borrowers'] total_borrowers = [good_borrowers[i] + bad_borrowers[i] for i in range(len(good_borrowers))] cumulative_good_borrowers = [sum(good_borrowers[:i+1]) for i in range(len(good_borrowers))] cumulative_bad_borrowers = [sum(bad_borrowers[:i+1]) for i in range(len(bad_borrowers))] cumulative_good_borrower_ratio = [cumulative_good_borrowers[i]/total_borrowers[i] for i in range(len(total_borrowers))] cumulative_bad_borrower_ratio = [cumulative_bad_borrowers[i]/total_borrowers[i] for i in range(len(total_borrowers))] fpr,tpr,_ = roc_curve(data['Decile'], cumulative_good_borrower_ratio) roc_auc = auc(fpr,tpr) gini = (2 * roc_auc) -1 print("AUC: ", roc_auc) print("GINI: ", gini) #code 2: import numpy as np import matplotlib.pyplot as plt # Create a data frame to store the number of borrowers, good borrowers, and bad borrowers in each decile. data = { "Decile": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "No. Borrowers": [100, 300, 200, 300, 600, 200, 700, 800, 900, 1000], "Good Borrowers": [80, 160, 140, 220, 500, 100, 560, 640, 150, 800], "Bad Borrowers": [20, 140, 60, 80, 100, 100, 140, 160, 750, 200] } # Calculate the ROC curve. fpr = [] tpr = [] for decile in range(0, len(data["Decile"])): good_borrowers_in_decile = data["Good Borrowers"][decile] bad_borrowers_in_decile = data["Bad Borrowers"][decile] total_borrowers_in_decile = good_borrowers_in_decile + bad_borrowers_in_decile fpr.append(bad_borrowers_in_decile / total_borrowers_in_decile) tpr.append(good_borrowers_in_decile / total_borrowers_in_decile) # Plot the ROC curve. plt.plot(fpr, tpr) plt.xlabel("False Positive Rate") plt.ylabel("True Positive Rate") plt.title("ROC Curve") plt.show() # Calculate the AUC. auc = np.trapz(tpr,fpr) print("AUC:", auc) # Calculate the accuracy. accuracy = (sum(data["Good Borrowers"]) + sum(data["Bad Borrowers"])) / sum(data["No. Borrowers"]) print("Accuracy:", accuracy) # Calculate the Gini coefficient. #gini = 1 - np.sum((np.array(data["No. Borrowers"]) * (np.array(data["No. Borrowers"]) -1))) / (np.prod(np.array(data["No. Borrowers"])) **2) def gini(data): borrowers = np.array(data["No. Borrowers"]) if 0 in borrowers: return None gini = 1 - np.sum((borrowers * (borrowers -1))) / (np.prod(borrowers) **2) return gini gini(data) print("Gini coefficient:", gini) ``` > Data: |Decile |No. Borrowers |Good Borrowers |Bad Borrowers | |------|-------------|--------------|-------------| |1 |100 |80 |20 | |2 |300 |160 |140 | |3 |200 |140 |60 | |4 |300 |220 |80 | |5 |600 |500 |100 | |6 |200 |100 |100 | |7 |700 |560 |140 | |8 |800 |640 |160 | |9 |900 |150 |750 | |10 |1000 |800 |200 | I hope that helps!
Calculation of the GINI coefficient,Accuracy and AUROC for credit scoring using Python code
CC BY-SA 4.0
null
2023-05-22T21:09:43.977
2023-05-23T04:59:59.197
2023-05-22T21:27:47.413
48756
48756
[ "python", "cross-validation", "excel", "gini", "credit-scoring" ]
616619
1
616625
null
1
31
I am studying [Introduction to Statistical Learning Theory by Bousquet, Boucheron and Lugosi](http://www.econ.upf.edu/%7Elugosi/mlss_slt.pdf). On pages 183 through 185 it considers the applicability of Hoeffding's Inequality to Empirical Risk Minimization (ERM). The setting is as follows. Let $Z_1,...,Z_n$ be $n$ i.i.d. samples drawn from a distribution. Let $\mathcal{F}$ be a class of functions. For every $f\in\mathcal{F}$, define its risk $R(f)$ and empirical risk $R_n(f)$ by $$ R(f)=\mathbb{E}f(Z) \quad\text{and}\quad R_n(f)=\frac{1}{n}\sum_{i=1}^nf(Z_i). $$ Let $f^*\in\mathcal{F}$ be the minimizer of $R(f)$ over all $f\in\mathcal{F}$, and $f_n\in\mathcal{F}$ the minimizer of $R_n(f)$ over all $f\in\mathcal{F}$. For simplicity assume that $f(z)\in[0,1]$ for all $z$. Hoeffding's Inequality says that for all $\delta\in(0,1]$, for a fixed $f\in\mathcal{F}$, $$ \mathbb{P}\left(\left|R(f)-R_n(f)\right|\le\sqrt{\frac{\log(2/\delta)}{2n}}\right)\ge 1-\delta. $$ However, this result does not say anything about $|R(f_n)-R_n(f_n)|$ because $f_n$ is random. My question: Is there any concrete example showing that $|R(f_n)-R_n(f_n)|$ does not obey the bound $\sqrt{\log(2/\delta)/(2n)}$ with probability $1-\delta$?
Example of Failure of Hoeffding's Inequality for Empirical Risk Minimization
CC BY-SA 4.0
null
2023-05-22T21:21:31.860
2023-05-25T22:03:52.267
2023-05-25T22:03:52.267
239348
239348
[ "machine-learning", "probability", "supervised-learning", "high-dimensional" ]
616620
2
null
540150
1
null
This is a problem known as ["domain adaptation"](https://en.wikipedia.org/wiki/Domain_adaptation), where your training data are distributed differently than the population you intend to apply fitted models to. In Python, there is a handy package called [Adapt](https://adapt-python.github.io/adapt/index.html), and I think (more robust) solutions conceptually similar to the original post's approach can be found in the ["instance based"](https://adapt-python.github.io/adapt/contents.html#adapt-instance-based) weighting options.
null
CC BY-SA 4.0
null
2023-05-22T21:22:44.097
2023-05-22T21:22:44.097
null
null
151726
null
616621
2
null
616582
0
null
Have a look at LlamaIndex or LangChain for injecting information into the prompt of the LLM: [https://gpt-index.readthedocs.io/en/latest/guides/primer/usage_pattern.html](https://gpt-index.readthedocs.io/en/latest/guides/primer/usage_pattern.html) This works without fine-tuning/prolonged pre-training, but comes with the cost of having to pay for the additional tokens in your "pre-prompt", which is where you inject your new information.
null
CC BY-SA 4.0
null
2023-05-22T21:35:01.940
2023-05-22T21:35:01.940
null
null
298651
null
616622
1
617017
null
1
86
The GitHub [Repository](https://github.com/Spencermstarr/EER-Research-Project/tree/main) for this research project has all of the code included in this question. Brief background context: I am just finishing up the work on my part as a coauthor on a research project exploring the properties of a newly proposed Automated Optimal Variable Selection algorithm. My role is to decide which existing automated feature selection techniques to use as the benchmarks, then run them and evaluate their performance using standard classification performance metrics in R. I ended up choosing 3 benchmarks: Backward Elimination Stepwise Regression, Forward Selection Stepwise Regression, and LASSO Regression. I have already done this, but in the process I ran into something most unexpected which I have not been able to find any journal articles, R documentation, or textbook sub sections to explain. My collaborator randomly generated 260,000 synthetic datasets for me to run the benchmarks on, so I ran 260k LASSO Regressions (initially using the enet() function from R's elastic net package), one for each dataset, got my results, and calculated my performance metrics. And then, I did so again using the glmnet() function from the glmnet package and found that the set of variables it selected in each dataset was not always identical to the set of variables selected by enet on the same dataset even when using the same random seed value for both. So, at that point, being exasperated, I threw my hands up and I re-did all again for a third time using the lars() function from the package of the same name and once again, the variables selected were slightly different. In the aforementioned Repository on GitHub, in the [Stage 2 Results](https://github.com/Spencermstarr/EER-Research-Project/tree/main/Stage%202%20Results) folder, the fact that each of these selected different variables can be verified by inspecting LASSO's Selections via glmnet.xlsx, LASSO's Selections via lars.xlsx, Variables Selected by LASSO ran via enet.xlsx, or Overall LASSO Performance Metrics.xlsx. To take an example at random per a helpful suggestion below in the comments, for dataset 0.25-3-1-1, these are each of their sets of selected factors respectively: - enet: X5, X21, X22 - lars: X21, X22 - glment: X5, X21, X22, X30 And for completeness with respect to this example, the correct set of regressors, i.e. the structural variables for the 0.25-3-1-1 dataset is: X5, X21, X22 Which means only enet got the exact right answer (a True Positive Rate of 1 and a True Negative Rate of 1). Here is the code I used to run them via the enet function: ``` set.seed(11) enet_LASSO_fits <- lapply(datasets, function(i) elasticnet::enet(x = as.matrix(dplyr::select(i, starts_with("X"))), y = i$Y, lambda = 0, normalize = FALSE)) # This separates out and stores just the coefficient estimates from each LASSO. LASSO_Coeffs <- lapply(enet_LASSO_fits, function(i) predict(i, x = as.matrix(dplyr::select(i, starts_with("X"))), s = 0.1, mode = "fraction", type = "coefficients")[["coefficients"]]) # Write my own custom lapply which will separate out and return a # new list containing just the Independent Variables # which are 'selected' or chosen for each individual dataset. 
IVs_Selected <- lapply(LASSO_Coeffs, function(i) names(i[i > 0])) ``` Here is the code for running them via the glmnet function: ``` set.seed(11) glmnet_lasso.fits <- lapply(datasets, function(i) glmnet(x = as.matrix(select(i, starts_with("X"))), y = i$Y, alpha = 1)) # This stores and prints out all of the regression # equation specifications selected by LASSO when called lasso.coeffs = glmnet_lasso.fits |> Map(f = \(model) coef(model, s = .1)) Variables.Selected <- lasso.coeffs |> Map(f = \(matr) matr |> as.matrix() |> as.data.frame() |> filter(s1 != 0) |> rownames()) Variables.Selected = lapply(seq_along(datasets), \(j) j <- (Variables.Selected[[j]][-1])) ``` Here is the code for running them via the lars function: ``` set.seed(11) lars_LASSO_fits <- lapply(datasets, function(i) lars(x = as.matrix(select(i, starts_with("X"))), y = i$Y, type = "lasso")) # This stores and prints out all of the regression # equation specifications selected by LASSO when called Lars.Coeffs <- lapply(lars_LASSO_fits, function(i) predict(i, x = as.matrix(dplyr::select(i, starts_with("X"))), s = 0.1, mode = "fraction", type = "coefficients")[["coefficients"]]) IVs.Selected.by.Lars <- lapply(Lars.Coeffs, function(i) names(i[i > 0])) ``` What is going on here? Am I doing something wrong or do each of these fitting functions use different underlying stopping conditions or starting conditions or something like that? p.s. 1 - The script I used to run my 260k LASSOs for the first time (by way of enet()) is the one in the GitHub Repo called "LASSO Regressions.R", the script in which I estimated them using the glmnet function is fittingly called "LASSO using the 'glmnet' package.R", and the one which in used lars to fit them is called "LASSO using Lars.R". p.s. 2 - By the way, I had re-ran all of my 260k Backward and Forward Stepwise Regressions using the stepAIC() function from R's MASS package instead what I used the first time around, namely, the step() function from the stats library and all of the variables it selected were identical in every case, and as a result of that, I had no doubts that this would be the same for LASSO.
Different sets of features selected by three different functions in R for running LASSO Regressions despite the same random seed for each
CC BY-SA 4.0
null
2023-05-22T22:01:32.793
2023-05-30T15:01:55.393
2023-05-30T15:01:55.393
373983
373983
[ "machine-learning", "feature-selection", "lasso", "glmnet", "reproducible-research" ]
616623
1
null
null
0
13
I have a simulated a time series data set several predictor variables and response class that I created to practice different analytical techniques before I apply methods to my real (very large) dataset and I'd like to use change point analysis to identify the segments of the data that are consistent and therefore likely consist of the individual behavioral bouts. While my question is ultimately theoretical in nature, it may be helpful to know that I'm using the changepoint library in program R and that I am using PELT with a range of penalty values (CROPS) to help me determine the optimal penalty values and thus number of change points in my dataset. However, I am confused as to what exactly these penalty values are and therefore how to choose an optimal "range" of penalty values to use. I found these two stackexchange posts that also ask similar questions but I don't see a firm answer onto what exactly the penalty value is [post1](https://stats.stackexchange.com/questions/60245/penalty-value-in-changepoint-analysis) and [post2](https://stats.stackexchange.com/questions/472817/changepoint-pelt-penalty). Confusing the matter a little bit, is that I think the values of the penalty values (pen.values in the cpt fcn) seem to change based on the method being considered. Like for binary segmentation the pen.value can take on a range from 0 to 1, and similarly an asymptotic penalty (which maybe is only available for the AMOC method?), the pen.value attributes is the theoretical type I error. In other places I've seen it specified that the pen.value takes on a value of 0, or "2*log(n)" or the length of the data. This makes me think that the penalty value should be, at least in some way, related to the length of the dataset. Or maybe the expected number of breakpoints in the dataset? My dataset is 15,900 observations long composed of 1590 seconds of behavior (sampled at a 10 Hz frequency). I've tried running a large combination of penalty values just to see how it affects the data set as well as varying the penalty type used to estimate the segments but all vastly overestimate the number of change points in my data set which makes me wonder if the penalties (like 0, 2*log(n), (# of seconds), or even n) is somehow unrealistic for my dataset? Can someone please explain what the penalty values are? And if these is any "rule of thumb" type metric that I can use to help me figure out a decent range to consider? Thank you for your help.
What is a reasonable range of penalty values to try in PELT changepoint analysis?
CC BY-SA 4.0
null
2023-05-22T22:07:57.090
2023-05-31T08:31:59.487
null
null
354343
[ "r", "change-point" ]
616624
1
null
null
0
25
I got into a discussion about what it means for the sample means of two distributions to be proportional. The discussion started when talking about linear discriminant analysis and how this proportionality may affect the calculation of the discriminants. Imagine we have two distributions that are scattered in a 2D plot and are separated by some distance. The two means are separated by a distance d and, somehow, they are proportional. My question here is: if two variables are correlated, I assume that both central points (means) are also correlated and so they are proportional. But is this assumption always true? Could both variables be related and their sample means not be proportional? Please refer to the image below to get an idea of what I am explaining. In the plot both clusters are separated and I can assume their central points are proportional, but is there any case in which they are not? [](https://i.stack.imgur.com/52EyB.png) If I make such an assumption for a Linear Discriminant Analysis, isn't it naive to consider that, if both sample means are proportional, the whole analysis could be simplified in the maximization functions? I don't know if I was clear; I am not very well versed in the topic, so apologies for the possible mistakes.
LDA: sample mean of two distributions are proportional
CC BY-SA 4.0
null
2023-05-22T22:12:10.060
2023-05-22T22:12:10.060
null
null
365263
[ "distributions", "discriminant-analysis", "latent-dirichlet-alloc" ]
616625
2
null
616619
1
null
The problem isn't so much finding an example -- the result will typically fail for some $n$ and $\delta$ when ${\cal F}$ has more than one entry -- as doing the calculations when most of the available tools are designed to give upper bounds. Here's a ridiculously extreme version: for every finite set of points $S$ in $\mathbb R$ define a function $f_S(z)$ that is 0 if $z\in S$ and 1 otherwise. Let $\cal F$ be the set of such functions. Then $f_n$ will be a function that is 0 at every observed point, giving $R_n(f_n)=0$ a.s. On the other hand, since all finite sets have measure zero, for any fixed $f_S$ we will observe no points in $S$ (a.s.) and have $f(Z_i)=1$ a.s and so $R(f)=1$. This does illustrate what causes the problem: overfitting. When ${\cal F}$ has multiple competing $f$s, the one that gets chosen as $f_n$ will typically have $R_n(f_n)<R(f_n)$, because that's how it got chosen.
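To make the overfitting mechanism concrete, here is a small simulation of my own (not from the answer): $K$ candidate functions whose losses are i.i.d. Bernoulli(1/2) across observations and candidates, so every $f$ has true risk $1/2$, yet the empirical minimizer $f_n$ typically violates the fixed-$f$ Hoeffding bound.
```
set.seed(1)
n <- 50; K <- 1000; delta <- 0.05; reps <- 200

gap <- replicate(reps, {
  # column j holds the n observed losses of f_j; every f_j has true risk R(f) = 0.5
  losses <- matrix(rbinom(n * K, size = 1, prob = 0.5), nrow = n)
  0.5 - min(colMeans(losses))           # R(f_n) - R_n(f_n) for the empirical minimizer
})

bound <- sqrt(log(2 / delta) / (2 * n))  # Hoeffding bound for a single, fixed f
mean(gap)                                # clearly positive (roughly 0.2-0.25): f_n looks better than it is
mean(gap > bound)                        # most runs exceed the fixed-f bound
```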
null
CC BY-SA 4.0
null
2023-05-22T22:21:23.420
2023-05-23T03:31:51.833
2023-05-23T03:31:51.833
249135
249135
null
616627
2
null
616610
0
null
The use of the t-test here is not unreasonable, but it is not the most powerful way to analyze this data. In this context, what you really have is a repeated measures ANOVA with both a within and between group factor. As this may be sufficient to direct you to the correct analysis, I will stop here. If you would like more information on running such an RM-ANOVA, I will be happy to update.
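As a rough sketch of what such an analysis could look like on the toy data from the question (translation is the repeated-measures unit, rater the within factor, group the between factor; the variable names are mine):
```
d <- expand.grid(rater = factor(1:3), translation = factor(1:3), group = c("poor", "perfect"))
d$rating <- c(2,3,2, 3,4,3, 2,2,3,  8,7,9, 9,8,8, 7,8,8)
d$unit <- interaction(d$group, d$translation)   # unique id for each rated translation

# Repeated-measures ANOVA: the group effect is tested against between-translation variation
summary(aov(rating ~ group + Error(unit), data = d))

# A mixed-model alternative:
# library(lme4); lmer(rating ~ group + (1 | unit) + (1 | rater), data = d)
```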
null
CC BY-SA 4.0
null
2023-05-22T23:12:04.567
2023-05-22T23:12:04.567
null
null
199063
null
616628
1
null
null
0
11
Can I compare forecasting performance of rolling window VAR and usual forecast of model with ARIMA errors? Or maybe there is exist better way to compare forecasting performance?
Can I compare forecasting performance of rolling window VAR and usual forecast of model with ARIMA errors?
CC BY-SA 4.0
null
2023-05-22T23:13:32.060
2023-05-23T08:00:18.277
2023-05-23T08:00:18.277
53690
361080
[ "time-series", "forecasting", "arima", "vector-autoregression", "model-comparison" ]
616630
1
null
null
1
43
Could you please provide me with neural network architecture suggestion(s) for video prediction (image sequences), where each sequence has 144 input images and 48 images to predict? For illustration, I tested the Wavenet and CNNLSTM networks contained in [https://bitbucket.org/retiarus/tec_prediction/src/conv-lstm/model/networks.py](https://bitbucket.org/retiarus/tec_prediction/src/conv-lstm/model/networks.py) (but not only those) and I didn't get good results. The data format I'm using is [batch_size=x, channels=1, size=144/48 (input or output), width=7, height=7]. The data are maps of a physical variable of space weather. The inputs are 144 images because they cover 3 days, and the outputs are 48 images because they cover 1 day, for each sequence, given that the temporal resolution is 30 minutes. For each new sequence in the training, validation, or test set, the start of the sequence is advanced by 30 minutes. There are 38,634 image sequences for the training set, 5,951 for the validation set, and 3,444 for the test set. The goal of the task is precisely to predict the behavior of this physical variable on the following day, i.e. the 48 maps. Thanks in advance!
Neural network architecture suggestion(s) for video prediction (image sequence)
CC BY-SA 4.0
null
2023-05-22T23:29:48.717
2023-05-24T00:55:12.293
2023-05-23T21:28:33.520
388593
388593
[ "neural-networks" ]
616631
1
null
null
0
15
In [Recursive Partitioning for Heterogeneous Causal Effects by Susan Athey, Guido W. Imbens](https://arxiv.org/pdf/1504.01132.pdf), under section 2.5 Honest Splitting, two different datasets (called tr and est) are used for (a) creating the tree's split structure and (b) estimating leaf means. The splitting and cross-validation criteria is -EMSE(Ξ ) which consists of two expectations (Ξ  is essentially a tree, referred to as a partition in the paper and EMSE is a modification of estimated mean squared error). One of the expectations of -EMSE(Ξ ) involves the following term: $$V(\hat{\mu}^2(X_i;S^{est},\Pi))$$ The paper states: "We wish to estimate βˆ’EMSE(Ξ ) on the basis of the training sample S_tr and knowledge of the sample size of the estimation sample N_est" Specifically the following is used: $$V(\hat{\mu}^2(X_i;S^{est},\Pi))=\frac{S^2_{S^{tr}}(l(x;\Pi))}{N^{est}(l(x;\Pi))}$$ where $$S^2_{S^{tr}}$$ "is the within-leaf variance". My questions are: - To estimate the variance why is S^2 (in the numerator) computed from the tr set whereas N (in the denominator) is from the est set? In other words why not have both S^2 and N from the tr set once the est set is used to compute estimated leaf means? - Normally in statistics I see that the denominator when computing an unbiased sample variance is n-1 and not n (e.g. see here). In this case why is N used in the denominator instead of N-1. - In "We then weight this by the leaf shares pβ„“ to estimate the expected variance", what is the definition of leaf shares? Does "leaf shares are approximately the same" mean that the percentage of samples in each leaf is more or less the same for both the tr and est set?
Estimating variance of estimated mean in leaf when using Honest splitting
CC BY-SA 4.0
null
2023-05-23T00:45:48.927
2023-05-23T02:10:31.247
2023-05-23T02:10:31.247
269745
269745
[ "machine-learning", "variance", "causality", "cart" ]
616632
1
null
null
1
18
For a study, I ran an RI-CLPM over 4 time points, looking at the bidirectional association between negative social interactions (X) and depression (Y) over these 4 time points, with cross-lagged parameters constrained to be equal across time. Fit was excellent and parameter estimates for my cross-lagged paths suggested that higher within-person X (negative social interactions) at a previous time point predicted higher within-person Y (depression) at subsequent time point (so a positive relationship). These variables were not significantly related at the between-person level. Seemed simple enough to interpret at first. However, when looking at the descriptive statistics of my sample, the overall means of X and Y decrease over time. For example, negative interaction mean scores in my overall sample were 10 at T1, 8 at T2, 7 at T3 etc., and the same for depression scores. We wonder if this is possibly due to regression to the mean or an effect of being assessed. I am somewhat confused about how to interpret my findings. Given that the overall means of variables are decreasing over time, can I still interpret my results in this way: that when a person reports more X (e.g., negative social interactions) than they usually do, they also report more Y (e.g., depression) than they usually do at a subsequent time point, so kind of like an exacerbation effect. Or is this not actually representative of the data, and if so, is the correct interpretation: given that the sample means of X (negative social interactions) and Y (depression) seem to go down over time (e.g., a sort of recovery effect), that a "positive" cross-lagged path is actually showing some sort of stunting of this recovery effect. So I guess more broadly, I'm wondering what a "positive" within-person relationship might actually mean when the overall means of the variables are trending down over time. Hopefully this makes sense and happy to clarify!
How to interpret RI-CLPM parameter estimates when means of variables decrease over time
CC BY-SA 4.0
null
2023-05-23T01:12:48.577
2023-05-23T11:09:43.127
null
null
388597
[ "interpretation", "panel-data", "structural-equation-modeling" ]
616633
1
null
null
0
25
I'll caveat this question by saying that I do not think what I want to do is possible, but suggestions in the right direction would be most helpful. Now, my data consist of vectors of length six, where each element is discrete; here are some examples:
```
[(0, 12, 7, 7, 12, 0),
 (11, 16, 2, 2, 16, 11),
 (8, 7, 3, 3, 7, 8),
 (13, 5, 9, 9, 5, 13),
 (0, 12, 2, 2, 12, 0),
 (11, 2, 0, 0, 2, 11),
 (11, 5, 8, 8, 5, 11),
 (8, 11, 4, 4, 11, 8),
 (7, 11, 10, 10, 11, 7),
 (16, 7, 4, 4, 7, 16)]
```
Each position can take a value in [0,16]. Fitting a categorical distribution to this data is trivial. My problem is instead that I require the output, the sample, to have a certain symmetry. As you can see above, each vector has reflective symmetry about the center of the array. Each vector is a palindrome. Now, I require some form of PMF from which I can sample, but where those samples can be random in all aspects but the order in which the elements appear - is this possible? With numpy we can of course do e.g.
```
import numpy as np
np.random.choice(range(17), 1, [1/6.]*6)
```
But this will not produce a symmetric sample as it stands. One idea I had was that one could sample half an array and concatenate a reversed sample of that same array to produce the 'sample', but I suppose I am wondering if there is a better way.
Generating samples from a categorical proposal distribution with specific output (sample) structure
CC BY-SA 4.0
null
2023-05-23T01:44:12.910
2023-05-23T01:44:12.910
null
null
37280
[ "categorical-data", "discrete-data", "discrete-distributions", "discrete-optimization" ]
616634
1
null
null
0
24
I am trying to understand the difference between running a GEE with robust S.E., vs running a GLM where robust S.E could be calculated posthoc based on the model output. For example, if one considers a binary outcome with grouped data, it seem like it could be modeled using either GEE or conventional GLM, where in both cases robust S.E. could be calculated (see here for example: [https://grodri.github.io/glms/r/robust](https://grodri.github.io/glms/r/robust)). I'm trying to understand the difference between these approaches. When would one be more correct to use over the other? Thank you!
Clustered data: GEE VS GLM with robust S.E
CC BY-SA 4.0
null
2023-05-23T03:24:20.413
2023-05-23T04:23:10.967
2023-05-23T04:23:10.967
292896
292896
[ "generalized-linear-model", "generalized-estimating-equations", "clustered-standard-errors", "robust-standard-error", "sandwich" ]
616635
2
null
616532
1
null
Let $B(.)$ denote the backshift operator, i.e. $B y_t = y_{t-1}$. Then we can write a general non-seasonal ARIMA(p, d, q) as $$ \underbrace{(1 - \phi_1 B - ... - \phi_p B^p)}_{\text{AR(p)}} \ \underbrace{(1-B)^d}_{\text{d differences}} \ y_t = \underbrace{(1+\theta_1 B + ... + \theta_q B^q) \epsilon_t}_{\text{MA(q)}} $$ and $\epsilon_t \overset{iid}{\sim}N(0, \sigma)$ You just need to substitute the parameter estimates in with $p=2, d=1, q=2$. If you want to remove the backshift operator, simply expand everything out and notice that $B^k y_t = y_{t-k}$
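For example, you could read the estimated $\phi_i$ and $\theta_j$ straight off a fitted object and plug them into the formula above (a small sketch on simulated data; replace `y` with your own series):
```
set.seed(1)
y <- cumsum(arima.sim(model = list(ar = c(0.5, -0.2), ma = c(0.3, 0.1)), n = 300))  # toy series

fit <- arima(y, order = c(2, 1, 2))
coef(fit)   # ar1, ar2, ma1, ma2 correspond to phi_1, phi_2, theta_1, theta_2 in the formula
```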
null
CC BY-SA 4.0
null
2023-05-23T04:00:15.100
2023-05-23T04:00:15.100
null
null
359717
null
616636
1
null
null
1
28
Exercise 5.14 of [Wainwright](https://www.cambridge.org/core/books/highdimensional-statistics/8A91ECEEC38F46DAB53E9FF8757C7A4E) gives a way to estimate the maximum singular value of Gaussian random matrices using the one-step discretization bound and the Gaussian comparison inequality, as shown. [](https://i.stack.imgur.com/ShJ3b.png) Can we use the Dudley integral to estimate it? Intuitively I thought it would work, but I didn't manage to work it out, since there are too few worked examples of the Dudley integral in [Wainwright](https://www.cambridge.org/core/books/highdimensional-statistics/8A91ECEEC38F46DAB53E9FF8757C7A4E). It was hard for me to bound it.
Using Dudley Integral to estimate maximum singular value of Gaussian random matrices
CC BY-SA 4.0
null
2023-05-23T00:54:47.693
2023-05-23T04:38:36.377
null
null
383159
[ "probability", "mathematical-statistics" ]
616637
2
null
616618
0
null
Given that you only have summary data (not the full dataset) then the scikit-learn metrics package will not work as expected. I found [this useful blog](https://kiwidamien.github.io/what-is-a-roc-curve-a-visualization-with-credit-scores.html) that I think may help you understand how that function works. The blog uses an example very similar to your question. Also, I need 50 creds to write comments on posts, so I wasn't able to ask for clarification. However, based on your data, it seems that you are trying to calculate the fpr and tpr of a good/bad borrower classifier. My interpretation is that the "No. Borrowers" column is the number of borrowers classified as good borrowers by the model given that you use a threshold of selecting the top $i^{th}$ decile, while the other two columns are the true number of good and bad borrowers at that decile. If that is the case, then we must make a few corrections to the code. Remember that $FPR = \frac{FP}{FP + TN}$ therefore at a given threshold we must divide the total number of accepted bad borrowers (the cumulative sum until the $i^{th}$ threshold) and divide by the total number of bad borrowers. similarly, the tpr may be computed using the formula: $TPR = \frac{TP}{TP + FN}$ Your accuracy can be calculated with the following formula $Acc = \frac{TP + TN}{Total\_population}$. In other words, your accepted positives (cumulative sum of good borrowers until $i^{th}$ threshold), plus your denied negatives (sum of bad borrowers beyond $i^{th}$ threshold), divided by you total number of borrowers. Here is my implementation of your code. I will ignore code #1 since I believe code #2 is closer to a good answer. I have also taken the liberty to make the code a bit cleaner by using more built-in tools from numpy and pandas. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt # Create a data frame to store the number of borrowers, good borrowers, and bad borrowers in each decile. data = { "Decile": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "No. Borrowers": [100, 300, 200, 300, 600, 200, 700, 800, 900, 1000], "Good Borrowers": [80, 160, 140, 220, 500, 100, 560, 640, 150, 800], "Bad Borrowers": [20, 140, 60, 80, 100, 100, 140, 160, 750, 200] } df = pd.DataFrame(data) total_population = df['No. Borrowers'].sum() df['TP'] = df['Good Borrowers'].cumsum() df['FN'] = df['Good Borrowers'].sum() - df['Good Borrowers'].cumsum() df['FP'] = df['Bad Borrowers'].cumsum() df['TN'] = df['Bad Borrowers'].sum() - df['Bad Borrowers'].cumsum() df['fpr'] = df.FP / (df.FP + df.TN) df['tpr'] = df.TP / (df.TP + df.FN) df['acc'] = (df.TP + df.TN) / total_population # Calculate the AUC. auc = np.trapz(df.tpr,df.fpr) print("AUC:", auc) # Calculate the Gini coefficient. gini = 2*auc - 1 print("Gini coefficient:", gini) # Plot the ROC curve. plt.plot(df.fpr, df.tpr) plt.xlabel("False Positive Rate") plt.ylabel("True Positive Rate") plt.title("ROC Curve") plt.fill_between(df.fpr, df.tpr, alpha = 0.1) plt.text(0.6, 0.4, f'AUC = {auc:0.3}\n Gini = {gini:0.3}') plt.show() ``` Please let me know if you need any clarification. I am pretty new to StackExchange so I'm still getting used to this type of communication skill :)
null
CC BY-SA 4.0
null
2023-05-23T04:59:59.197
2023-05-23T04:59:59.197
null
null
388530
null
616639
1
null
null
0
38
I toss a die five times, X = 1, 2, 3, 4, 5; that is, there are five throws. On each throw, we get a number on the die, so we have 5 values for the 5 throws: Y = 1, 5, 4, 3, 2. On the first throw (X = 1), I get Y = 1 on the die. On the second throw (X = 2), I get a value of Y = 5 on the die. I compute the data proportions by dividing each value by the total sum of the values. Normalized data: 1/15, 5/15, 4/15, 3/15, 2/15. Here, 15 is the sum of all the values I observed over my 5 throws. Now, I know that although the sum of the proportional values will be one, we cannot call this a probability distribution of Y given X. Is there any way to link the proportion values to the probability distribution of Y? I wish to create a probability distribution which would reflect the proportion of Y for each X.
Using proportion to create a probability distribution
CC BY-SA 4.0
null
2023-05-23T06:00:23.007
2023-05-23T09:44:45.200
2023-05-23T09:44:45.200
338252
338252
[ "probability", "distributions", "proportion" ]
616640
2
null
610370
1
null
I ended up buying "An Introduction to Bayesian Inference, Methods and Computation" by Nick Heard which I felt that it describes the theory quite nice (although some computations become really long). For practical ideas I first read "Probabilistic Programming and Bayesian Methods for Hackers" (see link above).
null
CC BY-SA 4.0
null
2023-05-23T06:03:05.950
2023-05-23T06:03:05.950
null
null
298651
null
616642
2
null
551888
0
null
Taking the derivatives w.r.t. matrices and vectors can be quite tricky. However, they can typically be broken down to scalar equations as follows: $$y = b + \sum_j x_j w_j$$ In order to get the gradient w.r.t. the vector $W$, you just need to compute the gradient w.r.t. every entry $w_j$. The result will again be a vector. Apart from that, you will need the [chain rule](https://en.wikipedia.org/wiki/Chain_rule) as indicated by other comments and answers
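For instance, with a squared-error loss $L = \tfrac{1}{2}(y - t)^2$ for a target $t$ (a generic example, not taken from the linked question), the chain rule gives
$$\frac{\partial L}{\partial w_j} = \frac{\partial L}{\partial y}\,\frac{\partial y}{\partial w_j} = (y - t)\,x_j, \qquad \frac{\partial L}{\partial b} = y - t,$$
so stacking the entries over $j = 1, \dots, m$ gives the gradient vector $\nabla_W L = (y - t)\,X$.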
null
CC BY-SA 4.0
null
2023-05-23T06:33:08.040
2023-05-23T06:33:08.040
null
null
95000
null
616643
2
null
223149
0
null
Answered in comments by Andy W: > Sorting the correlation matrix may provide clusters of variables, see here for one description of how to sort them
null
CC BY-SA 4.0
null
2023-05-23T06:35:43.890
2023-05-23T06:35:43.890
null
null
121522
null
616644
2
null
208936
1
null
Briefly: translation invariance means that the algorithm will recognize the object even if its position is shifted from one place in the picture to any other place. For example, if we have a photo of a cat on a grass background and we shift the cat to the corner of the image (or anywhere else), the model still recognizes it; it is not affected by the translation (shifting).
null
CC BY-SA 4.0
null
2023-05-23T07:18:22.640
2023-05-23T07:18:22.640
null
null
388618
null
616645
2
null
616470
1
null
It sounds like you have data at an individual level, so you ought to be able to directly identify the count of people for which all three events are true. I would caution you against assuming independence, since it almost never holds in these types of studies. Instead, go back to your raw data and count the number of individuals for which all three events are true and then use a standard binomial confidence interval (e.g., Wilson score) to estimate the true joint probability of all three events.
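If you work in R, a minimal sketch of that last step could look like this (the counts are made up; `prop.test()` without continuity correction returns the Wilson score interval):

```
n_total <- 500   # hypothetical number of individuals in the study
n_all3  <- 37    # hypothetical count for whom all three events are true

# Wilson score confidence interval for the joint probability
prop.test(n_all3, n_total, correct = FALSE)$conf.int

# alternatively: binom::binom.confint(n_all3, n_total, methods = "wilson")
```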
null
CC BY-SA 4.0
null
2023-05-23T07:19:52.663
2023-05-23T07:19:52.663
null
null
173082
null
616646
1
null
null
0
21
I am calculating Gender Pay gap using gross salaries for employees. I'm doing two things to be specific: - Calculate Overall Gender pay gap for all the employees. (Taking all male/female employees to calculate average salaries) - Calculate Gender Pay gap at each level/grade (Employees divided by Seniority) I had a question about the numbers that I'm getting. Q. The Pay Gap for all individual grades is Negative (Women are paid more) except for one. (A total of 5 grades). But the overall Pay Gap I'm getting is quite Positive. Can anyone help explain what is happening here? Formula I'm using for Pay Gap: (Avg_Male_monthly_gross - Avg_Female_monthly_gross) / Avg_Male_monthly_gross Thank you.
Overall Average greater than Individual Category averages
CC BY-SA 4.0
null
2023-05-23T07:22:31.043
2023-05-23T07:22:31.043
null
null
388620
[ "mean", "median" ]
616648
2
null
616596
0
null
Let's look at the data: ``` library(data.table) DT <- fread("https://raw.githubusercontent.com/brian-o-mars/height-diameter-sim/main/tree_data.csv") library(ggplot2) ggplot(DT, aes(DIA, HT)) + geom_point() ``` [](https://i.stack.imgur.com/SxaQc.png) Clearly, the variance increases with increasing HT values. This means a log-transformation is a good idea. The log-transformed model: $log(y) = log(a) + c \cdot log(1 - exp(-b \cdot x))$ Luckily, this model is linear in `log(a)` and `c`, which means we only need a starting parameter for `b`. Often you can derive that from domain knowledge (e.g., check some publications). I will just guess it. ``` DT[, y := HT - 1.3] DT[, logy := log(y)] fitlog <- nls(logy ~ cbind(1, log(1 - exp(-b*DIA))), data = DT, start = list(b = 0.1), algorithm = "plinear") summary(fitlog) #Formula: logy ~ cbind(1, log(1 - exp(-b * DIA))) # #Parameters: # Estimate Std. Error t value Pr(>|t|) #b 0.14147 0.00790 17.91 <2e-16 *** #.lin1 4.46843 0.02337 191.24 <2e-16 *** #.lin2 1.20448 0.03098 38.88 <2e-16 *** #--- #Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 # #Residual standard error: 0.1988 on 1951 degrees of freedom # #Number of iterations to convergence: 3 #Achieved convergence tolerance: 1.797e-06 ``` A diagnostic plot shows that we have pretty good homogeneity: ``` plot(residuals(fitlog) ~ fitted(fitlog)) abline(0, 0) ``` [](https://i.stack.imgur.com/HLp3R.png) (You should investigate that one extreme value.) Let's look at predictions. On the log-scale: ``` ggplot(DT, aes(DIA, logy)) + geom_point() + stat_function(fun = \(x) predict(fitlog, newdata = data.frame(DIA = x)), color = "red") ``` [](https://i.stack.imgur.com/AYP4L.png) And on the original scale: ``` ggplot(DT, aes(DIA, HT)) + geom_point() + stat_function(fun = \(x) exp(predict(fitlog, newdata = data.frame(DIA = x))) + 1.3, color = "red") ``` [](https://i.stack.imgur.com/1TtQ4.png) Now, obviously, for large diameters you will have pretty large differences between measured and predicted values as you can see in the plot. If you want to improve predictions you need to include the additional data in your fits. Since you appear to have repeated measures, you should do that in any case. This means, you need to fit a non-linear mixed-effects model (see R package `nlme`). Since I don't have a full understanding of your dataset, non-linear mixed-effects models are not trivial to fit, and I have already invested enough time, I'll stop here.
null
CC BY-SA 4.0
null
2023-05-23T07:32:36.110
2023-05-23T07:32:36.110
null
null
11849
null
616650
2
null
616591
2
null
At least the 2nd and 3rd solutions are correct. Your design matrix has linearly dependent columns. For example, with $x_1$ the column of ones, the third column can be expressed in terms of the first two columns, $x_3 = 0.15 + 0.45 x_2$, and the equation can also be expressed as $$\begin{array}{rcl} -0.2 + 0.4 x_2 + 1.2 x_3 &=& -0.2 + 0.4 x_2 + 1.2 (0.15 + 0.45 x_2) \\ &=& (-0.2 + 0.18) + (0.4 + 0.54) x_2\\ &=& -0.02 + 0.94 x_2 \end{array}$$ ### Methods 2 and 3 This last equation on the right-hand side is the solution given by the 2nd and 3rd methods, which probably drop one of the columns. In R you get the same behavior when we use the function `lm`, which gives as output ``` > lm(y~X+0) Call: lm(formula = y ~ X + 0) Coefficients: X1 X2 X3 -0.02 0.94 NA ``` The last column is ignored when you give the computer the task of solving the equation. ### Method 1 Your 1st method probably attempts to invert the (non-invertible) matrix anyway. For example, the inverse command does give some output. On my computer (an online [https://www.tutorialspoint.com/execute_matlab_online.php](https://www.tutorialspoint.com/execute_matlab_online.php)) I get: ``` disp(inv(A.'*A)); -3.3777e+13 -1.0133e+14 2.2518e+14 -1.0133e+14 -3.0399e+14 6.7554e+14 2.2518e+14 6.7554e+14 -1.5012e+15 ``` and the resulting solution is close to, but not exactly equal to, a correct solution (possibly due to round-off errors). In my case I got -0.071429 0.785714 0.342857, which is close to a correct solution: $-0.071429+0.15 \cdot 0.342857 \approx -0.02$ and $0.785714+0.45 \cdot 0.342857 \approx 0.94$. In your case the difference is larger: $-254.4 + 1696\cdot 0.15 \approx 0$ and $-762.3 + 1696 \cdot 0.45 = 0.9$ (but this might be due to the output being given with less precision). In R I can get the same result when I use the `solve` command while setting the tolerance parameter extremely low. In that case the inverse matrix is still computed; and it can be computed because the columns in the matrix X are not entirely dependent, due to round-off errors. ``` X = cbind(rep(1,20), seq(-1,1,length.out = 20), seq(-0.3,0.6,length.out = 20)) beta = c(-0.2,0.4,1.2) y = X %*% beta X = round(X,5) solve(t(X) %*% X, tol = 10^-50) %*% t(X) %*% y # [,1] #[1,] -0.0430674 #[2,] 0.8707996 #[3,] 0.1537806 ```
null
CC BY-SA 4.0
null
2023-05-23T07:52:21.713
2023-05-23T09:07:55.323
2023-05-23T09:07:55.323
164061
164061
null
616651
1
616673
null
0
29
I'm an AP Statistics student (high school senior) doing a final project investigating the correlation between an "overall score" as the response variable and five different "subscores" as explanatory variables (the overall score is NOT calculated from the subscores; rather, each score is given separately). I'm interested in doing some sort of inference (like a hypothesis test) to see if the multiple regression model that I got with my sample data is significant; i.e. I want to see if the model can provide statistically convincing evidence that the population multiple R value is not equal to 0. Could someone help me by telling me (in detail) how I would go about doing this inference? I only have Excel and RStudio at disposal (although I have very very little coding knowledgeβ€”almost negligible). I have taken one-variable calculus and have a very basic understanding of multivariable calculus, but I have never taken calculus-based statistics.
Testing the significance of a multiple R value from a multiple regression
CC BY-SA 4.0
null
2023-05-23T08:03:35.527
2023-05-23T10:59:16.480
null
null
388601
[ "hypothesis-testing", "correlation", "multiple-regression", "inference" ]
616652
1
null
null
0
81
I'm working on a project where we currently look at some sort of efficiency measure: we numerically integrate the area under the efficient frontier up to the current portfolio's volatility and relate that to the total area spanned by the difference between the upper and lower boundary of our Markowitz bullet. Here is a pic of what we are calculating at the moment: [](https://i.stack.imgur.com/TJeBA.jpg) (Sorry for the crude pic). We calculate area A and area B (which also includes area A) and then calculate the ratio A/B. "Aktuelle Fondsperformance" (current fund performance) denotes the location of the current portfolio that we try to analyze. The other %-points are reference portfolios, which are not really necessary for the analysis. At the beginning, I hoped for a more or less robust measure w.r.t. the expected mean, which turned out not to be the case, as the shift of the mean of the efficient frontier and of the actual portfolio are not of the same magnitude (not entirely surprising). Consequently, our ratio fluctuates more or less randomly between 0 and 100% with a strong tendency towards 0%: [](https://i.stack.imgur.com/DVDyb.png) My question therefore: if I assume the mean and volatility/covariance matrix are distributed in a certain way, how can I derive some sort of test statistic for my ratio A/B that I can use for testing, and which test seems to fit best? I remember vaguely that somewhere else on Stack Exchange it has been suggested to use a Chi-Square test for this, and others suggested a Likelihood-Ratio test. However, I'm in doubt whether this is the test that I need, as I want to test whether my ratio indicates that area A is large (and thus my portfolio is far away from the efficient frontier), and whether the assumptions for the test statistic (i.e. the ratio A/B) are met. Which test can I use (even as a first proxy) to test whether my ratio A/B is statistically distinct from zero? EDIT: Here is some of my A/B ratio time series: |UD Measure | |----------| I appreciate any hint. Thanks a lot, Thomas
Construction of a statistical test for a test statistic that fluctuates between 0 and 1
CC BY-SA 4.0
null
2023-05-23T08:10:04.287
2023-05-23T12:04:49.120
2023-05-23T12:04:49.120
357274
357274
[ "hypothesis-testing", "distributions", "statistical-significance" ]
616653
1
616656
null
2
69
Please help me understand the following: Suppose a tester recorded the quantity $Y=X_1+\cdots+X_n$, where each $X_i$ has a Poisson distribution with mean $\theta$. Now the tester has lost all samples $X_i$ and wants to create fake observations $Z_1,\ldots,Z_n$. The tester knows that $$ P(\mathbf Z|Y)=\frac{P(\mathbf Z,Y)}{P(Y)} $$ It is known that $P(Y)$ is also a Poisson distribution with mean $n\theta$, while $P(\mathbf Z,Y)$ is a product of multiple Poisson pdfs with mean $\theta$. --- I am asked to find the likelihood function. However, the book hinted that the likelihood is the product of the conditional $P(\mathbf Z|Y)$ and $P(Y)$. Using that hint, I have shown that the likelihood is proportional to the likelihood of $X_1,\ldots,X_n$. I am confused: why is $l(\theta)=P(\mathbf Z|Y)P(Y)$? This equation is the joint distribution $P(\mathbf Z,Y)$, and I would argue that $P(\mathbf Z|Y)$ should be the likelihood by definition.
Why is the likelihood the product of the conditional PDF and the PDF of the parameter
CC BY-SA 4.0
null
2023-05-23T08:22:16.540
2023-05-23T08:31:58.977
null
null
338644
[ "likelihood" ]
616654
2
null
616639
0
null
I'm assuming that $X$ are the values (e.g. $X=3$ means throwing βš‚) and $Y$ are the counts (so $X=3$ and $Y=4$ means that you observed βš‚ four times out of $\sum_j Y_j = 15$ throws in total). First of all, your language is confusing. The "probability distribution of Y given X" sounds like a [conditional probability](https://en.wikipedia.org/wiki/Conditional_probability), while it doesn't seem that you are conditioning on anything here, but rather counting the outcomes. As for your question, $\hat p_i = Y_i / \sum_j Y_j$ would be the estimate of the probability of observing the $X_i$ outcome using the [empirical probability](https://en.wikipedia.org/wiki/Empirical_probability). This would be also the maximum likelihood estimator for the [categorical distribution](https://en.wikipedia.org/wiki/Categorical_distribution) that the dice throws are following. Alternatively, you can use the [Bayesian estimator](https://en.wikipedia.org/wiki/Categorical_distribution#Bayesian_inference_using_conjugate_prior) if you can define a prior for the probabilities. It is a valid way of estimating the distribution, though the obvious limitation is that the less data you have, the less precise and trustworthy the result would be.
null
CC BY-SA 4.0
null
2023-05-23T08:25:11.023
2023-05-23T08:25:11.023
null
null
35989
null
616655
2
null
171550
0
null
Given that $X$ is random, it has some variance (denote it by $v$). Then we have $$ \mathbb{V}\left[\frac{\sum_{i=1}^B X_i}{B}\right] = \frac{\sum_{i=1}^B \mathbb{V}[X_i]}{B^2} = \frac{Bv}{B^2} = v/B. $$ Of course, moving the variance inside the sum is justified because the $X_i$ are i.i.d. (independence gives the variance of the sum as the sum of the variances, and the identical distribution gives each term the same variance $v$); otherwise covariance terms come into play.
null
CC BY-SA 4.0
null
2023-05-23T08:29:50.253
2023-05-23T08:30:36.230
2023-05-23T08:30:36.230
388627
388627
null
616656
2
null
616653
3
null
$P(\mathbf Z|Y)$ is the likelihood. I don't know where you found different information, but either it is wrong or you must have misunderstood it. If you [search our site](https://stats.stackexchange.com/search?q=likelihood), you'll find multiple examples.
null
CC BY-SA 4.0
null
2023-05-23T08:31:58.977
2023-05-23T08:31:58.977
null
null
35989
null
616657
1
null
null
0
7
I've analysed the unpaired survey responses using the Mann-Whitney U test and several questions show a change after treatment at p < 0.05. However I would now like to characterise the influence of a binary demographic variable (captured in the survey) on the results. Is it acceptable to split the survey results by the variable and analyse them separately? If so, how would I determine whether the variable was statistically significant? If not, what would be a good approach here?
How can I analyse the influence of a binary variable on a set of pre- and post-survey Likert scale questions?
CC BY-SA 4.0
null
2023-05-23T09:04:46.953
2023-05-23T09:04:46.953
null
null
7080
[ "wilcoxon-mann-whitney-test", "likert" ]
616658
1
616747
null
1
46
I'm new to statistics and I'm struggling to grasp the distinction between an independent and a dependent variable. For instance, if I want to examine the correlation between daily COVID-related deaths and the number of Facebook posts made on the same day, I believe that the number of daily deaths should be my independent variable (plotted on the x-axis of my scatter plot). I want to determine whether the number of deaths influences the number of posts. However, I've been informed that my thinking is incorrect, and I'm having trouble understanding why. Could someone please clarify this for me?
Understanding the Difference Between Independent and Dependent Variables
CC BY-SA 4.0
null
2023-05-23T09:07:27.740
2023-05-23T23:31:59.720
2023-05-23T23:31:59.720
345611
388630
[ "correlation", "causality", "predictor", "scatterplot", "dependent-variable" ]
616660
1
616662
null
0
35
I'm struggling to understand what likelihood-free means in ABC, since ABC uses a model as a simulator to produce $y_{simulated}$. However, it is not clear to me what the difference is between model/simulator and model/likelihood function (I had a look [here](https://stats.stackexchange.com/questions/90276/abc-how-can-it-avoid-the-likelihood-function) too). So, if I got that right (please correct me otherwise), below are the differences. model/simulator: $y = b_0 + b_1 \times X$ model/likelihood: $\hat y = b_0 + b_1 \times X + \epsilon$, $\epsilon \sim N(0, \sigma)$ So the likelihood seems to take care of the distribution of the residuals, which could be log-normal, Beta or Poisson (?). But if that's the case, adding some normal random error in the simulator doesn't sound particularly difficult, so why not have it in ABC? Does that mean that ABC cannot really work with a Gaussian process, for instance?
Model, Likelihood & ABC
CC BY-SA 4.0
null
2023-05-23T09:13:01.260
2023-05-23T13:41:28.477
2023-05-23T09:22:24.903
162190
162190
[ "likelihood", "approximate-bayesian-computation" ]
616661
1
616703
null
2
66
I am struggeling to see where this problem fits - i.e. what topics this problem relates to, so I am not able to find the right literature. I want to use some particular information as a prior to a model. In general I want to say something about the distribution of $\theta$ given that I know that $C\cdot \theta \sim \mathcal{N}(\mu, \Sigma)$, when $C$ is rectangular (e.g. it can be a row vector of 1's). I want to use this information as a prior on the model parameters $\theta$. If it helps we can assume that the model is linear in the parameters $Y = X\theta + \epsilon$, with Gaussian noise $\epsilon$, and we have sufficiently many observations of $Y, X$ (I say this since the matrix $C$ may not regularize the problem sufficiently to guarantee a unique solution to the parameter estimate for any number of observations), so we can obtain an estimate of $\theta$ which is Gaussian. The simplest form of my problem is when $\theta \in \mathbb{R}^2$ and $(1, 1) \cdot \theta \sim \mathcal{N} (\mu, \sigma^2)$ for some given $\mu, \sigma^2$. Then I guess the (prior) distribution on $\theta$ should be a Gaussian along the line $\theta_2 = \mu - \theta_1$. Can we state that somehow? Such a prior should "pull" the parameter estimate towards this subspace defined by $(1,1)$, and the degree of "pulling" is determined by the relative uncertainty of the prior vs the observations of course. How can I use a prior like the above to compute a posterior? Are there conditions on $C$ (such as full row rank) for this to work? Could you show how the calculations of the posterior are carried out?
Parameter distribution of $\theta$ from a rectangular matrix multiplication $C\theta$
CC BY-SA 4.0
null
2023-05-23T09:21:49.353
2023-05-24T14:47:43.920
2023-05-23T10:57:44.810
357297
357297
[ "regression", "bayesian", "regularization", "prior" ]
616662
2
null
616660
2
null
The term likelihood-free is alas confusing as ABC requires a statistical model, hence referring to a specific likelihood function even though that function cannot be computed (hence the call to ABC). Hence, the simulation density $p_\theta(y)$ is identical to the likelihood function when (ideally) computed at the observed data, i.e., $$\ell(\theta) = p_\theta(y^\text{obs})$$ Note that ABC does not add an extra noise as you suggest in the question but only replicates the random process behind the data. In the toy example of a regression, with an observation $(y^\text{obs},x^\text{obs})$, the statistical model would thus be $$y=b_0+b_1x+\epsilon\quad\epsilon\sim N(0,1)$$ the likelihood function would be $$\exp\{-(y^\text{obs}-b_0-b_1x^\text{obs})^2/2\}$$ the simulation model would produce $$y^\text{sim}=b_0+b_1x^\text{obs}+\epsilon^\text{sim}$$ with $\epsilon^\text{sim}\sim N(0,1)$ to be compared with $y^\text{obs}$ in ABC, that is, the simulation $\theta$ from the prior would be accepted as approximately simulated from the posterior if $$\text{dist}(y^\text{obs},y^\text{sim})<\epsilon$$
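For concreteness, here is a minimal R sketch of this accept/reject step; the observation, the priors and the tolerance are all made up for illustration:

```
set.seed(1)
x_obs <- 1.2;  y_obs <- 0.7        # a single observed pair (made up)
n_sim <- 1e5
b0 <- rnorm(n_sim, 0, 5)           # prior draws for b0 (assumed prior)
b1 <- rnorm(n_sim, 0, 5)           # prior draws for b1 (assumed prior)
y_sim <- b0 + b1 * x_obs + rnorm(n_sim)   # replicate the data-generating process, no extra noise added
eps  <- 0.1
keep <- abs(y_sim - y_obs) < eps   # ABC acceptance: dist(y_obs, y_sim) < eps
abc_posterior <- data.frame(b0 = b0[keep], b1 = b1[keep])
nrow(abc_posterior)                # accepted draws, approximately from the posterior
```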
null
CC BY-SA 4.0
null
2023-05-23T09:41:30.080
2023-05-23T13:41:28.477
2023-05-23T13:41:28.477
7224
7224
null
616663
1
null
null
0
17
I have the following equation: $C_i = \frac{EF_eS_i\delta_wW_j(\frac{t_i}{t_j})^m(\frac{v_i}{v_j})^n\alpha_w\gamma_f}{dwtD_i}$ where $S_i = S_{base}(.455L_i^2-.710L_i+1.280)$ And $L_i = \frac{v_i}{v_j}$ This is a simplified description of a ship's ($i$) carbon intensity for each hour at sea during a voyage. I have data to compute the equation and know what exponents $m$ and $n$ are with n described as a cubic, but I want to understand the influence of each factor on the carbon intensity. The only variables that change each hour in this equation are $v_i, S_i, D_i$. I have a panel dataset of ships (hourly, per voyage and ship). Aside from plotting each explanatory variable against $C_i$ and looking at r-squared, can this equation be transformed using a log-log specification and then perform statistical regression?
Statistical analysis of nonlinear equation
CC BY-SA 4.0
null
2023-05-23T09:45:18.570
2023-05-23T10:00:54.320
null
null
318567
[ "nonlinear-regression" ]
616664
1
null
null
0
43
Edited: I've tried to specify my problem. I hope it makes more sense. I'm a surgeon doing research. I use SPSS for statistics, but in the group we are also able to use R. We are performing a study investigating the development of complications after surgery and their relation to frailty. We are interested in the number of complications, the seriousness of complications and their development over time (after surgery). We have 500 patients who have gone through surgery. They are categorized within 3 groups of frailty (not frail, frail, very frail). The groups are not equally sized, with roughly 300, 150 and 50 patients in each, respectively. We have registered all complications after surgery and know on what day these happened. NOTE that each patient often suffers from more than 1 complication. Furthermore, the complications are graded from 1-5 regarding seriousness (where 5 is death). I would love to be able to illustrate data regarding: - development over time - cumulativeness in each group (but with regard to the different sizes of the groups - like a ratio) - dropout when a patient dies (like in a Kaplan-Meier curve) I've been trying to modify Kaplan-Meier curves to accommodate multiple events, without luck. I'm looking for a kind of plot that is able to illustrate the cumulation of complications (y-axis) over time (x-axis) within the 3 groups separately. Any complication should be registered as a "jump" in the graph (as in a KM plot), and the height of the jump should be relative to the number of patients left in the group who can still suffer a complication, meaning that dead patients should be censored (also as in a KM plot). It would be lovely if censored data (death) were illustrated (again, like a KM plot). All in all, a KM-like plot with rising graphs for 3 groups and with each individual being able to experience more than one event. If it is possible with a KM algorithm in SPSS (or other software), we haven't been able to figure it out. Any good ideas as to where I should go? Maybe just a couple of keywords making it possible to improve my search strategy on the internet. Kindly, Thomas K, Copenhagen.
Graphic presentation/analysis of multiple events of different character
CC BY-SA 4.0
null
2023-05-23T09:50:53.030
2023-05-24T05:42:24.093
2023-05-23T18:04:04.903
388632
388632
[ "data-visualization", "recurrent-events" ]
616665
1
null
null
1
13
My project model includes one input variable, two mediators (both measured at the individual level, but one is about the team so rWG will be calculated), one moderator, and two outcome variables (again, both measured at the individual level, but one is about the team so rWG will be calculated). It will be a pre-post design with experimental and control groups. I plan on the following data analysis in the order of CFA, Structural model (mediation and moderation analysis), and MANCOVA for the group differences. Is the process redundant? How do I better my analysis strategy?
Can I use MANCOVA and SEM?
CC BY-SA 4.0
null
2023-05-23T09:54:13.000
2023-05-23T16:10:07.657
2023-05-23T16:10:07.657
388631
388631
[ "experiment-design", "structural-equation-modeling", "ancova", "control-group" ]
616666
1
null
null
0
18
After running `glmnet` I can get a pseudo R-squared with `glmnet.fit$dev.ratio`. Does this take into account the complexity of the model? In other words, is `dev.ratio` equivalent to R-squared or adjusted R-squared?
dev.ratio in glmnet
CC BY-SA 4.0
null
2023-05-23T09:56:08.260
2023-05-23T09:56:08.260
null
null
212831
[ "r" ]
616667
1
616886
null
2
54
I am currently modelling using the 'mgcv' package in R. My response variable is called log.tr, representing the log of residence time. My data looks a little bit like this: ``` set.seed(123) logtr <- seq(0.67, 4.29, length.out = 100) day_ <- sample(1:365, 100, replace = TRUE) # Random day values year_ <- sample(2000:2020, 100, replace = TRUE) # Random year values TEMPERATURE <- rnorm(100, mean = 25, sd = 5) # Random temperature values gam_tres_df <- data.frame(log.tr = logtr, day_ = day_, year_ = year_, TEMPERATURE = TEMPERATURE) ``` I am attempting to fit a generalized additive model (GAM) using the 'gam' function from 'mgcv'. The model formula I am using is: ``` gam(log.tr ~ s(day_, k = 40, bs = 'cc') + s(year_, k = 12) + s(TEMPERATURE, k = 40) + s(day_, year_), method = 'REML', data = gam_tres_df) ``` In this formula, I have included smooth terms for the variables day_, year_, TEMPERATURE, and an interaction term between day_ and year_. I have chosen specific degrees of freedom (k) for each smooth term. However, I suspect that the variables day_ and year_ may be dependent. I am considering adding a correlation structure to the model to account for this potential correlation. Specifically, I am thinking of using the `corARMA` function with the form `~ 1|Year` and an autoregressive moving average (ARMA) model with a lag of 3 (p = 3). My question is whether I should include this correlation structure (`corARMA(form = ~ 1|Year, p = 3)` or maybe (`form = ~ day | year)`) in my GAM model, or if the current model specification without the correlation structure is appropriate. Also, I wanted to know how to define this correlation structure for seasonal daily data, since I want to account for residual variation from year to year. Please let me know if you need further information or have any additional questions!
Seasonal GAM: are correlation structures needed?
CC BY-SA 4.0
null
2023-05-23T09:57:14.690
2023-05-25T11:59:23.813
2023-05-25T11:44:14.687
320046
320046
[ "time-series", "generalized-additive-model", "mgcv" ]
616668
2
null
616663
0
null
Probably yes. Various software packages offer you the option for bespoke non-linear models e.g. in SAS you have PROC NLMIXED (frequentist estimation) or in R you have things like [brms (Bayesian models)](https://cran.r-project.org/web/packages/brms/vignettes/brms_nonlinear.html) (there's also `nlme`). Key considerations would be parameterization (e.g. do you put some quantities that can only ever be positive on a log-scale and then put exp(quantity) in where needed?), numerical stability (some things might be more stable on a log-scale or with certain in-built stable functions) and whether there should be random effects (and on what - e.g. is there a random ship effect on the intercept or perhaps do other parameters vary between ships?).
null
CC BY-SA 4.0
null
2023-05-23T10:00:54.320
2023-05-23T10:00:54.320
null
null
86652
null
616669
1
null
null
0
38
The notion of an individual's "true probability" or "true risk" of a certain outcome $Y$ is contested [[1](https://link.springer.com/article/10.1007/s10654-020-00700-w), [2](https://onlinelibrary.wiley.com/doi/10.1111/j.1751-7176.2012.00592.x), [3]](https://arxiv.org/abs/2209.01687). Nevertheless, it is often discussed and useful for certain kinds of analyses. However, all the works I can find only describe this quantity in words and do not define it mathematically. How could one define and mathematically denote the "true individual probability" of a (binary) outcome $Y$ for a given individual? Is there a standard notation? Is there notation for something like "P(outcome y=1 for individual i, given all there is to know about this individual and the state of the universe)" ?
How to define & mathematically denote "true individual probability"?
CC BY-SA 4.0
null
2023-05-23T10:08:19.567
2023-05-23T11:03:46.960
null
null
131402
[ "probability", "notation" ]
616671
2
null
616669
3
null
Maybe you had problems with finding it because it is ambiguous. First of all, what is probability is a philosophical question, so there is no single answer to what it means. But that's a different story. If your quantity of interest is $X$, then the notation for the probability is simply $P(X)$. If you want to condition it on something, it's $P(X|Y,Z,\dots)$. > given all there is to know about this individual and the state of the universe But what would it be? If you knew everything about every atom in the universe, or even everything on the sub-atomic level, with a perfect understanding of the universe, then without any uncertainty you would know what would happen, based on your knowledge and the knowledge of the laws of physics. In such a case, the probability would be always equal to one; it would be certain.
null
CC BY-SA 4.0
null
2023-05-23T10:43:07.510
2023-05-23T11:03:46.960
2023-05-23T11:03:46.960
22047
35989
null
616672
1
null
null
1
10
So I have a dataset that consists of the batch correction through RUV-normalization of several microarray datasets containing tumoral and non-tumoral samples. The data is in Log2 RUV-normalized expression. I want to perform differential expression analysis. Is the limma package in R fit for this? From what I've read the limma package expects Log2 expression data without normalization, but some tutorials I also find use normalized data. Thank you very much to all!
Can the limma package be applied to Log2 RUV-normalized data?
CC BY-SA 4.0
null
2023-05-23T10:48:03.423
2023-05-23T11:46:12.040
2023-05-23T11:46:12.040
56940
388639
[ "r", "batch-normalization" ]
616673
2
null
616651
0
null
In R (RStudio is an editor for R) you can use the lm() function to compute a multiple regression. The summary() of an lm() fit provides, by default, an F-test for the overall model; the null hypothesis of that test is that the population $R^2$ is zero (see [here](https://datatofish.com/multiple-linear-regression-in-r/) for a simple example). Best, Stefan
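A minimal R sketch with simulated data (the variable names and effect sizes are made up) showing where the overall F-test and its p-value appear:

```
set.seed(1)
n <- 100
dat <- data.frame(sub1 = rnorm(n), sub2 = rnorm(n), sub3 = rnorm(n),
                  sub4 = rnorm(n), sub5 = rnorm(n))
dat$overall <- with(dat, 2 + 0.5 * sub1 + 0.3 * sub2 + rnorm(n))

fit <- lm(overall ~ sub1 + sub2 + sub3 + sub4 + sub5, data = dat)
s   <- summary(fit)
s$r.squared    # multiple R-squared
s$fstatistic   # F statistic with its degrees of freedom
pf(s$fstatistic[1], s$fstatistic[2], s$fstatistic[3], lower.tail = FALSE)  # overall p-value
```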
null
CC BY-SA 4.0
null
2023-05-23T10:59:16.480
2023-05-23T10:59:16.480
null
null
187586
null
616674
1
null
null
0
12
The data I created contains heteroscedasticity. I already calculated the power, so my idea was to basically do the same but switch the hypotheses, so that H0: heteroscedasticity and H1: homoscedasticity, and then count how many times the new H0 is rejected. But this is not possible with the bptest() function I use from the lmtest package. Can anyone help me? I'm completely new to R, so I am struggling... Is there any other way to calculate the power of a test or to switch the hypotheses?
How to test for size (Type I error probability) of the Breusch-Pagan test in R?
CC BY-SA 4.0
null
2023-05-23T11:06:33.670
2023-05-23T11:06:33.670
null
null
null
[ "r", "regression", "hypothesis-testing", "heteroscedasticity", "breusch-pagan" ]
616675
2
null
616632
0
null
Typically, the RI-CLPM is fit to the data with a saturated mean structure (i.e., a mean is estimated at each time point). So you can interpret your estimates in the way you are doing, as long as there are no between-person differences in the change you are observing. To be on the safe side, you can fit an extension of the RI-CLPM called the LCM-SR (latent curve model with structured residuals), which basically adds a linear growth factor to the intercept factor (see [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4067471/)). Kind regards, S.
null
CC BY-SA 4.0
null
2023-05-23T11:09:43.127
2023-05-23T11:09:43.127
null
null
187586
null
616677
1
null
null
1
21
I am conducting a research and investigate the relationship between persona type (independent variable; a vs. b vs. c vs. d) and luxury perfume choice (dependent variable; niche vs prestige)(H1), moderated by brand prominence (loud vs quiet) (H2). At the start of the questionnaire each participant assigned a specific Persona (A, B, C, or D). In the survey, there were two choice moments: The first choice moment (H1) gives the participant a choice between two images (0 (prestige) and 1 (niche), respectively), with "Prominence" being 0. The second choice moment (H2) gives the participant another choice between two images (0 (niche) and 1 (prestige), respectively), with "Prominence" being 1. I would like to examine the causal relationship between the variables Persona (independent variable), Prominence (moderator), and Choice (dependent variable). Specifically, I am interested in understanding the dominant choices made by individuals within each Persona, both for Choice and Prominence, and if this choice/mean is significant. I have already performed a Mixed Effects Logistic Regression analysis with the following formula: model <- glmer(Choice ~ Prominence + Persona + (1 | ID), data = mydata, family = binomial) This analysis provided me with general results regarding the effects of Prominence and Persona on Choice. However, I want to further refine the results by specifically examining the choice patterns within each Persona. Hence, I tried to run emmeans but got error message "Error in emmeans(model, ~Persona + Choice) : No variable named Choice in the reference grid " . My specific questions are: How can I specify the choice patterns for Choice within each Persona separately? For example, I want to know which choice (0 or 1) is predominantly made within Persona A, Persona B, Persona C, and Persona D. Similarly, I want to specify the choice patterns for Prominence within each Persona separately. I want to determine at which value of Prominence (0 or 1) the choice of 0 or 1 is predominantly made within Persona A, Persona B, Persona C, and Persona D. I am seeking assistance in specifying these analyses in R so that I can understand the choice patterns within each Persona in more detail. Any help and suggestions that can contribute to analyzing these specific choice patterns within my dataset are greatly appreciated. Note that this is my first time running an analysis ever and am not familiar with R or any other tool. Any help or advice would be very appreciated! Thank you, Marie [](https://i.stack.imgur.com/92lob.png)
Analysis of causal relationship of within-subject choice patterns among different groups using R - emmeans / hlm / Mixed Effects Logistic Regression
CC BY-SA 4.0
null
2023-05-23T11:31:36.083
2023-05-31T20:21:10.417
2023-05-23T12:51:44.493
388644
388644
[ "regression", "multiple-regression", "linear", "lsmeans", "choice-modeling" ]
616678
2
null
596129
1
null
You should not plot $$(1-x)^2$$ Instead, you should plot $$(x-1)^2$$ The reason is that we look at how the loss behaves as a function of the probability assigned to the ground-truth class. For binary cross-entropy, it boils down to computing $-\log(y_{pred})$. For MSE loss, where we compute in general $(y_{pred} - y)^2$, if we assume y=1, it means: $(y_{pred}-1)^2$. Basically, you can use this Python snippet to plot the losses: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(1e-3, 1, 1000) # start just above 0 to avoid log(0) mse_loss = lambda x: (x-1)**2 log_loss = lambda x: -np.log(x) # Plot the losses plt.plot(x, mse_loss(x), label='MSE') plt.plot(x, log_loss(x), label='Log') plt.xlabel('Proba assigned to the correct class') plt.ylabel('Loss') plt.legend() plt.show() ```
null
CC BY-SA 4.0
null
2023-05-23T11:59:00.140
2023-05-23T11:59:35.860
2023-05-23T11:59:35.860
300260
300260
null
616679
1
616683
null
3
587
What is the formula to calculate the probability of getting 41 when I throw two 10-sided dice and four 8-sided dice? I’m looking for an algorithm for the general case of throwing multiples of two sets of differently-faced dice. ## Code if same number of faces Here is some Typescript based on the Python code [here](https://stats.stackexchange.com/a/495432/13255) that gives the probability if all the dice have the same number of faces: ``` function probabilityOfN(dice: number, sides: number, n: number): number { if (dice === 0) return n === 0 ? 1 : 0 // base case: no dice left to throw return Array.from({length: sides}, (_, i) => (1 / sides) * probabilityOfN(dice - 1, sides, n - i - 1)) .reduce((a, b) => a + b) } ``` ## Missing code if different number of faces ``` function probabilityOfN(diceA: number, sidesA: number, diceB: number, sidesB: number, n: number): number { // ??? } ``` The question is how to do the calculation in this function; an answer in Python would be fine as well. ## Test case Given: - diceA = 2 - sidesA = 10 - diceB = 4 - sidesB = 8 - n = 41 Then according to [https://dice.clockworkmod.com/](https://dice.clockworkmod.com/) the probability should come out as 0.01 [](https://i.stack.imgur.com/2OqEw.png)
Probability of a given result with multiples of mixed dice with different number of faces
CC BY-SA 4.0
null
2023-05-23T12:21:08.190
2023-05-24T17:43:16.913
null
null
13255
[ "probability", "distributions", "convolution", "dice" ]
616680
1
null
null
0
15
I'm working with different types of environmental spatiotemporal data. They share a target variable and have the same dimensions (latitude, longitude and time) but they are expressed in different styles/data types and show different features that I want to use as predictors. My question is: how can I train a Neural Network with these separate inputs, while making sure the network "knows" that the dimensions belong together? Here are the two data types: - The first type is having the data on a regular grid/matrix that has the size "latitude x longitude x time x features", which means that the dimension information (latitude, longitude, time) is contained in the position within the grid. They are very similar to images used in e.g. image classification that have different features/channels (where the equivalent to my features are the RGB-channels), only that there's an additional time dimension. I'd like to keep this data structure so I'd be able to use techniques from image classification, like working with 2D/3D convolutional neural networks. - The second type is a very sparse dataset with individual locations over time (point measurements), which is most efficiently displayed as a csv-style table, where the features and also the latitude, longitude and time dimensions are shown as individual columns. I'm reluctant to turn these sparse datapoints into a similar grid as the first dataset, as it would result in 99.9% NaNs and I'm not sure how the network would handle that. I'm aware that you can't label features for a neural network and the process of scaling/normalization will further anonymize things. Is there any kind of trick I can employ to better connect the two separate inputs given that they share the same dimensions? And at the same time make clear that the features are not the same and it is just additional information/predictors? As a first step I was thinking along the lines that I'll add all dimensions (latitude, longitude, time) to the regular grids as features and additionally add empty features representing features from the other dataset (and do the same for the other dataset), so that the total number of features match with the sparse (csv-style) dataset. That way both have "number of dimensions + number of features dataset 1 + number of features dataset 2" predictors. But I guess only because the number of features are the same it doesn't mean the network knows they are the same. Is there a way I could approach this? Or is it simply not possible and I'd have to turn either of the two into the other data type? So far the closest solution I've come across was to build a neural network with two separate inputs using tensorflow/keras in Python following [this example](https://pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/), but, unless I misunderstand it, it doesn't fully answer my question of how to connect the separate inputs.
Neural network training with different data formats and predictors as inputs but both sharing same dimensions
CC BY-SA 4.0
null
2023-05-23T12:21:13.603
2023-05-23T12:57:30.340
2023-05-23T12:57:30.340
388638
388638
[ "machine-learning", "neural-networks", "predictor", "spatio-temporal" ]
616681
2
null
134976
0
null
Convolution method: If the random variables Xᵢ are independent and identically distributed (i.i.d.), you can use the convolution property of probability distributions. The PDF of a sum of independent random variables is the convolution of their individual probability density functions (PDFs); the quantile functions themselves do not convolve. Let f be the PDF of Xᵢ and Y be the random variable defined as the sum of n i.i.d. Xᵢ. Then the PDF of Y is f_Y = f ⊗ f ⊗ ... ⊗ f (n times), where ⊗ denotes the convolution operation, and the quantile function Q_Y(x) of Y is obtained by inverting the CDF built from f_Y. This approach assumes independence and identical distribution of the Xᵢ.
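A minimal R sketch of this idea, using an exponential distribution as a made-up example so the result can be checked against the known Gamma quantile of the sum:

```
dx <- 0.01
x  <- seq(0, 60, by = dx)
f  <- dexp(x, rate = 1) * dx                 # discretised PDF (probability mass per grid cell)

n <- 5                                       # number of i.i.d. terms in the sum
f_sum <- f
for (k in 2:n) {
  f_sum <- convolve(f_sum, rev(f), type = "open")[seq_along(x)]  # repeated convolution
}

cdf <- cumsum(f_sum)
q50 <- x[which(cdf >= 0.5)[1]]               # numerical median of the sum
c(numerical = q50, exact = qgamma(0.5, shape = n, rate = 1))
```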
null
CC BY-SA 4.0
null
2023-05-23T12:46:55.660
2023-05-23T12:46:55.660
null
null
359570
null
616682
2
null
616469
0
null
With only 5 time points and at most one event per individual, a discrete-time model would be the most natural. Interval-censored Cox regression is possible, but that's probably better reserved for situations where the time intervals differ among individuals. When the time intervals are the same for everyone, using binomial regression with a complementary log-log link provides a grouped-time proportional hazards model that matches what you would get from an interval-censored Cox model. See [this page](https://stats.stackexchange.com/q/429266/28500). Other links in the binomial regression are possible, but wouldn't have the same proportional hazards interpretation. The different lengths of the time periods don't really matter, unless you explicitly model time as other than categorical in the binomial discrete-time survival model. A Cox model per se doesn't directly evaluate event times at all. It only uses the order of events in time. The survival curves you can generate from a Cox model simply re-express the ordered events in terms of the times at which they occurred.
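To make the discrete-time suggestion concrete, here is a minimal R sketch with simulated person-period data (the sample size, covariate and hazard function are all made up):

```
set.seed(1)
n  <- 200
x  <- rnorm(n)                                             # a covariate
hz <- function(p, x) 1 - exp(-exp(-2 + 0.2 * p + 0.5 * x)) # assumed discrete-time hazard

## person-period ("long") data: one row per individual per interval still at risk
pp <- do.call(rbind, lapply(seq_len(n), function(i) {
  rows <- NULL
  for (p in 1:5) {
    ev   <- rbinom(1, 1, hz(p, x[i]))
    rows <- rbind(rows, data.frame(id = i, period = p, x = x[i], event = ev))
    if (ev == 1) break                                     # stop after the first event
  }
  rows
}))

## grouped-time proportional hazards model: binomial GLM with cloglog link
fit <- glm(event ~ factor(period) + x, family = binomial(link = "cloglog"), data = pp)
exp(coef(fit)["x"])                                        # hazard ratio for x
```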
null
CC BY-SA 4.0
null
2023-05-23T12:49:09.040
2023-05-23T12:49:09.040
null
null
28500
null
616683
2
null
616679
8
null
The brute force method would be to order your dice, enumerate all possible combinations of throws, count the number of "successful" throws and divide by the total number of possible throws. In your particular case, you have $10^2\times 8^4=409,600$ possible throws. A few lines of code tell you that $4,132$ of these show a total of 41. So the probability is indeed $\frac{4,132}{409,600}\approx 0.01$. In R (sorry, I'm more fluent in R, but this should be understandable): ``` dice <- c(rep(10,2),rep(8,4)) all_possible_combos <- expand.grid(sapply(dice,function(n)1:n)) sum(rowSums(all_possible_combos)==41) nrow(all_possible_combos) ``` The key part is the `expand.grid()` function that gives you all possible combinations. Here is the analogous histogram to yours: ``` hist(rowSums(all_possible_combos),breaks=seq(-0.5,sum(dice)+0.5)) ``` [](https://i.stack.imgur.com/SkojL.png) This works quickly enough with just six dice, and does not rely on the fact that you have only two types of dice. If you have much more, you may run into a combinatorial explosion. In this case, it may make sense to treat each group separately (see [How to easily determine the results distribution for multiple dice?](https://stats.stackexchange.com/q/3614/1352)) and sum over the possible ways to split the target number of 41 between the two groups of dice. The question just is whether the gain in performance outweighs the loss in understandability.
null
CC BY-SA 4.0
null
2023-05-23T12:56:52.117
2023-05-23T13:05:01.367
2023-05-23T13:05:01.367
1352
1352
null
616684
1
616685
null
1
43
I have this DAG [](https://i.stack.imgur.com/o0okF.png) As I understand it, the paths D <- Ed -> St -> P -> Su and D <- A -> P -> Su are both closed because they contain the collider P. If I condition on P, both these paths will be open. But that doesn't seem to be the case according to dagitty: ``` library(dagitty) d = dagitty('dag { A -> D A -> P D -> Em D -> Su Ed -> D Ed -> St Em -> Su P -> Su Se -> D Se -> Su St -> P St -> Su }') p = paths(d, from = 'D', to = 'Su') p$path[p$open] ``` ``` [1] "D -> Em -> Su" "D -> Su" "D <- A -> P -> Su" [4] "D <- Ed -> St -> P -> Su" "D <- Ed -> St -> Su" "D <- Se -> Su" ``` Also, if I condition on P, it doesn't open the two paths mentioned at the start, but does open the path D <- A -> P <- St -> Su. Have I misunderstood the backdoor criterion? Does it have something to do with P being immediately next to the outcome Su?
DAG - why is the path open?
CC BY-SA 4.0
null
2023-05-23T13:05:06.040
2023-05-23T13:26:43.503
null
null
388405
[ "r", "causality", "dag", "causal-diagram" ]
616685
2
null
616684
0
null
$P$ would only be a collider if you were examining a path such as $A\to P\leftarrow St.$ That is, whether a node is a collider or not is actually dependent on the path you take through that node. So both the paths $D\leftarrow Ed\to St\to P\to Su$ and $D\leftarrow A\to P\to Su$ have no colliders in them. You can think of it this way: suppose you took the subgraph of your graph consisting only of the nodes $\{A, Ed, St, P, Su\}.$ Would the first path you mentioned have a collider in it? Answer: no. If you are considering $D$ as your cause, and $Su$ as your effect, you have a LOT of backdoor paths from $D$ to $Su$ that are unblocked. But if you were to condition on, say, the set $\{Se, St, P\},$ you would block them all.
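If it helps, you can let dagitty check this directly; a small sketch, assuming the `d` object defined in the question is still in the workspace:

```
library(dagitty)
# 'd' is the dagitty object from the question
adjustmentSets(d, exposure = "D", outcome = "Su")    # minimal sufficient adjustment sets
p <- paths(d, from = "D", to = "Su", Z = c("Se", "St", "P"))
p$path[p$open]                                       # only the directed (causal) paths stay open
```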
null
CC BY-SA 4.0
null
2023-05-23T13:24:45.593
2023-05-23T13:24:45.593
null
null
76484
null
616686
2
null
616684
0
null
> As I understand it, the paths D <- Ed -> St -> P -> Su and D <- A -> P -> Su are both closed because they contain the collider P. That's not quite right. I believe the key thing you're missing is the idea that, in a DAG, variables may play different roles depending on the path we're looking at. Thus, a variable may be a collider in one path but a mediator in another. In this case, `P` isn't a collider in either of the two paths you've picked. In fact, it's a mediator in both. The only back-door path in which `P` is a collider is `D <- A -> P <- St -> Su`. As you say, this back-door path is automatically closed because it has a collider.
null
CC BY-SA 4.0
null
2023-05-23T13:26:43.503
2023-05-23T13:26:43.503
null
null
333765
null
616687
2
null
616090
0
null
Maybe this calculation is not ideal, but I've obtained a pretty decent result using this formula: > ( [People tested for Condition B] / [All people with Condition A] ) * ( [Facilities that test for Condition B] / [All facilities tested for Condition A] ) All in all, it is just the multiplication of two ratios.
null
CC BY-SA 4.0
null
2023-05-23T13:33:26.423
2023-05-23T13:33:26.423
null
null
375942
null
616688
2
null
281609
2
null
If you have many instances of each outcome categroy, then as Nike says, "Just do it." The usual methods will gives predictions just like they always do. If your concern is that the $32$-digit code has to be predicted, that comes later. Start by making a prediction about which user will log in. Once you have that, you can map it to the $32$-digit code that has meaning for your firm. For instance, use the model to classify the outcome as user $19$, and then use a few more lines of code to map that to $32$-digit ID code $14159265358979323846264338327950$. This could be as simple as a `dictionary`-type of object.
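For instance, a minimal R sketch of that lookup step (the second label and both code values are invented for illustration):

```
# map the classifier's predicted label to the firm's 32-digit ID (values are made up)
id_map <- c(user_19 = "14159265358979323846264338327950",
            user_20 = "31415926535897932384626433832795")
pred <- "user_19"          # label returned by the classification model
unname(id_map[pred])       # -> the 32-digit code for that user
```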
null
CC BY-SA 4.0
null
2023-05-23T13:35:30.093
2023-05-27T11:18:49.263
2023-05-27T11:18:49.263
247274
247274
null
616689
1
null
null
1
32
Given a set of 128x128 images from three classes, I obtained an accuracy of 50% with a SVM on the flattened images (16384 'features'). Is this an upper bound on the performance of a SVM using any features extracted from the images?
Upper bound on classification performance
CC BY-SA 4.0
null
2023-05-23T13:38:33.720
2023-05-24T11:04:56.147
2023-05-24T10:04:27.590
384140
384140
[ "machine-learning", "classification", "svm", "accuracy" ]
616690
2
null
243090
1
null
First, I would be remiss not to mention that a linear probability model for a multi-class problem sounds like a poor approach, and I would encourage you to pursue appropriate methods instead of shoehorning this problem into a method that is wildly inappropriate. However, there is no inherent issue with calculating the square loss between two matrices. Go element-wise. For example, with $2\times 2$ matrices: $$ L\left( \begin{bmatrix} y_{1, 1} & y_{1, 2} \\ y_{2, 1} & y_{2, 2} \end{bmatrix} , \begin{bmatrix} \hat y_{1, 1} & \hat y_{1, 2} \\ \hat y_{2, 1} & \hat y_{2, 2} \end{bmatrix} \right) \\= \left(y_{1, 1} - \hat y_{1, 1}\right)^2 + \left(y_{1, 2} - \hat y_{1, 2}\right)^2 + \left(y_{2, 1} - \hat y_{2, 1}\right)^2 + \left(y_{2, 2} - \hat y_{2, 2}\right)^2 $$ In this case, you act as if these are vectors in $\mathbb R^4$ instead of $2\times 2$ matrices. In many regards, the space of $m\times n$ matrices is equivalent to $\mathbb R^{m\times n}$, so this is totally reasonable. Slap on a square root, if you want. Divide by the number of matrix elements, if you want. Neither of those will change the optimum (except for the possibility of (hopefully slight) differences when you do the math on a computer). > As a separate but related question, I observe that all of the elements of my resulting $\hat{y}_i$ vectors are quite small, on the order of $10^{-4}$ to $10^{-1}$. Is this problematic? It seems that square loss will not be very meaningful when computing between vectors with one-hot 0/1 values and vectors with such small floats. However, when I assign each observation to the class with the largest element in $\hat{y}_i$ the 0-1 loss is relatively low (~86% accuracy) regardless of the fact that the magnitudes of the $\hat{y}_i$'s are small. This really is a separate question that warrants its own post. However, a potential issue is that it sounds like you have imbalance in your data where one class has much more representation than the others, meaning that you can classify every observation as that majority category and wind up with a fairly high accuracy like the $86\%$ you achieve. This is often the first, but far from the only, issue with classification accuracy that leads people to realize that classification accuracy is [not the best measure for assessing classification models](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models).
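A tiny R illustration of the element-wise computation (the matrices are made up):

```
Y    <- rbind(c(1, 0), c(0, 1))            # two one-hot rows
Yhat <- rbind(c(0.8, 0.2), c(0.3, 0.7))    # predicted rows
sum((Y - Yhat)^2)                          # square loss, treating the matrices as vectors in R^4
mean((Y - Yhat)^2)                         # same optimum, just scaled by the number of elements
```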
null
CC BY-SA 4.0
null
2023-05-23T13:53:17.933
2023-05-23T13:53:17.933
null
null
247274
null
616692
2
null
281609
0
null
You could reformulate the problem into a binary classification. Build a classification model that predicts if a user is going to log in. Then, use the probabilities returned by the model to flag the users who have a high chance of logging in as "positive" cases. The downside is that in such a case the "categories" are not necessarily mutually exclusive (two users can have similar probabilities), but I guess this is not what you needed in the first place.
null
CC BY-SA 4.0
null
2023-05-23T14:02:32.400
2023-05-23T14:02:32.400
null
null
35989
null
616693
2
null
612473
0
null
I found a possible approach in a very similar experimental setup from [Ein-Dor et al. (2020)](https://aclanthology.org/2020.emnlp-main.638/) to assess the attainment of significant improvements of the AL query strategies over the random baseline. Specifically, for each query strategy and method, I calculate the p-value of the two-sided Wilcoxon signed-rank test and perform a Bonferroni correction to account for the multiple strategies examined. To compute the p-value for a given strategy S and method, we compare the micro F1-score values for all pairs ($S_{dik}$, $R_{dik}$), where $R$ represents the results from the random baseline, $d \in D$, and $D$ represents the two distinct corpora, $i \in \{1, ..., 19\}$ is the step/iteration index, and $k \in \{1, ..., 5\}$ is the experiment number. A possible implementation in Python would be: ``` import numpy as np from scipy.stats import wilcoxon from statsmodels.stats.multitest import multipletests ALPHA = 0.05 BASELINE_NAME = "random sampling" METHODS = ["BiLSTM", "BERT", "CNN", "CRF"] STRATEGIES = ["LC", "BatchBALD"] def test_al_strategies(): # create an array to store the p-values for each strategy and scenario p_values = np.zeros((len(STRATEGIES), len(METHODS))) for j, method in enumerate(METHODS): method_p_values = [] for i, strategy_name in enumerate(STRATEGIES): # load_method_data() is a user-defined helper returning the F1 scores for a method/strategy strategy_data = load_method_data(method=method, strategy=strategy_name) baseline_data = load_method_data(method=method, strategy=BASELINE_NAME) assert len(strategy_data) == len(baseline_data) # calculate the Wilcoxon signed-rank test p-value for the pairs _, p_value = wilcoxon(x=strategy_data, y=baseline_data) # store the p-value for the current strategy and method method_p_values.append(p_value) # perform Bonferroni correction on the p-values for the current scenario rejected, corrected_p_values, _, _ = multipletests( method_p_values, alpha=ALPHA, method="bonferroni" ) # store the corrected p-values in the array p_values[:, j] = corrected_p_values # print the corrected p-values and indicate whether the null hypothesis is rejected or not for j, method in enumerate(METHODS): for i, strategy_name in enumerate(STRATEGIES): is_rejected = p_values[i, j] <= ALPHA print( f"Strategy {strategy_name} vs. {BASELINE_NAME} with method {method}: " f"p-value = {p_values[i, j]:.10f}, " f"null hypothesis is {'rejected' if is_rejected else 'not rejected'}" ) print() ```
null
CC BY-SA 4.0
null
2023-05-23T14:05:21.710
2023-05-23T14:19:19.070
2023-05-23T14:19:19.070
385302
385302
null
616695
1
null
null
0
10
In the case where the curves (functions) are defined on a time interval.
What are the advantages of using Functional Data Analysis (FDA) over traditional Time Series or Stochastic Process approaches?
CC BY-SA 4.0
null
2023-05-23T14:29:18.560
2023-05-23T14:29:18.560
null
null
212074
[ "time-series", "stochastic-processes", "finance", "time-varying-covariate", "functional-data-analysis" ]
616696
1
null
null
0
20
I am currently aiming to find any statistical difference between site fidelity for four shark species between sex and sample site. This uses a mark-recapture method and thus the data is heavily zero inflated for sharks that were not recaptured. Since the values for site fidelity are not integers I am unable to fit a zero-inflated Poisson model using the "pscl" package. Are there any other options in terms of GLM model families that I could use, or are there other statistical tests that I could potentially use to find any differences between the groups?
Choosing a suitable GLM family for zero-inflated non-integer data
CC BY-SA 4.0
null
2023-05-23T14:29:42.043
2023-05-23T14:29:42.043
null
null
333403
[ "generalized-linear-model", "poisson-distribution" ]
616697
1
null
null
0
49
I am modelling values that are rates, strictly positive, using a Gamma GLM and/or GLMM (here using `glmer` in `lme4`) with a log link. Assume that I have a GLMM model of the form `glmer(rate~(1|location)+(1|year),data=data,family=Gamma(link="log"))` where both factors, location and year, are random effects. The best estimate for the prediction of the mean at a new site and new year, both unobserved, is given by the overall mean of the model, so equal to exp(intercept). The summary of the model also returns a standard error, but that would be the standard error for the overall mean. How can one estimate the precision of a prediction for the mean at a new, unobserved, year-location combination? Is this available via `glmer`? A possible intuitive approach might be a parametric bootstrap: resampling values from the distribution of the overall mean (intercept with corresponding precision), then adding sampled values from the variability associated with each of the random effects, and computing the variability of the resulting estimates of means for unobserved combinations of random effects. Is that a sensible thing to do? If not, can someone point me to a place that might describe how to do it?
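For what it's worth, a rough R sketch of the parametric bootstrap idea described above, assuming a fitted `glmer` object called `fit` with only random intercepts for location and year (all names are placeholders):

```
library(lme4)
# 'fit' is assumed to be the fitted glmer model from the question
mu      <- fixef(fit)[1]                      # intercept on the log scale
se      <- sqrt(vcov(fit)[1, 1])              # its standard error
vc      <- as.data.frame(VarCorr(fit))        # random-effect standard deviations
sd_loc  <- vc$sdcor[vc$grp == "location"]
sd_year <- vc$sdcor[vc$grp == "year"]

B   <- 10000
eta <- rnorm(B, mu, se) + rnorm(B, 0, sd_loc) + rnorm(B, 0, sd_year)
quantile(exp(eta), c(0.025, 0.975))           # spread of the mean at a new location-year
```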
How to estimate precision on a prediction for a GLMM for an unobserved level of a random effect
CC BY-SA 4.0
null
2023-05-23T14:40:12.627
2023-05-29T18:04:39.257
null
null
180421
[ "lme4-nlme", "glmm", "prediction-interval" ]
616698
1
null
null
0
5
I am using `statsmodels.tsa.stattools.acf` to calculate the lagged autocorrelation of innovations in Kalman filter with `alpha=0.05`. Innovation is defined as `observation - observed equivalent of model forecast`. And I want to calculate the lagged autocorrelation of the time series of innovations, to see whether autocorrelation lags at 1,...,10 time steps are zero. So that I would know whether my Kalman Filter is optimal. I used `statsmodels.tsa.stattools.acf` to calculate the autocorrelation. But I don't know what does it mean by the p value it returns? and what is the corresponding null hypothesis? what if the p-value returned is very small? say if I set `nlags=10`, i.e. to calculate the autocorrelation of 10 time steps lag, the result is `acf[10] = 0.219018`, and its p-value is `1.544171e-218`. I knew p value is the probability that we accept the null hypothesis. But I don't know what is the hypothesis over here.
What are the null hypothesis and p-value for the lagged autocorrelation of innovations in a Kalman filter?
CC BY-SA 4.0
null
2023-05-23T14:47:44.833
2023-05-23T14:47:44.833
null
null
303835
[ "p-value", "autocorrelation", "statsmodels", "kalman-filter" ]
616699
1
616727
null
1
82
This simulation study is taken from this [article](https://pubmed.ncbi.nlm.nih.gov/35574725/), and I am trying to reproduce the simulation.

Theoretical set-up

Define the set of true basis functions, \begin{align*} \psi_{1}(t)= \sqrt{2} \cos\left(2 \pi t \right)\\ \psi_{2}(t)= \sqrt{2} \sin\left(4 \pi t \right)\\ \psi_{3}(t)= \sqrt{2} \cos\left(4 \pi t \right) \end{align*} such that the orthonormality constraints $\langle \psi_{k}, \psi_{k^{\prime}} \rangle = 1$ if $k=k^{\prime}$, and $0$ otherwise, are fulfilled, $k, k^{\prime}=1,2,3$. We then independently sample the scores according to $\lambda_{i} \sim MVN(0,\Sigma)$, where $\Sigma=diag(10,6,3)$. Given the set of true basis functions and scores, the longitudinal trajectory can be formulated according to the Karhunen-Loeve expansion as \begin{align*} Z_{i}(t)= \mu(t)+\lambda_{i,1}\psi_{1}(t)+\lambda_{i,2}\psi_{2}(t)+\lambda_{i,3}\psi_{3}(t) \end{align*} where the mean function $\mu(t)$ is assumed to be $0$. The individualized realizations of the longitudinal trajectory $\left\{Z_{i}(t_{i,r}), r=1,\ldots,R_{i} \right\}$ are assumed to have $\max(R_{i}) \leq 20$ for all $i$, constrained by censoring or event occurrence. We consider these $R_{i}$ visits to happen on a fixed time grid from $0$ to $25$, with increments of $25/\max(R_{i})$ units.

To link covariates to the time-to-event, we assume a proportional hazards model such that the hazard function follows \begin{align*} h_{i}(t)=h_{0}(t) \exp{\left\{\alpha_{1} X_{i}+\int_{0}^{\tau} \phi(t) Z_{i}(t) dt\right\} } \end{align*} where $\tau$ is the maximum observation time. The fixed covariate $X_{i}$ is assumed to follow a Bernoulli distribution with a success probability of $0.50$, with the corresponding coefficient $\alpha_{1}$ set to $-1$. Consider the time-varying coefficient: \begin{align*} \text{Scenario} : \phi(t)=0.25 \psi_{1}(t)+ 0.50 \psi_{2}(t)+ \psi_{3}(t)\\ \end{align*} Here, we let the baseline hazard follow a Weibull distribution $h_{0}(t)= \kappa \rho (\rho t)^{\kappa-1}$ with increasing risk over time and consider $\kappa=2, \rho=0.096$. Given the above setup, the survival time $T_{i}$ can then be generated from the inverse of the cumulative hazard function $H^{-1}(u)$, where $u \sim U(0,1)$. We assume an independent censoring scheme in this simulation study, where $C_{i} \sim U(0, C_{max})$, with $C_{max}$ set at a value such that the percentage censored by the end of the study approximately matches our target censoring percentage.
Coding Part
```
rho = 0.096; kappa = 2; alpha1 = -1; N = 300

# Generate random scores for the subjects
Sigma <- diag(c(10, 6, 3))
lambda <- MASS::mvrnorm(N, mu = rep(0, 3), Sigma = Sigma)

## Define the true basis functions
psi_1 <- function(t) sqrt(2) * cos(2 * pi * t)
psi_2 <- function(t) sqrt(2) * sin(4 * pi * t)
psi_3 <- function(t) sqrt(2) * cos(4 * pi * t)

# Fixed time grid (defined here so it exists before it is used below)
t <- seq(0, 25, by = (25/20))

# Scenario for the time-varying coefficient
phi <- 0.25 * psi_1(t) + 0.5 * psi_2(t) + psi_3(t)

# Define the mean function
mu <- function(t) 0  # Set the mean function to 0 for simplicity

# Generate the longitudinal trajectories (values measured at times t only)
Z <- matrix(0, nrow = N, ncol = length(t))
for (i in 1:N) {
  for (j in 1:length(t)) {
    Z[i, j] <- mu(t[j]) + lambda[i, 1] * psi_1(t[j]) +
      lambda[i, 2] * psi_2(t[j]) + lambda[i, 3] * psi_3(t[j])
  }
}

###################
# Survival process
###################
# covariate --> N Bernoulli trials
X <- sample(x = c(0, 1), size = N, replace = TRUE, prob = c(0.5, 0.5))

library(fda.usc)
fdobj1 = fda.usc::fdata(phi, t, rangeval = range(t))
fdobjZ = fda.usc::fdata(Z, t, rangeval = range(t))
phi_Z_int_1 = as.vector(fda.usc::inprod.fdata(fdobj1, fdobjZ))
cox <- as.vector(exp(alpha1 * X + as.numeric(phi_Z_int_1)))

U <- runif(N, 0, 1)
t <- (U * ((rho^kappa) * cox)**-1)**(1/kappa)  # note: this overwrites the time grid t defined above
C <- runif(N, 0, 0.0001)
obsT <- pmin(t, C)
status <- t <= C
```
My questions:

- How would I satisfy this condition: the individualized realizations of the longitudinal trajectory $\left\{Z_{i}(t_{i,r}), r=1,\ldots,R_{i} \right\}$ are assumed to have $\max(R_{i}) \leq 20$ for all $i$, constrained by censoring or event occurrence, with the $R_{i}$ visits happening on a fixed time grid from $0$ to $25$ with increments of $25/\max(R_{i})$ units?
- How would I satisfy this condition: the censoring is independent, with $C_{i} \sim U(0, C_{max})$ and $C_{max}$ set at a value such that the percentage censored by the end of the study approximately matches a target censoring percentage (say 33% or 66%)?
- Related to part 1: I need to know what the $Z_{i}$'s should look like before I link them to my covariate, so does the time grid t that I have even make sense? (I am not sure if this question even makes sense.)

I am very sorry for the long post. I have been struggling with this problem for a while now and appreciate any help I can get. Thank you for your time; I look forward to reading and applying your comments.
Update
```
H_cox <- function(x) {
  N <- 300  # Assuming a fixed value for N
  Z <- matrix(0, nrow = N, ncol = length(x))  # Assuming 100 time points
  for (i in 1:N) {
    for (j in 1:length(x)) {
      Z[i, j] <- mu(x[j]) + lambda[i, 1] * psi_1(x[j]) +
        lambda[i, 2] * psi_2(x[j]) + lambda[i, 3] * psi_3(x[j])
    }
  }
  h_0 <- kappa * rho1 * (rho1 * x)^(kappa - 1)
  # covariate --> N Bernoulli trials
  X <- sample(x = c(0, 1), size = N, replace = TRUE, prob = c(0.5, 0.5))
  phi_1 <- 0.25 * psi_1(x) + 0.5 * psi_2(x) + psi_3(x)
  fdobj1 <- fda.usc::fdata(phi_1, x, rangeval = range(x))
  fdobjZ <- fda.usc::fdata(Z, x, rangeval = range(x))
  fdobjh0 <- fda.usc::fdata(h_0, x, rangeval = range(x))
  H <- fda.usc::int.simpson(fdobjh0) * fda.usc::int.simpson(fdobj1 * fdobjZ) * exp(alpha_1 * X)
  S <- exp(-H)
  return(S)
}

# Consider t value
t = seq(from = 0.04, to = 1, length.out = 25)
H_cox(t)

# Define the inverse function of H_cox
inverse_H_cox <- function(y) {
  f <- function(x) H_cox(x) - y
  uniroot(f, interval = c(0, 1))$root
}

# Find the inverse of H_cox for a given y
y <- 0.5
inverse <- inverse_H_cox(y)

Error in uniroot(f, interval = c(0, 1)) : f.lower = f(lower) is NA
In addition: Warning message:
In if (is.na(f.lower)) stop("f.lower = f(lower) is NA") :
  the condition has length > 1 and only the first element will be used
```
Simulating models for longitudinal and survival data
CC BY-SA 4.0
null
2023-05-23T14:55:11.133
2023-05-25T13:47:37.633
2023-05-25T02:43:19.417
127026
127026
[ "survival", "panel-data", "cox-model", "functional-data-analysis" ]
616700
1
null
null
1
12
When fitting a simple MLP neural net to the Credit dataset with the package `neuralnet` in R and then plotting it, an error value is reported at the bottom of the plot:
```
library(ISLR2)
library(neuralnet)

data(Credit)
set.seed(1)
nn <- neuralnet(Balance ~ Income + Limit + Rating + Cards + Age + Education,
                data = Credit, linear.output = TRUE)
plot(nn, rep = "best")
```
How is the error of 0.67 computed? The function's help page does not clarify this, and the reported error is much smaller than an SSE (RSS). Thanks.
What does the error reported in the neuralnet plot mean?
CC BY-SA 4.0
null
2023-05-23T15:02:28.480
2023-05-23T15:03:05.813
2023-05-23T15:03:05.813
304805
304805
[ "r", "neural-networks" ]
616701
2
null
616585
1
null
"Feature importance" isn't as well defined as you might hope. There are ways to evaluate the contributions of predictors to a model fit by maximum likelihood, like your [icenReg](https://cran.r-project.org/package=icenReg) Weibull model, but they typically don't extend well to new data. For a predictor included in multiple coefficients, like a continuous spline fit, a multi-level categorical predictor, or a predictor involved in interactions, you can do a joint [Wald test](https://en.wikipedia.org/wiki/Wald_test) on all the coefficients involving it. That's sometimes called a "[chunk test](https://stats.stackexchange.com/q/27429/28500)." That test uses the coefficient estimates and (the inverse of) the corresponding part of the coefficient variance-covariance matrix to evaluate whether any of the coefficient values is significantly different from 0. [This answer](https://stats.stackexchange.com/a/589011/28500) provides some detail that you should be able to adapt to the coefficients (`coef(fit)`) and the variance-covariance matrix (`vcov(fit)` of your model. The resulting test statistic under the null hypothesis is distributed as chi-square with a number of degrees of freedom equal to the number of coefficients. As the expected value of the statistic under the null equals the number of degrees of freedom, the chi-square statistic for the "chunk test" on the predictor, minus the number of degrees of freedom, is a reasonable choice for an overall estimate of a predictor's contribution to the model. As Frank Harrell discusses in [Section 5.4 of Regression Modeling Strategies](https://hbiostat.org/rmsc/validate.html#sec-val-bootrank), however, the rankings of predictors aren't typically stable over multiple data samples. Even models of repeated bootstrap samples from the same data set can provide wildly different "feature importance" rankings. Try that on your own data and modeling process. If some of your variables are highly collinear, you might consider re-formulating your model to combine sets of highly related variables into single predictors for the model first. If you do that without considering the associations of those variables with outcome, you don't affect the interpretation of p-values and the like from the resulting model, while you cut down on the number of degrees of freedom used up in the modeling. If there's a reasonable biological/clinical argument for combining multiple variables this way, that approach also simplifies model interpretation. Think, for example, about the "Tumor, Node, Metastasis" combinations used to evaluate overall disease Stage in cancer. Harrell discusses the principles of this "data reduction" in [Section 4.7 of Regression Modeling Strategies](https://hbiostat.org/rmsc/multivar.html#sec-multivar-data-reduction) and illustrates application later in the document. The above assumes that you have a validated model to start with. With "many covariates, some interaction terms and some splines," there's a risk of overfitting the data unless you have a correspondingly large number of events in your data. It's the number of events, not the number of total cases, that provides power to a survival model. As I recall, your data set has a large number of cases but with only a small percentage having the event. You typically need on the order of at least 15 events per coefficient that you are estimating to avoid overfitting. 
If your model is overfit, then any "feature importance" culled from it won't extend well to new data at all, even beyond the random-sampling and collinearity issues discussed above.
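To make the "chunk test" idea concrete, here is a minimal sketch with simulated data (the data and model below are invented purely for illustration; with your interval-censored fit the same calculation applies to the relevant entries of `coef(fit)` and `vcov(fit)`):
```
set.seed(1)
d <- data.frame(x = rnorm(200), g = factor(sample(letters[1:4], 200, TRUE)))
d$y <- rbinom(200, 1, plogis(0.5 * d$x + c(a = 0, b = 0.3, c = -0.2, d = 0.6)[d$g]))

fit <- glm(y ~ x + g, data = d, family = binomial)

## joint Wald ("chunk") test of all coefficients involving the factor g
idx <- grep("^g", names(coef(fit)))
b <- coef(fit)[idx]
V <- vcov(fit)[idx, idx]
chisq <- drop(t(b) %*% solve(V) %*% b)  # Wald chi-square statistic
df <- length(idx)
p <- pchisq(chisq, df, lower.tail = FALSE)
c(chisq = chisq, df = df, p = p, importance = chisq - df)  # chisq minus df as a rough "importance"
```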
null
CC BY-SA 4.0
null
2023-05-23T15:02:55.347
2023-05-23T15:02:55.347
null
null
28500
null
616702
1
616730
null
1
61
I am under the impression that: $E_{Y|X}[Y] = E_{Y|X}[Y|X] = \int y f_{y|X}(y|X) dy$ And by extension: $f_{y|X}(y) = f_{y|X}(y|X)$ Please correct me if I am wrong. Thank you! I understand that notation like this is dependent on whatever the author intends, so I am mostly hoping to ask for what convention you have all seen most commonly. Here are some places where I've seen notation like this: [](https://i.stack.imgur.com/p6zgg.png) [](https://i.stack.imgur.com/UUqLa.png) (Here $|X$ is in the subscript only, but the author seems to intend for $E[Y|X=x] = E_{Y|X}[Y]$) [](https://i.stack.imgur.com/RgdL3.png) (Here $|X$ is in both the subscript and the argument)
Are $E_{Y|X}[Y]$ and $E_{Y|X}[Y|X]$ equivalent?
CC BY-SA 4.0
null
2023-05-23T15:12:53.910
2023-05-23T19:26:59.720
2023-05-23T15:56:47.087
388225
388225
[ "probability", "expected-value" ]
616703
2
null
616661
1
null
In a Bayesian linear regression we can indeed encode the desired form of relation by considering a prior covariance which is "infinite". This is sometimes called a diffuse prior or a partially diffuse prior.

Consider, as in the OP, the linear regression $\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \boldsymbol{\varepsilon}$ with a known noise variance $\sigma_{\varepsilon}^2$, where $\mathbf{X}$ is $n \times p$. Recall that the multivariate normal is a conjugate distribution for the parameter $\boldsymbol{\theta}$. If the prior for $\boldsymbol{\theta}$ is the multivariate normal with mean $\mathbf{b}_0$ and covariance $\boldsymbol{\Gamma}_0$ corresponding to a precision matrix $\mathbf{B}_0 = \boldsymbol{\Gamma}_0^{-1}$, then the posterior is also normal with mean $\mathbf{b}_n$ and precision $\mathbf{B}_n$ given by \begin{align*} \mathbf{B}_n & = \mathbf{B}_0 + \sigma^{-2}_\varepsilon \mathbf{X}^\top \mathbf{X}, \\ \mathbf{B}_n \mathbf{b}_n &= \mathbf{B}_0 \mathbf{b}_0 + \sigma^{-2}_\varepsilon \mathbf{X}^\top \mathbf{y}. \end{align*} The so-called diffuse prior corresponds to $\boldsymbol{\Gamma}_0 = \lambda \mathbf{I}_p$ with $\lambda \to \infty$, or equivalently $\mathbf{B}_0 = \mathbf{0}$. If $\mathbf{X}$ has full column rank the posterior distribution tends to the normal with covariance $\sigma^2_\varepsilon [\mathbf{X}^\top\mathbf{X}]^{-1}$. This can be regarded as a proper posterior. If $\mathbf{X}$ has rank $< p$, we get an improper posterior which can still be used with some limitations. For instance, if all the rows of $\mathbf{X}$ are identical to one vector $\mathbf{x}$ then we have a proper posterior for the corresponding response $\mathbf{x}^\top \boldsymbol{\theta}$ and we can make a proper prediction for a new observation corresponding to this design vector.

A partially diffuse prior can be obtained by using a prior covariance $$ \boldsymbol{\Gamma}_0 = \boldsymbol{\Gamma}_0^{[0]} + \lambda \, \boldsymbol{\Gamma}_0^{[1]}, \qquad \lambda \to \infty \tag{1} $$ where $\boldsymbol{\Gamma}_0^{[0]}$ and $\boldsymbol{\Gamma}_0^{[1]}$ are positive semi-definite matrices. Depending on the design matrix $\mathbf{X}$ and the matrices $\boldsymbol{\Gamma}_0^{[0]}$ and $\boldsymbol{\Gamma}_0^{[1]}$, we can get either an improper posterior or a proper posterior. Even an improper posterior can be used with some limitations. If $\boldsymbol{\Gamma}_0^{[1]}$ has full rank the prior is equivalent to the diffuse prior with covariance $\lambda \mathbf{I}_p$; the partially diffuse prior is of interest when $\boldsymbol{\Gamma}_0^{[1]}$ is rank deficient, see the example below.

Returning to the OP, we can consider the prior information $\mathbf{C} \boldsymbol{\theta} \sim \mathcal{N}(\boldsymbol{\mu},\, \boldsymbol{\Sigma})$ for some $r \times p$ matrix $\mathbf{C}$ with full row rank. A special case is in [this question](https://stats.stackexchange.com/q/523922/10479), corresponding to a zero covariance in the constraint. We can get a (linear) re-parameterization of the linear regression as $$ \mathbf{y} = \mathbf{Z}_1 \boldsymbol{\gamma}_1 + \mathbf{Z}_2 \boldsymbol{\gamma}_2 + \boldsymbol{\varepsilon} $$ where the parameter $\boldsymbol{\gamma}$ stacks $\boldsymbol{\gamma}_1 := \mathbf{C} \boldsymbol{\theta}$ and $\boldsymbol{\gamma}_2$, which is a vector of length $p-r$.
We can choose $\boldsymbol{\gamma}_1$ and $\boldsymbol{\gamma}_2$ to be a priori independent, with the desired (proper) prior for $\boldsymbol{\gamma}_1$, and with $\boldsymbol{\gamma}_2$ having prior mean zero and prior covariance $\lambda \, \mathbf{I}_{p-r}$ with $\lambda \to \infty$. We can derive the corresponding posterior for $\boldsymbol{\gamma}$, hence for $\boldsymbol{\theta}$. This posterior can be either proper or improper depending on the matrices $\mathbf{X}$ and $\mathbf{C}$. It is easy to see that if the matrix obtained by stacking the rows of $\mathbf{X}$ and those of $\mathbf{C}$ has full rank, then the posterior will be proper.

Example

Suppose that $p=2$ and that $\mathbf{C}$ is the row matrix $[1,\, 1]$ as in the OP, so that $\mathbf{C}\boldsymbol{\theta}$ is a scalar r.v. To fix ideas, we consider the simple linear regression $y = \theta_1 + \theta_2 x + \varepsilon$, so that $\theta_1 + \theta_2$ is the value for $x= 1$ and $\theta_2 - \theta_1$ is the opposite of the value for $x = -1$. We take $\mu= -2$: if the prior variance $\Sigma$ is small, the fitted line should nearly pass through the point $[1, \, -2]$ shown in blue, along with a $\pm 2 \sqrt{\Sigma}$ "error bar". Whatever $\Sigma$ is, we should not be informative about the value for $x = -1$.

The reparameterization can be $\boldsymbol{\gamma} = \mathbf{G}\boldsymbol{\theta}$ where $$ \mathbf{G} = \begin{bmatrix} 1 & 1 \\ - 1 & 1 \end{bmatrix}, \qquad \mathbf{G}^{-1} = \frac{1}{2} \, \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}. $$ Let us choose the prior covariance for $\boldsymbol{\gamma}$ as above: $\text{Cov}(\gamma_1) = \Sigma$ and $\text{Cov}(\gamma_2) = \lambda$. The corresponding prior covariance for $\boldsymbol{\theta}$ takes the form (1) above, $$ \text{Cov}(\boldsymbol{\theta}) = \mathbf{G}^{-1} \begin{bmatrix} \Sigma & 0 \\ 0 & \lambda \end{bmatrix} \mathbf{G}^{-\top} = \frac{\Sigma}{4} \, \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} + \frac{\lambda}{4} \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}, $$ and note that the matrix coefficient of $\lambda$ has rank one. We can either use software that accepts a partially diffuse prior, or simply take a large value for $\lambda$ as illustrated here. When $\Sigma$ is large the effect of the prior is negligible, and when $\Sigma$ gets smaller the regression line is driven towards the point $[1, \mu]$.
```
pdReg <- function(X, y, varNoise = 1,
                  b0 = rep(0, ncol(X)), Gamma0, Gamma1,
                  lambda = 1e4) {
    B0 <- solve(Gamma0 + lambda * Gamma1)
    Bn <- B0 + crossprod(X) / varNoise
    bn <- solve(Bn, B0 %*% b0 + crossprod(X, y) / varNoise)
    list(bn = bn, Bn = Bn)
}

set.seed(123)
## Observations with a specific theta
n <- 10
varNoise <- 0.1
theta <- c(-1, 4)
x <- seq(-1, 1, length = n)
X <- cbind(Cst = 1, x = x)
y <- X %*% theta + rnorm(n, sd = sqrt(varNoise))

## standard OLS fit
fit0 <- lm(y ~ X - 1)
plot(y ~ x, pch = 16, col = "orangered",
     main = "OLS fit (black) and Bayesian fits (colors)")
abline(coef(fit0))

## Expectation 'mu' and variance (not sd) of C %*% theta
mu <- -2
Sigmas <- c(1e0, 1e-1, 4e-4)
cols <- c("SpringGreen3", "Orchid", "SteelBlue3")
points(x = 1, y = mu, pch = 16, col = "SteelBlue", cex = 1.5)

for (i in seq_along(Sigmas)) {
    Sigma <- Sigmas[i]
    ## Matrices in the partially diffuse prior
    Gamma0 <- Sigma * matrix(c(1, 1, 1, 1), nrow = 2) / 4
    Gamma1 <- matrix(c(1, -1, -1, 1), nrow = 2) / 4
    fit <- pdReg(X = X, y = y, varNoise = varNoise, b0 = c(mu, 0),
                 Gamma0 = Gamma0, Gamma1 = Gamma1, lambda = 1e8)
    abline(fit$bn, col = cols[i], lwd = 2, lty = i)
    segments(x0 = 1 + (i - 2) * 0.02, y0 = mu - 2 * sqrt(Sigma),
             y1 = mu + 2 * sqrt(Sigma), lwd = 2, col = cols[i])
}
legend("topleft", col = cols, lty = 1:3, lwd = 2.6,
       legend = paste0("Sigma = ", Sigmas))
```
[](https://i.stack.imgur.com/PY5uK.png)
null
CC BY-SA 4.0
null
2023-05-23T15:13:14.000
2023-05-24T14:47:43.920
2023-05-24T14:47:43.920
10479
10479
null
616704
1
null
null
1
27
In an experiment, subjects decide as fast as possible whether letter strings presented on screen are real words or non-words, and reaction time (RT) in milliseconds is recorded. We want to know, for correct word decisions, whether word frequency (how often the word appears in books) affects reaction time (the answer is yes, a very robust result in psycholinguistics). In this article [https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01171](https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01171) it is convincingly suggested that this kind of data should be analyzed with a generalized linear mixed model using an inverse Gaussian or Gamma distribution and an identity link, so the results can be interpreted in original units (milliseconds). I would like to adopt this approach, but also calculate a 95% confidence interval around the fixed-effect estimate for Frequency, and this is where I run into trouble. The standard bootstrapping methods available do not seem to work with a `glmer` model with identity link (error: `*all* bootstrap runs failed!`). If I change the link to a (non-canonical) log link, bootstrapping seems to work, but then if I try to back-transform the reported interval around the fixed effect using the inverse of the natural log (`exp()`) I do not get sensible values (i.e., values that roughly correspond to the confidence interval that I get with `lmer()`).

Reproducible example:
```
library(lme4)
library(languageR)

data(lexdec)
lexdec$RT.orig <- exp(lexdec$RT) # reaction time was log transformed in the data set, transforming back
lexdec$Frequency.c <- lexdec$Frequency - mean(lexdec$Frequency) # centering frequency

# for comparison an lmer model (assuming Gaussian error)
mod <- lmer(RT.orig ~ Frequency.c + (1 | Subject) + (1 | Word), data = lexdec)
summary(mod)      # reaction time decreases with about 30 ms when wordfrequency increases by 1
confint(mod)      # 95 CI [-38, -21]
qqnorm(resid(mod)) # errors not normal

# glmer model with inverse Gaussian errors and an identity link
mod1.id <- glmer(RT.orig ~ Frequency.c + (1 | Subject) + (1 | Word),
                 family = inverse.gaussian(link = "identity"), data = lexdec)

# model with log link
mod1.log <- glmer(RT.orig ~ Frequency.c + (1 | Subject) + (1 | Word),
                  family = inverse.gaussian(link = "log"), data = lexdec)

# bootstrapping always fails on the model with identity link
confint(object = mod1.id, parm = "beta_", method = "boot")

# when using the log link, bootstrapping does seem to work but results do not correspond to lmer above
confint(object = mod1.log, parm = "beta_", method = "boot") # takes some minutes!
# result: [2.5%, 97.5%] = [-5.05, 2.58]
# exponentiating these values does not correspond at all to [-38, -21] obtained above with lmer
```
My questions are:

- Is there a different way to get bootstrapped CI's for glmer models with an identity link?
- If not, how can I back-transform the results from the model with the log link to original units?
- If 2 above also does not solve the problem, should I report Wald-based CI's, or is it preferable to just not report any CI in this case?
bootstrapped confidence interval for glmer model with identity link
CC BY-SA 4.0
null
2023-05-23T15:16:22.767
2023-05-23T15:29:05.563
2023-05-23T15:29:05.563
336729
336729
[ "r", "confidence-interval", "lme4-nlme", "bootstrap", "link-function" ]
616705
1
null
null
0
16
Background

Hi all, I need some clarification on whether my approach is correct or not. I have a matrix (M_ij) with user ratings of images. The users (i) are on the horizontal axis and the images (j) are on the vertical axis. Cell ij holds the rating given by the user. These ratings are on a discrete scale from 1-5. This is a sparse matrix, since each user did not rate all the images but rather a subset of them.

My approach

I computed a correlation matrix of users. Since I had 756 users, the resulting correlation matrix had dimensions 756 x 756. To cluster these users based on how similar their ratings were, I converted this matrix to a dissimilarity matrix using $2 \times \sqrt{1 - s}$ to get $d$, the correlation distance (based on [https://stats.stackexchange.com/a/36158](https://stats.stackexchange.com/a/36158)). Then I used Ward's method with $d$ as the distance matrix to perform hierarchical clustering.

My questions:

- While the above approach seems valid, I do not understand why a correlation matrix must be converted to a distance matrix for clustering. I read this post, but I could not understand why, if correlation coefficients can be treated as points, one needs to go through the trouble of converting them into a distance matrix for clustering.
- What could be the reasons why this approach may be wrong?
- Should I have avoided the Ward method and used some other method for hierarchical clustering with $d$?
- Based on the geometrical properties (as mentioned here) of a positive semi-definite matrix, it must not have any negative eigenvalues. However, in my case the correlation matrix had several negative eigenvalues, yet I calculated $d$ from it. Is this wrong, or is the resulting $d$ matrix only partially valid?

Thank you in advance :)
Must a similarity matrix always be converted into a dissimilarity matrix for hierarchical clustering?
CC BY-SA 4.0
null
2023-05-23T15:19:08.127
2023-05-23T15:19:08.127
null
null
388667
[ "correlation", "clustering", "similarities", "hierarchical-clustering", "ward" ]
616707
1
616742
null
2
37
I have a set of data X containing members xi (these are pixel values). I need to calculate the median and MAD of f(X), where f is a color space transform from linear RGB to sRGB (i.e. f(x) = x^(1/2.2)). For reasons of computing speed I would strongly prefer not to have to calculate f(xi) for all xi as the images can be quite large and this calculation is done as part of a live preview, so calculation time is important. In most cases I already have the median and MAD of X computed. Are there simple functions I can apply to m(X) and MAD(X) to get m(f(X)) and MAD(f(X))? @Henry has confirmed in the comments that m(f(x)) = f(m(x)), so the focus of the question is now on MAD(f(x)). If there isn't a way of calculating it exactly, a computationally cheap way of approximating it or estimating it would also be useful.
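For a quick numerical illustration of the situation (made-up data and the transform from above): the median commutes with the monotone transform, while the MAD in general does not:
```
set.seed(1)
x <- runif(1e5)             # stand-in for linear-RGB pixel values in [0, 1]
f <- function(v) v^(1/2.2)  # linear RGB -> sRGB-style transform
c(f_of_median = f(median(x)), median_of_f = median(f(x)))  # these agree
c(mad_of_f = mad(f(x)), f_of_mad = f(mad(x)))              # these differ
```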
How to calculate median and median absolute deviation of data with a function applied?
CC BY-SA 4.0
null
2023-05-23T15:25:32.020
2023-05-23T22:24:33.387
2023-05-23T21:08:46.913
388666
388666
[ "median", "mad" ]
616708
2
null
615074
3
null
I'll start with a quick derivation of the Euler-Lagrange formula, then I'll show how you can use it, then I'll show how it applies to your problem. Background Equation 22 is a functional -- a function that takes a function as input and outputs a real number. We want to find the function $q_k(z_k)$ such that the functional in equation 22 is maximized. In other words, we'd like to pick a function $q_k(z_k)$ such that the left hand side, $L_k$, of equation 22 is as large as possible. You know how to do this if I give you a function $f(x)$ of some variable x and ask you to maximize it; you start by looking for stationary points -- you differentiate $f(x)$ w/r/t x, set equal to zero and solve for x to find the stationary points. The difference here is that instead of trying to find the values of the input variable which are stationary points for a function, you are trying to find the input functions which are stationary points of the functional. Let's assume we're dealing with a normed linear space of functions defined on some domain [a,b] in x, i.e. a set of functions $V$ that satisfy some basic criteria ($f + g = g + f \hspace{0.4cm} \forall \hspace{0.2cm}f,g \in V$, there is a zero element 0 such that $0 + f = f$, for each $f$ in $V$ there is some element $g$ such that $f + g = 0$, etc.) and has an associated norm $\Vert f\Vert$ that yields the "length" of element $f$ from the set of functions. This enables us to quantify the "distance" between two functions f and g in this function space as $\Vert f-g\Vert$. One example of a possible norm on a space of functions of a single variable on a domain [a,b] might be: $$\int_a^b|f(x)|dx$$ Let's further assume all the functions in this space are well-behaved on the domain of interest -- both continuous and twice-differentiable. We can define an operation on functionals that accept as input functions from this space called the [Gateaux derivative](https://en.wikipedia.org/wiki/Gateaux_derivative) which is analogous to the directional derivative in 3-space. In other words, let's say we have a functional $F[f(x)]$ where $f(x)$ is from the set $V$ and we add to it some function $h(x)$ also from $V$ multiplied by a small constant $\epsilon$. Let's stipulate that $h(x)$ is a function that is zero at both ends of the domain on which the functions in $V$ are defined, i.e. $h(a) = h(b) = 0$. The Gateaux derivative is then: $$\delta F[f(x), h(x)] =\lim_{\epsilon\to0}\frac{F[f(x) + \epsilon h(x)] - F[f(x)]}{\epsilon} = \frac{d}{d\epsilon}F[f(x) + \epsilon h(x)] |_{\epsilon=0}$$ It can be shown that for a function $g(x)$ to be a stationary point of the functional $F[f(x)]$ (i.e. a possible maximum or minimum), a necessary condition is that $\delta F[g(x), h(x)]$ must be zero for all $h(x)$ that meet our criteria. Now say you have a functional $F[f(x)]$ that takes function $f(x)$ as input and outputs a real number, where $f(x)$ is an element from this set of functions we're working with. Let's start from a function $f_0$ and add to it $\epsilon h$, where $h$ is another function in the set and $\epsilon$ is a constant small enough that $f_0 + \epsilon h$ is still in the set $V$. Let's say that $F$ is of the form: $$F[f_0(x)] = \int_a^bL(x, f_0(x) + \epsilon h(x), f_0'(x) + \epsilon h'(x))dx$$ where $f_0'(x)$ and $h'(x)$ are the derivatives of $f_0(x)$ and $h(x)$. 
In other words, we're working with a functional that is an integral on [a,b] of an expression $L$ that contains $x$, $f_0(x)$ and $f_0'(x)$, and we're asking what happens if we add another function in the set, multiplied by a small constant $\epsilon$, to the starting input $f_0$. You can probably see a few analogies to differentiating a function at a point $x_0$. For simplicity, I'll now start writing $f_0$ and $h$ in place of $f_0(x)$ and $h(x)$.

Now let's differentiate both sides of our functional w/r/t the constant epsilon. (We can take the derivative inside the integral because we already stipulated all our functions are well-behaved.) $$\frac{d}{d\epsilon}F[f_0 + \epsilon h(x)] = \int_a^b \frac{\partial}{\partial \epsilon} L(x, f_0 + \epsilon h, f_0' + \epsilon h') dx = \int_a^b \frac{\partial}{\partial f_0} L(x, f_0 + \epsilon h, f_0' + \epsilon h') h + \frac{\partial}{\partial f_0'} L(x, f_0 + \epsilon h, f_0' + \epsilon h') h' dx$$ We said that $\delta F[f(x), h(x)] = \frac{d}{d\epsilon}F[f(x) + \epsilon h(x)] |_{\epsilon=0}$; we just found $\frac{d}{d\epsilon}F[f(x) + \epsilon h(x)]$, so: $$\delta F[f(x), h(x)] = \frac{d}{d\epsilon}F[f(x) + \epsilon h(x)] |_{\epsilon=0} = \int_a^b \frac{\partial}{\partial f_0} L(x, f_0, f_0') h + \frac{\partial}{\partial f_0'} L(x, f_0, f_0') h' dx$$ We want to find where this is equal to zero. So integrate the second term by parts; because we stipulated that $h(a) = h(b) = 0$, the boundary term vanishes and the expression becomes: $$\int_a^b \left[\frac{\partial}{\partial f_0} L(x, f_0, f_0') - \frac{d}{dx}\frac{\partial}{\partial f_0'} L(x, f_0, f_0')\right] h \, dx$$ The [fundamental lemma of the calculus of variations](https://en.wikipedia.org/wiki/Fundamental_lemma_of_the_calculus_of_variations) (I won't prove this here) says that if $\int_a^b f(x) h(x) dx = 0$ for all twice differentiable, continuous $h(x)$ such that $h(a) = h(b) = 0$, it must be true that $f(x) = 0$ for all x in the domain [a,b] of interest. This gives us the [Euler-Lagrange formula](https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation): $$\frac{\partial}{\partial f(x)}L(x, f(x), f'(x)) - \frac{d}{dx}\frac{\partial}{\partial f'(x)}L(x, f(x), f'(x)) = 0$$

To recap, we now know that if we want to find stationary points of a functional of the form: $$F[f(x)] = \int_a^b L(x, f(x), f'(x))dx$$ and the functions we're considering meet some basic criteria, these stationary points (maxima and minima) must satisfy Euler-Lagrange. You want to maximize equation 22, the criteria we named all apply, and equation 22 is of this form, therefore you'll use Euler-Lagrange to find functions that are stationary points (and therefore possible maxima of your lower bound). Now let's look at a couple of examples of how to use this.

Example

To use Euler-Lagrange, we're going to take the expression inside the integral in your functional, take its derivative with respect to $f(x)$ and $f'(x)$, plug these into Euler-Lagrange and solve the resulting differential equation. You'll take the derivative w/r/t $f(x)$ in the same way that you would if $f(x)$ were a variable $a$ instead of a function. So for example, if your functional is: $$\int_a^b f(x)^2 + xf'(x)dx$$ then $L(x, f(x), f'(x)) = f(x)^2 + xf'(x)$, and $\frac{\partial L}{\partial f(x)} = 2f(x)$ while $\frac{\partial L}{\partial f'(x)} = x$.

A common example is the task of finding the shortest path between two points in 2-space.
We can use the arc length integral: $$\int_a^b \sqrt{1 + f'(x)^2}dx$$ in which case $L(x, f(x), f'(x)) = \sqrt{1 + f'(x)^2}$. The derivative w/r/t $f(x)$ is 0. You can easily see that the derivative w/r/t $f'(x)$ is $\frac{f'(x)}{\sqrt{1 + f'(x)^2}}$ (just mentally pretend that $f'(x)$ is some variable $a$ and take the derivative w/r/t $a$). Plug this into Euler-Lagrange and you get: $$\frac{d}{dx}\frac{f'(x)}{\sqrt{1 + f'(x)^2}} = 0$$ This differential equation clearly yields: $$\frac{f'(x)}{\sqrt{1 + f'(x)^2}} = C$$ Solve for $f'(x)$ and you'll see that $f'(x) = C_2$ for some constant $C_2$. Integrate w/r/t $x$ and you'll see that $y=C_2x + C_3$; you can apply the boundary conditions to find the constants. Stationary points can be maxima or minima; in this case there is only one stationary point and we can show it is a minimum, thus establishing that the shortest path between two points is a straight line. (You knew this anyway, but this is another way to get to the same outcome.)

Equation 22

For your equation 22, you have a functional of $q_k(z_k)$ and would like to maximize it. The formula inside the integral, the $L$ expression or Lagrangian, is: $$q_k(z_k)E_{-k}[\log(p(z_k | z_{-k},x))] - q_k(z_k) \log q_k(z_k)$$ In this case the function that is input to the functional is $q_k(z_k)$. So we'll take the derivative of $L$ w/r/t $q_k(z_k)$ and $q_k'(z_k)$ and plug these into Euler-Lagrange. The derivative w/r/t $q_k'(z_k)$ is zero. The derivative w/r/t $q_k(z_k)$ (again, just mentally treat $q_k(z_k)$ as a variable $a$ and take the derivative w/r/t it) is clearly: $$E_{-k}[\log(p(z_k | z_{-k},x))] - \log q_k(z_k) - 1$$ Plug this into Euler-Lagrange and we have that the stationary point of your functional (the only point that could be a maximum of your functional, which is your lower bound on the KL divergence from the true posterior) is found when: $$E_{-k}[\log(p(z_k | z_{-k},x))] - \log q_k(z_k) - 1 = 0$$ and there you have it!

You may find chapter 4 of Logan's Applied Mathematics helpful as well; it provides a similar but more detailed overview.
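One more step, for completeness: rearranging the stationarity condition gives $\log q_k(z_k) = E_{-k}[\log(p(z_k | z_{-k},x))] - 1$, and absorbing the constant into the normalization yields the familiar mean-field coordinate update $$q_k^*(z_k) \propto \exp\left\{E_{-k}[\log p(z_k | z_{-k},x)]\right\}.$$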
null
CC BY-SA 4.0
null
2023-05-23T15:28:46.870
2023-06-02T15:46:32.807
2023-06-02T15:46:32.807
362671
250956
null
616709
2
null
616679
3
null
Based on [@Stephan’s beautiful answer](https://stats.stackexchange.com/a/616683/13255), here is the Typescript code for posterity:
```
function probabilityOfNwithDifferentFaces(n: number, ...allDiceAndSides: [number, number][]): number {
    if (!allDiceAndSides.length) return 0

    // trying to avoid combinatorial explosion
    const cleaned = allDiceAndSides.filter(([dice, sides]) => dice > 0 && sides > 0 && dice < 100 && sides <= 100)
        .slice(0, 7)

    if (cleaned.length === 1) return probabilityOfN(cleaned[0][0], cleaned[0][1], n)

    const minResult = cleaned.reduce((acc, [dice]) => acc + dice, 0)
    const maxResult = cleaned.reduce((acc, [dice, sides]) => acc + dice * sides, 0)
    if (n < minResult || n > maxResult) return 0

    cleaned.sort((a, b) => b[1] - a[1])
    const allRepeats: number[] = cleaned.flatMap(([dice, sides]) => Array(dice).fill(sides))
    // console.log(allRepeats)

    const all_possible_combos: number[][] = []
    for (let i = 1; i <= allRepeats.length; i++) {
        const die_combos: number[] = []
        for (let j = 1; j <= allRepeats[i - 1]; j++) {
            die_combos.push(j)
        }
        all_possible_combos.push(die_combos)
    }

    const cartesian_product = (...arrays: any[]) => arrays.reduce(
        (a, b) => a.flatMap((d: any) => b.map((e: any) => [d, e].flat())))

    const all_combinations = cartesian_product(...all_possible_combos)

    // @ts-ignore
    const sum_of_rows = all_combinations.map(row => row.reduce((a, b) => a + b))
    // @ts-ignore
    const count_of_rows_summing_to_n = sum_of_rows.filter(sum => sum === n).length
    // console.log(count_of_rows_summing_to_n)

    const totalPossibleThrows = all_combinations.length
    // console.log(totalPossibleThrows)

    // console.log(probability)
    return count_of_rows_summing_to_n / totalPossibleThrows
}
```
And some use cases:
```
function calculateAndReportWithDifferentFaces(n: number, ...allDiceAndSides: [number, number][]) {
    const p = probabilityOfNwithDifferentFaces(n, ...allDiceAndSides)
    const rollString = allDiceAndSides.map(([dice, sides]) => `${dice}d${sides}`).join(" + ")
    console.log(`Got ${p.toFixed(3)} for ${n} with ${rollString}`)
}

function differentFacesProbabilityTest() {
    console.log("=== Individual probabilities with different faces")
    calculateAndReportWithDifferentFaces(2, [1, 10], [1, 6])
    calculateAndReportWithDifferentFaces(9, [1, 10], [1, 6])
    calculateAndReportWithDifferentFaces(14, [1, 10], [1, 6])
    calculateAndReportWithDifferentFaces(15, [1, 10], [1, 6])
    calculateAndReportWithDifferentFaces(16, [1, 10], [1, 6])
    calculateAndReportWithDifferentFaces(31, [2, 10], [4, 8])
    calculateAndReportWithDifferentFaces(41, [2, 10], [4, 8])
    calculateAndReportWithDifferentFaces(41, [4, 8], [2, 10])

    console.log("Same faces check:")
    calculateAndReportWithDifferentFaces(2, [2, 10])
    calculateAndReportWithDifferentFaces(11, [2, 10])
    calculateAndReportWithDifferentFaces(19, [2, 10])
    calculateAndReportWithDifferentFaces(20, [2, 10])

    console.log("Incorrect inputs:")
    calculateAndReportWithDifferentFaces(5, [2, 10], [4, 8])
    calculateAndReportWithDifferentFaces(53, [2, 10], [4, 8])
}
```
null
CC BY-SA 4.0
null
2023-05-23T16:06:01.380
2023-05-23T16:06:01.380
null
null
13255
null
616710
1
null
null
0
12
I'm fitting a cross-lagged panel model for three time points. For one specific construct of interest, I have three items that were used across all three time points, but the number of response categories differs: a 3-point Likert scale for the first two time points (yes/agree, not sure, no/disagree), and then a 5-point scale for the last time point (strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, strongly disagree). May I collapse the 5-point scale to a 3-point scale to make them more comparable, or should I just use the different scales as they are in the same cross-lagged panel model? My preference is to combine some response categories of the 5-point scale (the first two categories into one, and the last two categories into one). Measurement invariance (configural, metric) looks good after combining them. I've been trying to find literature on this, but haven't been successful. I'd be grateful for any feedback!
Using items with different numbers of response categories across time points in longitudinal modeling
CC BY-SA 4.0
null
2023-05-23T16:06:25.890
2023-05-24T13:40:58.447
2023-05-24T13:40:58.447
388671
388671
[ "repeated-measures", "panel-data", "structural-equation-modeling", "measurement", "psychology" ]
616711
1
null
null
2
9
I am performing a quasibinomial regression where each subject has a varying number of trials: one subject may have had 5 trials while another had 90. In R, the regression call is:
```
glm(success_percent ~ x1 + x2 + ..., data = data, family = quasibinomial, weights = trials)
```
I am unfamiliar with how a GLM handles these weights (a small illustration of the setup is included below). Is a subject with 10 trials contributing twice as much to the model as a subject with 5? Or is the number of trials just used to account for within-subject variability?
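Here is a small, self-contained illustration of the specification (the data are simulated and the variable names are made up): fitting the proportion with `weights = trials` gives exactly the same estimates as fitting the aggregated counts with `cbind(successes, failures)`:
```
set.seed(1)
n <- 50
trials <- sample(5:90, n, replace = TRUE)  # unequal number of trials per subject
x1 <- rnorm(n)
succ <- rbinom(n, size = trials, prob = plogis(-0.3 + 0.8 * x1))
prop <- succ / trials

fit_prop <- glm(prop ~ x1, family = quasibinomial, weights = trials)
fit_counts <- glm(cbind(succ, trials - succ) ~ x1, family = quasibinomial)

all.equal(coef(fit_prop), coef(fit_counts))  # TRUE: the two parameterizations match
```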
Does binomial regression weight some observations more heavily than others?
CC BY-SA 4.0
null
2023-05-23T16:10:44.650
2023-05-23T16:10:44.650
null
null
368419
[ "regression", "binomial-distribution", "quasi-likelihood" ]
616713
1
null
null
0
10
I have a list/database of short lines of text: maybe one million lines, each 10-60 characters. I need to categorize them; in the simplest case there would be just two categories, e.g. "interesting" and "not interesting/spam".

Example

|short text |category |
|----------|--------|
|call me asap |interesting |
|ahahaha |spam |
|... |... |
|my address is Newstreet 1001 |interesting |
|this is just a dummy |spam |
|it is a good weather today |spam |
|... |... |

I need to provide a proof of concept that this list can be successfully processed with machine learning tools, e.g. in Python.

- Do I understand correctly that this kind of problem can be solved by classification ML algorithms? Or are there other, better, more promising approaches?
- Given that I would like to build the proof of concept in Python, is the sklearn library the tool of choice?
- Can somebody provide a link to an "ML classification hello world" example for machine learning newbies? I have found these: https://towardsdatascience.com/machine-learning-with-python-classification-complete-tutorial-d2c99dc524ec https://www.activestate.com/resources/quick-reads/how-to-classify-data-in-python/ but maybe there are even shorter, better, and more understandable newbie examples?

I know there were similar questions here in the past, like:
- Sophisticated models for classifying short pieces of texts
- Machine Learning for Text Classification

but they are rather old and I would like to know the state of the art.
Classifying short pieces of text with machine learning
CC BY-SA 4.0
null
2023-05-23T16:23:08.327
2023-05-23T16:23:08.327
null
null
388673
[ "machine-learning", "python" ]
616714
1
616821
null
0
42
In my model, in which I'm attempting to infer which covariates affect whether a fish has an empty stomach or not (1=empty, 0=not empty), I decided to grand-mean center the variable "SL" (standard length) so that the intercept would make more sense (instead of when SL=0). However, I'm not sure how to interpret the interaction in the summary output when one of the covariates is centered. My categorical variable is "fZone" (factor Zone, my location variable).
```
center_sl = grand-mean centered standard length of each fish caught
fZone = location of catch (3 levels)

> table(c_neb5$fZone)
 Rankin    West Whipray 
    201     436      42 

c_neb5$center_sl <- scale(c_neb5$SL, scale=FALSE)

mod2 <- bam(empty ~ center_sl + fZone + center_sl:fZone + ...,
            data = c_neb5,
            method = 'fREML',
            discrete = TRUE,
            family = binomial(link = "logit"),
            select = FALSE)
```
EDIT: Full model summary
```
> summary(mod2)

Family: binomial 
Link function: logit 

Formula:
empty ~ center_sl + fZone + center_sl:fZone + s(sal) + s(temp) +
    s(ToD) + s(fStation, bs = "re") + s(fCYR, bs = "re") +
    s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re")

Parametric coefficients:
                        Estimate Std. Error z value Pr(>|z|)    
(Intercept)            -1.298719   0.291203  -4.460  8.2e-06 ***
center_sl              -0.038851   0.011985  -3.242  0.00119 ** 
fZoneWest               0.122594   0.311480   0.394  0.69389    
fZoneWhipray           -0.327579   0.639371  -0.512  0.60841    
center_sl:fZoneWest    -0.002926   0.014650  -0.200  0.84169    
center_sl:fZoneWhipray  0.061163   0.025891   2.362  0.01816 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                          edf  Ref.df Chi.sq  p-value    
s(sal)              1.783e+00   2.231  1.590 0.558432    
s(temp)             1.128e+00   1.236  2.972 0.134198    
s(ToD)              2.112e+00   2.637 16.235 0.000755 ***
s(fStation)         1.096e-04  82.000  0.000 0.619807    
s(fCYR)             4.740e+00  12.000 14.165 0.009002 ** 
s(fCYR,fStation)    9.693e+00 237.000 11.111 0.205201    
s(CYR.std,fStation) 1.258e+01  80.000 23.798 0.008646 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =   0.16   Deviance explained = 18.1%
fREML =  990.1  Scale est. = 1         n = 679
```
My interpretation is that...(Intercept)=-1.298719 means the average size fish has a exp(-1.298719)= 0.272 odds of an empty stomach; fZoneWest=0.122594 the odds of empty stomach in West compared to my ref. level (Rankin) increase by exp(0.122594)=1.130425; and center_sl:fZoneWest=-0.002926 means for every 1 unit above average in size, the odds of an empty stomach decrease by exp(-0.002926)=0.9970783, compared to my ref. level. Am I on the right track? Any advice or corrections are greatly appreciated! The data is 679 rows in size, so the best I could do was post a subset of it down below.
``` Subset of my data: example_data <- c_neb5[sample(nrow(c_neb5), 10), ] > dput(example_data) structure(list(CYR_Keyfield = c("C-2018-10-6-255", "C-2017-6-26-278", "C-2018-9-16-291", "C-2017-10-9-265", "C-2010-11-10-167", "C-2019-10-30-169", "C-2018-10-6-279", "C-2022-7-10-241", "C-2017-9-4-70", "C-2022-6-23-241" ), Species = c("Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus"), ID = c("201810255_86", "20176278_52", "20189291_39", "201710265_100", "201011167_61", "201910169_54", "201810279_75", "20227241_46", "2017970_91", "20226241_34"), SL = c(33.58, 20.12, 50.25, 23.18, 68.72, 14.85, 73.49, 61.84, 13.26, 25.79), empty = c(0, 0, 0, 0, 0, 1, 0, 0, 1, 0), DateTime = structure(c(1538842500, 1498499220, 1537107120, 1507558920, 1289399700, 1572449400, 1538837160, 1657460040, 1504530660, 1656001620), class = c("POSIXct", "POSIXt"), tzone = ""), CYR = c(2018L, 2017L, 2018L, 2017L, 2010L, 2019L, 2018L, 2022L, 2017L, 2022L ), Month = c(10L, 6L, 9L, 10L, 11L, 10L, 10L, 7L, 9L, 6L), DoY = c(279, 177, 259, 282, 314, 303, 279, 191, 247, 174), ToD = c(12.25, 13.7833333333333, 10.2, 10.3666666666667, 9.58333333333333, 11.5, 10.7666666666667, 9.56666666666667, 9.18333333333333, 12.45), JDay = c(5129, 4662, 5109, 4767, 2242, 5518, 5129, 6502, 4732, 6485), Zone = c("Rankin", "Rankin", "Whipray", "Rankin", "West", "West", "Rankin", "Rankin", "West", "Rankin"), Station = c(255, 278, 291, 265, 167, 169, 279, 241, 70, 241), Standard_collection_station = c(0, 0, 0, 0, 0, 0, 0, 1, 1, 1), Latitude = c(25.085, 25.145, 25.118, 25.135, 25.106, 25.081, 25.133, 25.0750000309199, 25.132, 25.0750000309199), Longitude = c(-80.802, -80.809, -80.76, -80.823, -80.917, -80.893, -80.797, -80.8159999921917, -80.941, -80.8159999921917), sal = c(38.27, 41.01, 33.61, 26.75, 32, 36.18, 36.42, 40.08, 38.1, 39.07), temp = c(27.856, 32.2, 31.791, 29.512, 19.3, 28.398, 27.6679999999999, 30.243, 29.71, 29.262), fCYR = structure(c(9L, 8L, 9L, 8L, 2L, 10L, 9L, 12L, 8L, 12L), levels = c("2009", "2010", "2011", "2012", "2013", "2015", "2016", "2017", "2018", "2019", "2021", "2022" ), class = "factor"), fMonth = structure(c(8L, 4L, 7L, 8L, 9L, 8L, 8L, 5L, 7L, 4L), levels = c("1", "3", "5", "6", "7", "8", "9", "10", "11", "12"), class = "factor"), fStation = structure(c(60L, 69L, 76L, 63L, 40L, 42L, 70L, 57L, 11L, 57L), levels = c("20", "21", "22", "23", "24", "40", "54", "65", "67", "68", "70", "71", "73", "101", "105", "106", "107", "111", "112", "117", "118", "119", "122", "123", "124", "130", "133", "134", "135", "137", "143", "144", "145", "146", "147", "156", "157", "158", "159", "167", "168", "169", "171", "172", "173", "174", "175", "176", "224", "225", "226", "227", "229", "237", "239", "240", "241", "253", "254", "255", "256", "257", "265", "266", "267", "268", "269", "270", "278", "279", "280", "281", "282", "284", "290", "291", "292", "294", "301", "302", "312", "609"), class = "factor"), fZone = structure(c(1L, 1L, 3L, 1L, 2L, 2L, 1L, 1L, 2L, 1L ), levels = c("Rankin", "West", "Whipray"), class = "factor"), CYR.std = c(9L, 8L, 9L, 8L, 1L, 10L, 9L, 13L, 8L, 13L), center_sl = structure(c(-6.70160530191458, -20.1616053019146, 9.96839469808542, -17.1016053019146, 28.4383946980854, -25.4316053019146, 33.2083946980854, 21.5583946980854, -27.0216053019146, -14.4916053019146), dim = c(10L, 1L)), center_sal = structure(c(1.43373534609722, 4.17373534609722, 
-3.22626465390278, -10.0862646539028, -4.83626465390278, -0.656264653902781, -0.416264653902779, 3.24373534609722, 1.26373534609722, 2.23373534609722), dim = c(10L, 1L)), center_temp = structure(c(-1.51357879234165, 2.83042120765835, 2.42142120765835, 0.142421207658348, -10.0695787923417, -0.971578792341653, -1.70157879234175, 0.873421207658346, 0.340421207658348, -0.107578792341652), dim = c(10L, 1L))), row.names = c(495L, 364L, 303L, 652L, 404L, 375L, 469L, 676L, 508L, 675L), class = "data.frame") ```
Interpreting interaction between a categorical and centered continuous variable (binary response)
CC BY-SA 4.0
null
2023-05-23T16:31:54.603
2023-05-24T16:27:11.597
2023-05-24T14:16:39.177
337106
337106
[ "categorical-data", "interaction", "continuous-data", "centering" ]
616715
1
null
null
0
5
I am looking to determine whether three raters have statistically different observations of the same data, which are continuous. For example, suppose the diameter of the heart, X, for 20 patients is measured under 3 different techniques, X1, X2 and X3. Since the 3 different techniques measure the same set of subjects, i.e. the same variable under different conditions, I am thinking of using a one-way repeated-measures ANOVA to compare all three at the same time. However, I saw some posts suggesting that a simple one-way ANOVA might be more suitable. Would it be more suitable to use the repeated-measures ANOVA? Also, is it possible to get the ICC for intra-class and inter-class agreement in this case?
Using repeated-measures ANOVA to test rater agreement
CC BY-SA 4.0
null
2023-05-23T17:12:26.397
2023-05-23T17:12:26.397
null
null
388674
[ "anova", "repeated-measures" ]
616716
2
null
616612
1
null
A discrete-time survival model with a single event type (even if repeated) can be modeled as a binomial regression with data as you seem to have already, in a person-period format. That is, you have one row of data for each individual at risk during each time period, with each row indicating the corresponding individual, the time period, the covariate values in effect during the period, and an indicator of whether or not the event happened during that period. In that format, there simply are no data values for an individual after the last observation, handling censoring as in other survival models and posing no problems so long as censoring is uninformative.

Capturing the relationship between `years_since_entering` and outcome is a matter of how you choose to model that predictor. If there are only a few values of `years_since_entering` covering all cases, you might treat that as a multi-level categorical predictor and get a separate coefficient for each year. That's analogous to what you would get with a Cox model, in which the baseline hazard over time is zero between event times and can take arbitrary positive values at each event time. If you have many values of `years_since_entering` and wish to model it as a numeric predictor, you should use a flexible modeling strategy like a regression spline. If you just include it as a numeric predictor the usual way, you are imposing, in a logistic regression model, a strictly linear association between that variable and the log-odds of outcome. That's not likely to hold.

There might be some advantage to using a complementary log-log link in your binomial regression instead of the usual logistic link, as that directly represents what you would get in a [proportional-hazards model with times grouped](https://stats.stackexchange.com/q/429266/28500) like this.

You can then incorporate the intra-individual and intra-country considerations as random effects, in principle including things like random "slopes" for covariates if there are enough data (probably only on an among-country rather than an among-individual level). If there were at most one event possible per individual you wouldn't have to include random effects per individual, but in your situation that would be important, presumably as a random intercept. For an individual with more than 1 coup attempt, you also need to think about whether and how to incorporate the prior coup attempt into the model; otherwise, you assume independence among coup attempts on the same individual, which seems unlikely.
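As a rough sketch of that setup (simulated person-period data with invented variable names, just to show the structure; in real person-period data the rows after an individual's last observation are simply absent):
```
library(lme4)

set.seed(1)
## toy person-period data: one row per individual-year at risk
pp <- expand.grid(individual = factor(1:200), year = 1:10)
pp$country <- factor(((as.integer(pp$individual) - 1) %% 20) + 1)
pp$x1 <- rnorm(nrow(pp))
pp$event <- rbinom(nrow(pp), 1, plogis(-3 + 0.5 * pp$x1))  # event indicator per period

## discrete-time survival as a binomial GLMM: time period as a factor,
## complementary log-log link, random intercepts for individual and country
fit <- glmer(event ~ factor(year) + x1 + (1 | individual) + (1 | country),
             data = pp, family = binomial(link = "cloglog"))
summary(fit)
```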
null
CC BY-SA 4.0
null
2023-05-23T17:30:34.087
2023-05-23T17:30:34.087
null
null
28500
null
616717
1
null
null
0
10
I want to estimate the coefficient $a$ in the regression $$ P = c + aX + bY + u $$ However, I only observe the variables $T = X+Y$ and $\tilde{Y} = \mu_1 + \mu_2 Y + e$. If I had the variable $Y$, then I could estimate $$ P = K + \alpha T + \beta Y + \varepsilon $$ and recover $a = \alpha$. If I use the proxy $\tilde{Y}$, then I introduce endogeneity into an otherwise valid regression. I am wondering whether, if I run the two-stage least squares regression $$ P = K + \alpha T + \beta \tilde{Y} + \varepsilon $$ with a valid instrument for $\tilde{Y}$, I would get the result that $a = \alpha$.
Regression with a sum of variables and a proxy
CC BY-SA 4.0
null
2023-05-23T17:36:03.023
2023-05-23T17:36:03.023
null
null
385539
[ "regression", "instrumental-variables", "measurement-error" ]