Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
613431
1
null
null
0
14
I'm reading this book, "Unperturbed by volatility", by Adel Osseiran and Florent Segonne, and it says the following: > Then, if we assume that individual volatilities (sigma) and correlations (rho) are the same for all assets (and pairs), then the volatility of a portfolio of N equally weighted assets is sigma_N = sigma * SQRT(1/N + (N-1)/N * rho) We see immediately that the limit of the benefit is sigma_N = sigma * SQRT(rho) when N is large. This represents the limit where idiosyncratic risk is removed by having more assets, and when correlation is high there is no benefit. It can be surprising to see SQRT(rho), a square root, as it clearly requires the average correlation to be positive. This is a property of the average correlation: the average correlation rho_N for N assets verifies rho_N >= - 1 / (N-1). I don't understand how this implies that rho in the square root is always nonnegative, since for every N >= 2 it holds that - 1 / (N - 1) < 0.
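A minimal numeric check of the quoted formula (illustrative values for sigma and rho, not taken from the book): it builds the equal-volatility, equal-correlation covariance matrix for N assets and compares the equally weighted portfolio volatility against sigma * SQRT(1/N + (N-1)/N * rho) and the large-N limit sigma * SQRT(rho).

```python
import numpy as np

sigma, rho = 0.2, 0.3  # assumed common volatility and pairwise correlation

for N in (2, 10, 100, 1000):
    # covariance matrix with equal volatilities and equal pairwise correlations
    cov = sigma**2 * (rho * np.ones((N, N)) + (1 - rho) * np.eye(N))
    w = np.full(N, 1.0 / N)                                  # equal weights
    vol_from_matrix = np.sqrt(w @ cov @ w)                   # portfolio volatility
    vol_from_formula = sigma * np.sqrt(1 / N + (N - 1) / N * rho)
    print(N, vol_from_matrix, vol_from_formula, sigma * np.sqrt(rho))  # last value: large-N limit
```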
N asset portfolio volatility with equal volatilities and correlations
CC BY-SA 4.0
null
2023-04-19T08:35:31.263
2023-04-19T08:35:31.263
null
null
336764
[ "correlation", "variance" ]
613434
1
null
null
1
32
Let's say I have a classification model to be trained and a relatively small dataset. The data is split into k folds (e.g. k=5), in such a way that: - A, B and C are used for training, D for testing and the trained model is evaluated on E. - B, C and D are used for training, E for testing and the trained model is evaluated on A. - C, D and E are used for training, A for testing and the trained model is evaluated on B. - D, E and A are used for training, B for testing and the trained model is evaluated on C. - E, A and B are used for training, C for testing and the trained model is evaluated on D. The final result of this experiment is a statistical summary of all k folds. In this way, no data would be "wasted" on training only, and the final result benefits from lower statistical uncertainty. Would that make sense, from a statistical point of view? I'm curious specifically about some potential bias I'm not foreseeing...
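A short sketch of the rotation described above (placeholder data and estimator, assuming scikit-learn is available; the fold roles are only meant to mirror the description): fold i is held out for evaluation, fold i+1 is used for testing/tuning, and the remaining three folds are used for training.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 5)                 # placeholder features
y = np.random.randint(0, 2, 100)           # placeholder binary labels
folds = [idx for _, idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X)]

scores = []
for i in range(5):
    eval_idx = folds[i]                    # e.g. fold E
    test_idx = folds[(i + 1) % 5]          # e.g. fold D, for tuning/model selection
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j not in (i, (i + 1) % 5)])
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    # ... tune on X[test_idx], then report performance on the untouched fold
    scores.append(model.score(X[eval_idx], y[eval_idx]))
print(np.mean(scores), np.std(scores))
```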
Cross-validation as strategy for training
CC BY-SA 4.0
null
2023-04-19T08:54:48.337
2023-04-19T08:54:48.337
null
null
268218
[ "cross-validation", "bias", "small-sample" ]
613436
2
null
612856
1
null
Statistical significance expresses observed effects or differences in terms of probabilities. It is an expression of the relative [precision](https://en.m.wikipedia.org/wiki/Accuracy_and_precision) of an experiment. Given two posterior distributions: if they are independent, then you can compute the convolution of the distributions (one of them mirrored), and that is the distribution of the difference. Using that distribution of the difference you can express the significance of the difference. For multiple comparisons at once, you could compute a highest density region for the joint distribution and check whether any of the lines $x=y, x=z, y=z$ cross the boundary. The second method will result in a larger 'p-value', because it will not count all cases for some given $\Delta_{x-y}=x-y$. In general, there is no single way to express significance.
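A minimal Monte Carlo illustration of the first approach (hypothetical normal posteriors; sampling is used here in place of an explicit convolution): paired draws from the two independent posteriors give samples from the distribution of the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.10, 0.03, 100_000)     # draws from the posterior of group X (assumed)
y = rng.normal(0.05, 0.03, 100_000)     # draws from the posterior of group Y (assumed)

diff = x - y                             # samples from the distribution of the difference
interval = np.percentile(diff, [2.5, 97.5])   # 95% interval for the difference
p_direction = (diff > 0).mean()          # posterior probability that X exceeds Y
print(interval, p_direction)
```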
null
CC BY-SA 4.0
null
2023-04-19T09:12:22.207
2023-04-19T09:28:44.923
2023-04-19T09:28:44.923
164061
164061
null
613438
1
null
null
0
12
I am trying to use ARIMA for hotel reservation forecasting. I have daily data for a year and I want to make predictions, but my results are very poor and I wonder why. Here I attach my data [](https://i.stack.imgur.com/ftvLx.png) and this is how I run my model [](https://i.stack.imgur.com/CJIMo.png) I got that the best order is (2, 1, 2), with an AIC score of 3460.847 (I also wonder why it is so high). And this is my result, in orange: [](https://i.stack.imgur.com/Et29T.png) I have data for 280 days and want to predict 70 days. As depicted in the graph, the predictions stay only around 170-180, far from reality. How can I improve my model?
Bad prediction for ARIMA timeseries
CC BY-SA 4.0
null
2023-04-19T10:09:42.457
2023-04-19T14:00:22.120
2023-04-19T14:00:22.120
53690
386040
[ "time-series", "python", "forecasting", "arima" ]
613440
1
null
null
1
41
I have a few questions regarding ranked data and hypothesis testing. Hypothesis: Car drivers are more likely to consider the car as the most beneficial transportation method for mental health than other transport method users. Respondents were asked to rank 4 transport methods from least (1) to most (4) beneficial to mental health. Responses were as follows [](https://i.stack.imgur.com/2L3hr.png) A Fisher's Exact Test gives me p=0.001 for the variables in the picture above. The Fisher's test shows that there is a significant relationship between the variables; however, that would only support a hypothesis such as: 'Mode of transport influences the perception of which transport provides the most mental health benefits.' However, I want to know if car drivers are more likely to think this way. What test should I do here? Thank you for your help. Kind regards, P.S. New image with more detail added as a response to comment below. [](https://i.stack.imgur.com/HdtCA.png)
Ranked data and hypothesis testing
CC BY-SA 4.0
null
2023-04-19T10:21:36.453
2023-04-19T12:10:41.247
2023-04-19T12:10:41.247
384981
384981
[ "hypothesis-testing", "dataset", "ranking" ]
613442
1
null
null
0
20
I am currently studying whether academic freedom (independent variable) has an impact on university rankings (dependent variable). So far, my model is only composed of these two variables, as well as university fixed effects. In order to get a better estimation, I am currently looking for control variables to include in my model (the problem being that there is almost no literature on the topic, so I have to pick them out myself instead of relying on what previous authors chose). I was wondering: what do I have to look for in order to determine whether the control variables I picked are well suited for my model? I know that the control variables should be correlated with both my dependent and independent variable, but apart from this, I'm not sure what to look for. Are there some specific tests I can conduct? Or should I just look at some specific statistics such as the R² or the F statistic?
What makes a good control variable?
CC BY-SA 4.0
null
2023-04-19T10:54:52.853
2023-04-19T10:54:52.853
null
null
382870
[ "controlling-for-a-variable" ]
613444
1
null
null
0
11
Dears, I have this data below, which represents a distribution of population across 8 groups in time 1 and consequent time 2. (time1 > time2). I would like to sample using these 2 inputs, it this sampling below appropriate? data: ``` from to group 1 17.785825 24.6877200 group 2 19.024499 19.1316200 group 3 25.700736 28.7450607 group 4 18.872761 16.4958496 group 5 9.509484 6.6171471 group 6 3.732614 2.4078783 group 7 2.786355 1.4408675 group 8 2.587725 0.4738568 > colSums(data.in) from to 100 100 ``` step 1: estimate the density: `density(as.numeric(data.in[i,]))` step 2: `base:::sample(as.numeric(data.in[j, ]), N, replace=TRUE) + rnorm(N, 0, list.d[[j]]$bw)` The final outcome of weighted simulated `rate` result is not so plausible since its too wide (interval). [](https://i.stack.imgur.com/xEJzF.jpg) Full replication: ``` data.all <- structure( list( grade = 1:8, rate = c(0.001, 0.0025, 0.0063, 0.0156, 0.0391, 0.0977, 0.2441, 0.6104), from = c( 0.177858249529814, 0.190244992816841, 0.257007364840541, 0.188727613609983, 0.0950948446116216, 0.0373261406418749, 0.0278635485813682, 0.025877245367956 ), to = c( 0.246877199949458, 0.191316200481528, 0.287450607239773, 0.164958495898585, 0.0661714712058581, 0.0240787825807411, 0.0144086750749326, 0.00473856756912408 ) ), class = "data.frame", row.names = c( "group 1", "group 2", "group 3", "group 4", "group 5", "group 6", "group 7", "group 8" ) ) data.in <- data.all[c(-1,-2)] * 100 data.in # sampling group level list.d <- list(); new.df <- list(); for(i in 1:nrow(data.in)){ list.d[[i]] <- density(as.numeric(data.in[i,])) } for(j in 1:nrow(data.in)){ new.x <- base:::sample(as.numeric(data.in[j, ]), N, replace=TRUE) + rnorm(N, 0, list.d[[j]]$bw) new.df[[j]] <- data.frame(grade=j, dist.sim=new.x, nsim=1:N); if(!FALSE){ hist(new.x, prob=TRUE, col="blue") lines(density(new.x, na.rm = TRUE), col="red", lwd=2) } } rx <- do.call(rbind, new.df) rx$dist.sim <- rx$dist.sim/100 rx # plot distribution across rating groups using the simulation par(mfrow=c(2,4)) with(rx, by(rx, grade, function(x) hist(x$dist.sim, col="blue"))) # get data rx.rate <- merge(rx, data.all[c(1,2)], by="grade"); head(rx.rate) # simulation sim.rate <- with(rx.rate, by(rx.rate, list(nsim), function(x) with(x, crossprod(dist.sim, rate)))) portfolio.rate.sim <- data.frame(port.pd=t(rbind(sim.rate)), row.names = NULL) # final results par(mfrow=c(1,1)) plot(sim.pd ~ 1, data=portfolio.rate.sim, pch=19, col="blue", ylab="rate") abline(h=with(data.all, crossprod(from, rate)), col="red", lwd=3) abline(h=with(data.all, crossprod(to, rate)), col="orange", lwd=3) ```
Sampling from share transition of population distribution
CC BY-SA 4.0
null
2023-04-19T11:53:20.473
2023-04-19T11:53:20.473
null
null
23732
[ "r", "distributions", "inference", "simulation" ]
613445
1
null
null
1
21
I am writing a paper where I am comparing the change over time of an intervention and a control group. I have the mean and standard deviation for both groups at baseline and the mean and standard deviation following a 5-week period. I can then calculate the mean difference over time for each of the groups, but I'm not sure how to find the standard deviation for this change. I also don't have access to the raw data; I only have the summary values below. Information I have: Control group: Baseline mean, SD: 3.0, 0.9 5-week mean, SD: 2.0, 0.8 Mean difference: -1.0 SD of mean difference: ? Intervention group: Baseline mean, SD: 3.4, 0.7 5-week mean, SD: 2.0, 0.8 Mean difference: -1.4 SD of mean difference: ?
How to calculate standard deviation for the difference between two means?
CC BY-SA 4.0
null
2023-04-19T12:07:04.070
2023-04-19T12:07:04.070
null
null
386064
[ "mean", "standard-deviation", "descriptive-statistics", "standardized-mean-difference" ]
613446
1
null
null
1
62
I have trained a regression forest using the R package ranger. Now I would like to discuss the variable importance measures of the included features. However, I am having difficulties understanding the exact definition of the different importance measures offered by ranger. The documentation of the ranger function states the following about the argument 'importance': > Variable importance mode, one of ’none’, ’impurity’, ’impurity_corrected’, ’permutation’. The ’impurity’ measure is the Gini index for classification, the variance of the responses for regression and the sum of test statistics (see splitrule) for survival. In my case, I trained a regression forest. Here is a reproducible example including all feature importance measures offered by ranger: ``` library(ranger) ## Estimate the same model with the three different importance measures offered by ranger set.seed(1) rg.iris_1 <- ranger(Sepal.Width ~ ., data = iris, importance = 'impurity') set.seed(1) rg.iris_2 <- ranger(Sepal.Width ~ ., data = iris, importance = 'impurity_corrected') set.seed(1) rg.iris_3 <- ranger(Sepal.Width ~ ., data = iris, importance = 'permutation') ## Store the importance measures in a table varimp_table <- data.frame(impurity = rg.iris_1$variable.importance, impurity_corrected = rg.iris_2$variable.importance, perumtation = rg.iris_3$variable.importance) ``` This gives the following table: [](https://i.stack.imgur.com/NTwW7.png) And here are my questions: - How can I interpret the three columns of this table? - Are there any papers explaining what is meant by the different approaches? - What are the pro's and con's of each measure?
Making sense of the performance measures offered in Ranger
CC BY-SA 4.0
null
2023-04-19T12:19:14.083
2023-04-23T18:20:05.850
2023-04-19T18:11:31.567
297627
297627
[ "r", "variance", "random-forest", "importance", "permutation" ]
613447
1
613564
null
0
56
I have highly seasonal data with a little more than 1000 observations and a seasonal cycle of 67. I am using auto_arima in Python, and for some reason when I perform a stepwise search I get a memory error after the stepwise search has fitted a few models. Why could this be happening? Here is a plot of the data with an earlier ARIMA model with Fourier terms that I've used: [https://imgur.com/a/ZGtJgow](https://imgur.com/a/ZGtJgow)
Auto Arima failing with highly seasonal data
CC BY-SA 4.0
null
2023-04-19T12:21:15.330
2023-04-20T13:51:01.980
2023-04-19T13:06:02.610
385599
385599
[ "time-series", "forecasting", "arima", "seasonality" ]
613448
1
null
null
0
34
I want to calculate the perplexity of my language model. To avoid underflow, I stored the logs of the probabilities. Since the probabilities are between 0 and 1, their logs are negative. So when I want to calculate the perplexity and the number of words is even, the perplexity is not even a real number (an even root of a negative number). On the other hand, according to its formula: $$PP(W) = (\frac{1}{p(w_1w_2\dots w_N)})^\frac{1}{N}$$, since the sum of log probabilities is greater than 1 in magnitude, the perplexity becomes less than 1. But I also saw that perplexity is $2^{\text{entropy}}$, so it can't be less than one. Would you please help me with understanding these problems?
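A small numeric sketch of the quoted formula when working with stored log probabilities (toy values): the per-word log probabilities are averaged first and only exponentiated at the end, which avoids underflow, never takes a root of a negative number, and always yields a value of at least 1.

```python
import numpy as np

logp = np.log([0.1, 0.25, 0.05, 0.2])   # assumed per-word probabilities of a 4-word sentence
N = len(logp)

perplexity = np.exp(-logp.sum() / N)    # PP(W) = p(w_1 ... w_N) ** (-1/N)
# with base-2 logs this is equivalently 2 ** (-mean of log2 probabilities), i.e. 2 ** entropy
print(perplexity)                       # about 7.95, and >= 1 whenever every p(w) <= 1
```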
Perplexity with log probabilities
CC BY-SA 4.0
null
2023-04-19T12:39:03.647
2023-04-19T12:39:03.647
null
null
373489
[ "natural-language", "language-models", "perplexity" ]
613449
1
null
null
0
14
Four samples were analyzed daily for 5 days. It is known that time influences the values of the analyses, because the samples are sensitive to the passage of time. Here is a table illustrating the results of this experiment: |Days |Sample 1 |Sample 2 |Sample 3 |Sample 4 | |----|--------|--------|--------|--------| |Day 1 |2,7 |6,6 |2,3 |2,1 | |Day 2 |3,2 |9,0 |2,2 |2,2 | |Day 3 |3,3 |10,4 |1,9 |2,1 | |Day 4 |4,1 |13,5 |2,6 |2,2 | |Day 5 |4,7 |14,8 |2,1 |2,3 | I am torn between Friedman's test, which I believe could be useful because it takes the ordering of the data into account (each passing day matters in this experiment), and the Kruskal-Wallis test, which can also be used. My statistical knowledge is not advanced and I do not know which test is more appropriate for analyzing this experiment.
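For reference, this is how each of the two candidate tests would be applied to the table above with SciPy (decimal commas read as decimal points; this only shows the mechanics, not which test is appropriate): each sample's five daily values are passed as one group.

```python
from scipy import stats

sample1 = [2.7, 3.2, 3.3, 4.1, 4.7]
sample2 = [6.6, 9.0, 10.4, 13.5, 14.8]
sample3 = [2.3, 2.2, 1.9, 2.6, 2.1]
sample4 = [2.1, 2.2, 2.1, 2.2, 2.3]

# Friedman: the samples are the "treatments", the days act as repeated-measures blocks
print(stats.friedmanchisquare(sample1, sample2, sample3, sample4))

# Kruskal-Wallis: treats the four samples as independent groups and ignores the day ordering
print(stats.kruskal(sample1, sample2, sample3, sample4))
```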
What is the best recommendation for analyzing 4 data sets, each with 10 observations at 10 times?
CC BY-SA 4.0
null
2023-04-19T12:53:10.017
2023-04-19T13:05:57.773
2023-04-19T13:05:57.773
185831
185831
[ "anova", "experiment-design", "multiple-comparisons", "kruskal-wallis-test", "friedman-test" ]
613450
1
613504
null
2
72
I am working on problem set 2, Q4d from the [MIT 6.041](https://ocw.mit.edu/courses/6-041-probabilistic-systems-analysis-and-applied-probability-fall-2010/resources/mit6_041f10_assn02/) > Oscar has lost his dog in either forest A (with a priori probability 0.4) or in forest B (with a priori probability 0.6). On any given day, if the dog is in A and Oscar spends a day searching for it in A, the conditional probability that he will find the dog that day is 0.25. Similarly, if the dog is in B and Oscar spends a day looking for it there, the conditional probability that he will find the dog that day is 0.15. The dog cannot go from one forest to the other. Oscar can search only in the daytime, and he can travel from one forest to the other only at night. > (d) If the dog is alive and not found by the Nth day of the search, it will die that evening with probability $\frac{N}{N+2}$. Oscar has decided to look in A for the first two days. What is the probability that he will find a live dog for the first time on the second day? My conceptualisation (which is wrong): Let $F_i$ be the event the dog is found on day $i$, and $S$ the event the dog survives the first night. We want the intersection of three events $P(F_1^c \cap S \cap F_2) = P(F_1^c)P(S)P(F_2)$ (assuming independence). The dog could be in $A$ or in $B$. Oscar only searches $A$ and it is not stated (but I think the answer assumes) that the dog will not change forests. So $P(F_1^c)$ is the probability that the dog is in $A$ and we don't find it, or the dog is in $B$. Similarly, $P(F_2)$ is the probability we find the dog on day 2, i.e. it is in $A$ and we find it. Therefore: $$P(F_1^c) = P(A)P(F^c|A) + P(B)P(F^c|B) = 0.4 \cdot 0.75 + 0.6$$ $$P(S) = 1 - \frac{1}{3} = \frac{2}{3}$$ $$P(F) = P(A)P(F|A) + 0 = 0.4 \cdot 0.25 $$ $$P(F_1^c)P(S)P(F) = (0.4 \cdot 0.75 + 0.6)(\frac{2}{3})(0.4 \cdot 0.25)$$ However, the result is: > (d) In order for Oscar to find the dog, it must be in Forest A, not found on the first day, alive, and found on the second day. Note that this calculation requires conditional independence of not finding the dog on different days and the dog staying alive. $$P(\text{find live dog in A day 2}) = P(\text{in A}) \times P(\text{not find in A day 1}|\text{in A})\\ \times P(\text{alive day 2}) \times P(\text{find day 2}|\text{in A})\\ = 0.4 \times 0.75 \times (1 - \frac{1}{3}) \times 0.25 = 0.05$$ One difference is that they remove $P(A)$ from the second day; this makes sense, as the dog will only choose what forest to go into once. However, what I really don't understand is the omission of $P(B)$ from the first term (i.e. $0.4 \cdot 0.75$ rather than $0.4 \cdot 0.75 + 0.6$). Why is it okay to disregard this information, as the question gives no indication as to whether the dog is in $A$ or $B$?
Conditional probability - Finding a lost dog on the second, but not first day of a search
CC BY-SA 4.0
null
2023-04-19T13:00:02.557
2023-04-20T03:19:33.880
2023-04-19T23:27:03.920
282433
336682
[ "probability", "self-study", "conditional" ]
613451
2
null
613139
3
null
If the median of the differences in paired values is the statistic of interest, the one-sample sign test is the common hypothesis test that addresses this. In this case, the confidence interval about the median of the differences makes sense to calculate. I assume that for what you are doing, you are using an overall median or historic median as the mu, if you will, the value to which you are comparing groups of values. In terms of effect size, there are a couple of options you might consider. One simple statistic is stochastic dominance. This is simply the proportion of observations (differences) greater than mu minus the proportion of observations less than mu. Simple transformations of this statistic --- but which are usually used in the unpaired two sample case --- are Vargha and Delaney's A, Cliff's delta, and the rank biserial correlation. I'm not sure if a dominance statistic can be coerced to work for your purpose. Another effect size statistic that makes sense is the difference between the median of differences and mu divided by the median absolute deviation (MAD). I propose this statistic here for the unpaired two sample case [stats.stackexchange.com/questions/497295/effect-size-calculation-for-comparison-between-medians/](https://stats.stackexchange.com/questions/497295/effect-size-calculation-for-comparison-between-medians/). This statistic can be used in the one-sample case by simply defining the numerator as the median of differences minus mu, and the denominator as the MAD of the observed values (differences). This statistic is analogous to Cohen's d in the one sample case. And if it's tested on a normal sample, and the corrected MAD is used, as is the default in R, the result will be similar to Cohen's d. `cohenD = (Mean - Mu) / Sd` `mangiaficoD = (Median - Mu) / MAD` Of course, to the heart of the question, what cutoff of any of these effect sizes should you use as analogous to a Cohen's d of 0.35? I don't know. But you might start with some toy data for paired differences, normally distributed, that would yield a Cohen's d of 0.35, and convert it to these different statistics. My proposed `mangiaficoD` = 0.35 makes sense, I think, at least when starting with a normal sample. You might have to play with other distributions and see what you think. I don't know if any of this will result in a procedure that does what you want. It might be a case of trying some of it, and seeing if it approximates what you want. Perhaps starting with some toy data. Here is some code in R for some of these statistics. You can run multiple iterations. If you don't use R, you can run the code here without installing software: [rdrr.io/snippets/](https://rdrr.io/snippets/) ``` Diff = rnorm(30, 1, 0.35) Mu = 0 library(lsr) cohensD((Diff-Mu), mu = Mu) mangiaficoD = median(Diff-Mu)/ mad(Diff) mangiaficoD Dominance = (sum(Diff>Mu) - sum(Diff<Mu))/length(Diff) Dominance library(DescTools) MedianCI(Diff, conf.level = 0.95, method = "exact") cat(median(Diff-Mu), " ", median(Diff-Mu)+mad(Diff), " ", median(Diff-Mu)-mad(Diff)) ```
null
CC BY-SA 4.0
null
2023-04-19T13:02:37.540
2023-04-19T13:02:37.540
null
null
166526
null
613452
1
613459
null
1
59
Consider the sequence $$\gamma(\tau) = \begin{cases} 1 & \text{if } |\tau| \leq K \\ 0 & \text{if } |\tau| > K \end{cases}$$ where $K$ is a positive integer. Is $\gamma(\tau)$ an autocovariance sequence for some discrete-parameter stationary process $Y_t$? The answer is no, but I don't see any valid argument. This would correspond to a process with a Toeplitz covariance matrix with ones on the diagonal and on the $K$ sub-diagonals nearest to it, and it seems to me that I can define a process with such a matrix. Does this mean that if a process is stationary then it has a Toeplitz covariance matrix, but the converse is not true?
Existence of stationary process with a given ACF
CC BY-SA 4.0
null
2023-04-19T13:13:58.380
2023-04-20T01:18:51.987
2023-04-19T14:53:20.240
20519
361672
[ "time-series", "autocorrelation", "stationarity", "covariance-matrix" ]
613454
1
null
null
1
17
All existing link prediction methods (and indeed, all GNN methods in general) rely on edges being present in the input graph, and the method would be able to predict any missing edges. I've seen one paper called edgeless-GNN which explains how to perform link prediction on edgeless 'nodes', alongside an existing graph. My question is, is there a method for performing link prediction on a completely edgeless graph from scratch? Theoretically, I believe this should be possible using inductive methods. The assumption is that the network would be pre-trained on a different graph in the same domain.
Link prediction on completely edgeless graphs
CC BY-SA 4.0
null
2023-04-19T13:29:54.567
2023-04-19T13:29:54.567
null
null
386072
[ "machine-learning", "graph-neural-network" ]
613455
1
null
null
1
12
First off, I am new to predictive modeling and I appreciate any advice. BACKGROUND: I am doing a binomial elastic net where n = 54 and p = 89. This model is for predicting drug effects clinically; thus, prediction by measuring as few variables as possible would be ideal. After Elastic Net variable selection/shrinkage, 24 predictors are included in the model. Of the 24 predictors included in the model, 10 are significant predictors and contribute 96% of the total effects in the model (link to calculation below). QUESTION: Can I generate a second predictive model using only the top 10 predictors from my first elastic net? I would like to demonstrate that as few as 10 variables could be used for prediction. If so, would the second pass be better with elastic net or another model type? CALCULATIONS: I am doing my predictive modeling using JMP Pro statistical software. This software calculates the effect contributions of each variable as they describe [here](https://www.jmp.com/support/help/en/16.2/#page/jmp/assess-variable-importance.shtml?os=win&source=application#ww376802). and briefly here: Independent Resampled Inputs For each factor, Monte Carlo samples are obtained by resampling its set of observed values. Use this option when you believe that your factors are uncorrelated and that their likely values are not represented by a uniform distribution"
Can I do a second pass Elastic Net only including the most significant predictors?
CC BY-SA 4.0
null
2023-04-19T13:30:04.860
2023-04-19T13:36:51.007
2023-04-19T13:36:51.007
362671
386073
[ "regression", "feature-selection", "elastic-net" ]
613456
1
null
null
1
23
Background I am doing a project where I am comparing 5 models. It is a regression problem, where the model uses the 3D structure data of a protein and a drug molecule bound to it (specifically, the Cartesian coordinates of atoms) to predict the binding affinity of the protein-drug complex. My training data set has 3847 examples. My testing data set contains 209 examples and is a representative and diverse set of such protein-drug complexes that is used as a standard benchmark for this regression problem [1](https://pubs.acs.org/doi/full/10.1021/acs.jcim.8b00545). I am not using an 80-20 train-test split. I trained the following models, with these hyperparameters that I tuned with a 10-fold CV grid search on the same training set. - Linear regression (no hyperparameters) - MARS (penalty = 5, max degree = 1) - kNN regression (n_neighbors=13, p=1) - Support vector regression (kernel = 'rbf', gamma = 0.125, epsilon=0.25, C=1) - Random forest (n_estimators=2000, max features=2) The problem I was told that bigger data sets will usually improve the generalization ability of models. To test this I did the following: I defined a set of subset sizes = {384, 768, 1152, .. 3847}, i.e. subsets that were 10%, 20% .. 100% of the training set. For each subset size $n$, I sampled $n$ examples (without replacement) from the training set, trained the above models with the same hyperparameters, tested them on the above test set and computed the RMSE. I repeated this 100 times for each subset size. In the end, I calculated the mean RMSE for each subset size [](https://i.stack.imgur.com/eRPX5.png) Fig: (a) Linear regression (b) Random forest (c) MARS (d) SVM regression (e) kNN Linear regression and RF seem to perform better as the training set approaches its full possible size. The complete opposite trend is observed for SVM and MARS, and kNN appears to perform best for an intermediate size of the training set. I don't know how to explain this pattern. One piece of information that came to mind is the composition of the training set. Here are a few examples from the training set: [](https://i.stack.imgur.com/oQcsa.png) As you can see, some proteins are over-represented in this set (each of these repeated proteins is bound to different drug molecules, so a single protein-drug pair is not repeated, even if specific drugs or proteins are repeated). My guess is that in smaller subsets of the training set, the data set is more diverse and balanced, but becomes more biased as it approaches its full size, and hence some models are overfitting to these repeated examples. If that is the case, it makes sense that the linear regression model is not affected as it is a high bias model, but I can't make sense of the remaining observations. EDIT: The question previously stated that there were 3500 training examples. This was corrected.
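A condensed sketch of the subsampling experiment described above, using placeholder data and a single model (assuming scikit-learn; the number of trees is reduced from 2000 here purely for speed):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(3847, 20)), rng.normal(size=3847)  # placeholders
X_test, y_test = rng.normal(size=(209, 20)), rng.normal(size=209)      # placeholders

sizes = [int(len(y_train) * k / 10) for k in range(1, 11)]   # 10%, 20%, ..., 100%
mean_rmse = []
for n in sizes:
    rmses = []
    for _ in range(100):                                     # 100 random subsets per size
        idx = rng.choice(len(y_train), size=n, replace=False)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train[idx], y_train[idx])
        rmses.append(np.sqrt(mean_squared_error(y_test, model.predict(X_test))))
    mean_rmse.append(np.mean(rmses))
print(dict(zip(sizes, mean_rmse)))
```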
Why are some models performing better with a larger data set, and why are some models performing worse?
CC BY-SA 4.0
null
2023-04-19T13:31:28.423
2023-04-20T04:04:32.423
2023-04-20T04:04:32.423
384812
384812
[ "regression", "overfitting" ]
613457
1
null
null
2
74
I am currently working on a dataset (count data) in which one observation corresponds to one day of monitoring at a site. The overall protocol is to monitor groups of sites along transects. Almost all transects (and by extension almost all sites) were monitored several times. Sometimes, several transects were monitored on a given date. I would like to be able to predict the abundance of the studied species according to multiple landscape and meteorological metrics. To do so, I tried to construct GLMMs (with a negative binomial distribution) using the glmmTMB function: ``` Abundance ~ environmental variables + meteorological variables + Julian day (and its quadratic effect) + year (in factor) + (1 | transect / site) + (1 | date) ``` However, my models suffer from spatial autocorrelation. I tested it with the testSpatialAutocorrelation function from the DHARMa package: ``` groupLocations = aggregate(countData[, which(colnames(countData) %in% c("x","y"))] , list(countData$xy), mean) groupLocations$xy <- unique(countData$xy) res <- simulateResiduals(myModel) res2 <- recalculateResiduals(res, countData$xy) all(unique(res2$group) == groupLocations$xy) testSpatialAutocorrelation(res2,x=groupLocations$x,y=groupLocations$y) ``` I would thus like to correct it, ideally using a covariance structure like those presented in this vignette: [https://cran.r-project.org/web/packages/glmmTMB/vignettes/covstruct.html](https://cran.r-project.org/web/packages/glmmTMB/vignettes/covstruct.html) So I tried to add `exp(pos + 0 | group)` to my model with (1) `countData$pos <- numFactor(countData$x, countData$y)`, and (2) with group being a dummy variable, my site IDs or my date IDs... Whatever the solution I choose, it doesn't seem to work (R runs indefinitely). I guess my problem might come from two main issues: - the fact that I don't fully understand what the "group" variable should be, - the fact that I have several observations at the same location (but at different dates). For information, latitude (y) and longitude (x) are in Lambert 93 in my dataset but I can transform them to WGS84 if needed. EDIT: Here is an example of the type of data I use (but not the real data, which are very large) and my code. Interestingly, with this small dataset, `exp(pos + 0 |group)` seems to work whether I define `group` as a dummy variable or as my site IDs, but with different results. In both cases, the testSpatialAutocorrelation function still detects spatial autocorrelation. ``` library(DHARMa) library(glmmTMB) rm(list=ls()) ####I.
Data creation#### newData <- data.frame(transect=c(rep("Transect_A",8*8),rep("Transect_B",10*3),rep("Transect_C",12))) newData$site <- c(rep(paste0("Site_A",c(1:8)),8),rep(paste0("Site_B",c(1:10)),3),paste0("Site_C",c(1:12))) newData$date <- c(rep("2001-05-03",8),rep("2001-06-25",8),rep("2002-06-04",8),rep("2002-07-15",8) ,rep("2003-04-28",8),rep("2003-05-16",8),rep("2003-06-30",8),rep("2003-07-24",8) ,rep("2001-06-02",10),rep("2002-07-15",10),rep("2003-05-08",10) ,rep("2033-05-08",12)) coordX_TA <- c(524623.6,524379.1,524379.1,524614.5,524877.1,525040.1,525185.0,525112.5) coordX_TB <- c(527023.3,526769.7,526615.8,526652.0,526941.8,527258.7,527430.8,527566.6,527620.9,527783.9) coordX_TC <- c(525329.9,525148.8,525230.3,525447.6,525710.2,525945.7,526126.8,526172.0,526090.5,525927.5,525728.3,525683.0) coordY_TA <- c(6705186,6705041,6704842,6704715,6704806,6704942,6705132,6705358) coordY_TB <- c(6703728,6703593,6703357,6703094,6702941,6703022,6703194,6703420,6703665,6703864) coordY_TC <- c(6700966,6700767,6700486,6700369,6700323,6700360,6700559,6700803,6701057,6701220,6701365,6701102) newData$X <- c(rep(coordX_TA,8),rep(coordX_TB,3),coordX_TC) newData$Y <- c(rep(coordY_TA,8),rep(coordY_TB,3),coordY_TC) newData$abundance <- c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0,0,0,0,0,2,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 ,0,0,7,0,0,0,0,0,0,2,0,0,0,0,0,4,2,4,0,1,12,0,0,0,0,0,4,0,0,0,0,0,0,0,2,3,0,0,0,0,0,0,1 ,0,0,0,1,0,0,0,0,0,0,1,0,9,3,6,11,4,9,12) newData$explanatoryVar <- c(rep(c(0.42899904,0.77110546,0.45891875,2.35145233,0.52373614,0.03604363,-0.20136060,3.23943293),8) ,rep(c(0.21227248,1.31673835,3.63072002,1.29581033,3.55550595,1.36627219 ,1.56734995,-0.11314501,-0.38866418,-0.18451185),3) ,c(-0.39731842,-0.47077649,-0.46944891,-0.18973198,-0.53603923,-0.38094487,0.16941656,0.60947665 ,0.08765610,0.33079765,0.09083049,2.18221584)) ####II. Model without correction#### newData$explanatoryVar_scaled <- scale(newData$explanatoryVar) myModel <- glmmTMB(abundance ~ explanatoryVar + (1|transect/site) + (1|date) , data = newData, family = "nbinom2",na.action="na.fail") summary(myModel) #test Spatial autocorrelation groupLocations = aggregate(newData[, which(colnames(newData) %in% c("X","Y"))] , list(newData$site), mean) groupLocations$site <- unique(newData$site) res <- simulateResiduals(myModel) res2 <- recalculateResiduals(res, newData$site) all(unique(res2$group) == groupLocations$site) testSpatialAutocorrelation(res2,x=groupLocations$X,y=groupLocations$Y) ####III. Model with correction#### newData$pos <- numFactor(scale(newData$X), scale(newData$Y)) # newData$group <- factor(rep(1, nrow(newData))) #group as a dummy variable newData$group <- factor(newData$site) myModelCorrection <- glmmTMB(abundance ~ explanatoryVar + (1|transect/site) + (1|date) + exp(pos + 0 | group) ,data = newData, family = "nbinom2",na.action="na.fail") summary(myModelCorrection) #test Spatial autocorrelation groupLocations = aggregate(newData[, which(colnames(newData) %in% c("X","Y"))] , list(newData$site), mean) groupLocations$site <- unique(newData$site) res <- simulateResiduals(myModelCorrection) res2 <- recalculateResiduals(res, newData$site) all(unique(res2$group) == groupLocations$site) testSpatialAutocorrelation(res2,x=groupLocations$X,y=groupLocations$Y) ```
Spatial autocorrelation correction with glmmTMB
CC BY-SA 4.0
null
2023-04-19T13:38:57.177
2023-04-21T16:20:58.230
2023-04-21T14:42:55.483
323265
323265
[ "mixed-model", "generalized-linear-model", "spatial", "glmmtmb", "spatial-correlation" ]
613458
1
null
null
1
16
I'm not sure how else to put it, but I often use the [sklearn.metrics.classification_report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function in order to measure the performance of various classification models I'm using. Here, the term "support" is used to refer to the true number of samples for each label. In other words, it's the sum of true positives and false negatives. I also often use the number of predicted samples as well, though, in order to get a better picture. This is the sum of true positives and false positives. Is there any term like "support" that is used to refer to this sum? Other than "number of positive calls" as is shown in the Wikipedia page for [positive and negative predictive values](https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values), I can't seem to find anything.
Is there a term used to refer to the total number of positive predictions?
CC BY-SA 4.0
null
2023-04-19T14:01:38.823
2023-04-19T14:01:38.823
null
null
211707
[ "classification", "terminology", "metric" ]
613459
2
null
613452
2
null
To show that such a stationary process does not exist, it suffices to show that $\gamma$ is not non-negative definite (see Definition 1.5.1 and Theorem 1.5.1 of Time Series: Theory and Methods by Brockwell and Davis), which can be done by demonstrating that the covariance matrix $\Gamma$ of $(Y_1, Y_2, Y_{K + 2})$ is not non-negative definite. By assumption, the order-$3$ matrix $\Gamma$ equals \begin{align} \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{bmatrix}. \end{align} It is easy to verify that the determinant of this matrix is $-1$, hence it cannot be non-negative definite (here we used the fact that if a matrix is PSD, then its determinant must be non-negative). To put it in a more comprehensible way, consider the linear combination $X = Y_1 - Y_2 + Y_{K + 2}$. Under the assumption, the variance of $X$ is $1 + 1 + 1 - 2\gamma(1) - 2\gamma(K) + 2\gamma(K + 1) = 1 + 1 + 1 - 2 - 2 + 0 = -1 < 0$, which is absurd.
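A quick numerical confirmation of the two claims above (purely illustrative):

```python
import numpy as np

gamma = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [0, 1, 1]])        # covariance matrix of (Y_1, Y_2, Y_{K+2})

print(np.linalg.det(gamma))          # -1.0, so the matrix cannot be PSD
print(np.linalg.eigvalsh(gamma))     # one eigenvalue is negative

a = np.array([1, -1, 1])             # coefficients of X = Y_1 - Y_2 + Y_{K+2}
print(a @ gamma @ a)                 # -1, the "variance" of X under this gamma
```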
null
CC BY-SA 4.0
null
2023-04-19T14:05:22.880
2023-04-20T01:18:51.987
2023-04-20T01:18:51.987
20519
20519
null
613460
1
null
null
2
66
I want to use the Wasserstein distance from scipy.stats.wasserstein_distance to get a measure of the difference between two probability distributions. However, I do not understand how the support matters here. For example, I would have expected `stats.wasserstein_distance([0,1,0],[1,0,0])` to be 1 (as we need to move a mass of weight 1 by a distance of 1), however it is 0. Why is this? I know this is related to this question: [what does the Wasserstein distance between two distributions quantify](https://stats.stackexchange.com/questions/455532/what-does-the-wasserstein-distance-between-two-distributions-quantify). My understanding is that the Wasserstein distance is the earth mover's distance; thus, it should show a larger distance between delta-function-like probability distributions whose peaks are not aligned but further apart - e.g., the difference between distributions of two variables with cardinality four, [1,0,0,0] and [0,0,1,0], should be higher than between distributions [1,0,0,0] and [0,1,0,0]. The Jensen-Shannon divergence, by contrast, would be the same in both cases. However, in the scipy implementation the Wasserstein distance is zero in both cases also. Edit: This question was closed without an explanation and despite an upvote. Thankfully, I found the answer here: [https://stackoverflow.com/questions/57402842/why-is-the-wasserstein-distance-between-0-1-and-1-0-zero](https://stackoverflow.com/questions/57402842/why-is-the-wasserstein-distance-between-0-1-and-1-0-zero). I also asked on stackoverflow before asking here and have clarified things there, but in short, the Wasserstein distance from scipy does NOT calculate the distance between two distributions P(A) and P(B) where A and B have the same support but different probabilities for each state; rather, it calculates the difference between different samples of a single distribution. Edit 2: You can use the scipy implementation for this but need to use the weights. A summary is here: [https://stackoverflow.com/questions/76049158/wasserstein-distance-in-scipy-definition-of-support/76061410#76061410](https://stackoverflow.com/questions/76049158/wasserstein-distance-in-scipy-definition-of-support/76061410#76061410)
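For reference, the weighted usage mentioned in Edit 2 looks roughly like this (a sketch, assuming a SciPy version whose wasserstein_distance accepts u_weights/v_weights): the first two arguments are the support values and the weights carry the probability masses.

```python
from scipy.stats import wasserstein_distance

support = [0, 1, 2, 3]

# distributions over the same 4-state support: all mass on state 0 vs. all mass on state 2 (or 1)
print(wasserstein_distance(support, support, [1, 0, 0, 0], [0, 0, 1, 0]))  # 2.0
print(wasserstein_distance(support, support, [1, 0, 0, 0], [0, 1, 0, 0]))  # 1.0

# the call from the question treats its arguments as *samples*, hence distance 0
print(wasserstein_distance([0, 1, 0], [1, 0, 0]))  # 0.0
```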
Wasserstein distance (support; scipy implementation)
CC BY-SA 4.0
null
2023-04-19T14:09:11.050
2023-04-21T17:36:53.393
2023-04-21T17:36:53.393
259911
259911
[ "scipy" ]
613461
1
null
null
2
36
I want to model the relationship between time spent watching video ads and purchases. There is a correlation between viewing time and purchase. The longer the viewing time, the higher the purchase intent; and the higher the purchase intent, the longer the viewing time. How can we model such a non-DAG (feedback) relationship to know the impact of the two factors on both parties?
Create a model with non-DAG relationship
CC BY-SA 4.0
null
2023-04-19T14:13:50.200
2023-04-20T19:03:32.970
2023-04-20T19:03:32.970
76484
386076
[ "causality" ]
613462
1
null
null
1
45
I am fitting a latent profile model with the Mclust package in R. After identifying an optimal number of clusters, I would like to identify possible covariates and distal outcomes via logistic/linear regression. The recommended approach is to use a step-3 model (1) as implemented in Mplus or LatentGold. These specialised software packages implement both the maximum likelihood (ML) and the Bolck–Croon–Hagenaars (BCH) approaches. However, neither is implemented in R. I have found only an example implementation of ML by Daniel Tompsett and Bianca L De Stavol in a presentation found online (2), but the instructions for the third step are limited. ML is recommended for categorical variables, while BCH should be more suitable for continuous outcomes. Could someone provide me with guidance on how to do this? Thanks - Bakk, Z., & Kuha, J. (2021). Relating latent class membership to external variables: An overview. The British Journal of Mathematical and Statistical Psychology, 74(2), 340–362. https://doi.org/10.1111/bmsp.12227 - https://www.stata.com/meeting/uk22/slides/UK22_Tompsett.pdf
Implementing bias adjustment for step-3 latent profile analysis in R
CC BY-SA 4.0
null
2023-04-19T14:39:45.307
2023-04-19T14:39:45.307
null
null
313504
[ "latent-variable", "dependent-variable", "finite-mixture-model", "bias-correction" ]
613463
2
null
613250
0
null
In principle, PB regression should also work if there are many points at the origin. Are their coordinates all exactly (0,0)? Which program did you use? In R, you could use the mcr package from CRAN, which has several options for performing PB regression. Kind regards Florian
null
CC BY-SA 4.0
null
2023-04-19T14:48:35.533
2023-04-19T14:48:35.533
null
null
386082
null
613464
1
null
null
1
8
I have a population that I have quantitatively profiled using two different technologies. The result is two data matrices that have the exact same set of individuals as rows, but different features (both in type and number) as columns. I want to know if the two technologies are actually characterizing anything distinct about the samples, or if both approaches are basically giving us the same data. What is a statistic that will capture some measure of "similarity" or "mutual information" between two differently sized matrices? Implementation in R would be ideal. I'm looking for something that would yield an extremely high score if one matrix was just a copy or subset of another, and an extremely low score if the features in one matrix are independent of, or non-inferable from, the other. Most similarity measures I've found can measure between vectors or between matrices of identical dimension, but not between an $N \times M$ matrix and an $N \times P$ matrix. I've considered looking at the distribution of pairwise correlations between features from one matrix to another, but most features will generally be uncorrelated even if one feature is perfectly copied from one matrix to another. Another thought was to look at the distribution of maximum feature-pairwise correlations to see if features from one dataset have any good correlate in the other (rather than on average). But what I'd really like would be a single number to quantify the similarity of differently sized matrices ("similarity" of course open to interpretation).
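A sketch of the "maximum feature-pairwise correlation" idea mentioned at the end (hypothetical matrices, written in Python although the same few lines translate directly to R; not a recommendation of a particular statistic): for each feature of one matrix it records the best absolute correlation achievable with any feature of the other, then averages.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 30))                           # N x M matrix from technology 1
Y = np.hstack([X[:, :10], rng.normal(size=(50, 15))])   # N x P matrix, partly a copy of X

# M x P matrix of Pearson correlations between columns of X and columns of Y
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Ys = (Y - Y.mean(axis=0)) / Y.std(axis=0)
C = Xs.T @ Ys / len(X)

best_match = np.abs(C).max(axis=1)    # per X-feature: best correlate found in Y
print(best_match.mean())              # one crude single-number summary
```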
Measure "similarity" of different feature sets on the same samples
CC BY-SA 4.0
null
2023-04-19T14:50:09.847
2023-04-19T14:50:09.847
null
null
76825
[ "matrix", "similarities" ]
613466
1
null
null
0
19
I am working on a survival analysis, which will provide an input for a decision analytic model (a discrete event simulation that samples time to event from a survival curve). For that, I need shape and scale parameters and corresponding standard errors for three different competing events that can happen to people. The competing events are captured by the factor variable `trans`. As I understand it, the shape and scale parameters for the reference category (i.e. occurrence of event 1) are the estimates in the second column (1.4185, 25.8537). The estimates for both the shape and scale of trans2 and trans3 are, however, log hazard ratios, and I need to multiply exp(est) by the estimate for the reference category to get the shape/scale estimate on the natural scale (is "natural scale" accurate terminology?). For instance, the shape for trans2 would be 0.9693 * 1.4185 = 1.374952. To test this, I rearranged my analysis so trans2 became the reference category, and I indeed received this as the shape estimate for the reference category. What I still do not understand is how the standard errors for the shape and scale of trans2/trans3 can be calculated by converting the SE of their log hazard ratio. For instance, the output below gives me SE for shape(trans2) = 0.0535. In the rearranged analysis, the SE for the reference category is 0.0646. I do not see how to get the SE of 0.0646 using the original output. I found something about a delta method that effectively multiplies the estimate on the natural scale by the standard error of the estimate on the log scale. In the provided example, that would be 1.374952*0.0535, which does not give the SE of 0.0646. Lastly, this is an unadjusted analysis. I am wondering how the shape and scale parameters for the Weibull survival curves would be calculated if more variables, such as sex, were added. To illustrate, I would be interested in extracting the shape and scale of the survival curve for event 2 (trans2) for women. Output 1 ``` Call: flexsurvreg(formula = Surv(time, status) ~ trans + shape(trans), data = long.mild, subset = (long.mild$condition == 0), dist = "weibull") Estimates: data mean est L95% U95% se exp(est) L95% U95% shape NA 1.4185 1.3490 1.4917 0.0364 NA NA NA scale NA 25.8537 24.6060 27.1647 0.6525 NA NA NA trans2 0.3333 0.8661 0.7277 1.0046 0.0707 2.3777 2.0702 2.7308 trans3 0.3333 1.0358 0.8741 1.1974 0.0825 2.8173 2.3968 3.3116 shape(trans2) 0.3333 -0.0312 -0.1360 0.0737 0.0535 0.9693 0.8728 1.0765 shape(trans3) 0.3333 0.1435 0.0206 0.2665 0.0627 1.1543 1.0208 1.3053 ``` Output 2 (rearranged analysis where the original trans2 is the reference category) ``` Call: flexsurvreg(formula = Surv(time, status) ~ trans + shape(trans), data = long.mild, subset = (long.mild$condition == 0), dist = "weibull") Estimates: data mean est L95% U95% se exp(est) L95% U95% shape NA 1.3750 1.2541 1.5076 0.0646 NA NA NA scale NA 61.4722 54.0142 69.9599 4.0565 NA NA NA trans2 0.3333 -0.8661 -1.0046 -0.7277 0.0707 0.4206 0.3662 0.4830 trans3 0.3333 0.1696 -0.0314 0.3706 0.1026 1.1848 0.9691 1.4486 shape(trans2) 0.3333 0.0312 -0.0737 0.1360 0.0535 1.0316 0.9289 1.1457 shape(trans3) 0.3333 0.1747 0.0296 0.3198 0.0740 1.1909 1.0301 1.3769 N = 4434, Events: 1173, Censored: 3261 Total time at risk: 66624 Log-likelihood = -5494.606, df = 6 AIC = 11001.21 ```
Interpretation of the Weibull survival analysis output, converting SE of the log hazard ratio estimate to natural scale
CC BY-SA 4.0
null
2023-04-19T14:58:06.190
2023-04-19T14:58:06.190
null
null
339191
[ "r", "survival", "weibull-distribution" ]
613467
1
null
null
0
55
There is a known positive sequence $M_n>0$ (e.g., $M_n = \log(n)$), where $M_n\to\infty$ as $n \to \infty$. If a sequence $X_n$ is $O_p(1)$, then can I claim $\Pr(|X_n| > M_n ) \to 0$ as $n \to \infty$? It seems to me that this is true because $\Pr(|X_n| > M_n ) = \Pr(|X_n|/ M_n > 1)$ and $X_n/M_n = O_p(1/M_n) = o_p(1)$. By the definition of $o_p(1)$, $\Pr(|X_n|/ M_n > 1)$ must go to zero as $n \to \infty$. Is this correct?
Compare an $O_p(1)$ with some growing numbers
CC BY-SA 4.0
null
2023-04-19T14:59:27.547
2023-04-21T13:21:46.307
2023-04-21T13:21:46.307
20519
386084
[ "probability", "convergence", "asymptotics" ]
613468
1
613470
null
2
89
I'm fitting a model with MLE using `scipy.minimize` (method BFGS). I want to have the hessian to compute its inverse and retrieve the standard error of each parameter. The result of `minimize` contains one approximation of the inverse hessian matrix in `res.hess_inv`. To cross-check with other methods, I computed the hessian inverse using statmodel `approx_hess` and then applying `np.linalg.inv`. It gives a completely different matrix (with some diagonal elements that have up to 2 order of magnitude difference) Finally, I decided to compute the hessian myself since it looks like I can't cross-check the result of the functions above : ``` def approx_hessian(x, f, eps=1e-5, only_diag=False) -> np.ndarray: """ Computes an approximation of the Hessian matrix of a function using the finite difference method. Parameters: x (array-like): The point at which to compute the Hessian matrix. f (callable): The function to compute the Hessian of. The function should take an array-like object as input and return a scalar. epsilon (float, optional): The step size for the finite difference method. Returns: hessian (ndarray): An approximation of the Hessian matrix of func at x. """ n = len(x) hessian = np.zeros((n, n)) # Compute the off-diagonal elements if not only_diag: for i in range(n): for j in range(n): if i == j or i < j: continue # the hessian is symmetrical, so we can compute only half of it p1 = x.copy() p1[i] += eps p1[j] += eps p2 = x.copy() p2[i] += eps p2[j] -= eps p3 = x.copy() p3[i] -= eps p3[j] += eps p4 = x.copy() p4[i] -= eps p4[j] -= eps hessian[i][j] = (f(p1) - f(p2) - f(p3) + f(p4)) / (4 * eps ** 2) hessian = hessian + hessian.transpose() # fill the element under the diagonal with the same numbers # compute diagonal elements for i in range(n): p_forward = x.copy() p_forward[i] += eps p_backward = x.copy() p_backward[i] -= eps hessian[i][i] = (f(p_forward) - 2 * f(x) + f(p_backward)) / (eps ** 2) return hessian ``` I test my implementation by computing manually double derivatives on a simple function having 3 parameters : ``` def test_approx_hessian(self): # example taken from https://www.allmath.com/hessian-matrix-calculator.php def fn(params: np.ndarray) -> float: x, y, z = params return 2 * y * x ** 4 + 2 * y ** 3 + 2 * x * z ** 3 def expected_hessian(p: np.ndarray) -> np.ndarray: x, y, z = p return np.array([ [24 * x ** 2 * y, 8 * x ** 3, 6 * z ** 2], [8 * x ** 3, 12 * y, 0], [6 * z ** 2, 0, 12 * x * z], ]) point = np.array([1.0, 2.0, 3.0]) hessian_expected = expected_hessian(point) hessian_approx = approx_hessian(point, fn) self.assertTrue(np.allclose(hessian_expected, hessian_approx, atol=1e-4)) ``` This test passes, so I believe my function works. However when I compute the hessian inverse using my implementation, it gives a matrix that is not only still completely different from the 2 others, but also with diagonal elements that are negative ... (while they should be variances, so the diagonal elements are supposed to be positive as far as I know) Are maths doing me a prank ? What am I doing wrong ? What function should I "trust" to have finally a standard error of my parameters ?
scipy minimize gives a hess_inv that is completely different from inv(statmodel.approx_hess)
CC BY-SA 4.0
null
2023-04-19T15:05:25.280
2023-04-19T15:23:08.047
null
null
372184
[ "standard-error", "fisher-information", "hessian" ]
613469
1
null
null
0
98
I have questions about GPT-2. What is the context window used in GPT-2 pre-training? As far as I know, the language modeling objective GPT-2 uses is prefix language modeling, as illustrated in the figure from the T5 paper below: [](https://i.stack.imgur.com/rrcwp.png) And the formula of the GPT model is shown below: [](https://i.stack.imgur.com/hp9rO.png) So I would like to know the value of k (the context size) used in the initial pre-training stage of GPT-2. And if GPT is not a prefix language model, then what is it?
What is the context window size in GPT-2 pre-training?
CC BY-SA 4.0
null
2023-04-19T15:16:25.773
2023-04-19T15:17:17.843
2023-04-19T15:17:17.843
386083
386083
[ "language-models", "pre-training", "gpt" ]
613470
2
null
613468
1
null
You should not use the output from BFGS as the true inverse of the Hessian. The reason for this is that BFGS does not approximate the Hessian based on the current rate of change of the derivatives of the target function. Rather, it builds up a useful approximation by tracking the change of the first derivative at each step of the optimization algorithm. I believe it also uses some regularization to keep the step sizes in check. Note that without regularization, you can't have an invertible Hessian based solely on first-order differences of the gradient if you've got fewer steps than parameters of your model. This means, at best, you have something like the inverse of a weighted average of the changes of the first derivative through the different iterations of your algorithm, which may be very different from the inverse of the Hessian at the MLE. At worst, you have a matrix that's heavily regularized so as to make sure that your next step isn't too large, where the regularization has nothing to do with the uncertainty of the parameters of your model. Wikipedia [cautions](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm) about this issue, but I think they understate it: > In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix[citation needed]. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix
null
CC BY-SA 4.0
null
2023-04-19T15:23:08.047
2023-04-19T15:23:08.047
null
null
76981
null
613471
1
null
null
0
30
I am attempting to run a Mixed-Effects Logistic Regression in R, using the glmer function to predict the choice (0 = reject vs. 1 = accept) of a certain offer in an experimental task on a trial-by-trial structure. The predictors (or fixed effects) include the condition (3-level factor), the reward (numeric) and the costs (numeric) of the current trial. I'm also using a random intercept for the subject. Initially, when running the full model with all interactions, I found that all predictors were highly significant, which was surprising. ``` glmer(Choice ~ Condition * Reward * Cost + (1|Subject), data = data, family = 'binomial', control=glmerControl(optimizer="bobyqa",optCtrl=list(maxfun=2e5))) ``` The model failed to converge, thus, I tried a simpler model by gradually adding more predictors to identify the problem. It seems that the model only converges and gives reasonable results if I do not include the Condition x Reward interaction. Running the model without this interaction, I get these results: ``` (Intercept) 3.15906 0.16856 18.742 < 2e-16 *** CondB -0.07714 0.08023 -0.962 0.33628 CondC -0.14820 0.19012 -0.780 0.43568 COST -0.72789 0.07212 -10.093 < 2e-16 *** REWARD 4.69425 0.07540 62.257 < 2e-16 *** CondB:COST 0.08082 0.03530 2.289 0.02206 * CondC:COST 0.11315 0.03602 3.142 0.00168 ** COST:REWARD -0.49778 0.05721 -8.701 < 2e-16 *** ``` When running the model without this interaction, I obtain results that show no convergence issues. However, adding the interaction effect causes the model to fail to converge and gives different estimates for the main effect of condition and each condition interaction (e.g., -0.79 instead of -0.07 for Condition B), while the cost and reward estimates are not changed to the same degree. ``` (Intercept) 3.92337 0.07834 50.079 < 2e-16 *** CondB -0.79188 0.06142 -12.893 < 2e-16 *** CondC -1.43633 0.07344 -19.558 < 2e-16 *** COST -0.77109 0.05469 -14.100 < 2e-16 *** REWARD 5.97786 0.06192 96.546 < 2e-16 *** CondB:COST 0.13214 0.03280 4.028 5.62e-05 *** CondC:COST 0.21043 0.03306 6.366 1.94e-10 *** COST:REWARD -0.46865 0.04644 -10.091 < 2e-16 *** CondB:REWARD -1.19364 0.07265 -16.430 < 2e-16 *** CondC:REWARD -2.28706 0.06707 -34.098 < 2e-16 *** ``` I am unsure of the problem and would appreciate any ideas or suggestions. Thank you in advance!
Adding interaction effect in glmer model leads to convergence issues and unreasonable results
CC BY-SA 4.0
null
2023-04-19T15:27:20.360
2023-04-19T15:27:20.360
null
null
386085
[ "r", "regression", "convergence" ]
613472
1
null
null
2
34
This is a theoretical question about decision trees. I believe that the question is very explicit in the title of the question. My thoughts on this question follow below. If the region we are dealing with can be separated using directions that are orthogonal to the axes, then we can model such a region with a decision tree that has 100% accuracy. Some examples of such regions are the two pictures below. [](https://i.stack.imgur.com/tKBMbs.png) [](https://i.stack.imgur.com/K8aZAs.png) On the other hand, if the region we are dealing with is not separable using directions that are orthogonal to the axes, then we might assume that we CAN'T model this region with a 100% accuracy decision tree. Below I present an example of such a region. [](https://i.stack.imgur.com/tNwBts.png) Essentially, what I am asking is: > Let's say we are given some two-dimensional region. When can we guarantee that this region we are given can be modeled by a decision tree that has 100% accuracy? Is my thinking presented above correct? Thanks for any help in advance.
Exactly when can I model a two-dimensional region with a 100% accuracy decision tree?
CC BY-SA 4.0
null
2023-04-19T15:34:59.127
2023-04-19T16:00:27.010
2023-04-19T16:00:27.010
383130
383130
[ "self-study", "accuracy", "decision" ]
613473
1
null
null
0
151
As I understand it, the BERT model calculates its loss by comparing each original word that was masked with the predicted result, and that loss is used to update the weights of the pre-trained model. GPT-2, on the other hand, uses the technique of predicting the next word from the previous words, which means the model calculates the loss by comparing the original (next) word with the predicted result, as in the example below. [](https://i.stack.imgur.com/ZsPo1.png) So I would like to know whether my understanding of how the loss is calculated in GPT-2 pre-training, i.e. the method shown in the picture above, is correct or not.
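A small sketch of the next-token loss computation described above (toy values, plain NumPy rather than the actual GPT-2 code): the prediction made at position t is scored against the true token at position t + 1, and the per-position cross-entropy losses are averaged.

```python
import numpy as np

vocab_size, seq_len = 10, 5
rng = np.random.default_rng(0)

tokens = rng.integers(0, vocab_size, size=seq_len)       # the original token sequence
logits = rng.normal(size=(seq_len, vocab_size))          # model outputs, one row per position

# shift by one: logits[t] is the prediction for tokens[t + 1]
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))   # log-softmax
nll = -log_probs[np.arange(seq_len - 1), tokens[1:]]     # log-prob of the true next token
loss = nll.mean()                                        # averaged cross-entropy
print(loss)
```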
How to calculate loss in pre-training gpt-2
CC BY-SA 4.0
null
2023-04-19T16:06:46.327
2023-04-19T16:07:39.887
2023-04-19T16:07:39.887
386083
386083
[ "natural-language", "loss-functions", "pre-training", "gpt" ]
613474
1
613503
null
1
61
I haven't done hypothesis testing in a long time, so please bear with me. I am trying to understand whether my results are statistically significant per category and for the aggregated totals, and to find a 95% confidence interval. I have different categories of shipped inventory, and part of that inventory is damaged. How do I know whether this damage rate differs significantly, and whether the damaged sample is big enough to draw a conclusion? I am not sure how to proceed, since for each category there is only a single observed count, and I don't know how to apply the approaches described in this [post](https://stats.stackexchange.com/questions/263516/how-to-calculate-confidence-intervals-for-ratios?newreg=8376a717dcfd400d96878139b72cc470) to a single number. EDIT: I am adding more data as requested in the comments. I want to test whether there is a significant difference between Damaged_A % and Damaged_B % at the category level and then in total. A and B stand for the different types of packaging that similar inventory was sent in. ``` Category Shipped_A Damaged_A Damaged_A % Shipped_B Damaged_B Damaged_B % Books 4722 65 1.38% 3400 45 1.4% Kitchen 53832 129 0.24% 27534 340 1.2% Total 58554 194 1.3% 30934 385 1.2% ```
Statistical relevance and confidence interval
CC BY-SA 4.0
null
2023-04-19T16:22:34.743
2023-04-19T22:52:30.823
2023-04-19T17:47:15.807
386090
386090
[ "statistical-significance", "confidence-interval" ]
613475
1
null
null
0
29
A traditional machine learning validation strategy is to train on some data and check performance on some holdout data. When data are time-dependent, an obvious way to proceed is to train on early periods and test on later periods, say train on data from $2000$ to $2011$ and then check performance in $2012$, $2013$ and $2014$ (those are the real years for which I am likely to have data). A technique from biostatistics is to train on a bootstrap sample of the data and then test on the original data. An advantage of this is that valuable training data are not allocated to a holdout set. However, when it comes time to bootstrap this, the way to proceed is not clear. Do we bootstrap everything and use the future to predict the past? That seems problematic. What would be the way to do this for the [panel-data](/questions/tagged/panel-data) I seem to have? [This](https://stats.stackexchange.com/q/125614/247274) seems related but not a duplicate.
Bootstrap validation for panel data machine learning
CC BY-SA 4.0
null
2023-04-19T16:34:04.977
2023-04-19T22:16:17.033
2023-04-19T22:13:57.033
247274
247274
[ "machine-learning", "predictive-models", "panel-data", "bootstrap", "validation" ]
613476
1
null
null
0
15
I am currently running an agricultural field study using a 2x2x3 factorial design. I need to collect samples for measuring soil bulk density, which takes a lot of time. There are two sites, 4 replicates per treatment group, and controls added for reference, so in total there are 128 plots to collect a single sample from. This would be a large undertaking, so I would like to conduct a power analysis to determine whether I can collect fewer samples, e.g. 2 replicates per treatment group. Is it difficult to conduct a power analysis for factorial designs? I found the R package `Superpower` ([link to info](https://cran.r-project.org/web/packages/Superpower/vignettes/intro_to_superpower.html)), which I have yet to dive into. Are there other options for conducting this analysis? I can also use SAS for this purpose. In the event someone who reads this knows how to do this analysis efficiently, I'll provide some information and the desired parameters. Soil bulk density (BD) is measured as the dry weight of soil divided by the known volume. BD can vary by soil type, but for the soil I'm working on it ranges from about 1.5-1.9 grams per cubic centimeter (g/cm^3). Based on [this tutorial](http://www.biostathandbook.com/power.html), I would like to be able to detect an effect size of 5%, with an alpha of 0.05 or 0.1, a power of 80%, and a standard deviation of 0.1 g/cm^3 observed from prior data.
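As a rough, hedged starting point (not a full factorial power analysis; the numbers below are taken from the description above): a 5% effect on a bulk density of around 1.7 g/cm^3 is an absolute difference of roughly 0.085 g/cm^3, so a simple two-group calculation with `power.t.test()` gives a ballpark for the replication needed per comparison. `Superpower` or simulation can then refine this for the full 2x2x3 structure.

```r
# ballpark per-contrast sample size for detecting a 5% change in BD (~0.085 g/cm^3)
# with sd = 0.1 g/cm^3, alpha = 0.05 and power = 0.80
power.t.test(delta = 0.05 * 1.7, sd = 0.1, sig.level = 0.05, power = 0.80)
```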
Minimum replications for adequate testing power with factorial design
CC BY-SA 4.0
null
2023-04-19T17:12:17.947
2023-04-19T21:08:12.280
2023-04-19T21:08:12.280
11887
386087
[ "r", "statistical-power", "fractional-factorial" ]
613477
2
null
219013
3
null
Until 5 years ago, the most popular rule of thumb was that the number of imputations should be equal to the % of missing information, but it turns out it's not a linear relationship. It's quadratic. Plus the number of imputations depends on how much random noise you can tolerate in your estimates; more imputations --> less noise. In 2020, von Hippel (that's me) published a paper explaining how to calculate the number of imputations you need given your tolerance for noise. There's a blog post here that includes links to software implementations in R, Stata, and SAS: [https://statisticalhorizons.com/how-many-imputations/](https://statisticalhorizons.com/how-many-imputations/)
null
CC BY-SA 4.0
null
2023-04-19T17:19:22.723
2023-04-19T17:19:22.723
null
null
386092
null
613478
1
null
null
1
39
When comparing differences in samples (e.g., a difference in medians) between two groups, I am - adjusting the resample size to account for the finite populations of the groups, - pooling all of the samples together, - sampling from the pooled data (with replacement) to create two new groups of data, - calculating the median of each group and then the difference of the medians, and - using the distribution of the resulting difference to understand how likely the observed difference between groups would be under random chance. Suppose I'm at a middle school where I give the same lecture to both class 1 and class 2, with respective class sizes of 15 and 20 students. I suspect that class 2 likes the course better since I teach that class after I have had my coffee. To assess attitude between the classes, I survey 5 students in class 1 and 10 students in class 2. The responses from class 1 are {1,2,3,4,5}. The responses from class 2 are {2,3,4,5,6,7,2,3,4,5}. I want to know whether the attitudes of the two classes taught by this teacher are different, say by more than a certain value x. (In this example, I happen to have ordered categorical responses--say a survey response from 1 to 7--but we can switch this to a continuous variable like grades if needed.) A.C. Davison and D.V. Hinkley's "Bootstrap Methods and their Application" provides a way to modify the resample size when bootstrapping statistics that estimate a population quantity, where the population has a known, finite size (p. 92). For example, given a finite population size, we can adjust the resample size upwards to $n'$ where $n'=(n-1)/(1-n/N)$. Here, $N$ is the population size. Set up and define the inputs:
```
import numpy as np
import plotly.graph_objects as go

responses_1 = [1, 2, 3, 4, 5]                 # median is 3
responses_2 = [2, 3, 4, 5, 6, 7, 2, 3, 4, 5]  # median is 4

population_size_1 = 15
population_size_2 = 20

sam_pop_ratio_1 = len(responses_1) / population_size_1
sam_pop_ratio_2 = len(responses_2) / population_size_2
```
Approach:
```
def bootstrap_medians_pooled_approach(input_array_1, sam_pop_ratio_1,
                                      input_array_2, sam_pop_ratio_2,
                                      n_resamples):
    # sample 1: adjusted resample size n' = (n - 1) / (1 - n/N)
    adjusted_n_1 = (len(input_array_1) - 1) / (1 - sam_pop_ratio_1)
    base_n_1 = int(adjusted_n_1)
    frac_n_1 = adjusted_n_1 - base_n_1
    # alternate between base_n_1 and base_n_1 + 1 so the average resample
    # size matches the (non-integer) adjusted size
    n_array_1 = base_n_1 + np.random.binomial(1, frac_n_1, size=n_resamples)

    # sample 2 (same setup as above for sample 1)
    adjusted_n_2 = (len(input_array_2) - 1) / (1 - sam_pop_ratio_2)
    base_n_2 = int(adjusted_n_2)
    frac_n_2 = adjusted_n_2 - base_n_2
    n_array_2 = base_n_2 + np.random.binomial(1, frac_n_2, size=n_resamples)

    # pool the observed responses and resample both groups from the pool
    pooled_array = list(input_array_1) + list(input_array_2)

    medians_1 = [np.median(np.random.choice(pooled_array, size=n, replace=True))
                 for n in n_array_1]
    medians_2 = [np.median(np.random.choice(pooled_array, size=n, replace=True))
                 for n in n_array_2]

    # distribution of the difference in medians under pooled resampling
    return np.subtract(medians_2, medians_1)


n_resamples = 10000
bs_pool_delta = bootstrap_medians_pooled_approach(responses_1, sam_pop_ratio_1,
                                                  responses_2, sam_pop_ratio_2,
                                                  n_resamples)

# visualize the distribution of the resampled differences
fig_bsed_pool_deltas = go.Figure()
fig_bsed_pool_deltas.add_trace(go.Histogram(x=bs_pool_delta))

# explore the chance that a difference at least as extreme as a given delta
# might be observed by random chance
deltas = [0.25 * x for x in range(-28, 28)]
bsed_p_values_pool = [np.mean(np.abs(bs_pool_delta) >= abs(d)) for d in deltas]
fig_ps_bs = go.Figure()
fig_ps_bs.add_trace(go.Scatter(x=deltas, y=bsed_p_values_pool))
```
If I weren't adjusting the group sizes for the finite population, I would shuffle the pooled data into new groups (without replacement). By resampling with replacement, how should I interpret the results? Is it still correct to think of the distribution in `fig_bsed_pool_deltas` as showing how likely a given delta is to be observed due to random error? Or is this a misapplication of the technique? One thing that bothers me is that I pool the data, but then use the original group sizes rather than setting the population of each group to the sum of population_size_1 and population_size_2. Note: If I don't pool all results but rather 1) resample median values for group 1, 2) resample median values for group 2, and then 3) form the resulting difference vector, I get a much tighter distribution of the delta. I think I would interpret this as the sampling distribution of the estimated difference in medians, not the probability of getting the result by random chance.
bootstrap resampling technique when comparing medians with finite population size
CC BY-SA 4.0
null
2023-04-19T17:25:44.300
2023-05-06T16:12:07.637
2023-04-19T17:29:33.800
306798
306798
[ "probability", "python", "p-value", "bootstrap", "finite-population" ]
613479
1
null
null
2
24
SUMMARY How does the diagonal rescaling fit into the derivation of a multiplicative update rule for non-negative matrix factorization (NMF)? DESCRIPTION The NMF problem aims to find non-negative matrix factors $W$ and $H$ from a non-negative matrix $V$ such that $V \approx WH$. Having an additive update rule $H \leftarrow H + \eta_{H}[W^TV - W^TWH] \qquad (1)$ and a multiplicative update rule $H \leftarrow H \frac{W^TV}{W^TWH} \qquad (2)$ for $H$, the [paper](https://proceedings.neurips.cc/paper/2000/file/f9d1152547c0bde01830b7e8bd60024c-Paper.pdf) states that if > we diagonally rescale the variables and set $\eta_H=\frac{H}{W^TWH}$, then we obtain the [multiplicative] update rule for $H$ [from the additive rule]. Note that this rescaling results in a multiplicative factor with the positive component of the gradient in the denominator and the absolute value of the negative component in the numerator, i.e. a factor of the form $\theta \leftarrow \theta \frac{\nabla^-_\theta f(\theta)}{\nabla^+_\theta f(\theta)} \qquad (3)$. QUESTION - What is the diagonal rescaling exactly? And what role does it play in the derivation of the multiplicative update rule? I read a post on the derivation here but could not find any explanation of the rescaling. It would be very helpful if someone could explain why rescaling is necessary and the mathematical equations associated with it. - How does rescaling produce a multiplicative factor of the form of Equation (3)? - Lastly, it is said that NMF factors are invariant to rescaling, as the degree of freedom of diagonal rescaling is always present (c.f., here, here). How does this statement relate to the derivation here? Reference: "Algorithms for Non-negative Matrix Factorization" [Lee & Seung (2000)](https://proceedings.neurips.cc/paper/2000/file/f9d1152547c0bde01830b7e8bd60024c-Paper.pdf)
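For what it's worth, the algebra behind the substitution itself is short (this is just my reading, using the element-wise notation of the paper): plugging the element-wise step size $\eta_H = \frac{H}{W^TWH}$ into the additive rule (1) gives

$$H \leftarrow H + \frac{H}{W^TWH}\left[W^TV - W^TWH\right] = H + H\frac{W^TV}{W^TWH} - H = H\,\frac{W^TV}{W^TWH},$$

where all products and quotients are taken element-wise; so the "diagonal rescaling" amounts to giving every element of $H$ its own step size rather than using a single scalar learning rate.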
Understanding diagonal rescaling in multiplicative update rules for NMF
CC BY-SA 4.0
null
2023-04-19T17:56:45.393
2023-04-19T17:56:45.393
null
null
337749
[ "machine-learning", "least-squares", "unsupervised-learning", "matrix-decomposition", "non-negative-matrix-factorization" ]
613480
1
null
null
0
15
I am running a test vs. control on a new promotion - one group will get the offer and one group will not. Previously, we have only run tests on existing customers; however, this time we want to run it on new customers as well. My question is: considering that I know which group each customer belongs to, do I need to account for family-wise error when analyzing the results of both tests, or should I be using a different test entirely instead of treating them as two tests?
Running multiple test vs. control
CC BY-SA 4.0
null
2023-04-19T18:05:44.243
2023-04-19T18:05:44.243
null
null
364764
[ "hypothesis-testing", "anova", "t-test", "familywise-error" ]
613481
1
null
null
0
53
I have data aggregated as a histogram $$ (m_1, c_1), (m_2, c_2), \dots, (m_k, c_k) $$ where $m_1 < m_2 < \dots < m_k$ are the midpoints of the histogram bins and $c_i$ are the counts that sum to the total number of raw datapoints, $\sum_i c_i = N$. The count $c_i$ is for the points within the $[m_i-d, m_i+d)$ range. The histogram bins are evenly spaced, so $m_i + 2d = m_{i+1}$. Such a histogram can be used to create a weighted kernel density $$ f(x) = \sum_i w_i K_h(x - m_i) $$ with weights $w_i = c_i / \sum_j c_j$. The problem is finding an appropriate bandwidth $h$ for the kernel $K_h$. What would be the most promising way of doing this? How would the answer change if instead of a histogram I had an approximate histogram? In the approximate case, the $m_i$ are the bin averages, but they are not evenly spaced. The bins do not have clear-cut boundaries† and the $c_i$ counts are proportional to the number of points surrounding the $m_i$ centers. Given the imprecise and approximate nature of such data, I would be perfectly fine with a decent rule of thumb for bandwidth selection. My main aim is to visualize the histogram in the least-misleading way possible. † By an approximate histogram with non-clear-cut boundaries I mean, for example, a histogram created by taking multiple histograms as described in the first section and then merging them. This could be done by combining the closest bins between the histograms, where in different histograms the bins are centered on different values. In such a case, we would be averaging bins covering different ranges, so the averaged histogram would not have exact boundaries.
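Not an answer to the bandwidth question itself, but to make the setup concrete, here is a small R sketch of the weighted-KDE construction described above (the midpoints and counts are made up; one simple heuristic is to pick the bandwidth from pseudo-data obtained by repeating each midpoint by its count, which slightly understates the within-bin spread):

```r
m      <- c(1, 3, 5, 7, 9)      # hypothetical bin midpoints
counts <- c(4, 12, 25, 9, 2)    # hypothetical counts c_i
w      <- counts / sum(counts)

# rule-of-thumb bandwidth computed from pseudo-data (each midpoint repeated c_i times)
bw <- bw.nrd0(rep(m, counts))

# weighted kernel density f(x) = sum_i w_i * K_h(x - m_i)
dens <- density(m, weights = w, bw = bw)
plot(dens)
```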
How to transform histogram to kernel density?
CC BY-SA 4.0
null
2023-04-19T18:08:17.500
2023-04-20T08:10:39.150
2023-04-20T08:10:39.150
35989
35989
[ "data-visualization", "kernel-smoothing", "histogram", "density-estimation" ]
613482
2
null
486854
0
null
I saw no one answered this so.... You ask "I want to know if I have specified the correct effect size and have I put the effect in the correct direction": The estimate column in the output for `summary(m3)` gives you the estimated beta values. You can read these as "for each unit of the predictor (fire), the response variable (animal.present) changes by this estimate." So, in this particular example, for each unit of fire, you have set the beta estimate to a reduction of 0.7 animals (-0.7). I do not know if that is a meaningful value, but you chose the correct direction. Next time, it would be helpful if you also included the commands you used to generate the output you share.
null
CC BY-SA 4.0
null
2023-04-19T18:28:41.187
2023-04-19T18:28:41.187
null
null
332527
null
613483
1
null
null
0
16
I have a simple problem, but haven't found a solution yet. Let $Y_{i}=\sum_{a} X^{a}_{i}*\omega^{a}_{i}$, where $Y$ is something I calculate, and everything else is observed. $\sum_{a} X^{a}_{i}=1$, as these are the fractions of a day's time spent in activity $a$. $\omega^{a}_{i}\in[0,1]$, as this is the fraction of time spent in activity $a$ outside of the home. $Y_{i}$ is therefore the share of the day spent outside. I am interested in the difference in outcomes between two groups, $M$ and $F$. In particular, I computed estimates for: $\mathbb{E}[Y_{i}|i\in M]-\mathbb{E}[Y_{i}|i\in F]$ and I'd like to produce a summary measure that captures how much of this difference is due to the groups having different distributions of $X$'s and how much is due to them having different distributions of $\omega$'s (something like "x% of the difference is due to differences in ..."). I'm happy to do this manually, via simulations or regressions - as long as the output is straightforward to explain and interpret. Any suggestions or references? Thanks!
Decomposing difference in outcomes between two groups
CC BY-SA 4.0
null
2023-04-19T18:35:20.343
2023-04-19T18:51:09.280
2023-04-19T18:51:09.280
386094
386094
[ "group-differences" ]
613484
1
null
null
0
10
Any feature that is represented as a list of 0 or more elements is what I call a `Dynamic` feature. Let us suppose an example where there are 10 million movies and a user feature sample looks like: ``` Age Gender County Movies-Liked 1. 25 M USA [Movie-1934, Movie-135, Movie-13] 2. 12 F UK [Movie-1] 3. 63 M IND [] 4. 34 F UK [Movie-135, Movie-1] ``` In this case the feature named `Movies-Liked` is a Dynamic feature. Let's suppose there are 2 different tasks: - Merchandise (books, t-shirts etc.) recommendation to the user - User Classification and I want to use `Movies-Liked` as a feature in both of them. What are all the ways to use it as a feature? Using `OHE` of all the movies separately is not an option at all, because one-hot encoding of 10 million movies is not practically possible.
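For concreteness, one representation that avoids a dense one-hot encoding of 10 million movies is a sparse "multi-hot" matrix with one column per movie and non-zero entries only for the liked movies (a small hypothetical sketch in R, built from the toy table above, not a recommendation of a specific approach):

```r
library(Matrix)

# hypothetical user/movie index pairs taken from the table above
user_id  <- c(1, 1, 1, 2, 4, 4)
movie_id <- c(1934, 135, 13, 1, 135, 1)

# 4 users x 10 million movies, but only 6 non-zero entries are actually stored
X_movies <- sparseMatrix(i = user_id, j = movie_id, x = 1,
                         dims = c(4, 10e6))
dim(X_movies)
```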
What are the ways to use a LIST of features which are DYNAMIC (contents) in nature?
CC BY-SA 4.0
null
2023-04-19T18:58:54.693
2023-04-20T04:43:46.753
2023-04-20T04:43:46.753
262302
262302
[ "machine-learning", "neural-networks", "feature-selection", "feature-engineering", "recommender-system" ]
613485
1
null
null
0
29
Does an $ARMA(p,q)$ process need to be invertible and have a causal stationary solution to be written in the $MA(\infty)$ representation? And if you write the process in terms of $Z_t$ instead of $X_t$, do you then get the $AR(\infty)$ form?
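For reference, my understanding of the standard textbook statement (e.g., Brockwell & Davis), writing the model as $\phi(B)X_t=\theta(B)Z_t$:

$$X_t=\psi(B)Z_t,\quad \psi(z)=\frac{\theta(z)}{\phi(z)} \qquad\text{(MA($\infty$) form, needs causality: } \phi(z)\neq 0 \text{ for } |z|\le 1\text{)},$$

$$Z_t=\pi(B)X_t,\quad \pi(z)=\frac{\phi(z)}{\theta(z)} \qquad\text{(AR($\infty$) form, needs invertibility: } \theta(z)\neq 0 \text{ for } |z|\le 1\text{)},$$

so, as I read it, causality is what gives the MA($\infty$) representation with absolutely summable coefficients, while invertibility is the separate condition needed for the AR($\infty$) representation.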
Does an $ARMA(p,q)$ process need to be invertible and have a causal stationary solution to be written in the $MA(\infty)$ representation?
CC BY-SA 4.0
null
2023-04-19T19:14:48.693
2023-04-19T19:14:48.693
null
null
365933
[ "time-series", "arima", "stationarity", "autoregressive", "moving-average" ]
613487
1
null
null
1
20
At some point in the history of statistics, there surely was a transition from thinking of statistical measures strictly as imperfect approximations of real quantities, to thinking of them as estimates of unobservable parameters of abstract population distributions. Similar transitions, from the material to the abstract, have occurred in other fields: the shift in mathematics credited to Pythagoras, say, or the shift in philosophy attributed to Plato. To whom can we attribute the paradigm shift in inferential statistics? I don't think Bayes would qualify, given his view of probability as a belief. Graunt seems slightly early, Fisher and Pearson far too late. My inclination is toward more pure mathematicians, like Pascal, Montmort, and the Bernoullis.
Who first described a statistical estimate as an approximation of a population parameter?
CC BY-SA 4.0
null
2023-04-19T19:56:54.687
2023-04-19T19:56:54.687
null
null
298128
[ "mathematical-statistics", "inference", "frequentist", "games", "history" ]
613489
1
613825
null
5
187
I am reading [this article](https://training.cochrane.org/handbook/archive/v6.2/chapter-10#_Ref180058364) which says that "when heterogeneity is present, a confidence interval around the random-effects summary estimate is wider than a confidence interval around a fixed-effect summary estimate." This is with regards to a meta analysis approach where the outcome is a particular study's overall effect (e.g. mean treatment effect). A fixed effect model will just be a regression model that assumes differences between studies can be attributed to random error $\epsilon$. A random effects model will assume each study has a somewhat different effect, which is mean of all the study effects plus a realization from some random effect distribution. What I am really confused about is why the random effects model will give larger standard errors to, for example, study level variables used in the meta analysis regression than a fixed effect model will. I know this can happen in random effects models, but often one of the reason for using random effects models is to gain power to detect effects. Can anyone explain what is going on here? Edit: To add to the discussion in the comments, here is an example in R of a case where adding a random intercept decreases the standard error of the fixed effect, Age. ``` library(lme4) data(Orthodont,package="nlme") fixed=lm(distance~age,data=Orthodont) random=lmer(distance ~ age + (1|Subject), data=Orthodont) summary(fixed) summary(random) ``` In both cases, the fixed effect estimate is 0.66, but in the fixed effect model, the standard error for age is 0.1092, while it is 0.06161 in the random effect model.
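In case it is useful to see the contrast in an actual meta-analytic setting (rather than the lme4 example above), here is a hedged sketch with the `metafor` package and its built-in BCG dataset; the moderator's standard error can be compared between the fixed-effect and random-effects fits (this is only an illustration, not a claim about which will be larger in general):

```r
library(metafor)

# log relative risks and sampling variances for the BCG trials
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

fe <- rma(yi, vi, mods = ~ ablat, data = dat, method = "FE")    # fixed-effect meta-regression
re <- rma(yi, vi, mods = ~ ablat, data = dat, method = "REML")  # random-effects meta-regression

coef(summary(fe))  # compare the 'se' column for the moderator
coef(summary(re))
```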
Fixed vs. random effect meta regression
CC BY-SA 4.0
null
2023-04-19T20:11:03.573
2023-05-01T09:09:30.067
2023-04-23T15:45:23.040
352188
352188
[ "mixed-model", "meta-analysis" ]
613490
1
null
null
0
53
For the sample size in R, the default is `sampsize = if (replace) nrow(x) else ceiling(.632*nrow(x))`. What I know is that random forest constructs a large number of trees from random bootstrap samples of the training data. But in R, if we sample with replacement, the sample size equals `nrow(x)`, i.e. we use all the observations. Then it's not random. Can you please explain what is meant by random bootstrap samples when `replace = TRUE`?
sample size in r for random forest
CC BY-SA 4.0
null
2023-04-19T20:53:43.687
2023-04-20T16:56:13.563
null
null
382257
[ "random-forest" ]
613491
1
null
null
0
18
Suppose we have two shops that we treat as equivalent. We want to increase total income, and we want to evaluate some hypothesis. The overall goal is to find a feature that maximizes the total sum of payments, even if the count of sales decreases, so we are interested in the total sum of individual sales over a period of time. Suppose the size of customer payments is generated by a fat-tailed distribution, possibly even without a mean (a Pareto-type process), so we cannot compare the means or medians of the two samples; the only thing that needs to be compared is the two total sums over the same period of time. How can we properly organize this type of experiment, where we need to evaluate the statistical significance of the change in the aggregate value, not a mean, frequency, etc.? Is there a generally accepted approach to solving such problems?
Hypothesis test for aggregate values
CC BY-SA 4.0
null
2023-04-19T20:54:03.580
2023-04-19T20:54:03.580
null
null
367478
[ "hypothesis-testing" ]
613493
1
null
null
1
26
I have one model that I fit using MLE. In order to get a feel for what could be a good initial guess for the fitting process, I try different arbitrary but sensibly chosen initial guesses (not random) and report the final estimate for each of them, along with its associated loss (i.e., negative log-likelihood). Now that I have this set of estimates with their losses (one from each initial guess), I take the one whose loss is minimal and I want to compare all the other estimates to it. Although we cannot really say that this estimate $\theta^*$ is truly the best in the parameter space, let's assume that it is (because the parameter space is huge and my problem is too complicated to find an analytical solution). When I compare the other estimates to this assumed best estimate $\theta^*$, I want to know which estimates are significantly different from $\theta^*$. Indeed, if the loss associated with $\theta^*$ is 850 and another initial guess gives an estimate with loss 850.001, I don't want to discard it from the "good" initial guesses that I want to retrieve in the end. From what I see on the Internet, performing a paired t-test on these estimates is not appropriate in the context of MLE. I saw methods like the Wald test. I wanted to perform a Wald test with the restricted subset of the parameter space $\Phi$ being $\{\theta^*\}$. However, in my case I cannot always retrieve the covariance matrix at the estimate, because the FIM at the estimate is often not positive definite (one eigenvalue < 0), which leads to negative variances in the inverse of the Hessian, which is then not usable. And since the Wald test needs the covariance matrix of the tested estimate, I cannot use it for about 75% of my estimates (25% still have a covariance estimate from the FIM, though). Then I saw the log-likelihood ratio test, which only needs the likelihoods of the two estimates that I have. However, I saw that this test measures the goodness of fit of nested models: the chi-square distribution has $k$ degrees of freedom, $k$ being the difference between the numbers of parameters of the two models. Since the models are the same in my case (I just want to compare two estimates from the same model), the chi-square distribution would have $k=0$, so I guess it's not meant for my problem. What should I do to compare two estimates from the same model fitted by MLE and see whether they are significantly different, when the covariance matrix cannot always be estimated from the FIM? Is it even possible? If not, what rule should I set to decide whether to discard an initial guess for my model? (Please note that I'm quite new to data science / statistics.)
How to compare 2 estimates from a model fitted by MLE when the covariance of the estimate can't always be estimated?
CC BY-SA 4.0
null
2023-04-19T21:16:16.167
2023-04-19T21:16:16.167
null
null
372184
[ "statistical-significance", "likelihood-ratio", "wald-test" ]
613494
1
null
null
1
21
I understand that it does the opposite of the NORMDIST function, i.e. it maps a cumulative probability back to a value of the normal distribution. But is there a formula that it uses? Is it using the quantile function?
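To the extent I understand Excel's documentation, NORM.INV(p, mean, sd) is indeed the quantile function (inverse CDF) of the normal distribution; a quick way to check this against another implementation is R's `qnorm()`:

```r
# should match NORM.INV(0.975, 0, 1) in Excel (about 1.959964)
qnorm(0.975, mean = 0, sd = 1)
```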
What does the NORM.INV function in excel actually do?
CC BY-SA 4.0
null
2023-04-19T21:25:49.797
2023-04-19T21:25:49.797
null
null
386017
[ "normal-distribution", "excel" ]
613495
2
null
613490
0
null
The code is just an if-then statement. If `replace==TRUE` then random forest just does [bootstrap](/questions/tagged/bootstrap) resampling: for each tree, it draws a sample with replacement from the data with size equal to the original sample size. If `replace==FALSE` then random forest just does random sampling (subsampling): for each tree, it draws a sample without replacement from the data with size equal to `ceiling(.632*nrow(x))`. The number 0.632 might seem like a "magic number" but it is approximately $1-\exp(-1)$, which is an important number in bootstrap samples. See: - Why on average does each bootstrap sample contain roughly two thirds of observations? - Random Forests out-of-bag sample size
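A quick numerical illustration of where the 0.632 comes from (a small simulation, not part of the original answer): the expected fraction of distinct observations appearing in a bootstrap sample of size $n$ is about $1-\exp(-1)\approx 0.632$.

```r
set.seed(1)
n <- 1000
# fraction of distinct observations in each of 2000 bootstrap samples
frac_unique <- replicate(2000, length(unique(sample.int(n, n, replace = TRUE))) / n)
mean(frac_unique)   # close to 1 - exp(-1)
1 - exp(-1)         # 0.6321206
```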
null
CC BY-SA 4.0
null
2023-04-19T21:29:53.973
2023-04-20T16:56:13.563
2023-04-20T16:56:13.563
22311
22311
null
613498
2
null
613475
1
null
I disagree that selecting later data as a hold-out set is an obvious way to proceed (with validation) when you have panel data. This is in fact a different and more stringent way to perform model validation. In some literature, these are distinguished as external validation or temporal validation. To illustrate this point, consider that a cubic curve locally approximates a sigmoidal curve prior to the inflection point, but once you sample data over the entire extent of the curve, the quality of fit deteriorates. If we focus on cross-sectional validation, it's straightforward to conceive of the bootstrap as yet another resampling based approach to developing independent datasets that are expected to have the same internal structure. That is, if you inspect first and second moments comparing datasets that are randomly split p/(1-p) and bootstrap resamples, all univariate and bivariate statistics have the same expectations. The advantage of the bootstrap, which Frank Harrell points out in his text Regression Modeling Strategies, is that many properties of estimators are affected by small samples (feature selection using LASSO, small sample bias in logistic regression, etc.), and so bootstrapping can mitigate these problems by having the same $n$ in the training and validation sets. It is a sound question, therefore, to ask whether a temporal bootstrap can be conducted for a panel analysis that produces a similar sized $n$ in training and validation sets, which enforces the stringency of a temporal validation, but which averts the issues of reducing the effective sample size. One obvious choice is just to apply the date cutoff and then resample observations to achieve a maximal $n$, another obvious choice is to not do that because the point of the exercise is to impose a needful penalty on model complexity. The correct choice would depend on your specific analysis, and perhaps some simulations to explore the operating characteristics of either approach. Of course, the specifics of bootstrapping panel data is already addressed as per your post.
null
CC BY-SA 4.0
null
2023-04-19T22:16:17.033
2023-04-19T22:16:17.033
null
null
8013
null
613499
1
null
null
0
15
In my dataset, `working hours` is weekly working hours. If the person's employed, then working hours > 0. However, I'm not quite sure how to input the data for working hours if the person is not employed. Do I input it as "NA", or 0? In addition, I'm trying to do a logarithm transformation on the data of working hours. If I were to input data as 0 for the unemployed, how should I do the log transformation? Should I keep it as 0, or leave it as NA? Thanks!
What is the appropriate way to input the data of working hours?
CC BY-SA 4.0
null
2023-04-19T22:35:33.947
2023-04-19T22:35:33.947
null
null
336679
[ "dataset" ]
613500
1
null
null
0
17
I am doing some work modeling spatial data with Gaussian Processes. To speed things up, I am using a [compact kernel](https://arxiv.org/pdf/2006.03673.pdf). This makes it so that, over spatial extents that are > the bandwidth of my kernel, the covariance matrix is 0, which is hopefully a very large percentage of the covariance matrix. The usual GP setup: I am trying to model a function $f$, I have data $\mathbf{x}$ and I choose a point $x^*$ at which I'd like to predict $f(x^*)$ using my Gaussian Process model. Suppose that the bandwidth $\theta$ of my kernel function $K$ is learned already, or fixed. Since my kernel function has finite support, we'll say that $x'$ is "within the window of $x$" iff $K(x,x') > 0$. A point estimate for $f(x^*)$ [is given by:](https://en.wikipedia.org/wiki/Gaussian_process#Gaussian_process_prediction,_or_Kriging) $$\widehat{f(x^*)} = K(x^*, \mathbf{x}) K(\mathbf{x}, \mathbf{x})^{-1} f(\mathbf{x})$$ Now, I am having a bit of trouble comprehending the nature of the impact of observed points on predictions not within their "window" - viz. since my kernel has finite support, it would be intuitive to me that my prediction at $x^*$ would depend only on the points within the window of $x^*$. However, the inverse of the covariance matrix does not really have this property: the inverse of the covariance matrix [can't be expected to have the exact same sparsity structure of the covariance matrix](https://math.stackexchange.com/questions/471136/inverse-of-sparse-matrix-is-not-generally-sparse). So, my predicted value for $x^*$ will depend, generally, on my whole dataset including points $f(x)$ for which $f(x^*)$ was supposed to have zero covariance! How can I understand the intuition here that $f(x^*)$ depends on values observed at $f(x)$, where $x$ is not within the window of $x^*$, despite the stipulation that the covariance between $f(x^*)$ and $f(x)$ is zero?
Intuiting the relationships between points in Gaussian Process with compact kernel
CC BY-SA 4.0
null
2023-04-19T22:41:22.207
2023-04-19T22:41:22.207
null
null
143446
[ "gaussian-process" ]
613501
2
null
32439
0
null
Came across the same issue, and @Chris made a great guess. See [the manual in 2000](https://stefvanbuuren.name/publications/MICE%20V1.0%20Manual%20TNO00038%202000.pdf). If you check the warning information, it reports the errors shown below. > Collinearity and instability problems occur in the impute-functions if the predictors are (almost) linearly related. For example, if one predictor can be written as a linear combination of some others, then this results in messages like `Error in solve.qr(a): apparently singular matrix` and `Length of longer object is not a multiple of the length of the shorter object in: beta + rv %*% rnorm(ncol(rv))`. The solution is to eliminate duplicate information, for example by specifying a reduced set of predictors
null
CC BY-SA 4.0
null
2023-04-19T22:47:33.853
2023-04-19T22:48:29.007
2023-04-19T22:48:29.007
386109
386109
null
613502
1
613505
null
2
46
I am trying to perform a robust regression using the lmrob function in R. I am getting this warning message: ``` Warning message: In lmrob.S(x, y, control = control) : S-estimated scale == 0: Probably exact fit; check your data ``` Then I see that 244 of my observations are weighted with 0, while the other 355 are weighted with 1 (see the plot as illustration). My criterion variable is a continuous variable ranging from 1 to 5, but with a lot of mass at 1. In fact, any value other than 1 was treated as an outlier in the `lmrob` model. Regardless of whether it is appropriate to ultimately rely on a linear regression model given these circumstances, is there an intuitive way to adjust the "threshold" for considering an observation an outlier when using this function? I have never worked with such a model or this function before, but it seems like a potential solution to increase the threshold in a way that prevents the model from considering half of the sample as outliers. However, the explanations in the help sections are too technical for me. Or is it more appropriate to exclude outliers before fitting the model? The reason I fitted the robust model was to compare it to the results of a normal OLS regression and a bootstrap regression. [](https://i.stack.imgur.com/PiI7T.png)
Adjust the "Threshold" in a robust regression
CC BY-SA 4.0
null
2023-04-19T22:49:57.453
2023-04-20T11:37:48.557
2023-04-20T11:37:48.557
385930
385930
[ "regression", "outliers", "robust" ]
613503
2
null
613474
0
null
If the goal is to compare A vs. B for one category only, conducting the hypothesis test is easy using a chi-square test of association, or a test of proportions. Note that the table for a chi-square test uses Damaged / Not-damaged, whereas the test of proportions, here, uses Damaged / Shipped. ``` ### Books Damaged = c(65, 45) Shipped = c(4722, 3400) prop.test(Damaged, Shipped, correct=FALSE) ### 2-sample test for equality of proportions without continuity correction ### ### data: Damaged out of Shipped ### X-squared = 0.04157, df = 1, p-value = 0.8384 ### alternative hypothesis: two.sided ### 95 percent confidence interval: ### -0.004549325 0.005609444 ### sample estimates: ### prop 1 prop 2 ### 0.01376535 0.01323529 ### Books Books = matrix(c(65, 4657, 45, 3355), nrow=2, byrow=TRUE) chisq.test(Books, correct=FALSE) ### Pearson's Chi-squared test ### ### data: Books ### X-squared = 0.04157, df = 1, p-value = 0.8384 ``` Likewise, this can be done on the totals as well. There are more sophisticated approaches, namely logistic regression, which could include all the data simultaneously, and then allow for investigating each category. Also, be sure to present the proportions, as you have done. With relatively large sample sizes, you might find significant differences that don't have practical importance because the difference in proportions is relatively small.
null
CC BY-SA 4.0
null
2023-04-19T22:52:30.823
2023-04-19T22:52:30.823
null
null
166526
null
613504
2
null
613450
1
null
Your mistake is assuming independence of $F_1^c$ and $F_2$. Intuitively, failing to find the dog on the first day increases the probability the dog is in Forest B, and therefore increases the probability the second search will also fail. Thus they cannot be independent. By Bayes, $$ P(A|F_1^c) = P(F_1^c|A)P(A)/P(F_1^c)\\ = 0.75\times 0.4/(0.4\times 0.75 + 0.6) = 1/3 $$ Using this new lower probability $$ P(F_2) = P(F_2|F_1^c)P(F_1^c)\\ = P(F_2|F_1^c,A,S)P(A|F_1^c)P(S|F_1^c)P(F_1^c)\\ = P(F_2|A,S)P(A|F_1^c)P(S)P(F_1^c)\\ = 0.25\times(1/3)\times(2/3)\times 0.9 = 0.05 $$ Gives the correct result. Their solution conditions on $A$ first, and then conditions on $F_1^c$. It may be more clear if you included the 0 term they omitted, since $P(F_2|B) = 0$ $$ P(F_2) = P(A)P(F_2|A) + P(B)P(F_2|B)\\ = P(A)P(F_1^c|A)P(F_2|A,F_1^c) + 0\\ = P(A)P(F_1^c|A)P(S|A,F_1^c)P(F_2|A,F_1^c,S)\\ = P(A)P(F_1^c|A)P(S)P(F_2|A,S) $$
null
CC BY-SA 4.0
null
2023-04-19T22:57:26.950
2023-04-20T03:19:33.880
2023-04-20T03:19:33.880
362671
282433
null
613505
2
null
613502
2
null
Robust regression methods with breakdown point close to 50% have the "exact fit property", meaning that if more then 50% of observations can be fitted with variance zero, this is the fit that will be chosen. All the other observations will then be identified as outliers, as the identification is relative to the estimated error variance. The reason why this happens here is that you have so many $y$-values equal to 1 that apparently a regression constant 1 can fit more than half of the observations perfectly. This is what the warning message indicates; discrete data like these are a problem for the S/MM robust regression estimator. Assuming that a linear regression still makes sense (which is questionable), it would be better to tune the initial S-estimator to have a lower breakdown point than 50%, i.e., to enforce that more then 50% of observations need to be taken into account. Unfortunately the documentation is not very clear on how to do this. If I understand things correctly, the parameter `tuning.chi` of `lmrob.control` (`control` argument in `lmrob`) needs to be set larger than the default of 1.54764 to achieve this. I can't tell you how precisely to choose it, but if I were you I'd try out 3 (it should be smaller than the `tuning.psi`-value of 4.685) and see what happens, maybe then look at 2.5 or 3.5 (I think one should try to achieve 25% or 20% breakdown, i.e., enforce the initial S-estimator to take into account at least 75 or 80% of observations; there is a mathematical way how to do this which is probably at least indirectly explained in the "Robust Statistics Theory and Methods" book by Maronna, Martin, and Yohai; the larger `tuning.chi`, the more observations are taken into account, but unfortunately it's not a linear relation). It'd be experimental on your behalf as I don't have the time to test this right now. Do not remove outliers before fitting. It may however be that you don't have an outlier problem (the robust outlier identification is apparently led astray by the "exact fit") and OLS-estimation may work just fine for these data.
null
CC BY-SA 4.0
null
2023-04-19T23:19:52.547
2023-04-20T09:24:17.883
2023-04-20T09:24:17.883
247165
247165
null
613506
1
null
null
1
24
I was curious whether there is anything in the literature about estimating or propagating uncertainties when the desired result comes from the minimization of an expression. I have several books on uncertainty propagation, but they do not talk about such problems. I suspect that in the statistics literature this problem has a specific name I am not aware of. I do not think this is a regression problem, so the methods for those problems do not apply. I was thinking of changing the parameter's value until the quantity being minimized changes by more than a desired tolerance. This tolerance would be estimated from the error propagation of the function being minimized, because the parameters are not derived from any measured quantities. The measured quantities and the unknowns are both used in the expression being minimized.
Estimating uncertainties in parameters from minimization of an expression
CC BY-SA 4.0
null
2023-04-19T23:24:40.267
2023-06-02T05:40:33.453
2023-06-02T05:40:33.453
121522
141436
[ "optimization", "measurement-error", "error-propagation" ]
613507
1
null
null
3
60
## The observational case For observational data, [Hernán & Robins](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/) (2023, p. 49) state: > In the absence of marginal randomization, [computing the average causal effect in the entire population] requires adjustment for the variables $L$ that ensure conditional exchangeability of the treated and untreated Adjustment can then be achieved via regression adjustment in an outcome model, standardization, stratification/restriction, matching or IP weighting. When using matching or weighting, for example, packages offer a multitude of methods and diagnostics to evaluate the degree of balance achieved. If our generated pseudo-population is greatly imbalanced, we will consider our estimate most surely biased and, thus, invalid. ## The experimental case In the context of randomized clinical trials, Senn states explicitly in myth number 2 of the article ["Seven myths of randomisation in clinical trials"](https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.5713) (p. 1441): > There is no requirement for baseline balance for valid inference. Likewise, in his guest blog post "[Randomisation is not about balance, nor about homogeneity but about randomness](https://errorstatistics.com/2020/04/20/s-senn-randomisation-is-not-about-balance-nor-about-homogeneity-but-about-randomness-guest-post/)", he unambiguously states: > Randomisation does not produce balance This does not affect the validity of the analysis , as well as, > Balance is a matter of precision not validity --- ## The question How does the process by which data is created alters the role of balance, from one of ensuring validity into one of ensuring precision?
Why is covariate balance fundamental for causal inference in observational data but unnecessary in experimental data?
CC BY-SA 4.0
null
2023-04-19T23:31:09.287
2023-04-23T15:34:31.563
null
null
180158
[ "experiment-design", "causality", "observational-study", "random-allocation", "covariate-balance" ]
613508
2
null
613507
2
null
## In short Balance is necessary but not sufficient for observational data, as most of the "work" is done by the causal assumptions. Whereas in experimental studies, balance is not as irrelevant as suggested by the quotes from Senn, as observed covariates have to be taken into account even if there's randomization. --- The confusion in the question usually has the implicit erroneous reasoning that in observational data: - Identification of the causal effect requires adjusting for some variables - Adjustment on $L$ will usually involve some balancing of units across levels of $L$ - Thus, the balancing itself is identifying the causal effect by inducing conditional exchangeability Balancing is a statistical property, and it cannot on its own identify a causal effect. We will always need to appeal to causal assumptions. The issue, as common to most attempts of drawing causal inferences, is unobserved confounding. ## The observational case Note that the observational case assumes already that $L$ ensures conditional exchangeability, i.e. that $L$ is a sufficient adjustment set (and thus, that there is no unmeasured confounding after adjusting for $L$). The very same property of "sufficiency" is what we are assuming. Without this causal assumption, balance does not achieve valid inferences of a causal effect. Balancing is then just a way to assess how well we are adjusting for $L$. We introduced exchangeability as an assumption. ## The experimental case Here, the design introduced exchangeability into our data. Remember that exchangeability is a property of the distribution of (potential) outcomes—not the covariates. Thus, as Senn states in the comments in the blog post: > RCTs are good at dealing with confounding, not because confounders are eliminated but because their joint effect can be estimated. Thus, when randomizing, we do not care about adjusting for confounders for validity, nor we worry about unobserved confounders—there cannot be confounders (as in "common causes of exposure and outcome") since randomization is not caused by any covariate by definition! We can still have prognostic factors, i.e. causes of the outcome, that can be unbalanced across treatment arms. And note that myth 6 of the article clearly states that one must not ignore them just because we randomized (p. 1446): > In fact, it is not logical to ignore prognostic covariates even if they are perfectly balanced once they have been observed. Valid inference depends on conditioning on what is known. It is only when the actual distribution of covariates has not been observed that it is acceptable to substitute their distribution in probability. [...] Obviously, what has not been measured cannot be put in the model, and it is here that randomisation proves valuable. However, if it was chosen to measure something because it was prognostic, then it is not reasonable to behave as if one had not seen it. We would thus include them in the model or perform post-stratification, so balance is not irrelevant in the experimental case. --- ## However Consider the critic that [Greenland, Robins & Pearl](https://projecteuclid.org/journals/statistical-science/volume-14/issue-1/Confounding-and-Collapsibility-in-Causal-Inference/10.1214/ss/1009211805.pdf) (1999, p. 35, bolding mine) raise regarding randomization: > Neither restriction nor matching prevents (although they may diminish) imbalances on unrestricted, unmatched or unmeasured covariates. 
In contrast, randomized treatment allocation (randomization) offers a means of dealing with confounding by covariates not explicitly accounted for by the design. It must be emphasized, however, that this solution is only probabilistic and subject to severe practical constraints. [...] Thus, even in a perfectly executed randomized trial, the no-confounding condition $\mu_{A0} = \mu_{B0}$ is not a realistic assumption for inferences about causal effects. Succesful randomization simply insures (sic) that the difference $\mu_{A0} - \mu_{B0}$ and hence the degree of confounding, has expectation zero and converges to zero under the randomization distribution; it also provides a permutation distribution for causal inferences (Fisher, 1935; Cox, 1958, Chapter 5). To which Senn would likely reply (p. 1442, bolding mine): > In fact, the probability calculation applied to a clinical trial automatically makes an allowance for the fact that groups will almost certainly be unbalanced, and if one knew that they were balanced, then the calculation that is usually performed would not be correct. Every statistician knows that you should not analyse a matched pairs design as if it were a completely randomised design. In the former, there is deliberate balancing by the device of matching; in the latter, there is not. In other words, not only despite but also because we know that in practice many hidden confounders will be unbalanced, the conventional analysis of randomised trials is valid. If we knew these factors to be balanced, then the conventional analysis would be invalid. The defense of Senn seems relatively weak, and in the rest of the section in the paper he seems to imply that given that there is no better alternative, we should be content with taking shelter in the randomization distribution. Personally, I feel this is not entirely honest or transparent to the interpretation that readers will give to the results of an RCT, but maybe that is the subject for another question altogether...
null
CC BY-SA 4.0
null
2023-04-19T23:31:09.287
2023-04-23T15:34:31.563
2023-04-23T15:34:31.563
180158
180158
null
613510
1
null
null
1
11
I've had a stats related question I've been wondering for a while, but for which I have yet to find good sources on. I'm aware of things like the Akaike Information Criterion for weighting candidate models, but afaik this doesn't account for model autocorrelation (potentially leading to double counting if extremely similar models are included). Meanwhile, there is also the approach of simply putting every model into a multiple regression against the target variable to get weights, but in my experience, especially without large numbers of observations, these weights tend to do ~strange~ things (for instance, often giving a negative weight to one of two models with high correlation to the target variable when they're correlated to each other, which seems to me to typically be overfitting). It seems to me the best possible solutions to this are to A) use something like the AiC, but account for autocorrelation somehow (maybe by multiplying by something like (.5+.5*(1-autocorrelation^2)) across all other prospective models, to halve the weight if two models are highly correlated), or B) to use the model weights given by raw multiple regression, but with some sort of prior to avoid negative weights unless it's obvious they're an actual reflection of the underlying relationship. My question is, is there any pre-existing literature on this? Moreover, do any individual members of this messageboard have insight on this? I'm somewhat of a stats novice, and understand general theoretical concepts better than their specific mathematical functionings when it comes to this area, so my apologies if I'm raising something foolish or ignoring something obvious.
General question on model weighting and averaging
CC BY-SA 4.0
null
2023-04-20T00:03:11.157
2023-04-20T00:04:30.800
2023-04-20T00:04:30.800
386112
386112
[ "multiple-regression", "predictive-models", "autocorrelation", "aic", "model-averaging" ]
613511
2
null
612829
1
null
The attribute of this critique on which I feel compelled to comment is the suggestion that comparing to the "all" as a reference resolves the issue of bias (where the bias is said to come from picking "white" as the reference group for comparison). Simply put, with regard to dummy coding for categorical variables in multiple regression models, this is not the case. Even when you choose to make comparisons to the grand mean, you still need a reference group to encode the categorical predictor variable of "race" into the necessary indicator (dummy) variables to run the MR model. In brief, if you have 4 categories (A, B, C, D), and you have dummy variables $d_B=1$ if in category B, $d_C=1$ if in category C, and $d_D=1$ if in category $D$, and zero for all others, then you have an MR model that generates the mean differences for each category compared to the reference category A. (I am assuming a balanced design, but an adjustment can easily be made if the group sample sizes are not the same, though I will not elaborate on it here.) And if you wish to change this to grand-mean dummy coding, you define the dummy variables the same way, but if you are in group A, then all three dummy variables are set to the value $d_B=d_C=d_D=-1$. So the supposed problem with picking "white" as the reference group is NOT actually resolved by grand-mean coding, since you still must pick a reference group, and the suggestion that picking a reference group propagates bias is therefore not addressed. If the research question is about differences between groups, then the grand-mean comparison probably is not appropriate. If the research question is about comparison to the grand mean, then it would be justified.
null
CC BY-SA 4.0
null
2023-04-20T00:07:52.943
2023-04-20T00:07:52.943
null
null
199063
null
613512
1
null
null
3
22
I have time series data over an 8hour period of animals displaying behaviors. The behavior being displayed by a particular animal is notated as numbers 1-9, with a number corresponding to a given behavior. These are insects and they can only display one feeding behavior at a time. The most common behavior for each point in time (i.e. mode or most frequent behavior/number at each second from 0s to 28,800s) was calculated and this data was visualized to get a good representation of how the behavior is changing between treatments groups. After visualizing the data, it was clear that for one treatment, a particular behavior is less common at certain times compared to another treatment, which would make sense according to our original hypothesis. Are there any statistical tests that can help exclude the possibility of random chance causing the differences that we see in the exploratory figures? This is a bit of a weird dataset since it involves values that are frequencies, has a few treatments, and is also a timeseries dataset. Here is an example of the data file generated which was plotted. |time |activity | |----|--------| |14396 |5 | |14397 |5 | |14398 |5 | |14399 |5 | |14400 |5 | |14401 |12 | This file shows the most common behavior at each second. It was calculated from ~20 different samples in one treatment group. Any help or ideas of statistical methods, packages, pipelines, considerations, etc. would be appreciated! I can work in both R and python so please throw out any and every suggestion you can think! I'm not a statistician and don't know many methods beyond an anova/KW test. Thanks!
Methods for conducting hypothesis tests on frequencies/modes in a time-series?
CC BY-SA 4.0
null
2023-04-20T00:32:06.523
2023-04-20T04:52:24.457
null
null
386113
[ "r", "time-series", "hypothesis-testing", "python", "exploratory-data-analysis" ]
613515
2
null
401212
0
null
I cannot find anyone that addresses this problem for LASSO / $\ell(1)$ regression, and since the regularization term is not differentiable, it's a bit harder. I posted an answer here: [https://math.stackexchange.com/a/4682686/192065](https://math.stackexchange.com/a/4682686/192065) It shows equivalence for: - $\ell(1)$ constrained optimization - the usual LASSO regularization problem - the Lagrangian with $g(x) = \|x\|_1 - r$
null
CC BY-SA 4.0
null
2023-04-20T02:02:04.697
2023-04-20T02:02:04.697
null
null
92660
null
613516
2
null
291962
0
null
I cannot find anyone that addresses this problem for LASSO / $\ell(1)$ regression, and since the regularization term is not differentiable, it's a bit harder than the ridge /$\ell(2)$ case. I posted an answer here: [https://math.stackexchange.com/a/4682686/192065](https://math.stackexchange.com/a/4682686/192065) It shows equivalence for: - $\ell(1)$ constrained optimization (for fixed $r>0$, find $\text{arg}\min_x f(x)$ subject to $\|x\|_1 \leq r$ ) - the usual LASSO regularization problem, (for fixed $\lambda \geq 0$, find $\text{arg}\min_x f(x) + \lambda\|x\|_1$ ) - the Lagrangian with $g(x) = \|x\|_1 - r$, (for fixed $r>0$, find $\text{arg}\min_{x, \lambda \geq 0} f(x)+ \lambda( \|x\|_1 -r)$)
null
CC BY-SA 4.0
null
2023-04-20T02:03:43.467
2023-04-21T14:09:32.263
2023-04-21T14:09:32.263
92660
92660
null
613517
2
null
196788
1
null
TLDR just generalize the coupon collector techniques. Suppose you have a discrete state space Markov chain that evolves on the space of all subsets of $\{1,\ldots,k\}$ that have size between $0$ and $m$ (inclusive). This has size $\binom{k}{0} + \binom{k}{1} + \binom{k}{2} + \cdots \binom{k}{m}$. In particular notice that when $k=m$, this is $2^k$. Say time starts at $0$. $X_0$ is the empty set with probability $1$. The marginal/transition distribution of $P(X_1 = \cdot \mid X_0 = 0)$ is uniform over the $k$ singletons. $P(X_2 = \{j,k\} \mid X_1 = \{j\}) = p_k$ where $k\neq j$ and $P(X_2 = \{j\} \mid X_1 = \{j\}) = p_j$. If you write out the big ugly transition matrix, you'll see each row only has $k$ nonzero elements, because you can only do $k$ things at each time step. Using that big ugly transition matrix, you might figure the transition matrix for $|X_t|$ (the cardinality/size of $X_t$). With this you can describe the stopping time of interest: $$ n = \inf\{t : |X_t| = m\}. $$ Notice that $n \in \{m, m+1, \ldots\}$ and $$ P(n = j) = P(|X_n| = j \mid |X_{n-1}| = j-1) P(|X_{n-1}| = j-1) . $$ Both these factors could be coded up. Regarding sampling, it's faster (but more memory-intensive) to sample $X_t$ or $|X_t|$ instead of the whole multinomial enchilada. The hard part is instantiating and storing the transition matrix, but it’s very straightforward (more so for $X_t$).
null
CC BY-SA 4.0
null
2023-04-20T02:19:29.667
2023-04-20T02:55:29.060
2023-04-20T02:55:29.060
8336
8336
null
613518
1
null
null
2
43
Since absolute error plotted on a log-log plot will result in asymmetric error bars, my understanding is that this would give a misleading view of the precision of a measurement. I can plot instead the relative error, but then the values on the y axis would be z = log10(y) +/- sigma_y/y/log(10), rather than y on a log scale if that makes sense. Is there a way to get symmetric error bars for the latter? Also, on a related note, I have values that result in a relative error that is several times the value itself, e.g., y = 1.251e-5, sigma_y = 0.00074097 results in z = log10(y) ~ -4.9027 +/- 25.7. So recalculating for the relative error results in errors that are much larger than the measurement itself. What do you suggest in this situation?
Relative error on a log-log plot
CC BY-SA 4.0
null
2023-04-20T02:35:42.050
2023-04-20T22:56:21.827
2023-04-20T12:53:45.510
142054
142054
[ "data-visualization", "error", "logarithm", "error-propagation" ]
613519
2
null
99543
0
null
YES. Here’s a question to ponder: if you had to make a prediction but did not have access to any features, what would you predict? For instance, if you had to predict the color of the card on the top of the deck but only got to look at the back, what would you predict? There are multiple reasonable answers to that, particularly when you get into the potential costs of missing the right answer (an especially severe penalty for missing high might encourage you to guess low, for instance). However, I think a reasonable answer to the card question is to predict that black and red each have a probability of $0.5$, since a standard deck of cards is split evenly between red and black. You would do this for every card for which I show you the back. If you collect additional information about the card, such as what the front looks like, you can start to get better predictions than guessing $0.5$ every time. If you can reliably make predictions that are more accurate than guessing $0.5$ every time, then it would seem that your features, taken as a group, are determinants of the outcome. Getting a p-value or confidence interval is harder, but I believe this to be the right philosophy. When you do an F-test of all the features in a linear model, for example, you are testing if including those features allows you to make better predictions of the conditional expected value than you would if you made the same prediction every time. The mechanics of how this is done have a justification, and that justification is different when you do a test of the significance of the features in a logistic regression, but the F-test is philosophically the same as the idea of seeing if including features allows you to make better predictions than you could if you lacked them.
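As a rough illustration of that last point, here is a minimal R sketch (with simulated data and made-up variable names) of the overall F-test comparing a model that uses the features with one that makes the same prediction every time:

```r
# Does using the features beat predicting the same value (the mean) every time?
set.seed(42)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 + rnorm(n)   # x1 matters, x2 does not

null_fit <- lm(y ~ 1)           # "no features": predict mean(y) for everyone
full_fit <- lm(y ~ x1 + x2)     # use the features

anova(null_fit, full_fit)       # overall F-test of all features at once
```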
null
CC BY-SA 4.0
null
2023-04-20T02:54:58.443
2023-04-20T02:54:58.443
null
null
247274
null
613521
1
null
null
4
245
I found the following result used in [this paper](http://proceedings.mlr.press/v108/esfandiari20a/esfandiari20a.pdf), but it was just claimed without proof, and it seems extremely strong to me, so I would like a proof, or at least a reference to a proof. Let $D$ be a probability distribution. For any $k\geq 1$, there exists another distribution $D'$ such that if $Y_1, ..., Y_k \sim D'$, then the distribution of $\max(Y_1, ..., Y_k)$ is $D$. This seems very counterintuitive to me, especially since the max of iid random variables behaves like a Gumbel distribution for large $k$, and Gumbel obviously doesn't cover all distributions.
Is every probability distribution also the distribution of the maximum of several i.i.d. random variables?
CC BY-SA 4.0
null
2023-04-20T04:00:16.503
2023-04-22T12:49:43.987
2023-04-22T12:49:43.987
22228
181388
[ "distributions", "references", "extreme-value" ]
613522
2
null
613521
12
null
Let $D$ be a probability distribution with [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) $F$, and fix a positive integer $k$. Consider a new CDF $F_k(y) = F(y)^{1/k}$ (you can check that it satisfies the axioms of a CDF). Then $F_k$ defines another distribution $D^\prime$. If $Y_1,\ldots,Y_k \sim D^\prime$ (i.i.d.), then $$ \begin{aligned} P(\max\{Y_1, \ldots, Y_k\} \leq x) &= P(Y_1 \leq x, \ldots, Y_k \leq x) \\ &= F_k(x)\cdots F_k(x) \\ &= F_k(x)^k \\ &= F(x), \end{aligned} $$ so $\max\{Y_1,\ldots,Y_k\}$ has distribution $D$. (See also [How do you calculate the probability density function of the maximum of a sample of IID uniform random variables?](https://stats.stackexchange.com/questions/18433))
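A quick numerical check of this construction in R, taking $D$ to be the Exponential(1) distribution purely as an example: then $F_k(x) = (1 - e^{-x})^{1/k}$, whose inverse is $x = -\log(1 - u^k)$, so $D^\prime$ can be sampled by inverse-CDF sampling.

```r
# Sample Y_1, ..., Y_k from D' (via the inverse CDF of F^(1/k)) and check that
# the maximum is distributed as D = Exponential(1).
set.seed(1)
k <- 5
n <- 1e5

r_Dprime <- function(n, k) -log(1 - runif(n)^k)   # inverse-CDF sampler for D'

Y    <- matrix(r_Dprime(n * k, k), nrow = n)      # n rows of (Y_1, ..., Y_k)
maxY <- apply(Y, 1, max)

ks.test(maxY, "pexp", 1)                          # compare with Exponential(1)
```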
null
CC BY-SA 4.0
null
2023-04-20T04:28:10.897
2023-04-20T04:28:10.897
null
null
97872
null
613523
2
null
613512
1
null
If I understand correctly, you have - a set of insects, each of which was exposed to a treatment - for each insect, a time series in which the value at each point is a single feeding behaviour (one of 9 possible behaviours) The response variable (a 9-level factor) corresponds nicely with the multinomial distribution. The multinomial is a generalisation of the binomial to $\ge2$ categories. If you use a multinomial as the response, you don't have to use the mode summary that you're currently using - the mode is a useful way to summarise the data, but you lose precious information when you're modelling. I'm not sure what kind of model you want, because I'm not across the research question. But a reasonable start to explore the data could be a multinomial GAM (I believe you can fit this model using `mgcv` in R), where the response variable is the insect behaviour factor, and the predictor is a smooth on time + treatment (or something like that). You could compare a model where treatment is included with one where it is not, and see if the response probabilities change.
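A rough sketch of what that multinomial GAM might look like with `mgcv` follows; the data frame and variable names are hypothetical, and details such as per-insect random effects are left out.

```r
# Multinomial GAM sketch: behaviour (9 categories) modelled as a smooth
# function of time plus a treatment effect.
library(mgcv)

# mgcv's multinom family wants the response coded as integers 0..K
dat$behaviour <- as.integer(factor(dat$behaviour)) - 1L   # 0..8 for 9 behaviours

# One linear predictor per non-reference category (categories 1..8)
fmls <- c(list(behaviour ~ s(time) + treatment),
          rep(list(~ s(time) + treatment), 7))

fit <- gam(fmls, family = multinom(K = 8), data = dat)
summary(fit)
```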
null
CC BY-SA 4.0
null
2023-04-20T04:52:24.457
2023-04-20T04:52:24.457
null
null
369002
null
613524
1
null
null
0
13
I was reading [this paper](https://arxiv.org/pdf/2304.08069.pdf) and saw this passage: > The architecture of these detectors has evolved from the initial two-stage [9, 26, 3] to one stage [19, 31, 1, 10, 22, 13, 36, 14, 7, 33, 11], and two detection paradigms, anchor-based [19, 22, 13, 10, 33] and anchor-free [31, 7, 36, 14, 11], have emerged. What is meant by the anchor-based and anchor-free detection paradigms?
What are the anchor-based and anchor-free detection paradigms in object detection?
CC BY-SA 4.0
null
2023-04-20T05:12:38.893
2023-04-20T05:12:38.893
null
null
243601
[ "neural-networks", "computer-vision", "object-detection" ]
613525
1
613527
null
4
410
I am trying to learn more about how to apply Logistic Regression in the presence of rare events. I am trying to understand what is written in this paper here: [https://gking.harvard.edu/files/0s.pdf](https://gking.harvard.edu/files/0s.pdf) If I understand correctly, in the presence of rare events, the MLE estimate for the intercept of a Logistic Regression can be updated using the following correction formula (where $\tau$ represents the true proportion of events within the population): $$\hat{\beta}_0 - \ln\left[\left(1 - \frac{\tau}{\tau}\right)\cdot\left(\frac{\bar{y}}{1 - \bar{y}}\right)\right]$$ Apparently, using this corrected formula can provide more reasonable estimates for the value of $\hat{\beta}_0$ and the variance of $\hat{\beta}_0$. I had the following question: In this paper, there is no correction formula provided for the estimates of $\hat{\beta}_1$ - is there any reason for this? In the presence of rare events, is the classical MLE estimate for $\hat{\beta}_1$ somehow less affected compared to $\hat{\beta}_0$? Thanks!
Logistic Regression: Bias in Intercept vs Bias in Slope
CC BY-SA 4.0
null
2023-04-20T05:35:30.413
2023-04-20T07:10:34.847
null
null
77179
[ "regression" ]
613527
2
null
613525
9
null
To start with, you have the equation wrong. The bias correction is not $$\log\left[\left(1-\frac{\tau}{\tau}\right)\left(\frac{\bar y}{1-\bar y}\right)\right],$$ it's $$\log\left[\left(\frac{1-\tau}{\tau}\right)\left(\frac{\bar y}{1-\bar y}\right)\right].$$ This is not a bias correction for rare events generally (like the Firth correction). It's a bias correction specifically for logistic regression and for case-control sampling: if you oversample individuals with $y=1$ relative to those with $y=0$. And yes, this bias is only in the intercept -- a surprising and important fact. The bias being only in the intercept is unique to case-control sampling and unique to models for the odds ratio such as logistic regression. It's one of the reasons logistic regression has been so popular in epidemiology.
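In R, the correction amounts to a one-line adjustment of the fitted intercept; in this sketch `dat`, `y`, `x` and the assumed population prevalence `tau` are all placeholders:

```r
# Case-control intercept correction for a logistic regression
tau  <- 0.01                                        # assumed prevalence of y = 1 in the population
fit  <- glm(y ~ x, family = binomial, data = dat)   # fit on the case-control sample
ybar <- mean(dat$y)                                 # proportion of y = 1 in the sample

b0_corrected <- coef(fit)[1] - log(((1 - tau) / tau) * (ybar / (1 - ybar)))
b0_corrected                                        # the slope coefficients need no correction
```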
null
CC BY-SA 4.0
null
2023-04-20T06:41:38.680
2023-04-20T06:41:38.680
null
null
249135
null
613528
1
null
null
0
9
I have a dataset like this image[](https://i.stack.imgur.com/I3W2q.jpg) and I want to run a zero-inflated negative binomial (ZINB) regression in Stata (outcome: crime rate). In the 3 age-group variables I have the percent of the population in each city (for example, 20% of city "a"'s population is in the 0-20 age group). Is it correct to include all 3 of the age-group variables in the ZINB, considering they come from the same origin and the values of 2 groups determine the 3rd one?
Using several predictors from one origin to run a zero inflated negative binomial regression
CC BY-SA 4.0
null
2023-04-20T06:58:21.387
2023-04-20T07:01:35.067
2023-04-20T07:01:35.067
362671
372316
[ "regression" ]
613529
2
null
613525
4
null
Imagine an intercept-only logistic regression model $$ P(y=1) = E[y] = g^{-1}(\beta_0) $$ In such a model, $\beta_0$ transformed via the logistic function $g^{-1}$ is the mean of $y$. If we add a predictor $x$ to the model, it becomes $$ P(y=1|x) = E[y|x] = g^{-1}(\beta_0 + \beta_1 x) $$ Now $\beta_0$ would correct the [bias](https://stats.stackexchange.com/questions/13643/what-intuitively-is-bias) of the model. Recall that by bias we would mean here that $E[y] \ne E[g^{-1}(\beta_0 + \beta_1 x)]$. Logistic regression is a linear model, so let's take one step back to linear equations for a while. In a linear equation $ax + b$, changing $b$ would shift the line vertically (see the image below [taken from here](https://2012books.lardbucket.org/books/beginning-algebra/)). [](https://i.stack.imgur.com/RCzQD.jpg) In the regression case, shifting the line over the $y$ axis is used for correcting the bias, since it changes the expected value of the predictions for $y$ by a constant so that it matches $E[y]$. In the base rate correction, we are correcting for the fact that the base rate in the population that you are making the predictions for differs from the sample that was used to fit your model, assuming that the relationship between $x$ and $y$ is the same in both. We are not aiming to correct $\beta_1$, nor do we assume that it differs. Now imagine that we "corrected" $\beta_1$. This would lead to a completely different model! You could change the parameters to arbitrary values, but then you don't need any data at all, you can just make up the parameters and plug them into the equation. If you assume that the relationship between $x$ and $y$ differs, then you need to fit a model on a new dataset that comes from the same population as the population you want to make the predictions for.
null
CC BY-SA 4.0
null
2023-04-20T07:00:17.930
2023-04-20T07:10:34.847
2023-04-20T07:10:34.847
35989
35989
null
613530
1
null
null
0
19
Suppose I'm testing a product by taking a sample of size $N$: if the (population) variance $\sigma^2$ is higher than $\sigma^2_\text{max}$, the product should be rejected. Then I can determine the sample variance $s^2$, and use \begin{equation} \frac{(N-1)s^2}{\sigma^2} \sim \chi^2_{N-1} \end{equation} to compute a $(1 - \alpha)$ confidence interval for the population variance $\sigma^2$: \begin{equation} \frac{(N-1)s^2}{b} \leq \sigma^2 \leq \frac{(N-1)s^2}{a}, \end{equation} with $a = \chi^2_{1-\alpha/2, N-1}$, $b = \chi^2_{\alpha/2, N-1}$. If $(N-1)s^2/a > \sigma^2_\text{max}$, I reject my product. Based on this, how can I compute the probability of false rejection? And what would be the probability of false rejection if I'd use $s^2$ directly, that is, reject if $s^2 > \sigma^2_\text{max}$?
How to compute probability of false rejection for a variance test?
CC BY-SA 4.0
null
2023-04-20T07:01:04.567
2023-04-20T07:06:27.717
2023-04-20T07:06:27.717
322995
322995
[ "hypothesis-testing", "confidence-interval", "variance" ]
613532
1
null
null
1
49
I'm studying ML estimation and I have a silly question that I'm not able to see the solution to. Suppose I have an AR(1) process: $$y_t = c+ \phi y_{t-1}+ u_t, \quad u_t \overset{iid}{\sim}\hbox{Normal}(0,\sigma^2)$$ The first step is to find the density of $y_1$: $$y_1 = c+ \phi y_{0}+ u_1, \quad u_1 \sim \hbox{Normal} (0,\sigma^2)$$ According to this, I have that $y_1 \sim \hbox{Normal}(c+ \phi y_{0}, \sigma^2)$, since a normal with mean zero plus a constant gives a mean translation by the constant and the variance remains unchanged. (This is my reasoning) But the Hamilton book (Time Series Analysis) says that $y_1 \sim \hbox{Normal}\left( \frac{c}{1-\phi}, \frac{\sigma^2}{1-\phi^2}\right)$. The justification that the book gives is that the AR(1) given above is such that its mean is $\frac{c}{1-\phi}$ and its variance is $\frac{\sigma^2}{1-\phi^2}$, regardless of $t$. Ok, this I understand by the stationarity of AR(1), but this is inconsistent with my reasoning. Well, I'm probably making a huge mistake in my reasoning and I'd like to know where I'm going wrong.
Given an AR(1) process, find the density of the first observation
CC BY-SA 4.0
null
2023-04-20T07:40:34.427
2023-04-20T09:44:33.067
2023-04-20T09:44:33.067
362671
373088
[ "time-series", "self-study", "normal-distribution", "maximum-likelihood", "autoregressive" ]
613533
1
613542
null
2
28
I'm doing a meta-analysis of randomized interventional trials to compare two types of surgeries. The follow-up period at which outcomes were reported varied from 6 weeks to 24 months. Some studies reported their results at different follow-up points, whereas others reported results at one follow-up point. I want to ask about the most appropriate way to pool the studies together in this situation. Should I pool the data from the longest follow-up available, or should I rather subgroup each study's data according to the follow-up point and report the pooled effect of each subgroup? I performed both analyses. However, the results did not match. The first analysis revealed a significant difference between the two arms, while the second showed no significant difference in any of the subgroups. The I2 of both analyses was 0%.
Pooling the data of longest follow up period vs. performing a subgroup analysis
CC BY-SA 4.0
null
2023-04-20T08:39:15.337
2023-04-20T10:11:16.630
null
null
361100
[ "meta-analysis", "heterogeneity" ]
613535
1
null
null
0
13
I am currently working on a project where I have a dataset of scatter-points in space. What I am doing is that I fit Gaussian Mixture Models (GMM) on the data set and use the model to sample 'artificial' data points for simulation purposes. What I would like to do now is find a metric to see how well the GMM reproduces the original data. For this purpose, I thought about comparing the original scatter-data with the artificial data. Is there some sort of 'similarity' measure between two data sets? Thank you for your help!
Similarity of two sets of scatter data
CC BY-SA 4.0
null
2023-04-20T09:04:14.457
2023-04-20T09:04:14.457
null
null
386132
[ "normal-distribution", "gaussian-mixture-distribution", "similarities" ]
613537
1
null
null
1
16
I am currently investigating the moderating role of mindfulness in the association between academic distress and suicide ideation, with depression as a control variable. I finished doing the multiple regression analysis where all the IVs, the control variable, and the interaction term were regressed onto suicide ideation in this particular format: Depression Beta coefficient Academic Distress Beta coefficient Mindfulness Beta coefficient Academic Distress X Mindfulness Beta coefficient My supervisor asked me to convert the scores into Cohen's f to find the exact effect size of each predictor included in the analysis. My question is: Can someone guide me on how to transform each IV into its appropriate Cohen's f? Is there any specific guide on how to do it? I tried finding YouTube videos for it. However, they only teach me how to compute the effect size for the whole regression, not per IV included in the regression.
How to find Cohen's f for multiple regression with interaction included?
CC BY-SA 4.0
null
2023-04-20T09:44:59.913
2023-04-20T09:44:59.913
null
null
250118
[ "multiple-regression", "interaction", "effect-size" ]
613538
1
null
null
1
10
I have a classification project and I want to compare three models: Random Forest, SVM and Logistic Regression. Random Forest is a tree-based algorithm, whereas SVM is a distance-based model and LR is a probabilistic model. The dataset for SVM and LR needs to be standardized, but it doesn't have to be for RF. So my question is about the best approach to compare the models' performances: - Standardize the dataset for all three models, so that the results can be compared because it is the same dataset. - Standardize the dataset for LR and SVM but not RF, but this way the 3 models aren't trained on the same dataset. Thank you.
Standardization for Random Forest, SVM and Logistic Regression
CC BY-SA 4.0
null
2023-04-20T09:55:14.143
2023-04-20T09:55:14.143
null
null
386140
[ "classification", "random-forest", "svm", "standardization", "data-preprocessing" ]
613539
1
null
null
1
21
I have started to learn statistics and I came across these notations: $c_{d-1,1-\alpha}$ $z_{1-\alpha/2}$ The first one is the $\alpha$-th order quantile of the $\chi^2$ distribution and the second one is the $\alpha$-th order quantile of the normal (Gaussian) distribution. What is the explanation behind these notations?
Explanation for quantile notations $c_{d-1,1-\alpha}$ $z_{1-\alpha/2}$
CC BY-SA 4.0
null
2023-04-20T09:55:53.590
2023-04-20T09:55:53.590
null
null
384407
[ "self-study" ]
613540
1
null
null
0
24
Redundancy analysis proceeds by multivariate regression of a response matrix Y on an explanatory matrix X. The fitted Y values are then submitted to PCA, where the two axes of greatest explanation can be used for plotting the relationship of the axes to both response and explanatory variables. If these two axes are orthogonal (a result of PCA), why then is it said by authoritative sources that e.g. plotted vectors that are 90 degrees apart are uncorrelated? Should it not be as in the following example: (0,1), (1,0) are uncorrelated due to orthogonality of axes (1,1), (-1,1) are correlated (despite the 90 degree angle) Furthermore, I have seen the relative strength of correlation between two vectors be calculated as the dot product. Should the amount of variation explained on each axis be considered? For example, with axis 1 variation explained = 25% and axis 2 variation explained = 5%, should it not be that: (1,0) and (1,1) are much more correlated than (0,1) and (1,1) Thanks for any clarification.
Redundancy analysis triplot interpretation
CC BY-SA 4.0
null
2023-04-20T09:57:52.853
2023-04-20T09:57:52.853
null
null
351536
[ "interpretation", "redundancy-analysis" ]
613541
1
null
null
1
18
I have been using a pre-whitening technique to remove autocorrelation from my time series model, which has a variable that is autocorrelated with itself over time. But I wondered how I could mathematically prove that my data is now decorrelated, and whether anyone has a research paper that explains it or anything similar? Any help is appreciated
Mathematically prove the data has been decorrelated
CC BY-SA 4.0
null
2023-04-20T09:58:29.100
2023-04-20T09:58:29.100
null
null
385918
[ "mathematical-statistics", "autocorrelation" ]
613542
2
null
613533
1
null
Whether you pool the studies with differing follow-ups or perform a subgroup analysis is dependent on reasoning within the subject matter. If you believe that the outcome differs meaningfully between a short follow-up and a long follow-up, you shouldn't pool these studies together, but instead perform subgroup analyses. If you believe the effect of the exposure on the outcome is the same, regardless of your range of follow-ups, you could pool the results. What you believe (or experienced individuals in the field believe) happens to the outcome at different follow-ups impacts whether you pool or not. You can also perform a meta-regression to determine the effect of follow-up on the variation in the estimates of the trials.
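If you go the meta-regression route, a minimal sketch with the `metafor` package might look like the following; the data frame and the `yi`, `vi` and `followup_months` columns are placeholders for your extracted effect sizes, their variances and the follow-up length.

```r
# Meta-regression of trial effect sizes on follow-up length
library(metafor)

res <- rma(yi, vi, mods = ~ followup_months, data = trials)
summary(res)   # the moderator coefficient indicates whether follow-up length
               # explains variation in the trial estimates
```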
null
CC BY-SA 4.0
null
2023-04-20T10:11:16.630
2023-04-20T10:11:16.630
null
null
385890
null
613543
1
null
null
5
395
I was studying parameter estimation from [Sheldon Ross' probability and statistics book](https://www.amazon.it/Introduction-Probability-Statistics-Engineers-Scientists/dp/0123948118). Here the task of parameter estimation is described as follows: [](https://i.stack.imgur.com/gBSnd.png) Is this task the same as density estimation in machine learning contexts? The [Mathematics for Machine Learning book](https://mml-book.github.io) describes density estimation as follows: [](https://i.stack.imgur.com/jYKJO.png) My question arises from the fact that a probability density function is fully described by its parameters (e.g. a Gaussian or normal distribution is a density function which is fully described by its mean $\mu$ and its variance $\sigma^2$).
Is density estimation the same as parameter estimation?
CC BY-SA 4.0
null
2023-04-20T10:25:12.723
2023-04-21T12:45:44.260
2023-04-21T12:45:44.260
53690
368299
[ "machine-learning", "nonparametric", "terminology", "density-estimation" ]
613544
1
613754
null
0
35
I apologise for no doubt a basic question that I am having difficulty intuiting. I have data with a time component that is measured at irregular intervals (but the same irregular intervals for everyone). In analysing this I could consider time as categorical (which I would normally do) but I have started using splines a little more, so am beginning to explore treating time more as continuous. As it turns out in this case, constraining the 'effect' of time to be linear fits better than using a spline. My question is, why does emmeans give the same p value for every contrast of time - even though the magnitude of the estimates can be quite different? This is the p value for the slope, right (i.e. the same as the coefficient in the model), so is it a case that it doesn't matter what times you compare if you are treating time as linear, the p value is constrained to be the same? Am I correct in thinking the only way to obtain contrasts that may differ in statistical significance is to freely estimate the adjusted outcome means by treating time as categorical? Thanks. ``` > emmeans(mod, ~ time, at = list(time = c(0, 1, 6, 12))) |> + regrid() |> pairs(reverse = T, adjust = "none") |> + summary(infer = T) contrast estimate SE df lower.CL upper.CL t.ratio p.value time1 - time0 -0.00565 0.00925 30.1 -0.0245 0.0132 -0.611 0.5458 time6 - time0 -0.03390 0.05548 30.0 -0.1472 0.0794 -0.611 0.5458 time6 - time1 -0.02825 0.04623 30.0 -0.1227 0.0662 -0.611 0.5458 time12 - time0 -0.06779 0.11096 30.3 -0.2943 0.1587 -0.611 0.5458 time12 - time1 -0.06214 0.10171 30.1 -0.2698 0.1455 -0.611 0.5458 time12 - time6 -0.03390 0.05548 30.0 -0.1472 0.0794 -0.611 0.5458 ```
emmeans contrasts of linear slope
CC BY-SA 4.0
null
2023-04-20T10:31:48.977
2023-04-22T04:50:30.053
null
null
108644
[ "r", "regression", "lsmeans" ]
613545
1
null
null
0
20
Usually when making a time series model the training data is prepared in a format akin to: |TimeStamp |Actual | |---------|------| |2023-01-07 00:00 |80 | |2023-01-07 01:00 |110 | |2023-01-07 02:00 |115 | |2023-01-07 03:00 |95 | |... | | That is, for each time stamp we have some actual values (for the outcome and any additional variables). But then, when making forecasts, we use forecast data (of these same variables) since actual data is not available. So wouldn't it make more sense to use forecast data when training the model, as this is what the model will be using during forecasting? So a format more akin to: |TickDate |TimeStamp |Horizon (hours) |Forecast |Actual | |--------|---------|---------------|--------|------| |2023-01-01 |2023-01-07 00:00 |168 |100 |80 | |2023-01-01 |2023-01-07 01:00 |169 |90 |80 | |... | | | | | |2023-01-02 |2023-01-07 00:00 |144 |130 |110 | |2023-01-02 |2023-01-07 01:00 |145 |115 |110 | |... | | | | | In this case the time stamps are repeated multiple times with the same actual, but with different forecasts for each time we made a forecast (TickDate) for this time stamp. This structure essentially holds the same information for the outcome (but not for the variables), so it might work just as well if not better than the first option. Additionally, if we are making multistep forecasts we can include all of these for each TickDate and have a model which can make multistep forecasts directly. Does this make sense? When I've tried this approach on a problem, the second structure performed much worse than the first option, and I am wondering if I am missing something, as to why the first option is much more prevalent in the literature. I should also mention that I've tried this approach on a couple of machine learning models and not time series models (ARIMA et al.), which do (maybe) require data as in the first table.
Multistep forecasting using a model trained on forecast data
CC BY-SA 4.0
null
2023-04-20T10:32:57.537
2023-04-20T14:38:44.633
2023-04-20T14:38:44.633
143489
143489
[ "time-series", "forecasting" ]
613546
2
null
613543
6
null
I understand this argument and can buy it as being technically true. However, the goal of language is to communicate ideas, and statistics has decided that “density estimation”, for better or for worse, refers to doing density estimation with minimal assumptions about the density as to keep from being restricted to a particular parametric family. Perhaps this means that the use of English words is not perfect. However, you are likely to elicit confusion (or at least strange looks) in statistics circles if you deviate from the established terminology.
null
CC BY-SA 4.0
null
2023-04-20T10:37:36.147
2023-04-20T10:37:36.147
null
null
247274
null
613547
2
null
613543
6
null
No, it's not the same. [Density estimation](https://en.wikipedia.org/wiki/Density_estimation) is about estimating the distribution of the data. This can be achieved with a parametric model, for example, fitting a Gaussian mixture to the data. In such a case, to find the distribution means to estimate its parameters, since the distribution is defined by its parameters. But there are also non-parametric approaches to density estimation, like [Kernel density estimation](https://stats.stackexchange.com/questions/244012/can-you-explain-parzen-window-kernel-density-estimation-in-laymans-terms/244023#244023), using histograms, etc., where we are not estimating any parameters but rather finding the density itself.
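To make the contrast concrete, here is a small base-R sketch (with simulated data) comparing a parametric fit, where estimating the density reduces to estimating parameters, with a non-parametric kernel density estimate:

```r
# Parametric (normal) fit vs non-parametric kernel density estimate
set.seed(1)
x <- c(rnorm(150, mean = 0, sd = 1), rnorm(50, mean = 4, sd = 0.5))  # clearly non-normal sample
m <- mean(x)
s <- sd(x)

hist(x, freq = FALSE, breaks = 30, main = "Parametric vs non-parametric density estimate")
curve(dnorm(x, mean = m, sd = s), add = TRUE, lty = 2)  # parameter estimation: fit a normal
lines(density(x))                                       # kernel density estimate: no parameters
```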
null
CC BY-SA 4.0
null
2023-04-20T10:47:26.420
2023-04-20T10:47:26.420
null
null
35989
null
613548
1
null
null
1
9
I have a vector variable with 50 time periods. I believe the vector variable changes over time but also has a static component. I am looking for a model similar to an AR(1) model that can provide me with coefficients to indicate how the variable changes over time, as well as residuals to indicate volatility over time. I have considered a VAR model, but my variable has a high number of dimensions, making interpretation difficult. Does anyone have any suggestions or advice? Thank you in advance!
Can we use vector as the variable in an autoregressive model?
CC BY-SA 4.0
null
2023-04-20T10:49:30.817
2023-04-20T10:49:30.817
null
null
386143
[ "regression", "time-series", "vector-autoregression" ]
613550
1
null
null
0
29
I want to understand the difference between PCA (principal component analysis) and NMF (non-negative matrix factorization) in terms of the explained variability. When we apply PCA to high-dimensional data (say 100 dimensions), the largest PC explains the highest axis of variability, the second largest PC explains the second highest axis of variability in the data, and so on. So if we keep 10 or 20 PCs in the final analysis, they certainly capture all the variability of the data. Now, in the case of NMF, when we increase the number of factors, what do the factors basically learn from the data? Do they learn axes of variability, or do they learn some key components (structures) in the data? When should we say, in the case of NMF, that a given number of factors (or loadings) is enough to explain all the axes of variability in the data? You can assume that the features are non-normally distributed.
Difference between PCA and NMF in terms of explaining variability?
CC BY-SA 4.0
null
2023-04-20T11:46:19.447
2023-04-20T11:46:19.447
null
null
251125
[ "non-negative-matrix-factorization" ]
613551
1
null
null
1
43
I have an outlier detection code snippet in R which I need to convert into Python. Following is the code snippet- ``` tso(ts(monVolumes[[repli]]$vol[which(!is.na(monVolumes[[repli]]$vol))]/10^9,frequency=1), tsmethod ="arima",types = c("LS","TC","IO","AO") ,maxit.iloop = 50, maxit.oloop = 50)), error=function(e) TRUE, finally = FALSE) ``` So I know that there are various packages in Python which can be used for outlier detection, such as PyOD, alibi-detect etc. However, I am very much a beginner and don't quite understand the maths used in these packages. Is there any equivalent package in Python that can do the job for me and whose parameters and structure resemble the tso function of the tsoutliers package in R as much as possible? What I have tried till now- I tried to install the Python port of the tsoutliers package, but it is not available on PyPI (Python Package Index) and it throws up errors whenever I try to install it any other way. NOTE- I know how ARIMA modeling works, but I have no clue what maxit.iloop and maxit.oloop do in the above snippet.
An equivalent package in python for the 'tsoutliers' package in R
CC BY-SA 4.0
null
2023-04-20T11:47:46.677
2023-04-20T17:02:55.167
2023-04-20T17:02:55.167
362671
386146
[ "r", "python" ]
613552
1
null
null
1
28
I have a set of a hundred people. For each person there is a table (Time, Point X, Point Y). The number of rows in each person's table is different (about 1500-2000 rows). I want to divide these people into groups, using the data in the tables. Can you tell me what clustering algorithm I can use in this case, or what direction to take in general? I understand how clustering algorithms work when there is one table with objects, but what should I do when each object has its own table? The meaning of each table: a person draws a square using a special device, and as a result we get a table with the coordinates of the points of the square and the time when each point was recorded. Using the coordinates in each table you can draw this square. [](https://i.stack.imgur.com/zScdk.png)
Machine Learning - clusterization (which algorithm to choose?)
CC BY-SA 4.0
null
2023-04-20T11:59:04.557
2023-04-20T11:59:04.557
null
null
386148
[ "machine-learning", "clustering" ]
613555
1
613556
null
1
27
I'm working on a reliability problem, and I've run into a snag. I have a population of 300 widgets. From that population, I took a random sample of 10 and 1 turned out to be bad. What I'd like to know (and plot) is what fraction of the remaining 290 I should expect to be flawed and (separately) the probability of finding n or more bad units as I keep sampling. I've tried fooling around with `phyper`, but I clearly don't understand what I need to be doing (since I don't know how much of the total population is flawed to begin with). Any help would be very much appreciated.
Hypergeometric CDF in R
CC BY-SA 4.0
null
2023-04-19T23:17:20.317
2023-04-20T12:30:51.940
null
null
341796
[ "r", "distributions", "reliability" ]