Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
616129
|
2
| null |
616128
|
2
| null |
A [geodesic between two nodes of a graph](https://en.wikipedia.org/wiki/Distance_(graph_theory)) is a shortest path between the two nodes, and its length is the graph distance. The shortest path between 2 and 5 goes from 4 via 6 to 5, not directly from 4 to 5, so the geodesic between 2 and 5 does indeed have length 20, not 23.
As you write, the diameter is the length of the longest geodesic over all pairs of nodes. In this case, all other geodesics are shorter than the one between 2 and 5.
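If you want to check this kind of computation numerically, here is a small sketch in R using igraph; the individual edge lengths below are made up (chosen only so the path totals of 20 and 23 match the question), so treat it as an illustration rather than the actual graph:
```
library(igraph)

# Hypothetical weighted graph; edge lengths are invented for illustration only
g <- make_graph(c("2","4", "4","6", "6","5", "4","5"), directed = FALSE)
E(g)$weight <- c(5, 7, 8, 18)

distances(g)  # geodesic (shortest-path) distance between every pair of nodes
diameter(g)   # the diameter is the longest of these geodesics
```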
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:09:53.833
|
2023-05-17T12:09:53.833
| null | null |
1352
| null |
616131
|
2
| null |
615951
|
0
| null |
Even when you combine your multiple Likert items into Likert scales after the items/questions "are grouped around themes," you are unlikely to have data that can be considered continuous interval-scaled data appropriate for classic ANOVA.
If you have ordinal data, why not use [ordinal regression](https://stats.oarc.ucla.edu/r/dae/ordinal-logistic-regression/)? As Frank Harrell explains in Chapter 13 of [Regression Modeling Strategies](https://hbiostat.org/rmsc/ordinal.html), a proportional-odds logistic regression is a "generalization of Wilcoxon-Mann-Whitney-Kruskal-Wallis-Spearman" that allows for incorporating more complex designs with covariates.
If all you have for each individual is 1 pre-session set of answers and 1 post-session set of answers, then it seems that you could model the post-pre differences in your Likert scales, in a generalization of a paired t-test. You could include the `theme` as a predictor of the Likert-scale post-pre differences, allowing for a combined analysis of all data (which is generally preferable).
One way to think about the within-individual correlations among responses to the various `themes` is to recognize that corresponding multivariate (in the sense of multiple-outcome) ANOVA designs end up with the same point estimates as for modeling each outcome separately; it's the standard errors of the estimates that need adjustment for correlations. In your situation, after you fit the proportional-odds model, you could generate a corrected variance-covariance matrix for the coefficient estimates by bootstrap resampling of the data by individual and using the empirical variance-covariance matrix of coefficient estimates over the multiple resamples.
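As a rough sketch of that bootstrap step, assuming a proportional-odds fit with `MASS::polr` and a long-format data frame `dat` with columns `id`, `diff` (the post-pre difference as an ordered factor), and `theme` (all placeholder names):
```
library(MASS)
set.seed(1)

# dat: one row per individual per theme (placeholder names)
fit <- polr(diff ~ theme, data = dat, Hess = TRUE)

# Cluster bootstrap: resample individuals, refit, and collect the coefficients
ids <- unique(dat$id)
boot_coefs <- t(replicate(500, {
  resampled <- do.call(rbind, lapply(sample(ids, replace = TRUE),
                                     function(i) dat[dat$id == i, ]))
  coef(polr(diff ~ theme, data = resampled))
}))

# Empirical variance-covariance matrix of the coefficient estimates
vcov_boot <- cov(boot_coefs)
```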
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:29:04.510
|
2023-05-17T12:29:04.510
| null | null |
28500
| null |
616132
|
2
| null |
616079
|
0
| null |
You can recover the ATE from conditional estimates. Assume for now you have a randomized experiment so $Y^1, Y^0 \perp A$, where $A$ is some treatment. You also have data on a stratifying variable $L$. For example, $Y$ could be risk of death within 30 days of treatment, $A$ is drug (placebo or new treatment), and $L$ is age ($L=0$ is young, $L=1$ is old).
Under randomization, the ATE is $$ ATE = E[Y\mid A=1] - E[Y\mid A=0]. $$
Note also that from the law of iterated expectation
$$ E[Y \mid A] = E[Y \mid A, L=0]P(L=0) + E[Y \mid A, L=1]P(L=1) \>. $$
Hence, we can estimate causal contrasts simply by weighting the stratum-specific estimates. Here is an example in R. First, I'll generate some data:
```
N <- 5000
L <- rbinom(N, 1, 0.3)
A <- rbinom(N, 1, 0.4 + 0.2*L)
Y <- rbinom(N, 1, 0.4 + 0.2*A - 0.1*L)
```
Note that in these simulated data we only have $Y^1, Y^0 \perp A\mid L$ (treatment assignment depends on $L$), so a difference in marginal means doesn't even give us the ATE; we have to estimate the conditional quantities and weight them to estimate the ATE (which is 0.2 in this case).
```
# First, estimate the weights
p_weights = prop.table(xtabs(~L))
#conditional estimates
smry <- aggregate(Y ~ A+L, FUN = mean)
# Estimates in treatment
conditional_E_y_1 = smry[(smry$A==1), ]$Y
E_y_1 = conditional_E_y_1 %*% p_weights
# Estimates in control
conditional_E_y_0 = smry[(smry$A==0), ]$Y
E_y_0 = conditional_E_y_0 %*% p_weights
ate = E_y_1 - E_y_0
ate
#> [,1]
#> [1,] 0.2173959
```
Created on 2023-05-17 with [reprex v2.0.2](https://reprex.tidyverse.org)
That's pretty close. If we replicate this calculation, generating new data from the same code each time, you can see that the estimates center on the true ATE of 0.2, whereas the marginal difference is biased.
[](https://i.stack.imgur.com/GnoC6.png)
In general, we can do this using regression as well. Computing a "marginal effect" is akin to weighting. I'll let you look that up if you're interested.
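As a minimal sketch of the regression route (g-computation / standardization), reusing `A`, `L`, and `Y` from the simulation above; because the model is saturated here, it reproduces the weighted estimate:
```
# Regression version of the same idea (g-computation / standardization)
fit <- glm(Y ~ A * L, family = binomial)

# Predict each individual's outcome under A = 1 and under A = 0, then average
p1 <- predict(fit, newdata = data.frame(A = 1, L = L), type = "response")
p0 <- predict(fit, newdata = data.frame(A = 0, L = L), type = "response")
mean(p1) - mean(p0)  # matches the weighted estimate above (true ATE is 0.2)
```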
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:33:47.870
|
2023-05-17T12:33:47.870
| null | null |
111259
| null |
616133
|
2
| null |
616066
|
2
| null |
Standard ANOVA requires a continuous outcome. You don't have that. You are evaluating binary dead/alive event outcomes, with not all individuals necessarily experiencing an event over the period of observation ("right-censored" event times). That calls for some type of survival analysis.
With only 3 observation times, however, the continuous-time survival analysis represented by methods like Kaplan-Meier curves doesn't work so well. "Survival analysis" in this situation can be set up simply as a binomial regression (e.g., logistic regression) that evaluates the probability of death within each time period as a function of the drug treatment.
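For illustration, a minimal sketch of such a discrete-time ("person-period") analysis, assuming a long-format data frame `longdat` with one row per individual per observation period (all names are placeholders):
```
# longdat: one row per individual per observation period, with columns
#   died   - 0/1 indicator of death during that period
#   period - factor for the 3 observation periods
#   drug   - treatment group
# (individuals contribute no further rows after the period in which they die)
fit <- glm(died ~ period + drug, family = binomial, data = longdat)
summary(fit)  # exp(coef) for 'drug' gives a per-period odds ratio of death

# A complementary log-log link instead gives a discrete-time proportional-hazards model
fit_ph <- glm(died ~ period + drug, family = binomial(link = "cloglog"), data = longdat)
```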
| null |
CC BY-SA 4.0
| null |
2023-05-17T12:36:53.180
|
2023-05-17T12:36:53.180
| null | null |
28500
| null |
616134
|
1
| null | null |
0
|
15
|
I have an assignment on variable selection in survival analysis, which I have completed, and on the basis of the IBS (Integrated Brier Score) I have concluded that my proposed technique works better than the previous methods (LASSO and RSF). Now I am asked to include some plots to show that our approach works better than the others. What are the best ways to show my results graphically, please?
I have included only the following self-explanatory plot.
[](https://i.stack.imgur.com/0W1AZ.png)
Now, I want to include some boxplots or some other graphical ways to show my method works better. Please let me know what might be the best possible ways.
|
What's the best way to plot results of variable selection in Survival Analysis?
|
CC BY-SA 4.0
| null |
2023-05-17T12:53:23.383
|
2023-05-17T12:53:23.383
| null | null |
388181
|
[
"survival",
"random-forest"
] |
616135
|
1
|
616139
| null |
3
|
61
|
I am working on a research project for my Masters in Public Health which compares suicide rates across different time periods. I have done all my analysis in R and from what I can tell everything is fine, but my supervisor has suggested that it is somehow suspicious that my adjusted model outputs narrower CIs than my crude model. I don't think I have a deep enough understanding of CIs to engage him directly, but I'm pretty sure I've seen other studies where the results look this way.
Is this something I should be worried about? What would determine whether adjusting for confounders would narrow or widen CI's for estimates?
For reference: the models are fitted as below.
`crude_model = glm.nb(n ~ year+offset(log(population)), control = glm.control(maxit = 100), data = data2)`
`adjusted_model = glm.nb(n ~ year+Age_Group+Sex+Day_of_Week+offset(log(population)), control = glm.control(maxit = 100), data = data2)`
|
Should adjusted models produce narrower CIs than crude ones? What does it depend on?
|
CC BY-SA 4.0
| null |
2023-05-17T13:13:00.210
|
2023-05-18T13:23:43.257
|
2023-05-17T13:13:44.407
|
388182
|
388182
|
[
"confidence-interval",
"negative-binomial-distribution"
] |
616136
|
1
| null | null |
0
|
18
|
Dear All,
I have a data set with 10 continuous variables and 11 categorical variables. The ANOVA test (for six continuous variables) and the Kruskal-Wallis test (for four continuous variables) were applied across the BMI levels (three levels). A chi-square test was applied to each categorical variable versus BMI level (so we have 6 ANOVA tests, 4 Kruskal-Wallis tests, and 10 chi-square tests). Then multiple comparison tests were performed on the variables with significant p-values. How should the p-values for this project be adjusted? For example, if I wish to use the Bonferroni correction, how many tests should I adjust for, and how should the adjustment be done?
|
Adjusting the P-values-Need some advice
|
CC BY-SA 4.0
| null |
2023-05-17T13:19:45.100
|
2023-05-17T13:19:45.100
| null | null |
383609
|
[
"anova"
] |
616137
|
1
| null | null |
0
|
5
|
I have a problem finding a missing SD from a study.
Please refer to the image below
[](https://i.stack.imgur.com/w90GE.jpg)
Note:
What is provided?
- Each timepoint's mean
- p-value for the paired test between the (post) and (pre) scores
What is NOT provided?
- t-value
- 95% CI
- Standard Error
Is there any way I could recover the SD for each timepoint?
Thanks in advance :D
|
Finding missing SD using paired-test
|
CC BY-SA 4.0
| null |
2023-05-17T13:24:29.773
|
2023-05-17T13:24:29.773
| null | null |
388184
|
[
"standard-deviation",
"missing-data",
"paired-data"
] |
616138
|
1
| null | null |
0
|
11
|
I am tackling a multi-label classification problem and I want to choose an SVM model that maximises the AUC. I am not sure whether AUC can be used in this case and, if so, whether it is sufficient just to set the parameter `probability=True`.
This is the python syntax:
```
from sklearn.svm import SVC  # import needed for this snippet to run
svm = SVC(kernel='rbf', probability=True)
```
|
AUC for Multi-Label Classification using SVM
|
CC BY-SA 4.0
| null |
2023-05-17T13:29:21.253
|
2023-05-17T13:30:25.520
|
2023-05-17T13:30:25.520
|
388185
|
388185
|
[
"classification",
"svm",
"roc",
"auc",
"multilabel"
] |
616139
|
2
| null |
616135
|
5
| null |
One of the most important reasons to add covariates into a regression model is to explain residual variation in the outcome, and so increase precision in parameter estimates from the model.
So if the covariates "Age Group" and "Day of Week" help to explain residual variance in the outcome measure then your confidence intervals could be smaller than in the crude model.
Consider for example a paired vs an unpaired test. We know that we should expect more precise results (smaller confidence intervals) when we can explain variance using pairing, and you can think of this as if the pairing factor is a covariate being added to a regression model.
Here's a quick simulation of a parameter estimate (for the effect of `x` on `y`) becoming more precise when a covariate (`z`) is added to a model:
```
z <- rnorm(1000)
x <- rnorm(1000)
y = x + z + rnorm(1000)
m1 = lm(y ~ x)
m2 = lm(y ~ x+z)
modelsummary::modelplot(list("Crude"=m1,"Adjusted"=m2),coef_omit = c(-2))
```
[](https://i.stack.imgur.com/NfZ4H.png)
On the other hand, as @whuber points out in the comments, it's possible that adding a covariate will increase the standard errors. Here is a similar situation, but now the covariate `z` affects the predictor `x` but has no independent effect on `y`. If we control for `z` in our regression estimating the effect of `x` on `y` then the precision will be lower:
```
z <- rnorm(1000)
x <- rnorm(1000) + z
y = x + rnorm(1000)
m3 = lm(y ~ x)
m4 = lm(y ~ x+z)
modelsummary::modelplot(list("Crude"=m3,"Adjusted"=m4),coef_omit = c(-2))
```
[](https://i.stack.imgur.com/5q1FL.png)
In your case I think the former situation is more likely (although that's down to your understanding of the theory). Adding age group and day of week seems likely to explain variation in suicide rate, but can't really be confounders or explanatory factors for the `year` variable. So I would guess you'd get more precise estimates of the year effect from the second model.
| null |
CC BY-SA 4.0
| null |
2023-05-17T13:30:43.640
|
2023-05-18T13:23:43.257
|
2023-05-18T13:23:43.257
|
68149
|
68149
| null |
616140
|
1
| null | null |
1
|
12
|
My dependent variable is a raw score on a test that uses a Likert scale (ordinal data). My two independent variables are sex (male or female) and amyloid status (AB+ or AB-). I want to test the main effects of both IVs on the DV and also their interaction. The distribution of the dependent variable is not normal, so I want to use a non-parametric test. I think I would need a non-parametric equivalent of a two-way ANOVA. Which test would suit my research question?
|
Non-parametric test with two independent variables
|
CC BY-SA 4.0
| null |
2023-05-17T13:38:28.317
|
2023-05-17T13:46:10.497
|
2023-05-17T13:46:10.497
|
388188
|
388188
|
[
"nonparametric"
] |
616143
|
2
| null |
384909
|
1
| null |
I rewrote the above in Julia using JuMP. If you find an error, a comment pointing it out would be much appreciated :)
```
using JuMP
# using Ipopt
using HiGHS
using LinearAlgebra
using CSV, DataFrames
dataset = CSV.read(download("http://freakonometrics.free.fr/rent98_00.txt"), DataFrame)
first(dataset,3)
"""
Quantile regression using JuMP, linear programming setup
"""
function quantile_reg_lp(X, y, tau=0.3, fit_intercept=true; opt=HiGHS.Optimizer)
# GLPK.Optimizer, Ipopt.Optimizer, HiGHS.Optimizer,
if fit_intercept
X = [ones(size(X,1)) X]
else
X = X
end
n, k = size(X)
# equality constraints = LHS
# __ Aeq:
# X: intercepts & data points - positive weights
# -X: intercept & data points - negative weights
# I: error - positive
# -I: error - negative
Aeq = [X -X I(n) -I(n)]
# __ beq: equality constraints = RHS
beq = y
# __ goal function - intercept & data points have 0 weights
# positive error has tau weight, negative error has 1-tau weight
c = [zeros(2*k,1); tau .* ones(n,1); (1-tau) .* ones(n,1)]
nA, kA = size(Aeq)
index_x = 1:kA
index_constraints = 1:nA
# _ modeling
quant_model = Model()
set_optimizer(quant_model, opt)  # use the optimizer passed via the keyword argument
set_silent(quant_model)
if opt == HiGHS.Optimizer # only for HiGHS.Optimizer
# set_attribute(quant_model, "max_iter", 1000)
end
# _ performance tips
set_string_names_on_creation(quant_model, false)
# @expression(quant_model, x[1] + x[2] + x[3])
#
@variable(quant_model, x[index_x] >= 0)
@objective(quant_model, Min, sum(c[i]*x[i] for i in index_x) )
@constraint(quant_model, constraint[j in index_constraints], sum( Aeq[j,i]*x[i] for i in index_x ) == beq[j] )
# @constraint(quant_model, bound, x >= 0)
JuMP.optimize!(quant_model)
# _ recover beta
theta = [JuMP.value(x[i]) for i in index_x]
beta = theta[1:k] .- theta[k+1:2*k]
return beta
end
beta = quantile_reg_lp(Matrix(dataset[!,[:area, :yearc]]), dataset[!,:rent_euro]; opt=HiGHS.Optimizer)
```
The three betas are:
```
3-element Vector{Float64}:
-5542.503252380954
3.9781347222222223
2.887233673469389
```
| null |
CC BY-SA 4.0
| null |
2023-05-17T14:01:57.340
|
2023-05-17T14:01:57.340
| null | null |
388190
| null |
616144
|
1
| null | null |
1
|
35
|
I am aiming to reproduce the issue (using Python) described in the answer here [Is normality testing 'essentially useless'?](https://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless)
It states that with large sample sizes (n > 1000) the Shapiro-Wilk test becomes more sensitive to small deviations from normality, so the assumption of normality is more likely to be rejected. In other words, the Shapiro-Wilk test will more often return a p-value < 0.05 (rejecting the null hypothesis of normality) in repeated simulations of almost normally distributed data.
However, when I tried the Python implementation, I almost never get p < 0.05, regardless of the sample size I used to generate the distribution. I checked the implementation of the Shapiro-Wilk test both in Python (scipy.stats) and R (stats package) and they use the same algorithm, from the paper ALGORITHM AS R94 APPL. STATIST. (1995) VOL. 44, NO. 4.
Why do I not obtain similar results with scipy? I attach my code below.
```
import pandas as pd
import numpy as np
from scipy.stats import shapiro  # import needed for the calls below
distributions = []
for _ in range(100):
tmp_dist =[shapiro(np.concatenate((np.random.normal(0, 1, 10), [1, 0, 2, 0, 1])))[1],
shapiro(np.concatenate((np.random.normal(0, 1, 100), [1, 0, 2, 0, 1])))[1],
shapiro(np.concatenate((np.random.normal(0, 1, 1000), [1, 0, 2, 0, 1])))[1],
shapiro(np.concatenate((np.random.normal(0, 1, 5000), [1, 0, 2, 0, 1])))[1],
shapiro(np.concatenate((np.random.normal(0, 1, 20000), [1, 0, 2, 0, 1])))[1]]
distributions.append(tmp_dist)
df = pd.DataFrame(distributions, columns = ['n10','n100','n1000', 'n5000', 'n20000'])
```
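For comparison, a rough R analogue of the same simulation could look like the following (note that R's `shapiro.test` only accepts between 3 and 5000 observations, so the two largest sample sizes are dropped):
```
set.seed(1)
# shapiro.test() in R is limited to 3-5000 observations
sizes <- c(10, 100, 1000, 4995)
pvals <- t(replicate(100, sapply(sizes, function(n)
  shapiro.test(c(rnorm(n), 1, 0, 2, 0, 1))$p.value)))
colnames(pvals) <- paste0("n", sizes)
colMeans(pvals < 0.05)  # proportion of simulations rejecting normality
```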
|
Are there differences for the Shapiro tests in Python and R?
|
CC BY-SA 4.0
| null |
2023-05-17T14:08:55.677
|
2023-05-17T15:14:31.700
|
2023-05-17T15:14:31.700
|
44269
|
388166
|
[
"r",
"normal-distribution",
"python",
"shapiro-wilk-test"
] |
616145
|
1
| null | null |
1
|
22
|
The problem: Suppose I have an ordered sample of $n$ observations (e.g. a playlist of songs) that are ranked according to some latent (unobserved) feature, for each of which $p$ covariates are recorded. I want to build a model that is able to rank a new $m$-sample consistently with the original sample, based on their covariates.
All I can think of is to fit a logistic regression model with $2p$ variables on the probability $\pi_{ij}$ that observation $i$ is ranked higher than observation $j < i$, but I am not sure if this is a good way to go.
Are there any well-established techniques for this out there? I have attempted to search for relevant literature, but I am unsure about the specific terminology used to describe this type of problem, and I couldn't find anything about it.
I would also appreciate solutions to alternative and similar problems, as there may exist more suitable formulations to my problem.
|
Statistical methods for ranking units
|
CC BY-SA 4.0
| null |
2023-05-17T14:17:23.730
|
2023-05-17T14:17:23.730
| null | null |
294645
|
[
"regression",
"generalized-linear-model",
"linear-model",
"ranking"
] |
616146
|
2
| null |
261752
|
3
| null |
Several answers here ignore the request in the question "I'm looking for a reference that explains different types of charts with respect to stats/math. I want more theory than process." In particular, books by Few, Knaflic and Yau are often weak (and even sometimes quite incorrect) in linking their discussion to statistical principles.
Antony Unwin's Graphical Data Analysis with R
[see publisher's site](https://www.routledge.com/Graphical-Data-Analysis-with-R/Unwin/p/book/9781498715232) has much more on the logic of statistical graphics than many of the books mentioned in other answers. Its use of R as a vehicle need not be a disadvantage to people who mostly or wholly use some other software. I am one such person, but I have found the discussion rich and challenging. Even when I disagree with the author on some details, it is worth working out why. This is a 2015 book: while I suspect that some of the R details may be a little out of date, otherwise it wears very well and bears repeated consultation and reflection.
| null |
CC BY-SA 4.0
| null |
2023-05-17T15:16:41.343
|
2023-05-18T19:56:57.247
|
2023-05-18T19:56:57.247
|
22047
|
22047
| null |
616147
|
1
| null | null |
0
|
8
|
I want to model number of plants in 12 different areas.
Each area is divided into smaller plots of equal size across all areas. The number of plots per area varies (31 for the smallest area, 500 for the largest).
For each plot I have counted the number of plants.
I also measured the maximum canopy height for each plot.
I have a lot of plots with zero plants (2260 data points out of a total of 2700), but also a few very high values (several hundred). In addition, my data is overdispersed, for some areas very overdispersed.
Because of this, I use a zero inflated negative binomial model:
`PlantModel <- zeroinfl(Count ~ Area*CanopyHeight, data = datCount, dist = negbin)`
When I run this, I get an error message:
In value[3L] :
system is computationally singular: reciprocal condition number = 2.72546e-38FALSE
I noted that others have had similar problems, and I have tried to find ways to fix my data, but most of those problems seem to be caused by an overly complicated model or too small a dataset, neither of which applies here.
One thing that worked was to put the 12 Areas into broader categories, i.e. 4 AreaTypes. This computes. However, I am really interested in each area and the differences between them.
(As a side note: `PlantModel <- glm.nb(Count ~ Area*CanopyHeight, data = datCount, link = "log")` works fine.)
Does someone have an idea what I could do to make the model run?
I am not a statistician, so please keep explanations as simple as possible.
|
Error when using R zeroinfl on negative binomial count data model
|
CC BY-SA 4.0
| null |
2023-05-17T15:27:47.723
|
2023-05-17T15:27:47.723
| null | null |
388194
|
[
"negative-binomial-distribution",
"count-data",
"zero-inflation"
] |
616148
|
2
| null |
429831
|
0
| null |
This makes sense to me. We want our classifications to be correct, but we might be rooting for a particular outcome. Consider a medical test. We want that test to give the correct answer of healthy or sick, but we also want that correct answer to be that we are healthy.
| null |
CC BY-SA 4.0
| null |
2023-05-17T15:28:56.147
|
2023-05-17T15:28:56.147
| null | null |
247274
| null |
616150
|
2
| null |
616117
|
0
| null |
>
Suppose $\mathbf X = (X_1, X_2, \ldots, X_n)$ is a random vector with $n$ components is normally distributed with mean $\mu$ and standard deviation $\sigma$.
If you mean that $X_1,\ldots,X_n$ are jointly normal, then the definition of joint normality answers your question. And in that case the expected value is an $n$-component vector and the variance is an $n\times n$ matrix.
Joint normality means every linear combination $a_1X_1+\cdots+a_nX_n$ is normally distributed, where $a_1,\ldots,a_n$ are not random.
In particular $X_1$ is the special case where $a_1=1$ and $a_2=\cdots=a_n=0.$
| null |
CC BY-SA 4.0
| null |
2023-05-17T15:58:41.470
|
2023-05-17T15:58:41.470
| null | null |
5176
| null |
616152
|
1
| null | null |
2
|
51
|
It is commonly suggested that if you are having trouble getting your Frequentist lme4 mixed-effects model to converge, you can either (a) simplify the model and drop random effects, or (b) pivot to Bayesian mixed-effects models using brms ([https://m-clark.github.io/posts/2020-03-16-convergence/](https://m-clark.github.io/posts/2020-03-16-convergence/)).
I often accept it as true that Bayesian mixed-effects models can estimate complex models with maximal random effects (Barr et al., 2013) that Frequentist models cannot. However, I am unclear about the reasons why, specifically why Bayesian models can estimate more complex models than Frequentist ones. Is it primarily because the prior regularizes the random effects and "biases" them away from the boundaries, so that you don't get aberrations like correlations of 1/-1 that you sometimes see in Frequentist lme4?
|
Why are Bayesian mixed-effects models (e.g., brms) more able to estimate complex models than Frequentist mixed models (e.g., lme4)?
|
CC BY-SA 4.0
| null |
2023-05-17T16:58:12.553
|
2023-05-17T18:37:28.747
| null | null |
241198
|
[
"r",
"bayesian",
"mixed-model",
"multilevel-analysis",
"frequentist"
] |
616154
|
1
| null | null |
1
|
25
|
Recently I've been plotting a lot of data, and often I find myself using a moving average to smooth out values that oscillate or otherwise fluctuate a lot. However, the problem with this is that it removes the edges of the series, so in case of e.g. up-to-date time-series data I no longer have a view of the most recent development (say I do a 30-day smoothing, and now ~2 weeks are gone from the beginning and end of the series).
Here's some data I'm looking at, for example:
[](https://i.stack.imgur.com/a5cr6.png)
Here there's a very clear ~monthly oscillation that I'd like to smooth out, but preferably without losing the last weeks or month of data. So, as the question says, are there any good methods of doing this? And if so, what are they and how do they work?
|
Is there a statistically sound method of smoothing a data series without removing the edges?
|
CC BY-SA 4.0
| null |
2023-05-17T17:29:45.227
|
2023-05-17T17:29:45.227
| null | null |
387234
|
[
"time-series",
"smoothing",
"moving-average"
] |
616155
|
1
| null | null |
2
|
36
|
Here is the dataset for repeated measures:
```
library(lme4)
library(nlme)
d=read.delim("http://dnett.github.io/S510/RepeatedMeasures.txt")
d$Program = factor(d$Program)
d$Subj = factor(d$Subj)
d$Timef = factor(d$Time)
```
I have built an unstructured model using the gls function, and I am trying to reproduce the results with a random-slope (or random slope + intercept) model fitted by the R functions lmer or lme. Here is the code and output:
gls function:
```
> d.gls <- gls(Strength ~ Program * Timef, data = d,
+ correlation = corSymm(form =~ 1 | Subj),
+ weight = varIdent(form = ~ 1 | Timef))
> getVarCov(d.gls)
Marginal variance covariance matrix
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 8.7801 8.7571 8.9656 8.1984 8.6781 8.2203 8.4169
[2,] 8.7571 9.4730 9.4631 8.5686 9.2012 8.7307 8.6875
[3,] 8.9656 9.4631 10.7080 9.9266 10.6660 10.0700 10.2140
[4,] 8.1984 8.5686 9.9266 10.0770 10.6000 9.8987 10.0430
[5,] 8.6781 9.2012 10.6660 10.6000 12.0950 11.3440 11.3640
[6,] 8.2203 8.7307 10.0700 9.8987 11.3440 11.7560 11.6500
[7,] 8.4169 8.6875 10.2140 10.0430 11.3640 11.6500 12.7100
Standard Deviations: 2.9631 3.0778 3.2723 3.1745 3.4778 3.4287 3.5651
> logLik(d.gls)
'log Lik.' -617.4479 (df=49)
```
lmer function:
```
> d.lm <- lmer(Strength ~ Program * Timef +(0+Timef|Subj),d,control = lmerControl(check.nobs.vs.nRE = "ignore"))
Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.00271822 (tol = 0.002, component 1)
> #VarCorr(d.lm)
> as.matrix(Matrix::bdiag(VarCorr(d.lm)))
Timef2 Timef4 Timef6 Timef8 Timef10 Timef12 Timef14
Timef2 8.534639 8.756711 8.965454 8.198261 8.677920 8.220413 8.416961
Timef4 8.756711 9.227384 9.462754 8.568308 9.201094 8.730927 8.687653
Timef6 8.965454 9.462754 10.462654 9.926419 10.665922 10.070205 10.213872
Timef8 8.198261 8.568308 9.926419 9.832025 10.599504 9.898893 10.043531
Timef10 8.677920 9.201094 10.665922 10.599504 11.849795 11.344650 11.364012
Timef12 8.220413 8.730927 10.070205 9.898893 11.344650 11.511176 11.650515
Timef14 8.416961 8.687653 10.213872 10.043531 11.364012 11.650515 12.465186
> logLik(d.lm)
'log Lik.' -617.4479 (df=50)
```
The lme function is super slow and fails to converge. My questions are:
- Considering that the gls function does not produce a warning whereas the lmer function does, should we trust the gls result rather than the lmer result?
- Are the two models fitted by gls and lmer the same unstructured model? If not, why is their log likelihood the same (-617)? If yes, why are the variance-covariance estimates different?
- Is it possible to reproduce the gls unstructured model using the lme/lmer functions?
UPDATE
I was able to reproduce the results using `lme()` with a different optimizer, "optim":
```
> d.lme <- lme(Strength ~ Program * Timef, random = ~ -1+Timef | Subj, d,control = lmeControl(maxIter = 50, msMaxIter = 50, msVerbose = TRUE,opt='optim'))
initial value 1202.821021
iter 10 value 1202.809062
iter 20 value 1202.789065
final value 1202.784342
converged
> #VarCorr(d.lme)
> getVarCov(d.lme)
Random effects variance covariance matrix
Timef2 Timef4 Timef6 Timef8 Timef10 Timef12 Timef14
Timef2 8.5759 8.7549 8.9628 8.1949 8.6764 8.2180 8.4140
Timef4 8.7549 9.2691 9.4601 8.5649 9.1991 8.7284 8.6841
Timef6 8.9628 9.4601 10.5030 9.9229 10.6630 10.0680 10.2100
Timef8 8.1949 8.5649 9.9229 9.8725 10.5970 9.8965 10.0400
Timef10 8.6764 9.1991 10.6630 10.5970 11.8910 11.3420 11.3600
Timef12 8.2180 8.7284 10.0680 9.8965 11.3420 11.5510 11.6480
Timef14 8.4140 8.6841 10.2100 10.0400 11.3600 11.6480 12.5060
Standard Deviations: 2.9285 3.0445 3.2408 3.1421 3.4483 3.3987 3.5363
> logLik(d.lme)
'log Lik.' -617.4481 (df=50)
```
|
Unstructured model vs random slope model for repeated measures based on R functions lmer, lme and gls
|
CC BY-SA 4.0
| null |
2023-05-17T17:33:19.857
|
2023-05-19T15:40:48.003
|
2023-05-17T20:27:30.560
|
173339
|
173339
|
[
"mixed-model",
"lme4-nlme",
"repeated-measures",
"generalized-least-squares"
] |
616156
|
1
| null | null |
1
|
40
|
I plotted data from two groups using `geom_smooth` in `ggplot2`. The default method `loess` was used for the lines, and a confidence interval was also added. Am I right in assuming that if the confidence intervals do not overlap there is a significant difference? For example, in the following graph, is it correct that there is a significant difference between the two groups at ages 3 and 4, but not at age 5, 2nd grade, and 4th grade?
[](https://i.stack.imgur.com/3D9kn.png)
|
Interpretation of confidence interval using geom_smooth in ggplot2
|
CC BY-SA 4.0
| null |
2023-05-17T17:25:23.707
|
2023-05-17T17:40:49.700
| null | null |
229981
|
[
"r",
"ggplot2",
"regression",
"data-visualization",
"confidence-interval"
] |
616157
|
1
| null | null |
4
|
169
|
I have a large dataset including a response `bmk`, a continuous predictor `delay`, a `group` factor (n=2, 0 and 1), and a random effect `medu` (n=85).
I split the whole dataset (`dat`) into two subdatasets (`dat0` and `dat1`) based on the group factor.
Then, I run the `m0` and `m1` gam using `bs='fs'` separately, applied on `dat0` and `dat1`, respectively.
And then, I run the `m2` gam on the whole dataset, applying `bs='fs'` for each `by=group` factor.
The smooth of group=1 (red) is exactly the same between `m1` and `m2`, but why is the smooth of group=0 (blue) different between `m0` and `m2`?
Models:
```
m0 <- bam(bmk ~ s(delay, medu, bs="fs", m=2),
data = dat0, method = 'fREML',
family = inverse.gaussian(link="identity"),
discrete = TRUE)
m1 <- bam(bmk ~ s(delay, medu, bs="fs", m=2),
data = dat1, method = 'fREML',
family = inverse.gaussian(link="identity"),
discrete = TRUE)
m2 <- bam(bmk ~ group + s(delay, medu, bs="fs", by=group, m=2),
data = dat, method = 'fREML',
family = inverse.gaussian(link="identity"),
discrete = TRUE)
```
Plots:
```
par(mfrow = c(1,3), cex = 1.1)
plot_smooth(m0, view="delay", rm.ranef=FALSE, n.grid = 50,
xlim=c(0,90), ylim=c(11.5,14.5), main = "m0",
col=c("blue"))
plot_smooth(m1, view="delay", rm.ranef=FALSE, n.grid = 50,
xlim=c(0,90), ylim=c(11.5,14.5), main = "m1",
col=c("red"))
plot_smooth(m2, view="delay", plot_all="group", rm.ranef=FALSE,
n.grid = 50, col=c("blue","red"),
xlim=c(0,90), ylim=c(11.5,14.5), main = "m2")
```
[](https://i.stack.imgur.com/g3ZQe.jpg)
---
Thanks Gavin for these relevant explanations/hypotheses.
Some precision about the data:
`bmk`: the outcome, a biomarker known to vary with `delay`.
`group`: factor, two different conditions of blood sampling (0 and 1), for which I'd like to compare the bmk~delay smoothed relationships and quantify the difference over `delay`.
`medu`: factor, n=85 medical units from which `bmk` may have different levels of results, which could vary in a non-linear way over `delay` (that's why I chose `bs='fs'` random smooth for `medu`).
Note that the number (n=85) and type of `medu` is strictly identical for `group`=0 (`dat0`), `group`=1 (`dat1`), and `group`=0+1 (`dat`); however, the number of `bmk` results by `medu` is different between `group`=0 and `group`=1, their sums being the number of `bmk` from `group`=0+1 (see below the counts provided in `n_medu` data).
n_medu data:
```
n_medu <-
structure(list(medu = structure(1:85, .Label = c("21110", "21134",
"21149", "21175", "21187", "21194", "21195", "21266", "21294",
"21357", "21551", "21555", "21773", "24022", "24024", "24102",
"24105", "24106", "24107", "24108", "24109", "24112", "24114",
"24116", "24121", "24122", "24132", "24142", "24147", "24148",
"24153", "24161", "24162", "24530", "24803", "24812", "24816",
"24820", "24827", "24886", "24887", "31023", "31302", "31304",
"31321", "31736", "31800", "33026", "33027", "33028", "33031",
"33071", "33090", "33091", "33107", "33116", "33128", "33149",
"33180", "33223", "33251", "33261", "33341", "33510", "33516",
"33821", "33911", "34024", "34104", "34131", "34188", "36027",
"36028", "36029", "36103", "36108", "36109", "36110", "36119",
"36140", "36173", "36313", "36326", "36500", "36724"), class = "factor"),
nb_dat0 = c(8L, 5946L, 1970L, 40L, 1033L, 2422L, 45L, 557L,
60L, 50L, 396L, 45L, 71L, 684L, 39L, 15L, 1328L, 485L, 46L,
18L, 22L, 6350L, 29L, 20L, 4009L, 677L, 762L, 3737L, 37L,
321L, 1185L, 1295L, 1779L, 180L, 1572L, 18L, 24L, 15L, 89L,
64L, 25L, 120L, 308L, 525L, 103L, 55L, 5434L, 85L, 31L, 171L,
26L, 11L, 126L, 9L, 5768L, 891L, 1121L, 1220L, 239L, 30L,
1846L, 10L, 54L, 29L, 107L, 140L, 59L, 33L, 819L, 20L, 432L,
836L, 237L, 54L, 8786L, 623L, 513L, 8604L, 20L, 9670L, 40L,
300L, 110L, 60L, 10L), nb_dat1 = c(22L, 8009L, 3253L, 50L,
3726L, 2521L, 215L, 539L, 154L, 16L, 109L, 12L, 119L, 240L,
46L, 21L, 1138L, 653L, 56L, 26L, 27L, 7738L, 22L, 16L, 8806L,
140L, 280L, 4296L, 14L, 96L, 1471L, 3078L, 162L, 40L, 1943L,
29L, 59L, 18L, 17L, 27L, 8L, 60L, 133L, 123L, 76L, 40L, 3616L,
84L, 48L, 215L, 22L, 23L, 283L, 33L, 6369L, 818L, 1987L,
809L, 564L, 19L, 1167L, 30L, 52L, 7L, 97L, 166L, 31L, 21L,
691L, 14L, 80L, 885L, 315L, 29L, 6339L, 345L, 489L, 6922L,
10L, 10033L, 21L, 61L, 52L, 85L, 30L), nb_dat = c(30L, 13955L,
5223L, 90L, 4759L, 4943L, 260L, 1096L, 214L, 66L, 505L, 57L,
190L, 924L, 85L, 36L, 2466L, 1138L, 102L, 44L, 49L, 14088L,
51L, 36L, 12815L, 817L, 1042L, 8033L, 51L, 417L, 2656L, 4373L,
1941L, 220L, 3515L, 47L, 83L, 33L, 106L, 91L, 33L, 180L,
441L, 648L, 179L, 95L, 9050L, 169L, 79L, 386L, 48L, 34L,
409L, 42L, 12137L, 1709L, 3108L, 2029L, 803L, 49L, 3013L,
40L, 106L, 36L, 204L, 306L, 90L, 54L, 1510L, 34L, 512L, 1721L,
552L, 83L, 15125L, 968L, 1002L, 15526L, 30L, 19703L, 61L,
361L, 162L, 145L, 40L)), class = c("tbl_df", "tbl", "data.frame"
), row.names = c(NA, -85L))
```
Summaries of the 3 `plot_smooth` calls show the same `medu` reference level ('36140'), which is the one with the highest number of `bmk` results (n=19703, as shown in the sorted `n_medu$nb_dat` column).
Therefore, a priori, the number, type, and/or reference level of `medu` are not the cause of the problem.
```
Summary: # plot_smooth on dat0
* delay : numeric predictor; with 50 values ranging from 0.000000 to 90.000000.
* medu : factor; set to the value(s): 36140.
Summary: # plot_smooth on dat1
* delay : numeric predictor; with 50 values ranging from 0.000000 to 90.000000.
* medu : factor; set to the value(s): 36140.
Summary: # plot_smooth on dat
* group : factor; set to the value(s): 0, 1.
* delay : numeric predictor; with 50 values ranging from 0.000000 to 90.000000.
* medu : factor; set to the value(s): 36140.
```
Suspecting instead a distribution-related issue, I finally solved the problem with evenly spaced knots (intuitively but somewhat unexpectedly, and in any case without being able to demonstrate why).
My conclusion is that knots should be evenly spaced when a `bs='fs'` random smooth effect is wanted for each `by=` factor specifically, within a common smooth term.
I assume this nested model to be similar to the `model I` from [Pedersen et al](https://peerj.com/articles/6876/) (i.e., no global shared trend but group-level trends and different smoothness (individual penalties)), or at least closer to `model I` than `model GI`, isn't it?
Models with evenly spaced knots:
```
# group=0
dat0_knots <- list(delay = seq(min(dat0$delay), max(dat0$delay),
length = 10))
m3 <- bam(bmk ~
s(delay, medu, bs="fs", k=10, m=2),
data = dat0, method = 'fREML', family =
inverse.gaussian(link="identity"), control = ctrl,
discrete = TRUE, knots = dat0_knots)
m3_fit <- plot_smooth(m3, view="delay", rm.ranef=FALSE,
n.grid = 50, xlim=c(0,90),
ylim=c(11.5,14.5),
main = "m3\n(k=10 evenly spaced)",
col=c("blue"));summary()
# group=1
dat1_knots <- list(delay = seq(min(dat1$delay), max(dat1$delay),
length = 10))
m4 <- bam(bmk ~
s(delay, medu, bs="fs", k=10, m=2),
data = dat1, method = 'fREML',
family = inverse.gaussian(link="identity"),
control = ctrl, discrete = TRUE, knots = dat1_knots)
m4_fit <- plot_smooth(m4, view="delay", rm.ranef=FALSE,
n.grid = 50, xlim=c(0,90), ylim=c(11.5,14.5),
main = "m3\n(k=10 evenly spaced)",
col=c("red"));summary()
# group=0 & group=1
dat_knots <- list(delay = seq(min(dat$delay), max(dat$delay),
length = 10))
m5 <- bam(bmk ~
group +
s(delay, medu, bs="fs", k=10, by=group, m=2),
data = dat, method = 'fREML',
family = inverse.gaussian(link="identity"),
control = ctrl, discrete = TRUE, knots = dat_knots)
m5_fit <- plot_smooth(m5, view="delay", plot_all="group",
rm.ranef=FALSE, n.grid = 50,
col=c("blue","red"),
xlim=c(0,90), ylim=c(11.5,14.5),
main = "m5\n(k=10 evenly spaced)");summary()
# plots
par(mfrow = c(1,3), cex = 1.1, xpd=NA)
plot_smooth(m3, view="delay", rm.ranef=FALSE, n.grid = 50,
xlim=c(0,90), ylim=c(11.5,14.5),
main = "m3\n(k=10 evenly spaced)", col=c("blue"))
plot_smooth(m4, view="delay", rm.ranef=FALSE, n.grid = 50,
xlim=c(0,90), ylim=c(11.5,14.5),
main = "m4\n(k=10 evenly spaced)", col=c("red"))
plot_smooth(m5, view="delay", plot_all="group", rm.ranef=FALSE,
n.grid = 50, col=c("blue","red"),
xlim=c(0,90), ylim=c(11.5,14.5),
main = "m5\n(k=10 evenly spaced)")
abline(h=min(m3_fit[["fv"]][["fit"]]),
col=adjustcolor("blue", alpha=0.5), lty = "dashed")
abline(h=max(m3_fit[["fv"]][["fit"]]),
col=adjustcolor("blue", alpha=0.5), lty = "dashed")
abline(h=min(m4_fit[["fv"]][["fit"]]),
col=adjustcolor("red", alpha=0.5), lty = "dashed")
abline(h=max(m4_fit[["fv"]][["fit"]]),
col=adjustcolor("red", alpha=0.5), lty = "dashed")
```
Plots of smooths:
[](https://i.stack.imgur.com/KCaMx.jpg)
Below are the `plot_diff` (with and without `sim.ci`) I was looking for:
```
par(mfrow = c(1,2), cex = 1.1)
plot_diff <- plot_diff(m5, view = "delay",
comp=list(group=c('1', '0')), ylim=c(-0.5,2),
rm.ranef=FALSE, sim.ci = FALSE,
main = "m5\n(k=10 evenly spaced)\nsim.ci=FALSE")
plot_diff_sim.ci <- plot_diff(m5, view = "delay",
comp=list(group=c('1', '0')), ylim=c(-0.5,2),
rm.ranef=FALSE, sim.ci = TRUE,
main = "m5\n(k=10 evenly spaced)\nsim.ci=TRUE")
```
Plots of smooths difference:
[](https://i.stack.imgur.com/q8BmQ.jpg)
---
Indeed, my `m2` and `m5` models both have two separate sets of smooths, one per group, each with their own smoothing parameters.
However, this does not explain the initial issue, namely the discrepancy between the separate smooth of group=0 in `m0` and the smooth of the same group=0 in the `by=group` model `m2` fitted to the whole data.
Furthermore, it does not explain why this discrepancy disappears when the location of knots is fixed in the 3 cases (`m3`, `m4`, `m5`).
The two `by=group` models `m2` and `m5` are not similar (see below). I would be tempted to prefer `m5` (knots spaced evenly) since it corresponds exactly to the superposition of individual `m3` and `m4`.
However, its AIC is higher, and the `compareML` function gives `m2` preferentially (the lowest AIC).
So, which `by=group` model is the most reliable?
Models m2 and m5
```
# m2
m2 <- bam(bmk ~
group +
s(delay, medu, bs="fs", k=10, by=group, m=2),
data = dat, method = 'fREML', family = inverse.gaussian(link="identity"), control = ctrl, discrete = TRUE)
AIC(m2) # AIC = 979297.2 (deviance explained = 9.8%)
# m5: knots spaced evenly
dat_knots <- list(delay = seq(min(dat$delay), max(dat$delay), length = 10))
m5 <- bam(bmk ~
group +
s(delay, medu, bs="fs", k=10, by=group, m=2),
data = dat, method = 'fREML', family = inverse.gaussian(link="identity"), control = ctrl, discrete = TRUE, knots = dat_knots)
AIC(m5) # AIC = 979406.6 (deviance explained = 9.83%)
> compareML(m2,m5)
m2: bmk ~ group + s(delay, medu, bs = "fs", k = 10, by = group, m = 2)
m5: bmk ~ group + s(delay, medu, bs = "fs", k = 10, by = group, m = 2) # knots spaced evenly
Model m2 preferred: lower fREML score (60.692), and equal df (0.000).
-----
Model Score Edf Difference Df
1 m5 -182452.8 8
2 m2 -182513.5 8 60.692 0.000
AIC difference: -109.39, model m2 has lower AIC.
```
Plots
[](https://i.stack.imgur.com/ncTK5.jpg)
Test of Gavin's proposals
Using `knots=list(delay=sort(unique(dat0$delay)))` in each separate group slightly improved matters by making the `group=0` smooth slightly more nonlinear (`m6`) compared to `m0`. Note that `delay` ranges from 4 to 90 min, i.e., 86 unique integer values. The initial discrepancy tends to disappear even further (i.e., the nonlinearity of group=0 tends to increase) using `knots=list(delay=seq(4,90,2))`, but it tends to reappear with other sequences of locations (e.g., `knots=list(delay=seq(4,90,4))` or `knots=list(delay=seq(4,90,10))`). This is certainly due to the non-continuous distribution of `delay`, especially the isolated subgroup at delays below 8 and above 15 min (see `gratia::appraise(m6)` below).
[](https://i.stack.imgur.com/Pmy22.jpg)
```
gratia::appraise(m6)
```
[](https://i.stack.imgur.com/8MtJy.jpg)
The solution seems to be, indeed, to add `xt=list(bs="cr")` within the smooth term of the three models (`m9`, `m10`, `m11`) without specifying the number of `k=` nor `knots=`, which gives the best deviance explained (`summary(m11)`: 9.84%):
[](https://i.stack.imgur.com/9BoFi.jpg)
|
How 'by' factor works with 'fs' random smooth in gam?
|
CC BY-SA 4.0
| null |
2023-05-17T18:04:11.900
|
2023-05-24T06:33:31.230
|
2023-05-24T06:33:31.230
|
307344
|
307344
|
[
"r",
"mixed-model",
"generalized-additive-model",
"mgcv",
"smoothing"
] |
616158
|
1
| null | null |
1
|
21
|
I have "0=incorrect" and "1=correct" coded questionnaire, including 28 questions. There are missing data and I imputed the missing data with the Multiple Imputation method. After having multiple imputations from the mice library, I want to get the IRT parameter estimates as a pool estimate in R. I can get parameter estimates from each 5 imputed datasets separately but I want to have a pooled estimations in one step.
Here is my code:
```
# only questions from the column 4 to 31, 28 questions
data_for_imputation <- MYDATA[, 4:31]
data_for_imputation[, 1:28] <- lapply(data_for_imputation[, 1:28], factor)
# I have created 5 different datasets without missing data with this command
imputation_of_data <- mice(data_for_imputation, m = 5, maxit = 10, method = 'logreg', printFlag = FALSE)
# extracting 5 imputed datasets as data.frame
imputed_data_alldatasets <- complete(imputation_of_data, action = "long", include = FALSE)
imputed_data_alldatasets[,3:30] <- as.data.frame(lapply(imputed_data_alldatasets[,3:30], function(x) as.numeric(as.character(x))))
Models <- with(imputation_of_data, tam.mml(imputed_data_alldatasets[,3:30]))
```
Up to here everything seems to be working. With the with() function, I carried out IRT analyses on each of the datasets and I get a 'mira' object, which is what the pool() function in mice normally requires. This 'mira' object includes the analyses of all 5 imputed datasets as a list:
```
summary(pool(Models))
```
When I try to pool the estimates, it gives this error:
Error: No tidy method for objects of class tam.mml. In addition: Warning message: In get.dfcom(object, dfcom) : Infinite sample size assumed.
I have been trying all the different variations for days. The pool function unfortunately doesn't work and doesn't give me the expected pooled estimates. Maybe I am trying something that doesn't exist or doesn't work in R. Any other method or way to get IRT item parameters from imputed datasets would be very helpful.
Thanks a lot in advance.
|
IRT analysis with imputed datasets in the 'mice' library and pooling the item parameters
|
CC BY-SA 4.0
| null |
2023-05-17T18:05:56.997
|
2023-05-17T18:05:56.997
| null | null |
354227
|
[
"r",
"missing-data",
"multiple-imputation",
"item-response-theory",
"mice"
] |
616159
|
1
| null | null |
1
|
33
|
I am a Secondary Math teacher who is interested in creating effective learning in groups. My desire is to create a system/program that maximizes student Test performance, but I am not exactly sure where to start.
My idea thus far:
I have $n$ students, each of which has probability $p_i$ of passing their next Test. The probability is objectively and subjectively determined by the following metrics (subject to change):
- Homework average, $h$, $(0\leq h \leq 100)$
- Number of absences, $a$, $(0\leq a \leq 90)$, and $k$ is the number of class days thus far.
- Prior yearly average in math class, $m$, $(0\leq m \leq 100)$
- Current yearly Test average, $c$, $(0\leq c \leq 100)$
- Efficacy score (subjective, explained below), $e$, $(0\leq e \leq 100)$
Each metric would have its own weight (10%, 15%, 10%, 30%, and 35%, respectively) contributing to students' overall probability.
$$p_i=0.1\left(\frac{h}{100}\right) + 0.15\left(1-\frac{a}{k}\right) + 0.1\left(\frac{m}{100}\right) + 0.3\left(\frac{c}{100}\right) + 0.35\left(\frac{e}{100}\right)$$
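In code, the score would simply be something like this (a quick R sketch, easily translated to Python; the example inputs are made up):
```
# Sketch of the weighted passing score described above
pass_prob <- function(h, a, k, m, c, e) {
  0.10 * (h / 100) +
    0.15 * (1 - a / k) +
    0.10 * (m / 100) +
    0.30 * (c / 100) +
    0.35 * (e / 100)
}

# Made-up example: homework 80, 3 absences out of 60 class days,
# prior average 75, current test average 70, efficacy score 85
pass_prob(h = 80, a = 3, k = 60, m = 75, c = 70, e = 85)
```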
Initially, groups of size 2, 3, or 4 students would be selected at random. At the end of every few class meetings, I would give a subjective "efficacy score" to each student based on how well they worked in their assigned group (based on communication, quality/quantity of work completed, etc.)
What I think I'm looking for:
A system that attempts to maximize the average probability of a student passing the next assessment. It should make predictions on who will work well together based on updated input from the above metrics.
I have beginner-level coding experience (Python and Java) and a whole month off this summer to work on this. I would appreciate being pointed in the right direction on how to make a system like this come to fruition, or if you have any ideas or constructive comments, those would be appreciated too.
|
Optimizing Student Test Performance
|
CC BY-SA 4.0
| null |
2023-05-17T18:16:46.000
|
2023-05-17T21:01:08.727
|
2023-05-17T21:01:08.727
|
388200
|
388200
|
[
"machine-learning",
"mathematical-statistics",
"inference",
"optimization"
] |
616160
|
2
| null |
616152
|
2
| null |
When you “estimate” a Bayesian model, most often what you do is sample from the posterior distribution. The posterior is, by Bayes' theorem, basically a product of the priors and the likelihood. If you have very little data, or it does not provide much valuable information, the posterior will be dominated by the priors. In extreme cases, you would be sampling just from the priors. If you are willing to accept your guesses about the parameters as “estimates”, then you can estimate the parameters of any model with no effort.
I am not saying that to ridicule the Bayesian approach; I'm a great fan of it. What I'm trying to say is that plugging some model into the MCMC algorithm is the easiest part, and your job is far from done at this stage. The least you need to do afterwards is to check whether the results make sense, e.g. that they are not completely dominated by the priors (i.e., essentially “random”).
| null |
CC BY-SA 4.0
| null |
2023-05-17T18:24:50.273
|
2023-05-17T18:24:50.273
| null | null |
35989
| null |
616161
|
2
| null |
615471
|
1
| null |
Not sure what you mean by a biased sample exactly, but we often have the issue of selection bias, which can result from conditioning on a common effect of the intervention and outcome (or, more generally, on a variable conditioning on which would unblock a non-causal path between intervention and outcome). For example, let's assume that we run a randomized trial but many patients are missing outcome data. If we now limit our analysis to the subset of patients with non-missing outcome data, i.e., condition on patients having complete data for the outcome, we may end up with a non-causal estimate of the effect of the intervention on the outcome.
| null |
CC BY-SA 4.0
| null |
2023-05-17T18:35:29.827
|
2023-05-17T18:35:29.827
| null | null |
197219
| null |
616162
|
2
| null |
616152
|
1
| null |
Random effects are used to capture correlations in the data, namely, within the same level of the corresponding grouping factors. The parameters that quantify the strength of these correlations are the variance components (i.e., the variances and covariances between the random effects). It is often the case in real data that these correlations are rather small in magnitude. Hence, some of these variance components are practically zero, which is on the boundary of their corresponding parameter space. This is one of the main reasons why you experience convergence problems under the frequentist approach. Under the Bayesian approach, you typically specify a prior for the variance-covariance matrix of the random effects that provides some "information" not present in the data.
| null |
CC BY-SA 4.0
| null |
2023-05-17T18:37:28.747
|
2023-05-17T18:37:28.747
| null | null |
219012
| null |
616163
|
2
| null |
181773
|
0
| null |
I disagree with the statement that there are no ways to identify outliers for categorical variables. You can do it much as you would for numerical ones: out of 1000 data points, in 99.8% of cases the fuel price is under 3 USD/L and in 0.2% of cases the fuel price is 40 USD/L. Statistically you want to remove that 0.2% from your data so your model accuracy is higher. For numerical variables the rule is based on the interquartile range, with outliers lying outside (Q1 - 1.5 IQR; Q3 + 1.5 IQR).
In the same way we can consider outliers for categorical data; we just look at the lower end of the frequency percentage. You have 99.8% of cases that say Fuel price = "low" and 0.2% of cases where Fuel price = "high". You would remove the 0.2% from your training data. One way of doing it is to just set a threshold of up to 5%, applied to the non-missing data. The other would be to sort your variable by ascending frequency, number-code your categories from small to large, leaving, say, 10 points between each, and then use the same method as for the numerical outliers, only the range would be < Q1 - 1.5 IQR, since anything above Q3 is high frequency and you don't want it eliminated.
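A minimal sketch of the frequency-threshold version in R (the 5% cutoff and the variable names are arbitrary):
```
# x: a categorical variable (factor or character vector)
flag_rare_levels <- function(x, threshold = 0.05) {
  freq <- prop.table(table(x, useNA = "no"))
  rare <- names(freq)[freq < threshold]
  x %in% rare  # TRUE for rows that fall in a rare ("outlier") category
}

# Example usage: drop rows whose fuel_price category occurs in < 5% of the data
# train <- train[!flag_rare_levels(train$fuel_price), ]
```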
| null |
CC BY-SA 4.0
| null |
2023-05-17T18:42:41.183
|
2023-05-17T18:42:41.183
| null | null |
388204
| null |
616165
|
1
| null | null |
0
|
12
|
I am trying to find the mean of a sample from a time series of variable 'A', consisting of all 'A' values that occurred when the concurrent variable 'B' met some condition.
I know that the A measurements have an uncertainty of ± 1.0 kph, and the B measurements have an uncertainty of ± 0.5 cm. I need to propagate this uncertainty across the two variables.
My idea is to take my time-series array and adjust each A value by ± a random amount ≤ 1.0, and each B value by ± a random amount ≤ 0.5.
I then select all rows in the "randomized" array where B meets the required condition, and calculate the mean of the associated A values.
I repeat this process for 1000 or so iterations, and calculate the overall mean and standard deviation of the 1000 "means".
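For concreteness, a rough sketch in R of the procedure described above (`A`, `B`, and `threshold` are placeholders for the two measured series and the condition on B):
```
set.seed(42)
n_iter <- 1000
means <- replicate(n_iter, {
  A_pert <- A + runif(length(A), -1.0, 1.0)  # perturb A within its +/- 1.0 kph uncertainty
  B_pert <- B + runif(length(B), -0.5, 0.5)  # perturb B within its +/- 0.5 cm uncertainty
  mean(A_pert[B_pert > threshold])           # mean of A wherever the condition on B holds
})
mean(means)  # overall mean
sd(means)    # spread of the 1000 means, i.e. the propagated uncertainty
```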
Is this a valid approach? Would this be considered a kind of Monte-Carlo simulation? Or bootstrapping?
Is there a better approach I should use to estimate the uncertainty?
|
How to estimate Mean with Uncertainty of a selected sample from variable A, that occured when variable B met some condition?
|
CC BY-SA 4.0
| null |
2023-05-17T18:54:22.013
|
2023-05-17T19:29:32.333
|
2023-05-17T19:29:32.333
|
353706
|
353706
|
[
"bootstrap",
"monte-carlo",
"measurement-error",
"uncertainty",
"error-propagation"
] |
616167
|
1
| null | null |
1
|
15
|
I am trying to quantify the cumulative difference between two time series using dynamic time warping. For example, here I calculated and plotted the DTW distance between Speaker rating (pink line) and the Listener rating (blue line):
[](https://i.stack.imgur.com/G3WDa.png)
I am struggling with how DTW can account for polarity (or positive/negative direction) when comparing the two time series. I would like to know how much the Speaker rating was overconfident (i.e., bigger) compared to the Listener rating throughout. As you can see in the plot, the blue line is mostly above the pink one, except in the beginning, so I would expect to see a negative overconfidence value here.
I appreciate any help or input!
|
Dynamic Time Warping with polarity?
|
CC BY-SA 4.0
| null |
2023-05-17T19:34:39.293
|
2023-05-17T19:34:39.293
| null | null |
388207
|
[
"time-series",
"rating",
"dynamic-time-warping"
] |
616168
|
1
| null | null |
0
|
21
|
Suppose I have a collection of time series for a number of subjects, say $y_{ij}$ are measurements for subject $i$ at time $t_{ij}$. The times are not uniformly sampled and each subject may have a different number of measurements.
I'm interested in ways to estimate the time-varying probability density on $y$, call it $p(y,t)$. Are there are relatively straightforward ways to estimate this from my scattered data? Would consider both parametric and nonparametric methods (time dependent KDE...?). Can I get a density from a mixed effects model on the data (NLME) or is that heading down the wrong path?
Images below give an example. The scatter plot has a LOWESS line to show the average over time. The second plot shows the individual time series for each subject ($N\approx 1200$)
[](https://i.stack.imgur.com/TOdb2.png)
[](https://i.stack.imgur.com/6KTOn.png)
|
Density estimation for time series data
|
CC BY-SA 4.0
| null |
2023-05-17T19:45:42.730
|
2023-05-17T19:45:42.730
| null | null |
28114
|
[
"time-series",
"density-estimation"
] |
616169
|
1
| null | null |
0
|
20
|
I would like to build a GLMER with a logit link for a multilevel logistic regression. I plan to use a random slope and a random intercept for this.
If I use age and sex of a patient as independent variables for level 1 and region (nominal) for level 2 with an outcome of malaria contraction, what would the formula look like?
I am looking for a formula of the kind $\operatorname{logit}(\text{odds}_{ij}) = \beta_{00} + (\beta_{10} + u_{1j})x_{ij} + u_{0j}$.
Thank you!
|
GLMER Formula Help
|
CC BY-SA 4.0
| null |
2023-05-17T20:06:25.510
|
2023-05-17T21:20:01.930
|
2023-05-17T21:20:01.930
|
387441
|
387441
|
[
"logistic",
"mixed-model",
"multiple-regression",
"multilevel-analysis"
] |
616171
|
2
| null |
329102
|
0
| null |
Along the lines of Jeppe's answer, and assuming my maths is correct, I disagree with the answers saying you can compare the performance of a model on two datasets using the F-score.
Imagine these two cases:
```
Case 1:
PredN PredP
N 20000 5000
P 200 300
Case 2:
PredN PredP
N 50000 5000
P 200 300
```
These two confusion matrices have the same F-score:
```
Precision = TP/PP = 300/(300+5000) = 0.05660377358
Recall = TP/P = 300/(200+300) = 0.6
F-score = 2*Precision*Recall/(Precision+Recall) = 0.1034482759
```
However, in case 1, 5k out of 25k negative cases are wrongly classified as positive (a false-positive rate of 20%), while in case 2 it is 5k out of 55k (roughly 9%). Misclassifying roughly 9% of the negatives is not the same as misclassifying 20% of them.
In fact, imagine we had only 5000 negative cases and every one of them were predicted positive (TN = 0): we'd still get that same F-score.
This of course depends on your goal and on the costs of misclassifying each class. If the false positive rate is not a problem in your case, then this may not be an issue.
I like using the harmonic mean of recall and specificity instead, although it does not seem as common.
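A quick sketch of that alternative applied to the two confusion matrices above:
```
# Harmonic mean of recall (sensitivity) and specificity for the two cases above
hm_recall_spec <- function(tp, fn, tn, fp) {
  recall      <- tp / (tp + fn)
  specificity <- tn / (tn + fp)
  2 * recall * specificity / (recall + specificity)
}
hm_recall_spec(tp = 300, fn = 200, tn = 20000, fp = 5000)  # Case 1
hm_recall_spec(tp = 300, fn = 200, tn = 50000, fp = 5000)  # Case 2: higher, unlike the tied F-scores
```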
| null |
CC BY-SA 4.0
| null |
2023-05-17T20:36:15.920
|
2023-05-17T20:36:15.920
| null | null |
386354
| null |
616173
|
1
| null | null |
12
|
1160
|
I am trying to become more familiar with time series analysis. I am reading through [Dangers and uses of cross-correlation in analyzing time series in perception, performance, movement, and neuroscience: The importance of constructing transfer function autoregressive models](https://link.springer.com/article/10.3758/s13428-015-0611-2) by Dean and Dunsmuir, and they mention time series that are "individually autocorrelated". What does this term mean? I am familiar with computing the autocorrelation function of a time series as a function of time lag, but I am not sure what it means for a time series to itself be autocorrelated. Is there some criterion you check with the autocorrelation function to say whether or not the whole time series is autocorrelated?
|
What does it mean for a time series to be autocorrelated?
|
CC BY-SA 4.0
| null |
2023-05-17T20:53:33.420
|
2023-05-19T08:51:44.953
|
2023-05-17T21:03:08.347
|
388210
|
388210
|
[
"time-series",
"autocorrelation",
"cross-correlation"
] |
616174
|
1
| null | null |
1
|
23
|
I am working with SGHMC (Stochastic Gradient Hamiltonian Monte Carlo) models.
I found an implementation of the algorithm in PyTorch [here](https://github.com/ruqizhang/csgmcmc). Here is the part of the code that represents the momentum variable update (not a complete update):
```
def update_params(lr,epoch):
    for p in net.parameters():
        if not hasattr(p,'buf'):
            p.buf = torch.zeros(p.size()).cuda(device_id)
        d_p = p.grad.data
        d_p.add_(weight_decay, p.data)
        ...
```
Here, d_p are the gradients of the loss with respect to the NN weights and p are the actual parameters. I understand that the potential energy is a sum of the log likelihood and the log prior. To get the gradient of the potential energy we need to take the gradient of this sum. The gradient of the loss (CrossEntropy in my case) is basically the gradient of the log likelihood, but I don't see where the second term, the one coming from the gradient of the log prior, is.
We do add_(weight_decay, p.data), but this term seems to me like an ad-hoc weight decay term (if there were no weight decay this term would be zero). But regardless of the weight decay there should be a term proportional to p that represents the log prior (in the case of an isotropic Gaussian prior). Could you please tell me where I am mistaken?
Thank you.
|
Prior term in SGHMC implementation
|
CC BY-SA 4.0
| null |
2023-05-17T21:33:07.033
|
2023-05-17T21:33:07.033
| null | null |
383886
|
[
"bayesian-network",
"variational-bayes",
"hamiltonian-monte-carlo"
] |
616175
|
1
| null | null |
1
|
9
|
Let's set up a toy problem. Say I make a vat of soup every week. Different kinds of soup, different volume each week (e.g. 50 gallons, 46 gallons, 10 gallons, etc.) and sometimes there are flies in the soup, which is understandably gross but stay with me. The question is: are there certain soups that are more likely to collect flies than others? Could we claim with 95% confidence that some soups are different than others and which ones?
Unfortunately no other information is given other than total volume of the soup and # of flies found. Typical defect rate approaches don't quite work here since we're dealing with defects per volume. Poisson regression isn't a great fit for this sample data set either. How should I approach this question? Some toy data is below.
gallons <- c(9,14,10,18,36,44,51,47,48,78,35,53,95,133,108,22,94,97)
flies <- c(0,0,0,0,3,0,2,0,0,10,2,5,9,13,9,0,0,0)
|
Comparing Defects per Volume
|
CC BY-SA 4.0
| null |
2023-05-17T21:34:37.150
|
2023-05-17T21:34:37.150
| null | null |
388211
|
[
"count-data",
"spatial",
"zero-inflation"
] |
616176
|
2
| null |
616173
|
13
| null |
Take the time series without the first observation, $X_2, \dots, X_T$, and the time series without the last observation, $X_1, \dots, X_{T-1}$. You have two vectors of length $T-1$. Calculate their correlation. The result is the lag 1 autocorrelation.
Similarly, you can calculate the lag 2 autocorrelation, which is the correlation between $X_3, \dots, X_T$ and $X_1, \dots, X_{T-2}$, and more generally any lag $k$ autocorrelation.
A series is "autocorrelated" if any one of these is "large enough". Of course, all the sample autocorrelations will typically be nonzero, so one usually checks if any one is significantly larger or smaller than zero.
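In R, for example, a sketch of checking this:
```
# Sample autocorrelations (with rough significance bands) and a portmanteau test
set.seed(1)
x <- arima.sim(model = list(ar = 0.7), n = 200)  # an autocorrelated series
acf(x, lag.max = 20)                              # lag-k sample autocorrelations
Box.test(x, lag = 10, type = "Ljung-Box")         # joint test of lags 1..10
```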
More information can be found [here](https://otexts.com/fpp3/acf.html) or [here](https://dfep.netlify.app/sec-arima.html#sec-autoreggresion).
| null |
CC BY-SA 4.0
| null |
2023-05-17T21:34:51.583
|
2023-05-17T21:34:51.583
| null | null |
1352
| null |
616177
|
2
| null |
294926
|
2
| null |
>
Bias, based on my understanding, represents the error from using a simple classifier (e.g., linear) to capture a complex non-linear decision boundary. So I expected the OLS estimator to have high bias and low variance.
The Gauss-Markov (G-M) setup only tells you that the OLS estimator is unbiased when the true data-generating process is linear in the observables. So OLS is not guaranteed to be unbiased if you already presume that the true data-generating process is "a complex non-linear decision boundary".
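A small simulation sketch (with a made-up nonlinear data-generating process) illustrates this: the OLS fitted mean systematically misses the true conditional mean, and the gap does not vanish with more data.
```
set.seed(42)
x0 <- 2.5   # point at which we check the fitted mean
fitted_at_x0 <- replicate(2000, {
  x <- runif(200, 0, 3)
  y <- sin(2 * x) + rnorm(200, sd = 0.3)   # nonlinear truth
  predict(lm(y ~ x), newdata = data.frame(x = x0))
})
mean(fitted_at_x0)   # average fitted value at x0 across repeated samples
sin(2 * x0)          # true E[y | x = x0]; the systematic gap remains
```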
| null |
CC BY-SA 4.0
| null |
2023-05-17T21:35:42.940
|
2023-05-17T21:35:42.940
| null | null |
331421
| null |
616178
|
1
| null | null |
1
|
31
|
I have the following plot, which appears to have two pretty distinct trends.
[](https://i.stack.imgur.com/hDbNR.jpg)
The plots shown [here](https://stats.stackexchange.com/questions/33078/data-has-two-trends-how-to-extract-independent-trendlines) and [here](https://stats.stackexchange.com/questions/412315/outlier-detection-for-bivariate-bimodal-distributions) depict similar situations. Is there a name for data that have two or more distinct trends, particularly when the distinct trends cannot be attributed to different factors?
|
Is there a name for data that, when plotted, seem to have two (or more) distinct trends?
|
CC BY-SA 4.0
| null |
2023-05-17T21:49:49.793
|
2023-05-17T22:23:50.813
|
2023-05-17T22:23:50.813
|
315722
|
315722
|
[
"regression",
"linear",
"trend",
"scatterplot"
] |
616179
|
2
| null |
616178
|
1
| null |
First off, a hexagonal plot would help you to visualize this plot density much better.
To me, there is only one trend, the seemingly flat line showing the expected response as a function of the predictor. There is also an incredibly powerful issue of heteroscedasticity where points within the central portion of the predictor's distribution have a MUCH larger variance.
Another way to view it is that the bivariate density of these two variables has a mode that's distributed compactly along the orthogonal (vertical and horizontal) axes of the plane. The "mode" of a distribution need not be a finite set of points, but could be said to comprise an area. But to my first point, I can't be totally sure of it when the points are swarming like squid in a black cloud of ink!
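For instance, a sketch (assuming the two plotted variables are columns `x` and `y` of a data frame `df`; `geom_hex()` needs the hexbin package installed):
```
library(ggplot2)
ggplot(df, aes(x = x, y = y)) +
  geom_hex(bins = 60) +                  # hexagonal binning reveals the density
  geom_smooth(method = "lm", se = TRUE)  # overlay the (roughly flat) trend
```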
| null |
CC BY-SA 4.0
| null |
2023-05-17T21:58:34.997
|
2023-05-17T21:58:34.997
| null | null |
8013
| null |
616180
|
2
| null |
615892
|
1
| null |
McCullagh and Nelder show in Section 13.3.1 of "Generalized Linear Models" (Second Edition; Chapman & Hall/CRC, 1989) that an exponential survival model is equivalent to a Poisson model with an offset, like you show. If the Poisson model is correct, there should be no inherent bias other than what's already the case for [fitting a finite data set by maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation#Second-order_efficiency_after_correction_for_bias).
Several of the references you cite note additional bias problems (often important in practice) that arise in count models with things like over-dispersion. The risk of informative censoring can't be forgotten in the context of survival analysis. And I haven't completely thought through the implications of finite-sample bias if there are different numbers of events among treatment groups.
Nevertheless, there is a very close connection between a Cox model and a Poisson model with piecewise-constant baseline hazards. See these [course notes](https://grodri.github.io/glms/notes/c7s4) by Rodríguez. That should help to explain your observations.
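As a quick illustration of the exponential/Poisson equivalence (a sketch using the `lung` data from the survival package, where `status == 2` marks an event), the Poisson coefficients should equal the negated `survreg` coefficients, reflecting the AFT versus log-rate parameterizations:
```
library(survival)
# Exponential AFT model
aft <- survreg(Surv(time, status) ~ age + sex, data = lung, dist = "exponential")
# Poisson model for the event indicator with a log(time) offset
pois <- glm(as.numeric(status == 2) ~ age + sex + offset(log(time)),
            family = poisson, data = lung)
cbind(AFT = coef(aft), Poisson = coef(pois))  # columns should differ only in sign
```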
| null |
CC BY-SA 4.0
| null |
2023-05-17T22:32:10.813
|
2023-05-17T22:32:10.813
| null | null |
28500
| null |
616181
|
1
| null | null |
1
|
15
|
I am using the Blavaan R package to fit bayesian path analysis models. The output includes an R-squared value. It has come to my attention that there are problems with using R-squared for bayesian analysis i.e., it is a point estimate and can be greater than 1. I would like to know if Blavaan calculates it in a way that accounts for this.
|
How is R-squared calculated in the "Blavaan" R package and is it appropriate to use/report in bayesian analysis?
|
CC BY-SA 4.0
| null |
2023-05-17T22:32:32.020
|
2023-05-17T22:32:32.020
| null | null |
388214
|
[
"bayesian",
"r-squared"
] |
616182
|
1
| null | null |
0
|
6
|
I have data with one I(0) and seven I(1) variables. Using the Johansen procedure (applied to the I(1) variables), I found that the maximum cointegration rank is 1.
Now I am trying to understand how I can add the I(0) variable to the VECM.
|
VECM model with one I(0) and seven I(1) variables. How to implement it?
|
CC BY-SA 4.0
| null |
2023-05-17T22:37:08.187
|
2023-05-17T22:37:08.187
| null | null |
361080
|
[
"time-series",
"vector-error-correction-model"
] |
616183
|
1
| null | null |
1
|
25
|
I have fitted a regression tree on my data and would like to demonstrate that it is a good model. Are there any standard goodness-of-fit tests or indices for a regression tree?
I understand that I can calculate the confusion matrix or Gini index etc. to assess the performance of each node, but is there any way to assess the fit of the whole tree?
|
Goodness of fit test/index for a regression tree
|
CC BY-SA 4.0
| null |
2023-05-17T22:42:45.233
|
2023-05-30T23:40:22.057
|
2023-05-30T10:48:32.773
|
247274
|
388212
|
[
"machine-learning",
"cart",
"goodness-of-fit",
"model-evaluation"
] |
616184
|
2
| null |
58230
|
0
| null |
The first step in computing the SD is to compute the difference between each value and the mean of those values. You don't know the true mean of the population; all you know is the mean of your sample. Except for the rare cases where the sample mean happens to equal the population mean, the values will tend to be closer to the sample mean than to the true population mean. So the sum of the square of those differences will be smaller (and can't be larger) than what it would have been had you used the true population mean in the first step.
To make up for the underestimation of the sum-of-squares, when calculating the average squared difference (the variance), you need to divide by a value smaller than n. Why is the correct denominator n-1? If you knew the sample mean, and all but one of the values, you could calculate what that last value must be. Statisticians say there are n-1 *degrees of freedom*.
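A quick simulation sketch illustrates the point (here the true variance is 1 and n = 5):
```
set.seed(1)
n <- 5
est <- replicate(1e5, {
  x <- rnorm(n)                          # true mean 0, true variance 1
  c(byN        = sum((x - mean(x))^2) / n,
    byNminus1  = sum((x - mean(x))^2) / (n - 1))
})
rowMeans(est)   # dividing by n gives ~0.8 (biased); dividing by n - 1 gives ~1.0
```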
| null |
CC BY-SA 4.0
| null |
2023-05-17T22:45:37.703
|
2023-05-19T21:54:51.420
|
2023-05-19T21:54:51.420
|
25
|
25
| null |
616185
|
1
| null | null |
0
|
6
|
At t = 0 my two variables x and y do not correlate. Due to an event, at t = 1 x and y suddenly correlate....
Is there a way to measure the disorder of the two variables at t = 0, so that I can say that due to the specific event the data become more "ordered"? Something like skewness, entropy, ...?
|
How to measure time-dependent inhomogeneity?
|
CC BY-SA 4.0
| null |
2023-05-17T22:55:12.187
|
2023-05-17T22:56:15.577
|
2023-05-17T22:56:15.577
|
322065
|
322065
|
[
"time-series",
"correlation"
] |
616186
|
1
| null | null |
1
|
15
|
I first ran a regression with my dependent variable and covariates:
reg dv treatment covar1 covar2 covar4 covar5 covar6
The n=738
Then I used psmatch2 to generate new treatment/control groups according to propensity score:
psmatch2 treatment covar1 covar2 covar4 covar5 covar6, out(dv) logit
I then ran a regression with my new treatment group:
reg dv treatment covar1 covar2 covar4 covar5 covar6 [fweight=_weight]
The n=998 for this regression.
Why did the n increase after psmatch2?
|
How does the psmatch2 command increase sample size in Stata?
|
CC BY-SA 4.0
| null |
2023-05-17T23:12:04.430
|
2023-05-17T23:12:04.430
| null | null |
379777
|
[
"stata",
"propensity-scores"
] |
616187
|
1
| null | null |
2
|
44
|
I am conducting a study to estimate the effect of Medicaid expansion on the uninsured rate using a classic Difference-in-Differences (DID) design with two-way fixed effects (twfe) model. My mathematical model is as follows:
$$
UNINS_{ist} = \alpha_s + \delta_t + \beta EXPANSION_{ist} + \epsilon_{ist}
$$
In this model:
$UNINS_{ist}$ is a binary variable indicating whether an individual in the survey is uninsured (1) or insured (0) in state $s$ and year $t$. $\alpha_s$ represents state fixed effects, capturing time-invariant differences across states. $\delta_t$ represents time fixed effects, capturing common time trends across all states. $\beta$ is the parameter of interest, representing the causal effect of Medicaid expansion on the uninsured rate. $EXPANSION_{ist}$ is a binary treatment variable that equals 1 for states that adopted Medicaid expansion and 0 for states that did not. $\epsilon_{ist}$ is the error term accounting for unobserved factors and random variation.
I have data from the American Community Survey (ACS) for the years 2011 to 2019, which consists of repeated cross-sectional data. Here are the top 15 observations of my dataset:
[](https://i.stack.imgur.com/XvQiq.png)
To estimate this model, I am using the `feols` command from the `fixest` package in R. However, when running the `feols` command with the following code:
```
reg1 = feols(UNINS ~ expansion | ST + YEAR , data = Data) # state and year fixed effect
```
I encounter the following error message:
```
Error: in feols(UNINS ~ expansion | StateN, data = Data):
The only variable 'expansion' is collinear with the fixed effects. In such circumstances, the estimation
is void.
```
I've done the same thing using `stata` running the code
```
reghdfe UNINS expansion , absorb(ST YEAR) cluster(ST)
```
Even though I got the regression results, I also got the following note:
```
note: expansion is probably collinear with the fixed effects (all partialled-out values are close to z
> ero; tol = 1.0e-09)
(MWFE estimator converged in 4 iterations)
note: expansion omitted because of collinearity
```
I am seeking guidance on how to address this issue and estimate the TWFE model.
I suspect this could be due to the fact that my data are repeated cross-sections, and that aggregating by calculating the mean of each variable at the state-year level might help. But this might be problematic for categorical variables like race or education level.
|
Classic TWFE model: treatment is collinear with the fixed effect
|
CC BY-SA 4.0
| null |
2023-05-17T23:40:26.283
|
2023-05-19T16:02:09.067
|
2023-05-19T03:03:58.787
|
246835
|
388217
|
[
"r",
"stata",
"multicollinearity",
"fixed-effects-model",
"difference-in-difference"
] |
616188
|
1
| null | null |
1
|
8
|
I estimated a hurdle negative binomial regression model with zero-truncated negative binomial model as the count component in R using the pscl package. I wish to present elasticities for the count component and I am having a hard time figuring out the relation for the elasticity estimates from the regression coefficients because of the over-dispersion parameter involved. Can anyone assist me on this please?
$$E[y \mid y>0] = \frac{e^{\beta x+\delta}}{1-\left(\frac{\theta}{\theta+e^{\beta x+\delta}}\right)^{\theta}}$$
If I proceed to the elasticity estimates, I get negative values, which do not make sense. I read some papers which just present elasticities for the zero-truncated part using the same approach as for a Poisson or NB model, but I am not sure if they are correct. Can anyone please point a way forward?
|
Elasticity estimates for zero-truncated negative binomial part in the hurdle model
|
CC BY-SA 4.0
| null |
2023-05-18T00:00:44.197
|
2023-05-18T00:00:44.197
| null | null |
388219
|
[
"expected-value",
"regression-coefficients",
"zero-inflation",
"derivative",
"elasticity"
] |
616189
|
1
| null | null |
1
|
11
|
I am working with raw data from a multiplex Luminex assay ( not by commercial kit); I have found that some duplicates have a high %CV. I would like to know if you can advise me on dealing with this data (I can't remeasure these samples).
I describe some of the solutions I have found:
1.- Choose a cutoff CV and eliminate the duplicates with a higher CV. Some commercial kits recommend a CV of less than 10% to less than 30%.
2.- Keep samples with a high CV as long as their result is not an outlier, i.e., check the consistency of the high-CV result against the rest of the data obtained for other biological samples (assess biological consistency).
I can't entirely agree with the second solution. I think the first approach is more appropriate, but I am new to analyzing this type of data; I am used to working with more accurate results because they do not have the associated biological factor.
I welcome any suggestions.
|
How to deal with duplicates with high coefficients of variation
|
CC BY-SA 4.0
| null |
2023-05-18T01:02:02.587
|
2023-05-18T01:02:02.587
| null | null |
383047
|
[
"coefficient-of-variation"
] |
616190
|
1
| null | null |
1
|
25
|
I am a physician looking for help with research. I am looking to learn the best way to analyze my data. I thought it would be simple enough, but after speaking to a colleague and doing my own research, it seems a little more complicated.
Basically, the project entails taking about 100 patients and looking at their resected tumors. Each tumor will be either positive or negative for gene X, so there are two groups: gene X positive and gene X negative. I will then look at the medical imaging that was done of each tumor, say a CT scan, and run each image through high-throughput software that gives me variables about the image beyond what I could see with the naked eye. Say this software gives me numeric values for 150 variables for each tumor. Ideally, I would just look at each variable one by one and see if there is a difference between tumors positive for gene X and those that are negative. For example, for each image I will have a numeric value for variable 1; I can then take the average across all variable 1s for the gene X positive subset, compare it to the variable 1 average for the gene X negative group, and see if there is any statistical difference, and so on for all 150 variables. The big idea is to find out which of these medical image variables can help me predict which tumors will be gene X positive or negative.
I think the issue with doing a t-test for each variable is that looking at each variable individually will yield false positives, since I am looking at so many variables. To correct for the inflated alpha, I looked into the Benjamini-Hochberg method, but in speaking with my colleague he suggested I analyze the data with something like a regression model. I do not have much experience with that and I am unsure if it is really right for my data. Hence, I am looking for some help. Would it be so bad to do a t-test for each of the 150 variables and see which are significantly different or not? What else can I do?
|
Comparing two groups with hundreds of variables?
|
CC BY-SA 4.0
| null |
2023-05-18T01:19:45.933
|
2023-05-18T01:19:45.933
| null | null |
388222
|
[
"regression",
"t-test"
] |
616191
|
2
| null |
616173
|
9
| null |
#### "Auto-correlation" is correlation "with the self" at different points in time
The prefix "auto" means self or same (from the Greek "autós"), so "auto-correlation" refers to correlation of a variable with itself when observed at different times. If you have a set of values $X_1,\dots,X_T$ that are measurements of the same essential quantity at different times, then the correlation between these would be referred to as "auto-correlation". If there is non-zero correlation between values at different times then we would say that they are "auto-correlated".
I don't really agree with the other answer here. If there is any non-zero correlation at all for a time-series of values then they are auto-correlated, though of course the true correlation value may be unknown and may need to be inferred from observation (and so in that sense a low observed sample correlation may suggest that the true auto-correlation is zero). Moreover, while we often model auto-correlation as a function of time-lags between variables, that is just one way to do it, so that is a non-essential aspect of the concept.
| null |
CC BY-SA 4.0
| null |
2023-05-18T02:05:27.003
|
2023-05-18T02:05:27.003
| null | null |
173082
| null |
616193
|
1
| null | null |
2
|
30
|
I have 2 new conversion webpages to test. Leadership recommends 80% of traffic to the existing conversion webpage and 10% each to the 2 new conversion webpages (non-negotiable).
How do I calculate the sample size? I understand the fundamentals of sample size calculation - alpha, power, baseline conversion rate and minimum detectable effect - but have only done it for a 50/50 split and 1 treatment.
I am unsure how to calculate for 2 treatment and uneven distribution. Can anyone guide me or refer me to a calculator or formula?
|
Calculating sample size when you have 2 treatments and uneven distribution
|
CC BY-SA 4.0
| null |
2023-05-18T02:28:56.380
|
2023-05-20T11:56:21.863
| null | null |
388223
|
[
"experiment-design",
"sample-size",
"statistical-power",
"ab-test"
] |
616194
|
1
| null | null |
1
|
28
|
$f_X(x)$ gives the value of the probability density function of the random variable $X$ at the point $x$. I am not sure how to wrap my head around $f_{Y|X}(y|x)$; is $Y|X$ still a random variable (sorry for the very basic question, I have not been able to find an answer on Google)?
This confusion comes from me trying to figure out $E_{Y|X}(h) = \int hf_{Y|X}(y|x)dy$. I don't understand why there is no $dx$ in the integral.
|
How to interpret $f_{Y|X}(y|x)$ in the integral of conditional expectation?
|
CC BY-SA 4.0
| null |
2023-05-18T03:07:20.957
|
2023-05-18T03:07:20.957
| null | null |
388225
|
[
"probability",
"density-function",
"conditional-expectation"
] |
616195
|
2
| null |
187410
|
1
| null |
Certainly! Let's consider a simple example to illustrate how the Laplace distribution can be used to add privacy-preserving noise in differential privacy.
Suppose we have a dataset of individuals' ages, and we want to calculate the average age while ensuring privacy. In differential privacy, we need to add noise to the computation of the average age to protect individual privacy.
- Calculating the average age without privacy: Let's assume we have a dataset of ages: [25, 30, 35, 40, 45]. The average age without privacy would simply be the sum of all ages divided by the number of individuals:
Average age = (25 + 30 + 35 + 40 + 45) / 5 = 35.
- Adding Laplace noise for privacy preservation: In differential privacy, we want to add noise to the computation to protect individual ages while still providing useful statistical information. We can use the Laplace distribution to generate the noise to be added.
Let's say we choose a privacy parameter, epsilon (ε), to quantify the desired level of privacy. A smaller ε value corresponds to stronger privacy guarantees. The privacy budget determines the amount of noise to be added.
For example, if we set ε = 0.5, we can calculate the amount of noise to be added using the sensitivity of the query. Sensitivity refers to the maximum change in the query's output caused by the addition or removal of an individual's data.
In this case, the sensitivity of calculating the average age is 1, as the maximum change in the average age can occur when an individual's age is added or removed.
The noise can be sampled from the Laplace distribution with a scale parameter (b) determined by the sensitivity and privacy budget:
Noise = Laplace(scale = sensitivity / epsilon) = Laplace(scale = 1 / 0.5) = Laplace(scale = 2).
Let's say we draw a random sample from the Laplace distribution, and it gives us a noise value of -0.7.
- Adding noise to the average age: To preserve privacy, we add the noise to the computed average age:
Noisy average age = Average age + Noise = 35 + (-0.7) = 34.3.
The noisy average age is the result that is released or used for analysis. It includes the privacy-preserving noise, making it difficult to determine the exact age of any individual in the dataset.
By adding Laplace noise, differential privacy provides a level of privacy protection while still allowing for useful statistical calculations. The amount of noise added depends on the privacy parameter (ε) and the sensitivity of the query, ensuring privacy guarantees in the analysis of sensitive data.
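A small R sketch of the same calculation (base R has no Laplace sampler, so one is written here via the inverse CDF; the numbers follow the example above):
```
# Sample from a Laplace distribution via the inverse CDF
rlaplace <- function(n, location = 0, scale = 1) {
  u <- runif(n, -0.5, 0.5)
  location - scale * sign(u) * log(1 - 2 * abs(u))
}

set.seed(1)
ages <- c(25, 30, 35, 40, 45)
epsilon <- 0.5
sensitivity <- 1                                      # as assumed for the mean query above
noise <- rlaplace(1, scale = sensitivity / epsilon)   # Laplace(scale = 2)
mean(ages) + noise                                    # the released, noisy average age
```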
| null |
CC BY-SA 4.0
| null |
2023-05-18T03:58:23.540
|
2023-05-18T03:58:23.540
| null | null |
388227
| null |
616196
|
2
| null |
615968
|
2
| null |
With all the given conditions, $\{\xi_n\}$ is not uniformly integrable.
First, because $\{\xi_n\}$ is a martingale and $\sup_n E[\xi_n] = 1 < \infty$, by the theorem you cited (i.e., Theorem 35.5 in Probability and Measure by Patrick Billingsley), $\xi_n$ converges to some r.v. $\eta$ with probability $1$. And because $\xi_n$ are positive, $\eta$ must be non-negative with probability $1$. This implies that $\{\sqrt{\xi_n}\}$ converges to $\sqrt{\eta}$ with probability $1$.
On the other hand, $\sup_n E[(\sqrt{\xi_n})^2] = \sup_n E[\xi_n] = 1 < \infty$ implies that $\{\sqrt{\xi_n}\}$ is uniformly integrable, which, together with the fact that $\sqrt{\xi_n}$ converges to $\sqrt{\eta}$ with probability $1$, implies that (see the cited theorem $^\dagger$ after the answer):
\begin{align}
E[\sqrt{\eta}] = \lim_{n \to \infty} E[\sqrt{\xi_n}] =
\prod_{k = 1}^\infty E[\sqrt{Y_k}] = 0.
\end{align}
This means $\sqrt{\eta}$, hence $\eta$, is $0$ with probability $1$.
But this then leads to the conclusion that $\{\xi_n\}$ is not uniformly integrable. Otherwise, we would conclude by the Theorem at the end of this answer again that $\lim_{n \to \infty} E[\xi_n] = E[\eta] = 0$, which contradicts $E[\xi_n] = 1$ for all $n$.
---
$\dagger$ This is Theorem 25.12 in Probability and Measure by Patrick Billingsley.
>
If $X_n \overset{d}{\to} X$ and the $X_n$ are uniformly integrable, then $X$ is integrable and
\begin{align}
E[X_n] \to E[X].
\end{align}
| null |
CC BY-SA 4.0
| null |
2023-05-18T04:04:08.873
|
2023-05-18T04:04:08.873
| null | null |
20519
| null |
616198
|
1
| null | null |
1
|
33
|
I need to limit the output values to [0, 1], but using a tanh or sigmoid activation after the last convolutional or fully connected layer results in a returned gradient of 0 (the activation saturates in its flat regions). I also do not need the values to sum to 1 (so softmax is not what I want). Is there any good way to do this? Thanks. I've used LayerNorm to limit the output size, but it still doesn't work and is very hard to train.
|
How to limit the neural network regression output between 0 and 1 without sigmoid or tanh?
|
CC BY-SA 4.0
| null |
2023-05-18T07:04:36.387
|
2023-05-18T07:04:36.387
| null | null |
388234
|
[
"neural-networks"
] |
616199
|
1
| null | null |
1
|
10
|
I am working on training a Seq2Seq Variational Autoencoder (VAE) model using healthcare data. In my dataset, I have features that exhibit varying levels of variance across patients. For instance, blood glucose values are highly variant for each patient, while HbA1c levels (which reflect an average of blood glucose over around three months) are less variant.
My issue is that the VAE model is not accurately reconstructing the highly variant blood glucose feature, whereas it performs well on reconstructing the less variant HbA1c feature. To address this, I want to assign a higher weight to the blood glucose reconstruction loss, which is calculated using gamma Negative Log-Likelihood (NLL), compared to the HbA1c reconstruction loss.
Is it appropriate to weight the VAE's NLL reconstruction loss for each feature independently?
|
Weighting feature-specific reconstruction loss in Seq2Seq VAE
|
CC BY-SA 4.0
| null |
2023-05-18T07:46:13.920
|
2023-05-18T07:46:13.920
| null | null |
388235
|
[
"machine-learning",
"neural-networks",
"autoencoders",
"seq2seq"
] |
616200
|
1
| null | null |
0
|
13
|
I have two models to fit a set of categorical features.
One uses an encoding followed by a Kernel Density Estimation (with cross-validated bandwidth search) to make a continuous distribution. I am aware that this might be a stupid idea, but bear with it.
One models the distribution directly using a probability table (with a floor on the probability of unseen combinations of features).
I would like to compare the accuracy of fit for these models using cross-validation. Is there any direct way to compare the likelihoods from the continuous probability density model with the likelihoods from the discrete probability model? If not, is there a logical prior I could use to allow this comparison to occur?
|
Comparing models with transformation from discrete to continuous
|
CC BY-SA 4.0
| null |
2023-05-18T07:57:44.363
|
2023-05-18T07:57:44.363
| null | null |
287256
|
[
"probability",
"cross-validation",
"kernel-smoothing"
] |
616201
|
1
| null | null |
1
|
31
|
I have a block in a CNN that splits the input channel-wise in half, and one half goes through a regular 3x3 2d convolutional layer, and the other goes through a dilated 3x3 2d convolutional layer. After each is a batch normalization, then an activation function. They are both then concatenated again channel-wise before going through another 2d convolutional layer. Surely, however, when concatenating them there are two different distributions in the data, since the batch normalization was performed on each half separately?
Should batch normalization be performed after they're concatenated instead?
|
Batch normalization before or after channel-wise concatenation?
|
CC BY-SA 4.0
| null |
2023-05-18T08:18:26.937
|
2023-05-18T08:18:26.937
| null | null |
387360
|
[
"neural-networks",
"conv-neural-network",
"normalization"
] |
616202
|
1
| null | null |
5
|
81
|
I am trying to inspect a circular time series (a long time series of angular measures in 0-360°). The main aim would be to identify abrupt changes in the time series, but as a start I would like to plot it and visually inspect it. What is the best way? I am aware of the Fisher & Lee 1994 paper, but I found it difficult to implement in `R`.
REFERENCE
Fisher, N. I., and A. Lee. "Time series analysis of circular data." Journal of the Royal Statistical Society: Series B (Methodological) 56.2 (1994): 327-339.
|
How to plot angular time series?
|
CC BY-SA 4.0
| null |
2023-05-18T08:27:10.690
|
2023-05-18T15:15:49.603
|
2023-05-18T15:15:49.603
|
919
|
186851
|
[
"r",
"time-series",
"data-visualization",
"circular-statistics"
] |
616204
|
1
| null | null |
1
|
9
|
I have fitted a regression tree on my training data and would like to demonstrate that it is a good model. For now, I am doing that by calculating the RMSE between the actual values of the dependent variable and the corresponding model outputs. The dependent variable is a continuous variable.
Are there any standard Goodness of fit tests that would be better in this scenario? Are there any other metrics that should also be looked into to assess the performance of my model?
Thanks in advance!
|
Are there any standard Goodness of fit tests for regression trees, where the output is a continuous variable?
|
CC BY-SA 4.0
| null |
2023-05-18T09:37:56.083
|
2023-05-18T09:37:56.083
| null | null |
388212
|
[
"cart",
"goodness-of-fit"
] |
616205
|
1
| null | null |
2
|
7
|
We have a gold standard test and a new test, and want to compare them prospectively, in a diagnostic accuracy study. I want to work out the sample size required. I have done some reading, and there is a calculator online:
[https://turkjemergmed.com/calculator](https://turkjemergmed.com/calculator)
However, this asks you to enter the sensitivity and specificity of the new test. I am confused, because the outcome of the diagnostic accuracy study will presumably be sensitivity and specificity as well, presented in a 2x2 contingency table. So the whole exercise seems circular.
Secondly, how do I go about finding the confidence interval of sens/spec? I found another equation online:
[https://www.ncbi.nlm.nih.gov/books/NBK305683/](https://www.ncbi.nlm.nih.gov/books/NBK305683/)
But it feels like I am just apply random calculations to my numbers, and I am not sure if it is appropriate.
Many thanks for your advice.
Johnny
|
Sample size - Diagnostic Accuracy Study
|
CC BY-SA 4.0
| null |
2023-05-18T09:38:43.110
|
2023-05-18T09:38:43.110
| null | null |
388244
|
[
"sample-size"
] |
616206
|
1
| null | null |
0
|
20
|
I am doing an analysis of a population of animals that died during the first year of their life. I am comparing them a) to all animals that died, ever, after however long a life; and b) to animals that did not die and were not born before the earliest birth date in the first-year-death cohort. The data are all categorical and I am trying to tease out differences between the first-year-death cohort and the two populations (a and b) using things like survival analysis, logistic regression and hypergeometric testing.
Question: what kind of statistics is this? I am comparing the "sample" (the first-year-death cohort) to two populations, but I am not looking for information on the population, so it's not inferential. But given that I am using regression, Kaplan-Meier and hypergeometric testing, it is not "Descriptive Statistics", is it? I need to find a qualifier for the analysis before presenting it and I would like to be precise. Statisticians, could someone help me to be accurate, please? Thank you very much.
|
Correct qualifier (terminology) for this kind of statistics
|
CC BY-SA 4.0
| null |
2023-05-18T09:42:48.213
|
2023-05-18T09:42:48.213
| null | null |
271121
|
[
"terminology"
] |
616207
|
2
| null |
525481
|
1
| null |
I am going to challenge your question a little bit. When we do things like correlation or regression, it is usually because we have a sample of data and we want to use it to learn something about the whole population outside of the sample.
But in your case, the population outside of the sample would be "people who did not take the survey". Does it make sense to ask how people who didn't take the survey found out about the survey?
If you are interested in the relationship between age and how they found out about the survey only for the people who took the survey, you have all the data already so you don't need to do correlation or regression. You can calculate the exact breakdown between groups for each age, or show the exact distribution of ages for each group - whichever makes more sense for your research.
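For example, a sketch with hypothetical column names:
```
# Exact within-age breakdown of how respondents found out about the survey
tab <- with(survey_data, table(age_group, found_out_via))
prop.table(tab, margin = 1)   # row proportions: per age group, share per channel
```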
| null |
CC BY-SA 4.0
| null |
2023-05-18T09:48:04.570
|
2023-05-18T09:48:04.570
| null | null |
388246
| null |
616208
|
1
| null | null |
0
|
18
|
I am trying to understand and implement the Sieve bootstrap (maybe also known as a parametric or model-based bootstrap) for time series, where the bootstrap samples are drawn from centered fitted residuals. The actual centering step is not clear to me: what does it actually do, and why is it there?
The method is introduced in: Peter Bühlmann. "Sieve bootstrap for time series." Bernoulli 3 (2) 123 - 148, June 1997. [https://projecteuclid.org/journals/bernoulli/volume-3/issue-2/Sieve-bootstrap-for-time-series/bj/1177526726.full](https://projecteuclid.org/journals/bernoulli/volume-3/issue-2/Sieve-bootstrap-for-time-series/bj/1177526726.full)
The part that describe the centering is between eq. 2.2 and 2.3:
[](https://i.stack.imgur.com/mcyWz.png)
where p is the order of the AR model and n is the length of the time series.
|
How and why center residuals when doing time-series bootstrap?
|
CC BY-SA 4.0
| null |
2023-05-18T09:49:52.677
|
2023-05-18T09:49:52.677
| null | null |
53084
|
[
"time-series",
"forecasting",
"bootstrap",
"residuals",
"autoregressive"
] |
616209
|
2
| null |
616157
|
3
| null |
You missed some important output from `plot_smooth()` and some critical understanding about what the function is doing. The reason I was confused absent knowledge about what `plot_smooth()` was doing is that the models you specified have up to 85 smooths (depending on how the levels of `medu` fall into the `group`s) and yet the plots showed a single smooth per `group`. This doesn't make sense — why just one smooth? What does that smooth represent for each group? — unless you understand what `plot_smooth()` is doing.
Looking at the printed output that `plot_smooth()` sends to the console explains what must be going on.
Here is a reproducible example using the `simdat` data set from itsadug:
```
library("mgcv")
library("itsadug")
library("dplyr")
data(simdat)
# Model with random effect and interactions:
m3 <- bam(Y ~ Group + s(Time, Subject, by = Group, bs='fs', m=2, k=5),
data = simdat)
df1 <- simdat |>
filter(Group == "Adults")
m1 <- bam(Y ~ s(Time, Subject, bs='fs', m=2, k=5),
data = df1)
df2 <- simdat |>
filter(Group == "Children")
m2 <- bam(Y ~ s(Time, Subject, bs='fs', m=2, k=5),
data = df2)
op <- par(mfrow = c(1,3), cex = 1.1)
plot_smooth(m1, view="Time", rm.ranef=FALSE, n.grid = 50, main = "Adults",
col=c("blue"))
plot_smooth(m2, view="Time", rm.ranef=FALSE, n.grid = 50, main = "Children",
col=c("red"))
plot_smooth(m3, view="Time", plot_all="Group", rm.ranef=FALSE, n.grid = 50,
col=c("blue","red"), main = "Both")
par(op)
```
This produces:
[](https://i.stack.imgur.com/3FVwA.png)
where the effects shown are clearly different, more so than in your actual example.
What's going on?
Look at the output from `plot_smooth()` that is printed to the console:
```
Summary:
* Time : numeric predictor; with 50 values ranging from 0.000000 to 2000.000000.
* Subject : factor; set to the value(s): a01. # <--- here!
Summary:
* Time : numeric predictor; with 50 values ranging from 0.000000 to 2000.000000.
* Subject : factor; set to the value(s): c01. # <--- here!
Summary:
* Group : factor; set to the value(s): Adults, Children.
* Time : numeric predictor; with 50 values ranging from 0.000000 to 2000.000000.
* Subject : factor; set to the value(s): a01. # <--- here!
```
Note what it states about the subjects chosen for the three plots. In `m1` the subject `a01` was chosen, while in `m2` the subject `c01` was chosen. For `m3` we are back to `a01`. This explains why one of the curves matches across the three plots but the other doesn't. The reason for the different subjects is your subsetting of the data - subject `a01` in this example is only in one of the Child or Adult groups, for obvious reasons. Hence, when you are plotting these smooths for the individual models there is no way they could show the same subject across all three models/plots.
In your case it is reasonable to assume that the same medical unit is present in both groups, but it is also reasonable to assume that the default/reference level of the `medu` factor in the data used to fit the subset models is different. `gam()` and `bam()` drop empty levels on factors, which, combined with the previous point, will mean that different `medu` levels from the full data set become the reference level in the subsets of data, and hence you see the behaviour that you ask about; the `medu` level in the two data subset plots is different and hence one of the smooths in the combined model/plot will not match with one of the two plots for the data subset models.
The solution could be to specify the `medu` level you want to show in all plots, assuming that one level is present in all groups?
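For example, something along these lines (a sketch with placeholder object, variable, and level names, and assuming `plot_smooth()`'s `cond` argument):
```
# Pin every plot to the same medu level (one that exists in all subsets);
# "m_sub1", "m_sub2", "m_all" and the view variable "x" are placeholders.
plot_smooth(m_sub1, view = "x", cond = list(medu = "unit01"), rm.ranef = FALSE)
plot_smooth(m_sub2, view = "x", cond = list(medu = "unit01"), rm.ranef = FALSE)
plot_smooth(m_all, view = "x", plot_all = "group",
            cond = list(medu = "unit01"), rm.ranef = FALSE)
```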
But I would ask you why you want this plot? Why do you want to focus on a specific subject? What are you trying to show for these smooths? If it is to show some kind of average effect in the two groups, conditioning on a single subject would be a bit weird. If you explain a bit more about why you are doing what you are doing, I can suggest an alternative way to proceed.
| null |
CC BY-SA 4.0
| null |
2023-05-18T10:00:06.877
|
2023-05-18T10:00:06.877
| null | null |
1390
| null |
616210
|
2
| null |
616103
|
1
| null |
>
the difference between train loss and val loss and why val_loss is as low as it is from the start and does it make any sense.
Yes, it can make sense. Neural networks, if they have a large complexity, can always fit the training data if given enough time to train. That doesn't mean that some meaningful pattern has been learned that works on other data as well.
Your graph is also not different from the typical graphs with curves of training loss and validation loss. (See several here: [How to know if model is overfitting or underfitting?](https://stats.stackexchange.com/questions/355774/how-to-know-if-model-is-overfitting-or-underfitting))
Typically you have:
- training loss continuously decreasing with increasing model complexity
- validation loss initially decreasing (due to less underfitting), subsequently increasing (due to more overfitting)
Your curves look the same, but the overfitting starts right away (or just after the first step, which does reduce the validation loss slightly), and you see little of the phase where training reduces the bias and underfitting. Your model starts to overfit almost immediately.
A similar early overfitting is in this graph where the validation error starts increasing after the cubic polynomial (3rd point along the x-axis in the graph with mean squared error):
[](https://i.stack.imgur.com/vIvHn.png)
From question [Overfitting, but why is the training deviance dropping?](https://stats.stackexchange.com/questions/552798)
If you believe that your features should allow a simple model that can do the fitting, then you probably need to debug and adapt your model.
The question* that was previously added as a duplicate is different in its body text: it is about the case where even the training loss doesn't decrease. However, the answers given to it are very general and can help you with troubleshooting your network.
*[What should I do when my neural network doesn't learn?](https://stats.stackexchange.com/questions/352036/)
| null |
CC BY-SA 4.0
| null |
2023-05-18T10:16:24.123
|
2023-05-18T10:35:22.830
|
2023-05-18T10:35:22.830
|
164061
|
164061
| null |
616212
|
1
|
616327
| null |
0
|
16
|
I am calculating the weights for a survey, but I don't know what to do with my selection probabilities.
Let's say I want to ask people from some country what their opinion is about several subjects.
I have created a panel that I asked people to take part in. I got SRS-sampled addresses that have the same age and residence distribution as the country itself.
The first time I asked over 40,000 people and 3,000 wanted to join the panel.
The second time I asked around 20,000 people, but asked the 18-34 year olds twice as often, because they don't respond that well; 4,000 people in total wanted to join.
The third time I asked around 8,000 people and asked the 18-34 year olds five times more. Around 2,500 wanted to join in total.
Now if I ask my panel something, they can refuse to answer. My final sample size is the people from my panel who responded.
Are the chances of getting in the panel relevant for calculating the weights? If so, how do I calculate them?
I feel like the selection probability is the chance of being asked to answer a survey, but that is 100% for everyone in the panel.
I have information about the country itself and their people's age, gender, education and residence. This is also known for everyone in the panel.
There is some correlation between responding to a survey and living in specific places in said country.
Can somebody help me with this mess?
|
R: Selection probabilities with panel users (complex survey data)
|
CC BY-SA 4.0
| null |
2023-05-18T10:46:08.060
|
2023-05-19T13:33:53.623
|
2023-05-18T11:37:38.343
|
388249
|
388249
|
[
"r",
"survey",
"weights"
] |
616213
|
1
| null | null |
0
|
32
|
I basically have the same question as [this](https://stats.stackexchange.com/questions/137118/testing-a-regression-coefficient-against-1-rather-than-0) and [this](https://stats.stackexchange.com/questions/111559/test-model-coefficient-regression-slope-against-some-value) thread.
In short: I want to test in R if the slope of a linear model equals 1 (β = 1). I know how to do it for β = 0, but the answers from the aforementioned threads don't help me; rather, they confuse me more. The suggested solutions were to use an offset or to specify a different formula in R's lm() function.
Perhaps a short explanation for non-statisticians or mathematically well-versed people would help both me and other people trying to find an answer to this on the internet.
|
R: Linear regression testing for β = 1
|
CC BY-SA 4.0
| null |
2023-05-18T10:43:31.767
|
2023-05-18T13:17:51.683
| null | null | null |
[
"r",
"regression"
] |
616214
|
1
| null | null |
1
|
28
|
I am generating from a standard AR(1) process. Let's assume a decay time lag $\tau=100$ and unit time steps of $\Delta t=1$, so $\phi=\exp(-1 / (\tau/\Delta t))\approx 0.99$. The predicted stationary variance is $\sigma^2 = \frac{1}{1 - \phi^2}$.
Then the AR is defined as $y_{i+1} = \phi \times y_i + \epsilon$, where $\epsilon \sim \mathrm{Normal}(0,1)$.
Now I want to investigate the change after K steps: $$y_{i+K}-y_i$$
I find empirically that the distribution is normal with variance:
$$\sigma(K)^2 = (\tau/2 + 1) \times (1 - \exp(-K/\tau)^2)$$
Can this be derived analytically?
I suspect that K steps incur $\tau$ standard deviations of variance $\sigma^2$, but why half of that, and why the offset?
More generally for Gaussian processes: How can this variance be derived from the integral over the power spectral density?
This question is different from autocorrelation/autocovariance, which looks at $y_{i+K} \times y_i$.
---
Here is my python code:
```
import numpy as np
def generate_AR(N = 1000, sigma = 1.0, c=0, dt=1., tau=100):
    W = np.random.normal(0, sigma, size=N)
    # correlation: set to < 1, otherwise not stationary!
    phi = np.exp(-1. / (tau / dt))
    y = np.empty(N)
    y[0] = W[0]
    for i in range(1, N):
        y[i] = c + phi * y[i-1] + W[i]
    return y

tau = 100
for i, dN in enumerate([2, 4, 10, 40, 100]):
    dy = np.empty(10000)
    for i in range(10000):
        y = generate_AR(N=dN, tau=tau, dt=1)
        dy[i] = y[-1] - y[0]
    print('tau:', tau, 'steps:', (dN-1), 'variance:', np.var(dy))
```
|
Variance of change after $K$ steps in AR(1) model
|
CC BY-SA 4.0
| null |
2023-05-18T10:58:31.980
|
2023-05-19T01:56:34.997
|
2023-05-19T01:56:34.997
|
20519
|
9496
|
[
"time-series",
"gaussian-process",
"autoregressive"
] |
616215
|
2
| null |
616213
|
1
| null |
By default, to check the significance of a coefficient B, we test whether it is statistically different from 0, i.e. we use the hypotheses
H0: B = 0
Ha: B ≠ 0
and compute tstat = (B - 0)/std_error.
If we want to test B = 1 instead, the hypotheses become
H0: B = 1
Ha: B ≠ 1
and the t-statistic is adjusted to
tstat = (B - 1)/std_error
Then, approximately (for large samples):
if abs(tstat) > 2.58 then p <= 0.01
if abs(tstat) > 1.96 then p <= 0.05
if abs(tstat) > 1.66 then p <= 0.1
To do this in R, you can either do it manually, by extracting the coefficient and its standard error from the estimated model and plugging them into the formula, or you can use the `car` package.
```
library(car)
#fit the model
regMod <- lm(y~x)
# Test if coefficient is equal to 1
car::linearHypothesis(regMod, "x = 1")
```
If the p-value is below your significance level, you reject the null hypothesis that the coefficient on x equals 1.
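The manual version might look like this (a sketch, assuming the slope of interest is named `x` in the fitted model):
```
est   <- coef(summary(regMod))["x", "Estimate"]
se    <- coef(summary(regMod))["x", "Std. Error"]
tstat <- (est - 1) / se                           # test H0: beta = 1
2 * pt(-abs(tstat), df = df.residual(regMod))     # two-sided p-value
```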
| null |
CC BY-SA 4.0
| null |
2023-05-18T11:10:11.833
|
2023-05-18T13:17:51.683
|
2023-05-18T13:17:51.683
|
22047
|
334759
| null |
616216
|
1
| null | null |
0
|
80
|
[](https://i.stack.imgur.com/BVExI.png)
[](https://i.stack.imgur.com/hEq1p.png)
I have the graph above, and I used the tab_model function from the sjPlot library to report the results of my model. My model is a random intercept and random slope model. I am quite new to this level of statistics. My question is about the odds ratio. The odds ratio of 1.40 (p < 0.001***, significant) means, as I understand it, that for a 1 unit increase in the centred instance variable we expect the odds of success (being 1) to increase by a factor of 1.40. That is my interpretation as far as I understood it, but since it is a random slope model, each individual must have an odds ratio of its own, because the slope of each of them is different. It can be hard to explain here without data, but maybe if someone can provide another similar example or a link to a source which explains something similar, I would appreciate it.
|
Odds ratio interpretation in R with a random slope model
|
CC BY-SA 4.0
| null |
2023-05-18T10:34:52.977
|
2023-05-18T15:35:59.130
|
2023-05-18T15:35:59.130
|
11887
|
388274
|
[
"r",
"logistic",
"odds-ratio"
] |
616217
|
1
| null | null |
0
|
17
|
I am working on a simulation study. I need to simulate data with a specific correlation matrix and I would like to introduce some noise (a confounder). So, for example, I simulate V1 and V2 with r = .6 and I would like to introduce a reduction of 50% in that correlation. How can I do that? Here is my script for the simulation, but I do not know how to modify V1 and V2 to reduce their correlation to .3, for example.
Thank you so much in advance.
```
require("MASS")
require ("psych")
set.seed(1)
cov.matone <- matrix(c(1, .6,
.6,1 ), nrow = 2)
dfone <- data.frame(MASS::mvrnorm(n = 10000,
mu = c(0, 0),
Sigma = cov.matone))
cor(dfone)
```
|
Introduce noise in a simulation
|
CC BY-SA 4.0
| null |
2023-05-18T08:20:04.377
|
2023-05-18T11:15:32.457
| null | null |
383524
|
[
"r",
"simulation"
] |
616219
|
2
| null |
616214
|
0
| null |
By repeated substitution, we obtain
$$
y_{i+K} = \phi y_{i+K-1} + \epsilon_{i+K} = \phi^2 y_{i+K-2} + \phi \epsilon_{i+K-1} + \epsilon_{i+K} = \ldots
= \phi^K y_i + \sum_{j=0}^{K-1} \phi^j \epsilon_{i+K-j}
$$
We thus have
$$
\mathrm{Var}(y_{i+K}-y_i) = \mathrm{Var}\left((\phi^K -1) y_i + \sum_{j=0}^{K-1} \phi^j \epsilon_{i+K-j}\right)
$$
We know that $\mathrm{Var}(y_i)=\sigma^2$, $\mathrm{Var}(\epsilon_{i+K-j})=1$ for all $j$, and also that $y_i$ is uncorrelated with $\epsilon_{i+1},\ldots,\epsilon_{i+K}$. Putting everything together gives
$$
\mathrm{Var}(y_{i+K}-y_i) = (1-\phi^K)^2 \sigma^2 + \sum_{j=0}^{K-1} \phi^{2j} = (1-\phi^K)^2 \sigma^2 + \frac{1-\phi^{2K}}{1-\phi^2}
$$
| null |
CC BY-SA 4.0
| null |
2023-05-18T11:33:08.533
|
2023-05-18T11:33:08.533
| null | null |
238285
| null |
616220
|
1
| null | null |
1
|
55
|
I'm working on a project on time series multi-step ahead forecasting in Python.
I have a time series, and I apply an ARMA model to it (statsmodels SARIMAX library). I know that ARMA models, like many other models, when forecasting tomorrow's value output an estimate of the conditional expected value of the process for tomorrow, i.e. an estimate of the mean of the underlying process for tomorrow given past values.
I also know that tomorrow's value derives from past values and tomorrow's shock (error), which comes from a Gaussian distribution with mean 0, like all other errors (errors are i.i.d.):
$\epsilon_t \sim \mathcal{N} (0, \sigma^2)$
When fitting the ARMA model on the training set, I'm estimating the ARMA parameters of the true model via maximization of the likelihood. And along the estimated parameters I obtain their confidence interval.
Since my parameters have a confidence interval, I expect the forecasted mean for tomorrow to have its own confidence interval: I'm estimating the expected value of the process with uncertain parameters, so I don't know if the estimated mean is the exact mean I'll see tomorrow, indeed I can't be sure of this, hence a confidence interval.
I don't know the formula for calculating this confidence interval. But let's move on.
Now I want to calculate the prediction interval, which is not the same thing as the confidence interval for the mean: the prediction interval combines the CI with the error variance, although I don't know the formula.
I expected statsmodels ARIMA function to give me the prediction interval, but the interval given in the summary seems to be the confidence interval for the mean.
However, as this [github issue](https://github.com/statsmodels/statsmodels/issues/8230) reports:
>
In SARIMAX, we have not implemented a procedure to incorporate the uncertainty associated with estimating the parameters of the model. [...] Ultimately, the intervals produced by either SARIMAX (python) or Arima (R) don't fit either of the definitions above. In some sense they are more like the "Prediction interval" term, because they do take into account the uncertainty arising from the error term (unlike the "Confidence interval" as described above). But it is not an exact match because they don't take into account parameter estimation uncertainty.
So not only is the statsmodels interval incomplete, it's also misleading (since it seems to be the CI for the mean).
At this point then, I would like to calculate the true prediction intervals by myself.
Looking online and in some books (like [https://otexts.com/fpp3/prediction-intervals.html](https://otexts.com/fpp3/prediction-intervals.html)) I see that the prediction interval is calculated with the estimated standard deviation (standard error) of the forecast distribution.
Every step of the forecast (in a multi-step ahead forecast setting) has its own estimated standard deviation. Fine. The book cited above says that the estimated standard deviation for tomorrow is calculated as the RMSE of the past residuals adjusted by a coefficient.
But as we said, shouldn't this formula take into account the confidence interval for the mean?
Moreover, since the book is only taking errors into account, why calculate the RMSE of the past residuals if the errors are i.i.d. and their variance is known (by the Gaussian assumption)?
$Var(e_{t+1})=Var(\mathsf{X}_{t+1}-\mathsf{\hat{X}}_{t+1})=Var(\epsilon_{t+1})=\sigma^2$
Why doesn't the book use the variance of the error distribution?
The book also says:
>
For multi-step forecasts, a more complicated method of calculation is required. These calculations assume that the residuals are uncorrelated.
And a little after that it explains how to create prediction intervals with bootstrapped past residuals. So is there no closed formula for multi-step-ahead prediction intervals? And also, why is the CI of the mean still not taken into account in the bootstrapping method?
|
How to calculate Prediction Intervals for time series forecasting with CI
|
CC BY-SA 4.0
| null |
2023-05-18T11:47:47.953
|
2023-05-18T12:54:15.170
|
2023-05-18T12:54:15.170
|
367382
|
367382
|
[
"time-series",
"python",
"arima",
"prediction-interval",
"statsmodels"
] |
616222
|
2
| null |
616214
|
2
| null |
Just using the following two facts:
- $\operatorname{Var}(y_{i + K} - y_i) = \operatorname{Var}(y_{i + K}) + \operatorname{Var}(y_i) - 2\operatorname{Cov}(y_{i + k}, y_i) =
2\gamma(0) - 2\gamma(K)$, where $\gamma(h)$ is the autocovariance function of $\{y_t\}$.
- For an AR(1) process with white noise variance $\sigma^2$, its autocovariance function is given by:
\begin{align}
\gamma(h) = \frac{\sigma^2\phi^h}{1 - \phi^2}.
\end{align}
In your case, $\sigma = 1$, hence $\operatorname{Var}(y_{i + K} - y_i) =
2\gamma(0) - 2\gamma(K) = \frac{2(1 - \phi^K)}{1 - \phi^2}$.
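A quick numerical sanity check of this formula (a sketch):
```
set.seed(1)
phi <- exp(-1 / 100)                      # tau = 100, dt = 1
y <- arima.sim(list(ar = phi), n = 2e5)   # stationary AR(1) with unit noise variance
K <- 40
var(diff(y, lag = K))                     # empirical variance of y_{i+K} - y_i
2 * (1 - phi^K) / (1 - phi^2)             # theoretical value
```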
| null |
CC BY-SA 4.0
| null |
2023-05-18T11:57:36.917
|
2023-05-18T12:49:09.313
|
2023-05-18T12:49:09.313
|
20519
|
20519
| null |
616223
|
1
| null | null |
0
|
32
|
I am running an ordered logistic regression for my thesis. I am trying to test the relationship between counterterrorism aid and state repression levels in recipient countries. I ran my regression, and for my first model, my International War control variable has a coefficient-value of 38.007 and a very large standard error value (158,641,489.000). This seems very wrong to me, but I do not know what the problem is.
The independent variable of International War is binary 0 = the given country was not part of an international war in the given year, 1 = the given country was part of an international war in the given year.
The dependent variable is ordinal and ranges from 0 to 2, with 0 meaning there were many state-sponsored disappearances and 2 meaning there were none.
Does anyone know how this is possible and what it means? Did I do something wrong with my data? [](https://i.stack.imgur.com/zc57O.png)
|
What is wrong with my coefficient's standard error?
|
CC BY-SA 4.0
| null |
2023-05-18T12:16:38.360
|
2023-05-18T12:16:38.360
| null | null |
375799
|
[
"standard-error",
"ordered-logit"
] |
616224
|
1
| null | null |
0
|
24
|
I've been wondering why most publications give stats as mean +/- standard deviation, even for things like measurement devices, where reproducibility is a major concern. (E.g., for a given sample, our devices measured 50 +/- 5). Wouldn't it be more appropriate to use something like a confidence interval?
Is there any reason why using standard deviation (or RSD) alone is the norm?
Any insight is appreciated, thanks!
|
Why are published stats generally given as +/- standard deviation?
|
CC BY-SA 4.0
| null |
2023-05-18T12:33:43.420
|
2023-05-18T12:33:43.420
| null | null |
388264
|
[
"statistical-significance",
"confidence-interval",
"variance",
"standard-deviation",
"experiment-design"
] |
616226
|
1
| null | null |
0
|
14
|
Problem
I want to do a leave-one-out (leave1out) sensitivity analysis of an rma.mv (three-level) meta-analysis, where the "1" is a cluster / sample (rather than a single effect size).
I understand that some functions, such as leave1out, do not work with rma.mv model objects (e.g., [Metafor package: bias and sensitivity diagnostics](https://stats.stackexchange.com/questions/155693/metafor-package-bias-and-sensitivity-diagnostics?newreg=b31ea762e18b4626a7988a69a19f3589)).
Attempted solutions
I do not know how to write a "for-loop" (which Wolfgang suggested in the post above), so I cannot try that.
I have searched the web, but found no guidance on 'for-loops', nor on how to do sensitivity analyses for samples / clusters in a three-level meta-analysis.
|
How to conduct sensitivity analyses for cluster / sample in three-level meta analyses in metafor?
|
CC BY-SA 4.0
| null |
2023-05-18T12:44:30.547
|
2023-05-18T12:44:30.547
| null | null |
388266
|
[
"non-independent",
"sensitivity-analysis",
"metafor"
] |
616227
|
1
|
616230
| null |
1
|
23
|
I've been reading Probabilistic Machine Learning by Kevin Patrick Murphy, but I don't quite get the motivation for presenting machine learning from a probabilistic point of view. For example, when presenting linear regression (or any other model, for that matter) it goes like this:
>
The key property of the model is that the expected value of the output is assumed to be a linear function of the input, $E[y|x] = \vec{w}\cdot\vec{x}$, which makes the model easy to interpret, and easy to fit to data.
>
The term “linear regression” usually refers to a model of the following form:
$p(y|x,\theta)=\mathcal{N}(y|w_0 + \vec{w}\cdot\vec{x},σ^2)$
where $\theta = (w_0, w, σ^2)$ are all the parameters of the model. (In statistics, the parameters $w_0$ and $w$ are usually denoted by $\beta_0$ and $\beta$.)
Where $\mathcal{N}$ is the Gaussian distribution.
Why does he choose the Gaussian? You could actually use any other distribution and modify the linear function $\vec{w}\cdot\vec{x}$ so that, when one computes the expected value of $y|x$, you still get $E[y|x] = \vec{w}\cdot\vec{x}$. For example, with the Exponential distribution $f(y, \lambda) = \lambda \exp{(-\lambda y)}$ we can set $\lambda = (\vec{w}\cdot\vec{x})^{-1}$ so that $E[y|x] = \vec{w}\cdot\vec{x}$. And in theory we could do that for other probability distributions (even if not always).
With this change I can see that the Negative Log-likelihood expression would change. However, it still holds true that minimising it would give you the probability distribution that is most similar to the empirical one as per the KL Divergence.
I can only guess that maybe the closest probability distribution that is Exponential-like is different from the closest probability distribution that is Gaussian-like, but still, the author didn't use this argument at any point to justify the choice of the Gaussian in the first place.
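For concreteness, here is a small sketch of fitting the exponential variant described above by minimising its negative log-likelihood directly (all data, coefficients, and starting values are simulated/arbitrary, purely for illustration):
```
set.seed(42)
n <- 500
x <- runif(n, 1, 3)
w_true <- c(2, 1.5)                  # arbitrary "true" intercept and slope
mu <- w_true[1] + w_true[2] * x      # E[y | x] is linear in x
y <- rexp(n, rate = 1 / mu)          # exponential model with mean mu

# negative log-likelihood of the exponential model with an identity-link mean
nll <- function(w) {
  m <- w[1] + w[2] * x
  if (any(m <= 0)) return(Inf)       # the mean must stay positive
  sum(log(m) + y / m)
}
optim(c(1, 1), nll)$par              # should land near w_true
```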
|
Rationale behind choosing Gaussian probability distribution for Linear Regression
|
CC BY-SA 4.0
| null |
2023-05-18T12:52:38.030
|
2023-05-18T13:04:33.727
| null | null |
357445
|
[
"regression",
"machine-learning"
] |
616228
|
2
| null |
616068
|
0
| null |
I studied [Simulate a Weibull regression model](https://stats.stackexchange.com/questions/591943/simulate-a-weibull-regression-model) and my key takeaway is this (as modified slightly by me since I am trying to model $W$ in the model equation described in the OP):
```
W <- log(rexp(1000))
survreg(Surv(exp(W))~1,dist="exponential")
```
Running the above results in an intercept for $β_0$ near 0. Applying this method to my OP question and code, I get the output illustrated below with the code at the bottom. The code section that adopts the above is in `simParams`: `W <- log(rexp(100))`, `fit <- survreg(Surv(exp(W))~1,dist="exponential")`, and `params <- coefLung + fit$icoef`.
Though this is visually pleasing to my novice eye, a doubt I have is in `rexp(100)`, where I set the 100 arbitrarily. A greater number of samples results in less dispersion, and a lesser number of samples the opposite. Is there an accepted method for setting the number of samples? Perhaps I should have used the number of elements (228) in the `lung` dataset? Maybe this is better addressed in another post.
[](https://i.stack.imgur.com/6iqOF.png)
Code:
```
simNbr <- 1000
time <- seq(0, 1000, by = 1)
fitLung <- survreg(Surv(time, status) ~ 1, data = lung, dist = "exponential")
coefLung <- fitLung$icoef
# Compute exponential survival function for the base fitted model
survival <- 1 - pexp(time, rate = 1/exp(coefLung))
# Generate random distribution parameter estimates for simulations
simParams <- sapply(
1:simNbr,
function(i){
W <- log(rexp(100)) # note the number of random values which has a large impact on dispersion
fit <- survreg(Surv(exp(W))~1,dist="exponential")
params <- coefLung + fit$icoef
return(as.vector(params))
}
)
# Compute the survival curve for each simulation
simPaths <- sapply(
1:simNbr,
function(i) 1 - pexp(time, rate = 1 / exp(simParams[i]))
)
# Set up plot shell
plot(time,survival,type="n",xlab="Time",ylab="Survival Probability",main="Lung Data Survival Plot")
# Plot simulations
invisible(lapply(1:simNbr, function(i) {
  # draw each simulated survival curve; nothing needs to be stored
  lines(time, simPaths[, i], col = "lightblue", lty = "solid", lwd = 0.25)
}))
# Add average of simulations
avgSurvival <- apply(simPaths, 1, mean)
lines(time, avgSurvival, col = "black", lwd = 1)
# Add Kaplan-Meier survival curve for the lung data
lines(survfit(Surv(time, status) ~ 1, data = lung), col = "blue", lwd = 1)
# Plot the base fitted survival curve using exponential
lines(cbind(time, survival), type = "l", col = "red", lwd = 3)
legend("topright",
legend = c("Fitted exponential model","K-M & confidence intervals","Simulations", "Simulations mean"),
col = c("red", "blue", "lightblue", "black"),lwd = c(3, 1, 0.25, 3),lty = c(1, 1, 1, 1),
bty = "n"
)
```
| null |
CC BY-SA 4.0
| null |
2023-05-18T13:03:15.187
|
2023-05-18T13:03:15.187
| null | null |
378347
| null |
616229
|
1
| null | null |
1
|
35
|
In LayerNorm, for a given layer, we first compute the mean and variance of the activation of each sample and then use them to normalize the said activation.
Specifically, the LayerNorm formula looks like this (from [here](https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html)):
$$
y = \frac{x-E[x]}{\sqrt{var(x) + \epsilon}}\times \gamma + \beta
$$
If $\epsilon$ is small, then $LN(cx) \approx LN(x) = y$ for any $c>0$.
It appears that the information about magnitude $\|x\|$ is lost in the process. This feels like a problem in situations like an auto-encoder.
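A tiny numerical illustration of that scale invariance, written framework-agnostically and assuming $\gamma = 1$, $\beta = 0$:
```
layer_norm <- function(x, eps = 1e-5) {
  (x - mean(x)) / sqrt(mean((x - mean(x))^2) + eps)  # biased variance, as in the formula
}
x <- c(0.3, -1.2, 2.5, 0.7)
layer_norm(x)
layer_norm(10 * x)   # essentially identical output: the magnitude of x is discarded
```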
|
LayerNorm and Magnitude of Input
|
CC BY-SA 4.0
| null |
2023-05-18T13:03:25.280
|
2023-05-19T16:37:11.337
|
2023-05-19T16:37:11.337
|
28942
|
28942
|
[
"neural-networks"
] |
616230
|
2
| null |
616227
|
1
| null |
Sure we can, and we do that with [generalized linear models](https://stats.stackexchange.com/questions/190763/how-to-decide-which-glm-family-to-use/303592#303592). So why linear regression? It's the simplest possible model that has a closed-form solution to estimate its parameters, that's a huge advantage. Maximizing the Gaussian likelihood is equivalent to minimizing squared error and we have some good threads regarding why squared error is that popular, e.g. [Why is the squared difference so commonly used?](https://stats.stackexchange.com/questions/132622/why-is-the-squared-difference-so-commonly-used) or [What makes mean square error so good?](https://stats.stackexchange.com/questions/274650/what-makes-mean-square-error-so-good). Also, notice that calculating the [mean minimizes the squared error](https://math.stackexchange.com/questions/2554243/understanding-the-mean-minimizes-the-mean-squared-error/2554276), so it also makes sense for a model that calculates conditional mean (linear regression) to minimize squared error.
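As a small illustration of the closed-form point (simulated data with arbitrary coefficients):
```
set.seed(1)
x <- rnorm(200)
y <- 1 + 2 * x + rnorm(200)

X <- cbind(1, x)
solve(t(X) %*% X, t(X) %*% y)   # closed-form least-squares / Gaussian-ML solution
coef(lm(y ~ x))                 # the same numbers

# non-Gaussian members of the GLM family need iterative fitting instead, e.g.
# glm(y_pos ~ x, family = Gamma(link = "identity")) for a positive response y_pos
```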
| null |
CC BY-SA 4.0
| null |
2023-05-18T13:04:33.727
|
2023-05-18T13:04:33.727
| null | null |
35989
| null |
616231
|
2
| null |
613116
|
4
| null |
The inequality $e^{-u} \leq 1 - u + u^2 /2$ holds for any $u \geq 0$. Replacing $u$ by $\lambda X$ and taking the
expectation we get
$$
\mathbb{E}[e^{-\lambda X}] \leqslant 1 - \lambda \,\mathbb{E}(X) +
\lambda^2\,\mathbb{E}(X^2) /2.
$$
Now since $1 + v \leq e^v$ for any $v$, choosing $v := - \lambda \,\mathbb{E}(X) + \lambda^2 \mathbb{E}(X^2)/2$ gives
$$
\mathbb{E}[e^{-\lambda X}] \leq \exp\left \{-\lambda \,\mathbb{E}(X) + \lambda^2 \,\mathbb{E}(X^2)/2 \right\}
$$
which is the result in the OP.
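Not a proof, of course, but the first inequality is easy to spot-check numerically on a grid:
```
u <- seq(0, 10, by = 0.01)
all(exp(-u) <= 1 - u + u^2 / 2)   # TRUE
```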
| null |
CC BY-SA 4.0
| null |
2023-05-18T13:17:53.160
|
2023-05-18T13:17:53.160
| null | null |
10479
| null |
616232
|
2
| null |
616055
|
0
| null |
Here is an answer that combines several comments.
Several issues can lead to the situation in your question:
- interpretation of the overlap of error bars (see "Why is mean ± 2*SEM (95% confidence interval) overlapping, but the p-value is 0.05?")
- the influence of adding a (random) effect (see "Fixed vs. random effect meta regression")
- the interpretation of the p-value of a main effect or intercept (see "What does a significant intercept mean in ANOVA?")
- if the plot shows confidence intervals of the levels, then the effects, when measured within the subjects, might have different accuracy
In your case, I believe it is the second issue, and your graph does not correctly represent the confidence intervals.
>
The confidence intervals are simply those obtained as a default output from python seaborn
By considering a random effect you are effectively reducing the degrees of freedom. Whether a study is based on 56 independent groups/measurements or on 1400 independent measurements makes a big difference in the estimates of the standard error.
This difference is not visible in your confidence intervals, which are computed under the assumption that the 1400 measurements are independent (they are not: they are strongly correlated within the same participant).
```
StdDev Corr
(Intercept) 0.2479765 (Intr) tskndf cndtnv
taskundef 0.1391700 -0.708
conditionvalid 0.1722409 -0.672 0.651
taskundef:conditionvalid 0.1848967 0.652 -0.627 -0.990
Residual 0.2490666
```
This would not be so bad if the random effects were small relative to the residual, but from the output it seems that the individuals vary a lot and the within-individual variance (the residual term) is only about 25% of the total variance.
| null |
CC BY-SA 4.0
| null |
2023-05-18T13:28:14.080
|
2023-05-18T13:28:14.080
| null | null |
164061
| null |
616233
|
1
| null | null |
0
|
26
|
I am analysing suicide counts, so it seems appropriate to use negative binomial (NB)/Poisson regression. However, my counts come from survey data and are only whole numbers when unweighted. Once I apply the weights (which include decimals), the counts contain decimals, and I believe NB/Poisson models require whole numbers to compute properly. Would the only solution to this problem be to round off the weighted counts? Or would it be better to explore using OLS regression instead in this case?
Edit:
I could not find a way to use the survey package because I have to first aggregate the data into counts before modelling, but the weighting variables only exist at the raw data level. Thus I just used the "wt" argument when creating the counts.
The raw data looks like this (first four variables are related to weighting):
```
mortuary_code sampling_strata fpc weighting Province Suicide_Mechanism Date_of_Death Month Day_of_Week Sex Age
<dbl> <dbl> <dbl> <dbl> <chr> <chr> <dbl> <dbl> <chr> <chr> <dbl>
1 1 2 33 4.41 Eastern Cape Sharp force 40075 9 Saturday male 41
2 1 2 33 4.41 Eastern Cape Hanging 39983 6 Friday male 68
3 1 2 33 4.41 Eastern Cape Ingestion 40028 8 Monday male 30
4 1 2 33 4.41 Eastern Cape Ingestion 40115 10 Thursday male 32
5 1 2 33 4.41 Eastern Cape Hanging 39980 6 Tuesday male 18
6 1 2 33 4.41 Eastern Cape Hanging 39948 5 Friday male 50
```
I use the following code to create counts:
```
data = main_dataset %>% count(Sex, Age_Group, IMSyear, wt=weighting)
```
The new count-based dataframe looks like this (population and Suicide_rate added separately after):
```
Sex Age_Group IMSyear n population Suicide_rate
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 female 0-14 2009 39.9 7186279 0.555
2 female 0-14 2017 50.9 7924168 0.642
3 female 0-14 2021 74.5 8149386 0.914
4 female 15-24 2009 398. 5131260 7.76
5 female 15-24 2017 312. 4762566 6.56
6 female 15-24 2021 383. 4698113 8.14
```
Thus the issue is that "n" ends up including decimal places after using the "wt" argument. The weights are obviously very important because without them the sampling isn't representative.
My model ends up looking something like this:
```
model = glm.nb(n ~ IMSyear+Age_Group+Sex+offset(log(population)), control = glm.control(maxit = 100), data = data)
```
R doesn't mind fitting the model; it just gives warnings and, from what I can tell, rounds to the nearest integer. So ultimately my question is whether there is a better way to do this, or whether I should be satisfied with that.
|
Nb/Poisson regression with weighted survey data resulting in counts with decimals
|
CC BY-SA 4.0
| null |
2023-05-18T13:49:22.497
|
2023-05-22T10:10:42.790
|
2023-05-22T10:10:42.790
|
388182
|
388182
|
[
"poisson-regression",
"negative-binomial-distribution",
"survey-weights"
] |
616234
|
1
|
616262
| null |
4
|
127
|
I am having trouble understanding and proving [Isserlis' theorem](https://en.wikipedia.org/wiki/Isserlis%27_theorem) for n=4:
$$ E(x_1x_2x_3x_4) = E(x_1x_2)E(x_3x_4) + E(x_1x_3)E(x_2x_4) + E(x_1x_4)E(x_2x_3) $$
My attempt goes as follows:
Assuming
$$ P(x_i) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x_i^2}{2}}$$
Then
$$\small{ E(x_1x_2x_3x_4) = \int_{x_1=-\infty}^\infty \int_{x_2=-\infty}^\infty \int_{x_3=-\infty}^\infty \int_{x_4=-\infty}^\infty (x_1x_2x_3x_4) P(x_1x_2x_3x_4) dx_4dx_3dx_2dx_1}$$
$$\small{= \int_{x_1=-\infty}^\infty \int_{x_2=-\infty}^\infty \int_{x_3=-\infty}^\infty \int_{x_4=-\infty}^\infty x_1x_2x_3x_4 \frac{1}{\sqrt{2\pi}}e^{-\frac{x_1^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{x_2^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{x_3^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{x_4^2}{2}}dx_4dx_3dx_2dx_1}$$
$$=\small{ \int_{x_1=-\infty}^\infty x_1\frac{1}{\sqrt{2\pi}}e^{-\frac{x_1^2}{2}}\int_{x_2=-\infty}^\infty x_2\frac{1}{\sqrt{2\pi}}e^{-\frac{x_2^2}{2}}\int_{x_3=-\infty}^\infty x_3\frac{1}{\sqrt{2\pi}}e^{-\frac{x_3^2}{2}}\left(\int_{x_4=-\infty}^\infty x_4 \frac{1}{\sqrt{2\pi}}e^{-\frac{x_4^2}{2}}dx_4\right)dx_3dx_2dx_1}$$
$$\small{= E(x_4) \int_{x_1=-\infty}^\infty x_1\frac{1}{\sqrt{2\pi}}e^{-\frac{x_1^2}{2}}\int_{x_2=-\infty}^\infty x_2\frac{1}{\sqrt{2\pi}}e^{-\frac{x_2^2}{2}}\left(\int_{x_3=-\infty}^\infty x_3\frac{1}{\sqrt{2\pi}}e^{-\frac{x_3^2}{2}}dx_3\right)dx_2dx_1}$$
$$\small{= E(x_3)E(x_4) \int_{x_1=-\infty}^\infty x_1\frac{1}{\sqrt{2\pi}}e^{-\frac{x_1^2}{2}}\left(\int_{x_2=-\infty}^\infty x_2\frac{1}{\sqrt{2\pi}}e^{-\frac{x_2^2}{2}}dx_2\right)dx_1}$$
$$\small{= E(x_2)E(x_3)E(x_4) \int_{x_1=-\infty}^\infty x_1\frac{1}{\sqrt{2\pi}}e^{-\frac{x_1^2}{2}}dx_1}$$
$$\small{= E(x_1)E(x_2)E(x_3)E(x_4)}$$
Since each of $x_1, x_2, x_3, x_4$ is assumed to be an independent Gaussian random variable, where does the above calculation go wrong in the context of Isserlis' theorem?
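For what it's worth, the identity in the theorem is easy to check by simulation for a zero-mean Gaussian vector with an arbitrary (valid) covariance matrix:
```
library(MASS)   # for mvrnorm
set.seed(1)
Sigma <- matrix(c(1, .5, .3, .2,
                  .5, 1, .4, .1,
                  .3, .4, 1, .6,
                  .2, .1, .6, 1), 4, 4)
Z <- mvrnorm(1e6, mu = rep(0, 4), Sigma = Sigma)
mean(Z[, 1] * Z[, 2] * Z[, 3] * Z[, 4])        # Monte Carlo estimate of the left-hand side
Sigma[1, 2] * Sigma[3, 4] +
  Sigma[1, 3] * Sigma[2, 4] +
  Sigma[1, 4] * Sigma[2, 3]                    # right-hand side of the theorem
```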
|
How to prove Isserlis' theorem $ E(x_1x_2x_3x_4) = E(x_1x_2)E(x_3x_4) + E(x_1x_3)E(x_2x_4) + E(x_1x_4)E(x_2x_3) $?
|
CC BY-SA 4.0
| null |
2023-05-18T14:01:45.080
|
2023-05-19T12:19:44.110
|
2023-05-18T14:13:06.793
|
388272
|
388272
|
[
"normal-distribution",
"multivariate-analysis"
] |
616236
|
1
| null | null |
0
|
23
|
I'm currently working on a data set and I cannot get my statistics to add up. It is a survival analysis and I'm using Kaplan-Meier and Cox proportional-hazards regression. I have used STATA for the analysis.
My Cox regression indicates neither an increased nor a decreased risk. I then test the PH assumption and obtain a p-value of 0.4, so the assumption appears to be met. However, when I graph the PH assumption, the lines cross. What am I doing wrong? I can see my log likelihood is -914; what exactly does that tell me about the model? [](https://i.stack.imgur.com/8MHy6.png)
[](https://i.stack.imgur.com/vGCYl.png)
[](https://i.stack.imgur.com/IQqKb.jpg)
|
Cox PH - Ph assumption met or no
|
CC BY-SA 4.0
| null |
2023-05-18T14:11:01.053
|
2023-05-18T15:39:04.493
|
2023-05-18T14:58:46.957
|
11887
|
388273
|
[
"survival",
"cox-model",
"kaplan-meier"
] |
616238
|
2
| null |
616090
|
0
| null |
If I've understood correctly what you want to calculate, you can ignore the number of facilities in each region.
For each region, for each month, just calculate the ratio of (number of people tested for condition B) / (number of people tested).
| null |
CC BY-SA 4.0
| null |
2023-05-18T14:18:21.383
|
2023-05-18T14:18:21.383
| null | null |
319175
| null |
616239
|
1
|
616256
| null |
0
|
56
|
I have a series of numbers ranging from -0.69 to 14.703. These data points measure YoY change. I want to create a text descriptor based on where a value falls in a range determined by the standard deviation. My mean is 1.13144 and my standard deviation is 2.1579. This leaves me with the standard deviation steps below, to which I assign a value (i.e. if you fall between the mean and 1 SD below it, you are Below Average; between the mean and 1 SD above it, you are Above Average; and so on).
[](https://i.stack.imgur.com/ozfKV.png)
This the list of values I got the mean and Std Dev from:
[](https://i.stack.imgur.com/vXu9j.png)
Nearly every item in this list falls 1 SD below the mean. I know the data is not normally distributed, but I would have expected something a little less skewed. Am I using this correctly?
Edit: Year over Year (YoY) change. I take last year's sales and this year's and find the percent difference.
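For reference, the banding described above can be written directly with `cut()` — a sketch using the quoted mean and SD, with made-up values and band labels:
```
m <- 1.13144; s <- 2.1579
x <- c(-0.69, 0.2, 1.5, 3.8, 14.703)   # a few illustrative values, not the full data
cut(x,
    breaks = c(-Inf, m - s, m, m + s, Inf),
    labels = c("Well Below Average", "Below Average", "Above Average", "Well Above Average"))
```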
|
Is this a good use of Standard Deviation on data that is not normally distributed?
|
CC BY-SA 4.0
| null |
2023-05-18T14:35:27.410
|
2023-06-01T05:07:23.570
|
2023-06-01T05:07:23.570
|
121522
|
66797
|
[
"standard-deviation"
] |
616240
|
1
|
616318
| null |
1
|
51
|
I have data assessing reaction time (RT) with 2 variables:
Variable 1: 'HighLow', with 2 levels ('High' and 'Low')
Variable 2: 'Condition', with 3 levels ('Predicted', 'Implausible', 'Plaus/Unpred')
I am learning about contrasts and have therefore run 2 mixed models that differ based on the contrasts used.
Contrasts for 1st Mixed Model:
HighLow is left as default coding (dummy coded with 'High' representing the intercept).
Condition is simple coded with the following contrast matrix:
```
Predicted 1 1/3 1/3
Implausible 1 -2/3 1/3
Plaus/Unpred 1 1/3 -2/3
```
The first contrast (cH01) therefore represents Predicted-Implausible, and the second contrast represents Predicted-Plaus/unpred.
Contrasts for 2nd mixed model:
Contrasts for Condition is the same as the first mixed model (simple coded).
The contrast for the HighLow variable is sum coded (High=-0.5, Low=0.5).
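For reference, contrast settings like those described above can be assigned along these lines (a sketch; it assumes the factor levels are ordered Predicted, Implausible, Plaus/Unpred and High, Low):
```
# Simple coding for Condition: the leading column of 1s in the matrix above is the
# intercept and is added automatically, so only the two contrast columns are assigned.
# Each contrast is the reference (Predicted) minus a level, matching cH01 and cH02.
contrasts(dat$Condition) <- 1/3 - contr.treatment(3)

# Model 1 leaves HighLow at the default treatment (dummy) coding.
# Model 2 sum-codes it as High = -0.5, Low = 0.5:
contrasts(dat$HighLow) <- cbind(c(-0.5, 0.5))
```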
The mixed model in both cases is
```
lmer(RT ~ Condition*HighLow + (Condition|Pt_ID) + (Condition|SentNumb))
```
Fixed Effects Output For Mixed Model 1
```
Estimate Std. Error df
(Intercept) 0.77357 0.03029 84.70182
ConditioncH01 -0.08738 0.01886 115.78438
ConditioncH02 -0.35944 0.03105 99.02833
HighLowLow -0.01422 0.01439 75.30162
ConditioncH01:HighLowLow 0.05955 0.02043 76.36148
ConditioncH02:HighLowLow 0.17244 0.04049 74.76725
t value Pr(>|t|)
(Intercept) 25.539 < 2e-16 ***
ConditioncH01 -4.634 9.47e-06 ***
ConditioncH02 -11.576 < 2e-16 ***
HighLowLow -0.988 0.32632
ConditioncH01:HighLowLow 2.915 0.00467 **
ConditioncH02:HighLowLow 4.259 5.90e-05 ***
```
Fixed Effects Output For Mixed Model 2
```
Fixed effects:
Estimate Std. Error df
(Intercept) 0.76647 0.02943 76.32919
ConditioncH01 -0.05760 0.01592 95.00997
ConditioncH02 -0.27322 0.02365 109.77111
HighLow1 -0.01422 0.01439 75.30164
ConditioncH01:HighLow1 0.05955 0.02043 76.36134
ConditioncH02:HighLow1 0.17244 0.04049 74.77345
```
My interpretation is as follows:
Model 1
```
(Intercept) : mean of High
ConditioncH01: Predicted,High-Implausible,High
ConditioncH02: Predicted,High- Plaus/unpred,high
HighLowLow: Low-High
ConditioncH01:HighLowLow: Predicted,low-Implausible,low
ConditioncH02:HighLowLow: Predicted,low- Plaus/unpred,low
```
Model 2
```
(Intercept) : grand mean across all levels of both variables
ConditioncH01: Predicted-Implausible (high and low combined)
ConditioncH02: Predicted- Plaus/unpred (high and low combined)
HighLowLow: Low-High/2
ConditioncH01:HighLowLow: (Predicted-Implausible) for high- (Predicted-Implausible) for low
ConditioncH02:HighLowLow: (Predicted-plaus/unpred) for high- (Predicted-plaus/unpred)low
```
However, looking at the means (see below), not all of my assumptions seem to be correct. In particular, I am confused about the interpretation of HighLow in both outputs and about the interactions (which are the same in both outputs).
Means
```
Predicted,High 0.6272715
Implausible,Hig 0.7055791
Plaus/Unpred,High 0.9423626
Predicted,Low 0.6861870
Implausible,Low 0.7123400
Plaus/Unpred,Low 0.8428695
Predicted 0.6564936
Implausible 0.7089290
Plaus/Unpred 0.8896377
High 0.7421962
Low 0.7426768
```
|
Interpretation of lme4 output with different contrasts for variables
|
CC BY-SA 4.0
| null |
2023-05-18T14:36:46.740
|
2023-05-19T11:47:58.243
|
2023-05-19T08:25:58.173
|
362671
|
379020
|
[
"r",
"mixed-model",
"lme4-nlme",
"contrasts"
] |
616241
|
2
| null |
616202
|
1
| null |
One way (out of many) is to adjust the data by whole periods to make them vary more continuously over time.
Specifically, modify the time series $(x)=(x_1,x_2,\ldots,)$ to a time series $(y)=(y_1,y_2,\ldots)$ that is congruent to $(x)$ modulo the period $\tau.$ Begin with $$y_1=x_1 + k\tau$$ where $k$ is any integer you choose to make $y_1$ a "nice" starting value. At each successive time $i+1,$ predict $y_{i+1}$ as $\hat y_{i+1} = y_i$ and then adjust it modulo $\tau$ to make the prediction as close as possible to the observed value:
$$y_{i+1} = x_{i+1} + \left[\frac{\hat y_{i+1} - x_{i+1}}{\tau}\right]\tau.\tag{*}$$
The bracket $[\ ]$ means to round to the nearest integer.
It is explicit in these two formulas that for every $i,$ $y_i$ differs from $x_i$ by an integral multiple of $\tau.$ Thus, $(y)$ is a valid representative of $(x).$ This construction makes successive values of $y_i$ as close as possible to what you might expect based on the preceding values: that's what I mean by "more continuously."
This is simple to code. In `R` for example, with the time series data in a vector `x`, create the adjusted vector `y` with
```
tau <- 360 # ... or 2*pi or whatever
y <- x # Allocates storage for `y`
k <- -1; y[1] <- x[1] %% tau + k * tau # Optional: `k` should be integral
for (i in seq_along(y)[-1]) y[i] <- x[i] + tau * round((y[i-1] - x[i]) / tau)
```
Now you may simply plot $(y).$ If you like, overplot the original data $(x).$ In this figure $(x)$ is plotted as black circles and $(y)$ as gray squares, connected by red line segments.
[](https://i.stack.imgur.com/DKpdM.png)
The results might be meaningless with highly noisy data but they can still be helpful:
[](https://i.stack.imgur.com/ELTD6.png)
If there is some kind of underlying continuity, you now have a chance of seeing it while still displaying the original data.
Of course, if you have a model for the time series that lets you forecast one step into the future, you might do better by forecasting $y_{i+1}$ from preceding values rather than using the naive forecast embodied in $(*).$ If you don't have a model, you might consider modeling $(y)$ rather than $(x)$ and then (if $(y)$ is very noisy) iterating the modification procedure $(*)$ using this provisional model. The idea is that $(y)$ is likely a better manifestation of the evolution of the data over time than is $(x)$ and studying it might reveal information lost by recording $(x)$ modulo $\tau.$ This opens up the entire world of time series modeling techniques to analyze circular data, at very little cost.
| null |
CC BY-SA 4.0
| null |
2023-05-18T14:41:22.640
|
2023-05-18T14:41:22.640
| null | null |
919
| null |
616242
|
2
| null |
616187
|
0
| null |
Thanks to @Thomas' question, I realized that the treatment variable in my dataset wasn't capturing the dynamic nature of the treatment as I had thought. To address this, I created a dynamic treatment variable by multiplying the post variable (indicating the occurrence of the event) with the expansion variable (the expansion indicator). The resulting variable, treat, captures the interaction between the event occurrence and the expansion.
```
Data <- Data %>%
mutate(treat = post * expansion)
```
| null |
CC BY-SA 4.0
| null |
2023-05-18T14:43:57.117
|
2023-05-18T14:43:57.117
| null | null |
388217
| null |
616243
|
1
| null | null |
0
|
4
|
I have repeated measures (two assessments) of an independent variable (binomial/two different groups) but dependent variables (continuous) measured at only one assessment. I used logistic regression to analyze the relationship between the variables. Is this appropriate?
|
Repeated measurement-Logistic Regression
|
CC BY-SA 4.0
| null |
2023-05-18T14:52:02.107
|
2023-05-18T14:52:02.107
| null | null |
387590
|
[
"repeated-measures"
] |
616244
|
1
| null | null |
1
|
10
|
I have pairs of observations (Xi, Yi) with errors in both variables and I need to find the line that best fits the data. I have found some methods, but they require knowing the standard deviations of the errors in Xi and Yi, or at least the ratio of the error variances. What method could I use if I don't know these values? Which is best?
|
What is the best linear regression method when the errors in the variables x and y are unknown?
|
CC BY-SA 4.0
| null |
2023-05-18T15:03:59.140
|
2023-05-19T13:58:24.983
|
2023-05-19T13:58:24.983
|
388277
|
388277
|
[
"regression"
] |
616245
|
1
| null | null |
0
|
8
|
I tried to use a QLR test for structural breaks for a variable that I am forecasting, and I found a break, which is very accurate to geopolitical events in 2022.
Because of this, my significance of some exogenous variables in the model (ARIMAX) changes depending on the size of the training sample.
I'd like to capture this break somehow, since I'm only using variations of the AR model and since it makes economic sense, but as far as I know I can't use its results straightforwardly with ARIMAX/VAR/ARDL models due to the limitations of the QLR test.
Are there any tests for structural breaks for these models that I can use?
Can I use a dummy in this case? If so, is it worth mentioning the QLR test results based on an AR(1) process?
|
How to model structural break in ARIMAX/VAR/ARDL
|
CC BY-SA 4.0
| null |
2023-05-18T15:07:17.553
|
2023-05-18T15:07:17.553
| null | null |
361080
|
[
"time-series",
"arima",
"vector-autoregression",
"ardl"
] |
616247
|
1
| null | null |
1
|
36
|
## The Problem
Given probability distributions $P(\theta)$ and $P(X)$, and given an inverse function $Y=f^{-1}(X,\theta)$ that returns a unique $Y$, how can one estimate the unknown distribution $P(Y)$ in the following hierarchical model?
$\theta = f(X,Y)$
$Y \sim P(\cdot)$
$X \sim P(\cdot)$
## The Questions
- What is this problem called? Is it a well-defined problem within statistics with a name?
- What methods exist (or could be reasonably proposed) to solve for $P(Y)$?
## Current Thoughts
At first it seems like a solution is to simply sample independently from $P(\theta)$ and $P(X)$ to estimate $P(Y)$ using the function $Y=f^{-1}(X,\theta)$, but that would be incorrect: the hierarchical model above implies a dependence between $X$ and $\theta$ such that one would need to instead sample from the joint distribution $P(\theta,X)$ (which is not given).
It seems like this problem may be related to importance sampling or approximate Bayesian computation.
|
How to solve for an unknown probability distribution within a hierarchical model?
|
CC BY-SA 4.0
| null |
2023-05-18T15:34:14.853
|
2023-05-18T15:39:17.700
|
2023-05-18T15:39:17.700
|
201988
|
201988
|
[
"probability",
"random-variable",
"hierarchical-bayesian",
"importance-sampling",
"approximate-bayesian-computation"
] |
616248
|
2
| null |
616236
|
0
| null |
In reverse order:
The log-likelihood is compared against that of a null model (no covariates) to get the overall `LR chi2` estimate of model significance. As you note, on that basis the overall model doesn't meet the usual standard of significance at p < 0.05. I wouldn't give up on this project on that basis, however, as there does seem to be evidence that `logco` is associated with outcome. Further study seems called for. Don't confuse statistical significance with practical significance, in either direction.
You aren't doing anything wrong in terms of proportional hazards. The lines crossing in the plots, based on groupings by age group, evidently weren't bad enough to influence the global test of proportional hazards.
A couple of things to note on that. First, you show the Global test for proportional hazards, but it might be possible for a test specifically on `age` to have had a different result. Second, evaluating proportional hazards by grouping a continuous predictor like age for Kaplan-Meier plots or `log(-log(survival))` plots can be tricky. As I understand what you've done, those plots don't take other predictors into account. With continuous predictors, I prefer to look at plots of scaled Schoenfeld residuals from the full model over time instead.
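The question's analysis is in Stata, but just to illustrate the idea in R with the `survival` package (a sketch; the variable names below are placeholders):
```
library(survival)
fit <- coxph(Surv(time, status) ~ logco + age + sex, data = dat)  # placeholder model
zp <- cox.zph(fit)
zp              # per-variable and global tests of proportional hazards
plot(zp[2])     # scaled Schoenfeld residuals over time for the second predictor (age here)
```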
| null |
CC BY-SA 4.0
| null |
2023-05-18T15:39:04.493
|
2023-05-18T15:39:04.493
| null | null |
28500
| null |