Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
615281
|
1
| null | null |
0
|
33
|
I have a code segment from this repo:
[https://github.com/toshikwa/slac.pytorch/blob/master/slac/utils.py](https://github.com/toshikwa/slac.pytorch/blob/master/slac/utils.py)
[](https://i.stack.imgur.com/VYdtT.png)
I am reading a paper about Soft Actor-Critic, a reinforcement learning algorithm. They have you sample from a Gaussian Log distribution. I believe that would mean you use the PDF to sample from. I am making a lot of assumptions, so please correct me.
I found this post discussing taking a multivariate Gaussian pdf, making it log Gaussian.
[Multivariate log-normal probabiltiy density function (PDF)](https://stats.stackexchange.com/questions/214997/multivariate-log-normal-probabiltiy-density-function-pdf?noredirect=1&lq=1)
The problem is that equation listed doesn't seem to match the code.
[](https://i.stack.imgur.com/Burkm.png)
Maybe I'm barking up the wrong tree. If I need to be more concise or give more information, please let me know. Any direction would be so awesome.
To clarify, I'm trying to figure out where the code is pulling the formula for Gaussian log prob if it doesn't match the PDF.
|
Gaussian Log Probability
|
CC BY-SA 4.0
| null |
2023-05-09T02:25:43.840
|
2023-05-09T02:34:53.377
|
2023-05-09T02:34:53.377
|
387526
|
387526
|
[
"multivariate-normal-distribution",
"lognormal-distribution",
"actor-critic"
] |
615283
|
1
| null | null |
0
|
33
|
I ran a multiple regression using statsmodels. I wanted to verify my understanding of calculations for log-likelihood (ll), AIC and BIC. So I attempted to manually calculate the ll, AIC and BIC for the regression and compare my results to what I got from statsmodels. The results I got are a bit off for AIC and BIC.
Below is the statsmodels output in an array format:
```
import numpy as np
smdata = np.array([[-27362., -20881.], #ll for y1 and y2
[5.473e+04, 4.177e+04], #aic for y1 and y2
[5.477e+04, 4.181e+04]]) #bic for y1 and y2
```
Now, below is what I got from my attempt at manual estimation; the estimates for AIC and BIC are a bit off, and I am not sure whether the differences are due to rounding:
```
mdata = np.array([[-27362.332, -20880.994], #ll for y1 and y2
[54734.664, 41771.988], #aic for y1 and y2
[54771.464, 41808.788]]) #bic for y1 and y2
```
The formulae I used for the ll, AIC and BIC are below:
```
#ll
ll = -(n / 2) * np.log(2 * np.pi) - (n / 2) * np.log(rss / n) - n / 2
#aic
aic = -2 * ll + 2 * k
#bic
bic = -2 * ll + np.log(n) * k
```
Finally, the values for the estimations are below:
```
n = 11614,
k = 5
rss = np.array([75663.11462955, 24783.19428754]) #y1 and y2
```
I am satisfied with the results for the ll. I just want some insight into why the AIC and BIC values are a bit off.
|
AIC and BIC Manual Calculations Are a Bit Off From Statsmodels Estimates in Python
|
CC BY-SA 4.0
| null |
2023-05-09T03:10:30.870
|
2023-05-09T03:10:30.870
| null | null |
364295
|
[
"python",
"aic",
"statsmodels",
"bic"
] |
615284
|
1
| null | null |
0
|
23
|
I have a question about considering the spatial effect on the longitudinal data in a survival model. I have incomplete longitudinal data (just two observations per individual) and survival data. I know the longitudinal data are related to space; I would like to add the spatial effect of the longitudinal data into the joint model.
Most of the material I have seen analyzes spatial survival models, which consider individual behavior in different regions; that is, they consider the spatial effect on survival directly, rather than having spatial information affect the longitudinal data, which in turn affects the survival probability.
|
How to consider the spatial effect on longitudinal data in a joint survival model?
|
CC BY-SA 4.0
| null |
2023-05-09T03:31:07.830
|
2023-05-09T03:31:07.830
| null | null |
327159
|
[
"mathematical-statistics",
"survival",
"panel-data"
] |
615285
|
2
| null |
615279
|
1
| null |
Your intuition is a little off here, but you are getting close to the true situation. It is possible to show that a decreasing hazard rate implies a convex log-survival function, and the convexity of the survival function depends partly on the rate-of-change of the hazard function, but also on the actual hazard value. Below I will show how you can derive relevant results for the relationship between a decreasing hazard function and the survival function. I have not included any analysis of the skewness, since this has a complicated relationship to the hazard function.
---
The CDF $F$ can be written in terms of the hazard function $h$ as:
$$F(t) = 1 - \exp \bigg( - \int \limits_0^t h(r) \ dr \bigg),$$
so the log-survival function is:
$$\log S(t) = \log (1-F(t)) = - \int \limits_0^t h(r) \ dr.$$
Consequently, applying [Leibniz integral rule](https://en.wikipedia.org/wiki/Leibniz_integral_rule) we have:
$$\begin{align}
\frac{d}{dt} \log S(t) &= - h(t), \\[6pt]
\frac{d^2}{dt^2} \log S(t) &= - \frac{dh}{dt}(t). \\[6pt]
\end{align}$$
We then have:
$$\begin{align}
\frac{d^2 S}{dt^2}(t)
&= S(t) \cdot \frac{d^2}{dt^2} \log S(t) + \frac{1}{S(t)} \bigg( \frac{dS}{dt}(t) \bigg)^2 \\[6pt]
&= S(t) \Bigg[ \frac{d^2}{dt^2} \log S(t) + \bigg( \frac{1}{S(t)} \frac{dS}{dt}(t) \bigg)^2 \Bigg] \\[6pt]
&= S(t) \Bigg[ \frac{d^2}{dt^2} \log S(t) + \bigg( \frac{d}{dt} \log S(t) \bigg)^2 \Bigg] \\[6pt]
&= - S(t) \Bigg[ \frac{dh}{dt}(t) - h(t)^2 \Bigg]. \\[6pt]
\end{align}$$
Both the survival function and the log-survival function are non-increasing functions, which is a standard property that does not require any assumptions. As you can see, if $h$ is decreasing then the second derivative of the log-survival function must be positive, which means that the log-survival function is convex. As to the survival function, the last line above gives $\frac{d^2 S}{dt^2}(t) = S(t) \big[ h(t)^2 - \frac{dh}{dt}(t) \big]$, so it will be concave at any point where the hazard is increasing faster than the square of the hazard rate (i.e., where $\frac{dh}{dt}(t) > h(t)^2$), and convex at any point where the rate-of-change of the hazard is smaller than the square of the hazard rate. In particular, a decreasing hazard makes the bracketed term positive, so the survival function is also convex wherever the hazard is decreasing.
Consequently, if you are willing to assume that the hazard rate is decreasing over some period, both the log-survival function and the survival function are convex over that period; the survival function can only become concave over periods where the hazard is increasing sufficiently quickly (faster than the square of its value).
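As a numerical illustration of these results (my own sketch, not part of the original answer), take a Weibull distribution with shape parameter below one, which has a decreasing hazard; discrete second differences confirm that both $\log S$ and $S$ are then convex:
```
import numpy as np

k, lam = 0.5, 1.0                      # Weibull shape < 1  =>  decreasing hazard h(t)
t = np.linspace(0.05, 5.0, 400)        # avoid t = 0, where the hazard blows up
S = np.exp(-(t / lam) ** k)            # survival function S(t) = exp(-(t/lam)^k)
log_S = np.log(S)

d2_log_S = np.diff(log_S, 2)           # discrete second differences ~ second derivative
d2_S = np.diff(S, 2)

print(np.all(d2_log_S > 0), np.all(d2_S > 0))   # True, True: both curves are convex
```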
| null |
CC BY-SA 4.0
| null |
2023-05-09T04:12:00.237
|
2023-05-09T04:12:00.237
| null | null |
173082
| null |
615286
|
1
|
615292
| null |
0
|
19
|
Consider a null hypothesis:
\begin{align*}
H_0:\;\beta=0
\end{align*}
Here, we can estimate only the upper and lower bounds of $\beta$.
To be clear, let the upper and lower bounds of $\beta$ be $\alpha_u$ and $\alpha_l$, respectively, then what we can get is the estimate $[\widehat{\alpha}_l,\widehat{\alpha}_u]$.
I am wondering whether it is possible to test the null hypothesis using only the estimates $\widehat{\alpha}_l$ and $\widehat{\alpha}_u$.
Is there a reference that deals with this kind of problem?
|
Hypothesis test for a parameter when only the upper and lower bounds of the parameter are estimable
|
CC BY-SA 4.0
| null |
2023-05-09T04:37:19.737
|
2023-05-09T05:29:29.723
| null | null |
375224
|
[
"hypothesis-testing",
"bounds"
] |
615287
|
1
| null | null |
8
|
1347
|
Suppose we take the classical linear regression model:
$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i$$
Over the years, I have heard so many people say that such an interpretation can be drawn from this model:
- On average, a one unit increase in $x_i$ "causes" a $\beta_1$ unit increase in $y_i$
However, we are also told that this model can not directly imply this type of "causation". As a matter of fact, there is a whole field of Statistics that studies causality called "Causal Inference" ([https://en.wikipedia.org/wiki/Causal_inference](https://en.wikipedia.org/wiki/Causal_inference)).
In general, is there any language that can be used to interpret this kind of model without suggesting any misleading claims about causality?
|
Incorrectly Using the Word "Causal" to Describe a Regression Model?
|
CC BY-SA 4.0
| null |
2023-05-09T04:59:39.217
|
2023-05-09T08:49:49.940
|
2023-05-09T05:38:12.847
|
77179
|
77179
|
[
"regression",
"terminology",
"causality"
] |
615288
|
2
| null |
615276
|
2
| null |
If you use nearest neighbor matching, the result is sensitive to the order of the matches. The relationship between the order of matches and match quality has been studied in the literature, but the results only apply in general and may not apply to your specific dataset. You should see matching order as one of many tuning parameters that can be adjusted in matching. The best matching order is the one that yields the best balance.
You can simply avoid making this choice by using optimal matching, which is deterministic and doesn't depend on order. Optimal matching minimizes the overall within-pair distances between units. With larger samples, optimal matching may be slow or intractable, so nearest neighbor matching can be a fast alternative. Otherwise, optimal matching should be preferred when possible.
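To illustrate the difference (a minimal sketch of my own, not the implementation the answer has in mind, e.g. MatchIt in R): optimal pair matching can be cast as an assignment problem that minimizes the total within-pair distance, whereas greedy nearest-neighbor matching depends on the order in which treated units are processed. The propensity-score distances below are made up for the example.
```
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
ps_treated = rng.uniform(0.3, 0.9, size=20)   # hypothetical propensity scores, treated units
ps_control = rng.uniform(0.1, 0.8, size=50)   # hypothetical propensity scores, control units

# distance between every treated unit and every control unit
dist = np.abs(ps_treated[:, None] - ps_control[None, :])

# optimal matching: minimizes the total within-pair distance, no order dependence
rows, cols = linear_sum_assignment(dist)
print("optimal total distance:", dist[rows, cols].sum())

# greedy nearest-neighbor matching without replacement: depends on processing order
available = np.ones(len(ps_control), dtype=bool)
greedy_total = 0.0
for i in range(len(ps_treated)):                          # processing order matters here
    j = int(np.argmin(np.where(available, dist[i], np.inf)))
    available[j] = False
    greedy_total += dist[i, j]
print("greedy total distance: ", greedy_total)            # >= the optimal total
```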
| null |
CC BY-SA 4.0
| null |
2023-05-09T05:03:10.050
|
2023-05-09T05:03:10.050
| null | null |
116195
| null |
615289
|
1
| null | null |
0
|
27
|
Power, in a two-sample test, is partially determined by the narrowness of the confidence interval for the test statistic the alternative hypothesis is tested over.
This is captured in this photo
[](https://i.stack.imgur.com/W98AKl.png)
via [Wikipedia](https://en.wikipedia.org/wiki/File:Statistical_test,_significance_level,_power.png)
I'm confused about how power can be calculated in the case of a one-sample hypothesis test of means, where we're setting some kind of cut-point (a critical value) within the tail of a distribution, such as a one-sided z- or t-test.
In this case, our significance level and power seem to be interchangeable. This is the case because there is a zero probability that the mean we're testing against is anything other than what is set at the critical value.
Is this correct? Or does it not make sense to speak of power in the one-sample context at all?
|
How are power and statistical significance related for a one-sample hypothesis test of means?
|
CC BY-SA 4.0
| null |
2023-05-09T05:07:41.303
|
2023-05-09T05:07:41.303
| null | null |
43080
|
[
"hypothesis-testing",
"statistical-significance",
"sampling",
"statistical-power",
"critical-value"
] |
615290
|
1
| null | null |
2
|
92
|
If feature importance is only calculated from the training set according to [here](https://stats.stackexchange.com/questions/475567/permutation-feature-importance-on-train-vs-validation-set), does it mean one should never compute SHAP values on the test set? What would it mean if I compute SHAP values from the test set?
For instance, if I have the following code, `clf_opt` is a random forest estimator and the code runs without any error. What do the SHAP values of `X_test` represent in this case?
(Note that the validation set (not the test set) is part of `X_train`, which goes into the K-fold cross-validation.)
```
from sklearn.model_selection import train_test_split
import shap

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# ... perform K-fold cross-validation to obtain the tuned estimator clf_opt ...
explainer = shap.TreeExplainer(clf_opt, X_train)  # training set is background data
shap_values = explainer.shap_values(X_test)       # test set is foreground data
```
I am trying to understand the right way to compute SHAP values for a random forest. Most online examples just fit the random forest on the entire dataset X without `train_test_split` and compute SHAP values on X without using background data.
I don't know what the right way to look at Shapley values is. Should I compute Shapley values only on the training set and never on the test set?
```
explainer = shap.TreeExplainer(clf_opt,X_train)
shap_values = explainer.shap_values(X_train)
```
OR (without providing background data)
```
explainer = shap.TreeExplainer(clf_opt)
shap_values = explainer.shap_values(X_train)
```
OR
```
explainer = shap.TreeExplainer(clf_opt, X_train)
shap_values = explainer.shap_values(X_test)
```
OR
```
explainer = shap.TreeExplainer(clf_opt)
shap_values = explainer.shap_values(X_test)
```
I am very lost. Would really appreciate some guidance here.
|
If feature importance is only computed based on training set, does it mean one should never compute shap values on test set?
|
CC BY-SA 4.0
| null |
2023-05-09T05:08:55.900
|
2023-05-26T03:02:33.723
|
2023-05-09T13:28:07.740
|
20917
|
20917
|
[
"machine-learning",
"python",
"importance",
"shapley-value"
] |
615291
|
2
| null |
615287
|
19
| null |
On average, a one unit increase in $x_i$ ~~causes~~ is associated with an increase in $y_i$ of $\beta_1$ units.
| null |
CC BY-SA 4.0
| null |
2023-05-09T05:21:51.673
|
2023-05-09T08:39:26.190
|
2023-05-09T08:39:26.190
|
121522
|
121522
| null |
615292
|
2
| null |
615286
|
1
| null |
$\beta$ is a parameter (not a random quantity); it has some true scalar value, and therefore it does not have upper and lower bounds.
Sometimes we use confidence intervals, which are random (since they are based on the random sample) and for which we have a level of confidence (usually denoted by $1 - \alpha$) that they indeed cover the parameter.
The (parametric) CI is equivalent to a hypothesis test at significance level $\alpha$.
| null |
CC BY-SA 4.0
| null |
2023-05-09T05:29:29.723
|
2023-05-09T05:29:29.723
| null | null |
73117
| null |
615293
|
1
| null | null |
0
|
14
|
I have 100k historical horse races. The data are sequential in time, so I wish to use online learning to train an LSTM (or a sequential attention model or something similar) such that the model updates after each race. The output of the model should correspond to the probability of each horse in a new race winning that race.
I am satisfied with the above framing of my problem. But I am very unsure about how to encode each race into a usable input type for some sequential model. Here are some example fields for each race:
```
"timestamp": 1665523432,
"grade": "5",
"track_name": "Cantebury",
"runners":
[
{
"horse_name": "Pharlap",
"finishing_position": 1,
"finishing_time": 30.83,
"weight": 66
}
...
]
```
I am unsure how to deal with the fact that each race has a variable number of runners. Maybe I can take the maximum and just zero-pad. There are also over 10k unique runners, so one-hot encoding `horse_name` intuitively seems too sparse.
If anyone could please guide me to some resources, analogous ML problems or offer any advice, I would greatly appreciate it.
|
How to encode sparse and variable length sequential data
|
CC BY-SA 4.0
| null |
2023-05-09T06:10:11.440
|
2023-05-09T06:10:11.440
| null | null |
225810
|
[
"machine-learning",
"time-series",
"dimensionality-reduction",
"lstm",
"attention"
] |
615295
|
2
| null |
210622
|
0
| null |
Unadjusted Kaplan-Meier curves are not a problem in general. The question is: what do you want to show? Usually you would want the plot to be a visualization of the causal average treatment effect of a grouping variable on the survival probability. If your groups were randomized in a large study, you can usually go ahead and use the standard Kaplan-Meier curves, since randomization should take care of confounding already.
However, if you are analyzing the effect of a non-randomized group (which is always the case in observational studies), it may be very important to adjust for confounders to obtain an unbiased estimate of the treatment-specific survival curves.
The literature on causal inference deals with this problem in great detail. Many methods have been proposed to obtain confounder-adjusted survival curves. Which method you want to use depends on your specific situation. In a recent article my colleagues and I give a detailed overview and comparison of those methods, which may provide some guidance ([https://doi.org/10.1002/sim.9681](https://doi.org/10.1002/sim.9681)).
I also developed the `adjustedCurves` R package which implements most of the available methods. Here is a very small example on how this package may be used to obtain adjusted survival curves using g-computation:
```
# install the package if needed
install.packages("adjustedCurves")

# load packages
library(adjustedCurves)
library(survival)

# just used to make the example reproducible
set.seed(42)

# simulate some example data
sim_dat <- sim_confounded_surv(n=50, max_t=1.2)
sim_dat$group <- as.factor(sim_dat$group)

# outcome model
cox_mod <- coxph(Surv(time, event) ~ x1 + x2 + x4 + x5 + group,
                 data=sim_dat, x=TRUE)

# using g-computation with confidence intervals
adjsurv <- adjustedsurv(data=sim_dat,
                        variable="group",
                        ev_time="time",
                        event="event",
                        method="direct",
                        outcome_model=cox_mod,
                        conf_int=TRUE,
                        bootstrap=FALSE)

plot(adjsurv, conf_int=TRUE)
```
[](https://i.stack.imgur.com/R1yG6.png)
More information can be found in the extensive documentation of the package and the article I cited.
| null |
CC BY-SA 4.0
| null |
2023-05-09T06:16:44.543
|
2023-05-09T06:16:44.543
| null | null |
305737
| null |
615296
|
1
| null | null |
3
|
37
|
# Background
Discussions at [http://blog.geomblog.org/2005/10/sampling-from-simplex.html](http://blog.geomblog.org/2005/10/sampling-from-simplex.html) and [https://cs.stackexchange.com/questions/3227/uniform-sampling-from-a-simplex](https://cs.stackexchange.com/questions/3227/uniform-sampling-from-a-simplex) have shown algorithms for sampling uniformly from the unit simplex:
- Let $X_i \sim Exponential(1)$.
Then $X_i/\sum_i X_i$ gives a uniform sample from the unit simplex (a minimal sketch of this construction is given below).
- Let $X_i \sim Uniform([0,1])$.
Then the differences of the order statistics of $\{X_i\}$ give a uniform sample from the unit simplex.
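For concreteness, here is a minimal Python sketch of the first (exponential) construction described above; the function name is my own:
```
import numpy as np

def sample_unit_simplex(n, rng=None):
    """Uniform sample from the unit simplex {x : x_i >= 0, sum_i x_i = 1}."""
    rng = np.random.default_rng(rng)
    e = rng.exponential(scale=1.0, size=n)   # X_i ~ Exponential(1)
    return e / e.sum()                       # normalize: uniform on the simplex

print(sample_unit_simplex(4, rng=0))         # four non-negative coordinates summing to 1
```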
# Problem
Is there an algorithm to sample uniformly at random from the intersection of the unit hypercube and a simplex? i.e.
$\sum_i X_i = c$, where $c \in [0, n]$ and $X_i \in [0,1], \forall i$
I have seen someone ask exactly the same question at [https://math.stackexchange.com/questions/4268521/random-sampling-from-the-intersection-of-cube-and-simplex](https://math.stackexchange.com/questions/4268521/random-sampling-from-the-intersection-of-cube-and-simplex), but it had not been solved at the time of asking.
# Attempt
Here is my naive approach:
- Find out all vertices of the set, denoted $V:=\{V_i\}$.
- Draw a uniform sample from the unit $|V|$-simplex, denoted $W:=\{W_i\}$.
- Since the set is convex, the convex combination $WV := \sum_i W_iV_i$ would be a sample from the set.
The problem with this naive algorithm is its complexity: it would be $O(2^n)$ in both time and space.
Is there a better algorithm? Or is there a more precise way to state the question?
|
Uniform sampling from intersection of hypercube and simplex
|
CC BY-SA 4.0
| null |
2023-05-08T08:55:33.430
|
2023-05-09T20:06:07.947
|
2023-05-09T06:26:57.757
|
387539
|
387539
|
[
"algorithms",
"randomness",
"sampling"
] |
615297
|
2
| null |
615296
|
2
| null |
I will suggest two algorithms that work if $c$ is not too large, but probably will work poorly for large values of $c$ (which I realize is the important case). As such, I recognize this might not be useful for you in practice. I hope someone else will be able to offer a better answer.
# Algorithm 1: rejection sampling
- Repeat, until $x_1,\dots,x_n$ are all in the hypercube:
  - For $i:=1,2,\dots,n$: sample $u_i$ from $\text{Exp}(1)$ (the exponential distribution).
  - Set $u := u_1 + \dots + u_n$, and set $x_i := c \cdot u_i/u$ for each $i$.
In other words, in each iteration we sample uniformly at random from the simplex, check whether the sample is in the unit hypercube, and if not, repeat until we find a sample that is in the hypercube.
Correctness: The correctness of Algorithm 1 follows immediately from the correctness of rejection sampling.
Running time: If $c \le n/\log(n)$, then Algorithm 1 will be efficient. In particular, the expected number of times you repeat will be about some constant. But if $c$ gets larger, the running time of Algorithm 1 will become much worse (probably exponentially rapidly).
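Here is a minimal Python sketch of Algorithm 1 (my own illustration of the rejection sampler described above; the function name and the iteration cap are arbitrary choices):
```
import numpy as np

def sample_cube_simplex(n, c, rng=None, max_iter=100_000):
    """Uniform sample from {x in [0,1]^n : sum_i x_i = c} by rejection sampling."""
    rng = np.random.default_rng(rng)
    for _ in range(max_iter):
        u = rng.exponential(scale=1.0, size=n)   # u_i ~ Exp(1)
        x = c * u / u.sum()                      # uniform on the simplex {sum x_i = c}
        if np.all(x <= 1.0):                     # accept only points inside the hypercube
            return x
    raise RuntimeError("no sample accepted; c is probably too large for rejection sampling")

x = sample_cube_simplex(n=10, c=2.0, rng=0)
print(x.sum(), x.max())                          # sums to 2.0, all coordinates <= 1
```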
# Algorithm 2: approximate sampling
- Set $\alpha := (n+4\sqrt{n})/c$.
- Repeat, until $x_1,\dots,x_n$ are all in the hypercube:
  - For $i:=1,2,\dots,n$: repeatedly sample $u_i$ from $\text{Exp}(1)$ until $u_i \le \alpha$ (a truncated exponential distribution).
  - Set $u := u_1 + \dots + u_n$, and set $x_i := c \cdot u_i/u$ for each $i$.
I claim this is efficient if $c$ is not too large, and it samples from approximately the correct distribution if $c$ is not too large.
Running time: Intuitively, the number of iterations of the outer loop is significantly less than Algorithm 1, because Algorithm 2 will "retry" values of $u_i$ that are almost guaranteed to lead to rejection.
Correctness: This algorithm is approximately correct, as long as it doesn't make too many iterations. The rough idea is that if we reject a sample of $u_i$ (because it is larger than $\alpha$), then this sample would almost certainly be rejected as outside the hypercube, so we can immediately redraw it. We'll prove this by showing that the probability that they deviate from the proper distribution is not too large. Let's classify an iteration of the outer loop of either algorithm as good if $u \le n+4\sqrt{n}$, or bad otherwise.
Notice that the probability of an iteration being bad is small (about $3 \times 10^{-5}$ by the Central Limit Theorem, as the exponential distribution has mean 1 and standard deviation 1). As long as Algorithm 2 doesn't do too many iterations, it's likely that all iterations of Algorithm 2 will be good.
Moreover, in a good iteration, the probability distribution of $x_1,\dots,x_n$, conditioned on them being in the hypercube, is the same for both algorithms. Why? In Algorithm 1, if $u_i > \alpha$ in a good iteration, then
$$x_i= c \cdot u_i/u > c \alpha/(n+4\sqrt{n}) =1,$$
so $x_1,\dots,x_n$ will surely not be in the hypercube and will be rejected anyway. Hence truncating the exponential distribution in Algorithm 2 does not change the distribution of accepted outputs (conditioned on it being a good iteration).
I don't know how large $c$ can be with Algorithm 2. I think for large enough values of $c$, Algorithm 2 will still exhibit exponential running time, but hopefully Algorithm 2 can support values of $c$ that are a bit larger than Algorithm 1.
| null |
CC BY-SA 4.0
| null |
2023-05-09T04:40:26.440
|
2023-05-09T20:06:07.947
|
2023-05-09T20:06:07.947
|
2921
|
2921
| null |
615299
|
1
| null | null |
0
|
30
|
I have relatively large data sets with about 20,000 to 25,000 data points. The data are the orientation angles (0-180°) of fibers in a composite material. Below you can see two example histograms, showing the most extreme cases. The first one has a "nice" bell-shaped distribution; the second one is also bell-shaped, but the tails are much more pronounced. The aim is to fit a parametric probability distribution to the data. This pdf will be used later to create a statistically representative model of the composite. Therefore, standard/established pdfs should be used. My numerical fitting strategy was:
- perform maximum likelihood estimation to determine the characteristic parameters of different pdf functions (normal, t-location scale, logistic, ...).
- quantifying the fit with the KS-test and determining the best function by comparing the p-value.
Now to the problem:
The p-values of many samples are quite small (10^-8 and smaller). In these cases, the decision criterion "largest p-value" does not seem to be significant. After some research, I would say that the small p-values can be explained by the power of the KS-test on large data sets and the pronounced tails ("outliers").
Change in numerical fitting strategy:
The KS-test does not seem to be appropriate for the data, or rather, the data are not appropriate for the KS test. I am aware that one should not select tests or change data to get the desired result. But I still want to approximate my data with standard parametric pdfs, knowing that the fit is not ideal. I would now replace the second step of the fitting strategy with a comparison of the sum of squared errors (SSE). I calculate the error as the distance between the cdf of my empirical data and the cdf of the fitted function, evaluated for each data point. This is advantageous because all errors are taken into account, not just the largest as in the KS-test. Also, the SSE is in a range between 10^-1 and 10^1, so a comparison between the different types of functions seems to make more sense.
My questions:
- Is this an appropriate/valid strategy?
- If you look at the histograms, you can see some kind of constant baseline. This cannot be represented by the pdfs as they tend towards 0 at the tails. One idea is to split the pdf into a constant and a bell-shaped one. Is this an idea that should be pursued further or is it a "waste of time"?
- Do you have any advice or comments on points I have missed or should approach differently?
[](https://i.stack.imgur.com/nQrcx.png)
[](https://i.stack.imgur.com/7ch3I.png)
|
Large data sets with heavy tails - best fitting strategy
|
CC BY-SA 4.0
| null |
2023-05-09T06:48:04.977
|
2023-05-09T06:48:04.977
| null | null |
386397
|
[
"distributions",
"density-function",
"goodness-of-fit",
"kolmogorov-smirnov-test",
"circular-statistics"
] |
615300
|
2
| null |
614735
|
6
| null |
Your take seems to be correct except for this bit:
>
yet in both approaches, I am still estimating the same quantity (i.e. increase odds relative to some change in variable when all other variables are equal).
In the "simple" approach, you do not condition on variables other than gender (and thus you do not ensure the other variables stay fixed when going from one gender to the other; there may be confounding), while in the logistic regression approach, you do. No wonder the two approaches yield different results.
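A small simulation sketch (my own illustration, with made-up variable names and coefficients) shows the point: when a confounder influences both the grouping variable and the outcome, the crude odds ratio from the raw proportions differs from the conditional odds ratio estimated by a logistic regression that adjusts for the confounder.
```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.5, size=n)                  # confounder
g = rng.binomial(1, 0.3 + 0.4 * z)                # group membership depends on the confounder
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * g + 1.5 * z)))
y = rng.binomial(1, p)                            # outcome depends on group and confounder

# crude ("simple") odds ratio, ignoring the confounder
a = np.sum((g == 1) & (y == 1)); b = np.sum((g == 1) & (y == 0))
c = np.sum((g == 0) & (y == 1)); d = np.sum((g == 0) & (y == 0))
crude_or = (a * d) / (b * c)

# conditional odds ratio from a logistic regression adjusting for the confounder
X = sm.add_constant(np.column_stack([g, z]))
fit = sm.Logit(y, X).fit(disp=0)
adjusted_or = np.exp(fit.params[1])

print(crude_or, adjusted_or)   # crude OR is confounded; adjusted OR is close to exp(0.5)
```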
| null |
CC BY-SA 4.0
| null |
2023-05-09T07:02:28.037
|
2023-05-09T07:02:28.037
| null | null |
53690
| null |
615301
|
1
| null | null |
1
|
27
|
There is a folklore white noise hypothesis related to (and equivalent to some forms of) the efficient market hypothesis in finance (see references below). But are there some asset pairs whose return time series (or perhaps some "natural" transforms of those time series) are approximately noise of a color other than white? I ask as a nonspecialist, obviously.
Thank you.
[https://www.jstor.org/stable/2326311](https://www.jstor.org/stable/2326311)
[https://www.lasu.edu.ng/publications/management_sciences/james_kehinde_ja_10.pdf](https://www.lasu.edu.ng/publications/management_sciences/james_kehinde_ja_10.pdf)
[http://www2.kobe-u.ac.jp/~motegi/WEB_max_corr_empirics_EJ_revise1_v12.pdf](http://www2.kobe-u.ac.jp/%7Emotegi/WEB_max_corr_empirics_EJ_revise1_v12.pdf)
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8450754/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8450754/)
[https://journals.sagepub.com/doi/pdf/10.1177/0256090919930203](https://journals.sagepub.com/doi/pdf/10.1177/0256090919930203)
[http://www.ijhssnet.com/journals/Vol_2_No_22_Special_Issue_November_2012/23.pdf](http://www.ijhssnet.com/journals/Vol_2_No_22_Special_Issue_November_2012/23.pdf)
|
What colors of financial time series are there?
|
CC BY-SA 4.0
| null |
2023-05-09T07:06:34.380
|
2023-05-09T07:38:38.393
| null | null |
387544
|
[
"time-series",
"finance",
"white-noise"
] |
615302
|
2
| null |
615271
|
1
| null |
This looks like a standard ordinary least squares regression problem. Your $y$ is the dependent variable and $x$ is the independent/predictor variable. You'd formulate the regression problem as you stated, possibly with an intercept term too.
$$
y_i=\alpha x_i+c +\epsilon_i
$$
The regression would estimate $\hat\alpha,\hat c$.
If you have a new $x_j$ you can make a prediction $y_j = \hat \alpha x_j + \hat c$.
This can be implemented using built-in functions in R, Excel, etc.
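A minimal sketch in Python (my own choice of tooling; the answer mentions R and Excel) of fitting $y_i = \alpha x_i + c + \epsilon_i$ by ordinary least squares and predicting for a new $x_j$:
```
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=100)
y = 2.5 * x + 4.0 + rng.normal(scale=1.0, size=100)   # simulated data: alpha = 2.5, c = 4

alpha_hat, c_hat = np.polyfit(x, y, deg=1)            # least-squares estimates of slope and intercept

x_new = 7.3
y_pred = alpha_hat * x_new + c_hat                    # prediction for a new observation
print(alpha_hat, c_hat, y_pred)
```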
| null |
CC BY-SA 4.0
| null |
2023-05-09T07:18:03.980
|
2023-05-09T07:18:03.980
| null | null |
369002
| null |
615303
|
1
|
615307
| null |
2
|
33
|
There is a lot of contradictory information about the purpose of phase 2 clinical trials.
Many sources claim that the aim is to test whether a treatment works, and sample size calculators exist to compare a response rate in a treatment group (as a binary statistic) to a historical control.
But it's clear that a phase 2 trial is almost by definition underpowered to detect minimum clinically important differences, the use of a binary endpoint would reduce the power further, and if not randomised then the lack of concurrent control means that any positive detected effect is very likely to arise from some bias or other.
So, if we can't trust positive or negative results from these trials (because they are underpowered or biased) then what, from a statistical point of view, can we hope to learn about efficacy?
What research objective is actually being met by a typical phase 2 design?
|
What is the statistical rationale for a phase 2 clinical trial?
|
CC BY-SA 4.0
| null |
2023-05-09T07:23:55.687
|
2023-05-09T08:08:13.593
| null | null |
68149
|
[
"experiment-design",
"clinical-trials"
] |
615305
|
1
| null | null |
1
|
22
|
I am trying to find a way to calculate the likelihood that a machine will fail. Specifically, I would like to receive a prediction such as, "There is a 28% chance that this machine will fail within the next 3 days." To do this, I have a time series of sensor data and a known limit for failure.
For instance, let's say I have a motor powering a device, and I have data on the motor's power consumption over time. I also have the power limit for the motor, and I assume that the power required to move the device will increase over time. In an ideal scenario, the power increase over time would be linear, allowing me to calculate the exact date of failure when the power required meets the motor's power limit.
However, in reality, there will be errors and noise in the sensor data, making it impossible to determine the exact date of failure. Instead, I can use a linear fit to calculate when the power limit is reached, but the result will only give me the most likely date of failure.
Therefore, I would like to obtain a probability of failure over time correlation from the data I have. For instance, I would like to know the probability of failure within the next 24 hours (12%), the next 2 days (19%), the next 3 days (35%), and so on, up to the next 10 days (98%) or so.
While I know others have likely faced this issue before, I have not been able to find a suitable solution or suggestion. I am hoping you can provide me with some guidance or links to potential solutions.
|
Calculate failure probability for given time series data
|
CC BY-SA 4.0
| null |
2023-05-09T07:40:02.350
|
2023-05-09T07:40:02.350
| null | null |
387547
|
[
"time-series",
"probability",
"predictive-models",
"failure-rate",
"sensor-data"
] |
615306
|
1
| null | null |
0
|
18
|
So I have got a set of data, which consists of the abundance of each butterfly species at each sampling point, and the temperature of the respective sampling point. Can I first do a Spearman correlation to test the effect of temperature on the overall abundance of butterflies, and then repeat it for each species to assess the effect of temperature on each species individually?
|
multivariate pearson correlation analysis
|
CC BY-SA 4.0
| null |
2023-05-09T07:49:28.497
|
2023-05-09T07:49:28.497
| null | null |
387418
|
[
"correlation"
] |
615307
|
2
| null |
615303
|
2
| null |
What area are we talking about? Oncology or non-Oncology?
- non-Oncology: There are often two different types:
Phase 2a: Often "proof of concept" type trials that try to provide enough efficacy evidence to make the decision to continue further development sensible. If one simulates these things out with realistic assumptions (e.g. many drugs do not work all that well, realistic costs, discounted value of a potential future drug far in the future) and assumes that one also has other promising options to invest money/effort in and allocate patients to trials, it often turns out that it makes sense to have such an early hurdle, in part because of the effort involved in subsequent stages. The powering of such studies is usually a decision-theoretic trade-off between their cost and the value of (potentially) avoiding future cost or making future investment more certain.
Phase 2b: Dose finding (often parallel group or sometimes cross-over studies) to identify doses that provide the right trade-off between efficacy and safety. Often involving extensive modeling (e.g. using MCP-Mod or similar approaches). Again, there's huge costs in only determining your final dose in Phase 3 (including longer-term studies in Ph3, required size of the safety database for the dose to be approved, if not already done developing a final market image of a drug etc.) so that it makes sense to find a single dose that is promising to take forward. These types of studies are usually pretty well powered (at least for an analysis that uses modeling of dose-response and potentially tries to exploit things like patterns over time etc., and possibly uses shorter-term or different outcomes than Phase 3).
- Oncology:
Traditionally dose finding has not been as much of a focus in Oncology, but that may be changing due to regulatory changes.
You seem to be referencing a particular type of trial, often of a single-arm nature, where one looks at an outcome such as "objective response" (= the tumor shrinking sufficiently based on specific criteria). There are several complications here.
* Firstly, is this a surrogate outcome for what we really care about (in Oncology usually people surviving longer), which may not be the same between different cancer types.
* Secondly, are we sure there's a causal effect of the drug? That is not always clear if there's no control group. Formally, one could do a Bayesian comparison with historical controls (e.g. using robust meta-analytic predictive priors or any of the many alternatives). Often it's rather looked at in terms of threshold crossing, which in the end may be justified on a Bayesian basis or in other ways. Lots of issues play into this like sampling variation, changes in medical practice, other differences between current patients and historical controls etc. On a whole, the ICH E10 guidance is rather sensible on this topic, which is rather thoroughly debated with lots of references to whether medical interventions are parachutes or not.
* If the assumptions are done sensibly, such studies can be well powered. And, of course, their interpretation may be affected by the knowledge that underpowered studies are more likely to produce false positive "significant" results.
* Depending on how regulators judge the previous points in combination with the unmet need, the results of such trials might lead to accelerated/conditional approval (or not), but then there's usually still a Phase 3 trial that looks at progression free survival/overall survival.
In the end, a lot may just come down to [eNPV optimization](https://dx.doi.org/978-3-319-46076-5).
| null |
CC BY-SA 4.0
| null |
2023-05-09T08:08:13.593
|
2023-05-09T08:08:13.593
| null | null |
86652
| null |
615308
|
1
|
615310
| null |
3
|
162
|
I was modelling a linear regression (OLS) and tried using scaling techniques on the predictor variables. I could see the range of the variables change; however, the prediction results remain the same. I would like to learn why scaling does not affect the prediction output but does affect the coefficients. In addition, the accuracy and model evaluation parameters remain the same before and after scaling.
|
Why does feature scaling not affect prediction output in regression?
|
CC BY-SA 4.0
| null |
2023-05-09T08:09:58.087
|
2023-05-09T08:29:18.553
|
2023-05-09T08:27:00.247
|
35989
|
362211
|
[
"regression",
"least-squares",
"feature-scaling"
] |
615309
|
1
| null | null |
0
|
13
|
I already asked this question in the "Mathematics" stackexchange, but apparently did not find the right audience, so I am duplicating [my question](https://math.stackexchange.com/questions/4676240/equivalence-between-crlb-and-uncertainty-propagation-formula) here, hoping someone might be of help.
My question is: what is the relationship between the Cramér-Rao Lower Bounds (CRLB) and the uncertainty propagation formula?
I would have imagined that for a function $f$ of normally distributed random variables $x_i\sim N(\mu_i,\sigma_i^2)$, those two results might be similar if not equal.
The uncertainty propagation gives:
$$\Delta f^2 = \sum_i \left\vert\frac{\partial f}{\partial x_i}\right\vert^2\sigma_i^2$$
The CRLB can be written as:
$$\text{CRLB} = \left[\sum_i \frac{1}{\sigma_i^2}\left(\frac{\partial f}{\partial x_i}\right)^2 \right]^{-1}$$
Am I missing an underlying hypothesis?
|
Equivalence between CRLB and uncertainty propagation formula
|
CC BY-SA 4.0
| null |
2023-05-09T08:10:02.220
|
2023-05-09T08:10:02.220
| null | null |
385607
|
[
"random-variable",
"uncertainty",
"error-propagation",
"cramer-rao"
] |
615310
|
2
| null |
615308
|
6
| null |
This is the result that you should expect because [linear regression is scale-invariant](https://stats.stackexchange.com/questions/311198/showing-that-the-ols-estimator-is-scale-equivariant). This means that scaling does not affect its results. Scaling the features would change the parameters, but would not change the results. This is [easy to see](https://stats.stackexchange.com/questions/524188/why-are-my-weights-and-and-bias-incorrect-after-scaling/524192#524192), take a trivial linear regression model
$$
y = \beta_0 + \beta_1 x + \varepsilon
$$
Now, if you scaled $x$ by dividing it by some constant $c$, to get exactly the same (optimal) result as previously, you would need just to have $\beta_1$ be $c$ times larger, so it becomes $(\beta_1 c) (x/c) = \beta_1 x$. The parameter estimates would adapt to scaling by increasing or decreasing accordingly.
In fact, this is the case for many machine learning algorithms as you can learn from threads like [Which machine learning algorithms get affected by feature scaling?](https://stats.stackexchange.com/questions/401079/which-machine-learning-algorithms-get-affected-by-feature-scaling) or other questions tagged as [feature-scaling](/questions/tagged/feature-scaling). Scaling could make the difference though if you used regularized regression, or if using other than the OLS algorithm to obtain the results, or for random effects regression, etc.
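A quick numerical illustration of this scale-invariance (my own sketch, using scikit-learn as an assumed tool): dividing the feature by a constant $c$ multiplies its coefficient by $c$ but leaves the fitted values unchanged.
```
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.5, size=200)

c = 10.0
fit_raw = LinearRegression().fit(X, y)
fit_scaled = LinearRegression().fit(X / c, y)          # same model, feature divided by c

print(fit_raw.coef_, fit_scaled.coef_)                 # coefficient is c times larger after scaling
print(np.allclose(fit_raw.predict(X), fit_scaled.predict(X / c)))   # True: identical predictions
```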
| null |
CC BY-SA 4.0
| null |
2023-05-09T08:17:57.297
|
2023-05-09T08:29:18.553
|
2023-05-09T08:29:18.553
|
35989
|
35989
| null |
615312
|
2
| null |
615287
|
20
| null |
There is a very careful formulation in Gelman, Hill, and Vehtari Regression and Other Stories:
>
From the data alone, a regression only tells us about comparisons between units, not about changes within units. Thus the most careful interpretation of regression coefficients is in terms of comparisons, for example [...] "Comparing two items $i$ and $j$ that differ by an amount $x$ on predictor $k$ but are identical on all other predictors, the predicted difference $y_i - y_j$ is $\beta_k x$, on average.
This is of course a bit of a mouthful.
| null |
CC BY-SA 4.0
| null |
2023-05-09T08:49:49.940
|
2023-05-09T08:49:49.940
| null | null |
43625
| null |
615313
|
1
| null | null |
1
|
50
|
In Christopher Bishop's PRML book, in section 1.4, the author explains how intuition fails in higher dimensions. He does this by using a Gaussian distribution in higher dimensions. He concludes the following:
>
If we transform from Cartesian to polar coordinates, and then integrate out the directional variables, we obtain an expression for the density p(r) as a function of radius r from the origin. Thus p(r)δr is the probability mass inside a thin shell of thickness δr located at radius r. This distribution is plotted, for various values of D, in Figure 1.23, and we see that for large D the probability mass of the Gaussian is concentrated in a thin shell.
Exercise 1.20, I think, is supposed to prove this. It first takes an equation that supposedly describes the Gaussian distribution in higher dimensions, like so:
$$p(\textbf{x})=\frac{1}{(2\pi\sigma^2)^{D/2}}\exp\left(-\frac{\Vert\textbf{x}\Vert^2}{2\sigma^2}\right)$$
It then converts it into polar coordinates and finds the density w.r.t r, in which the "direction variables have been integrated out".
$$p(r)=\frac{S_Dr^{D-1}}{(2\pi\sigma^2)^{D/2}}\exp\left(-\frac{r^2}{2\sigma^2}\right)$$
# My questions
My questions are the following-
- How does the author obtain the first equation? Isn't the equation that describes a higher dimensional gaussian distribution the following?
$$N(\textbf{x}|\mu,\Sigma)=\frac{1}{(2\pi)^{D/2}}\frac{1}{|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(\textbf{x}-\mu)^T\Sigma^{-1}(\textbf{x}-\mu)\right)$$
I think he considers the mean to be 0 for every rv, but why isn't the covariance matrix there in the equation?
- How do we get the second equation, the one in the polar coordinate system, from the first? And what does the author mean by "integrating out the directional variables"?
# For more context, Exercise 1.20
## The question
[](https://i.stack.imgur.com/2BwWq.png)
## The book's solution
[](https://i.stack.imgur.com/oP9c6.png)
|
PDF for the high dimensional Gaussian distribution in Christopher Bishop's PRML book (Exercise 1.20)
|
CC BY-SA 4.0
| null |
2023-05-09T08:53:14.197
|
2023-05-09T13:13:37.933
|
2023-05-09T09:03:19.593
|
362671
|
358344
|
[
"probability",
"normal-distribution"
] |
615317
|
2
| null |
614570
|
7
| null |
There's a [famous quote from George Box](https://stats.stackexchange.com/questions/57407/what-is-the-meaning-of-all-models-are-wrong-but-some-are-useful) that
>
All models are wrong, but some are useful.
Sure, you can use a truncated distribution or another distribution that out of the box has a restricted range. But what would the upper bound be? If you get it wrong, your model would be wrong as well!
However, let's suppose that you didn't restrict the range. So what? Yes, your model could say that the life expectancy as a function of income is 200 years for a billionaire; so what? First of all, life expectancy is not a function of income: a billionaire may die just like any of us, in a scenario where their wealth would not change anything. So your model is obviously wrong, as life expectancy as a function of wealth is not the "true" explanation. The explanation could still be useful in some scenarios, though, while remembering the limited applicability of the model. Truncating the distribution would just be putting lipstick on the pig.
But, of course, if we have good reasons to use models with things like truncated distributions, we do so. But doing this to cover up the fact that the model does not work in some scenarios is not a good reason. In fact, it may hide the problems with the model, giving you a false sense of it working properly. It would only force your linear predictions to fit [the square hole](https://www.youtube.com/watch?v=jkz7bnYfuOI).
| null |
CC BY-SA 4.0
| null |
2023-05-09T09:25:09.500
|
2023-05-09T09:31:34.440
|
2023-05-09T09:31:34.440
|
35989
|
35989
| null |
615318
|
1
| null | null |
1
|
32
|
I have 2 groups of participants, A and B, in an opinion survey. Participants in A give an opinion on a group of "objects" X, and participants in B give an opinion on a group of "objects" Y. Opinions are values on a scale from 1 to 5. (I have all the scores, but cannot do additional surveys.)
The mean values (opinion scores) for X and Y are very similar. But most participants in A don't know Y, and most participants in B don't know X -- i.e. most cannot compare X and Y, but only objects within group X or group Y.
But there is a small overlap between participants A and B. Those participants score some objects from both X and Y, and suddenly there is a large difference between the mean values if I take scores only from the overlapping participants. The problem is that this is a small overlap of participants, and only a small portion of the objects is being scored.
Still, this leads me to believe that the (main) reason the mean values from both surveys are similar is that the participants have no comparison between the groups.
I know something about basic statistics, but not enough to argue/prove that the similarity in mean values is not a valid argument that groups X and Y are of similar quality.
I would need to do an appropriate analysis to give a more objective comparison of X and Y.
I need some pointers to literature where I would find established statistical methods to make a more valid comparison for this situation, or at least to argue that comparing mean values with unpaired participants is not valid. Maybe just the correct English terms would help, so I can search further on my own (English is not my native language and in school we only learned the terms in our language).
|
How to compare mean values in mostly un-paired opinion surveys
|
CC BY-SA 4.0
| null |
2023-05-09T09:37:40.127
|
2023-05-09T09:57:26.947
|
2023-05-09T09:57:26.947
|
121522
|
387563
|
[
"mean",
"missing-data"
] |
615320
|
1
| null | null |
0
|
9
|
I have a data set with 22,000 observations, which I split into train/test sets of 80%/20% respectively. I am trying out different regression models and different ways to represent my data, which concern travel times between bus stops.
I have a variable which represents time and which, depending on how I represent it, can be split either into 24 - 1 = 23 dummy-encoded covariates or into 2 covariates consisting of a sine and cosine representation.
If the two models are approximately similar, but the one using dummy encoding is slightly better in terms of my evaluation metrics, what would be the argument for choosing the dummy representation?
|
Covariate representation versus data size
|
CC BY-SA 4.0
| null |
2023-05-09T10:54:17.880
|
2023-05-09T10:54:17.880
| null | null |
320876
|
[
"modeling",
"model-evaluation",
"predictor",
"bias-variance-tradeoff"
] |
615321
|
2
| null |
239867
|
0
| null |
Note that
$$\frac{1-\Phi\left(x\right)}{\phi\left(x\right)}=\frac{1}{\lambda\left(-x\right)}$$
where
$$\lambda\left(x\right)=\frac{\phi\left(x\right)}{\Phi\left(x\right)}$$ is the inverse Mills ratio. It has the properties that $$\lambda\left(-x\right)>x;$$ $$-1\le\lambda'\left(x\right)\le 0.$$ So $$\lim_{x\to\infty}\frac{1-\Phi\left(x\right)}{\phi\left(x\right)/x}=\lim_{x\to\infty}\frac{x}{\lambda\left(-x\right)}.$$ Applying L'Hôpital's rule (noting that $\frac{d}{dx}\lambda(-x)=-\lambda'(-x)$) yields $$\lim_{x\to\infty}\frac{-1}{\lambda'\left(-x\right)}.$$ It is known that $$\lim_{x\to\infty}\lambda'\left(-x\right)=-1,$$ so the limit equals $1$.
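A quick numerical sanity check of this limit (my own addition, assuming SciPy is available):
```
from scipy.stats import norm

for x in [2.0, 5.0, 10.0, 20.0]:
    ratio = norm.sf(x) / (norm.pdf(x) / x)   # (1 - Phi(x)) / (phi(x) / x)
    print(x, ratio)                          # tends to 1 as x grows
```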
| null |
CC BY-SA 4.0
| null |
2023-05-09T11:05:48.950
|
2023-05-09T11:05:48.950
| null | null |
387570
| null |
615322
|
1
|
615325
| null |
2
|
134
|
I'm new to survival analysis. Usually in survival analysis, we want to model how the survival function progresses w.r.t. time. This is normally done through a Cox model or a KM model within a specific time horizon.
However, this assumes that we know exactly when the hazard event occurred. Suppose we want to model a system whose state we can only observe periodically, not continuously. If we then fit a traditional Cox model, it will be biased, as we don't know the exact time at which the hazard event occurred.
For example, say we want to model a survival function for a system with multiple agents which can be either alive or dead, but which we can only observe at a specific time interval. Then we know the number of alive and dead agents at time steps t-1 and t, but we cannot model how the survival function progressed within that interval.
So, in general, how can this kind of observational noise be dealt with such that the model is not biased?
|
How to deal with noisy observation in Survival Analysis
|
CC BY-SA 4.0
| null |
2023-05-09T11:11:45.047
|
2023-05-09T13:49:15.243
| null | null |
236007
|
[
"survival",
"modeling",
"cox-model",
"kaplan-meier",
"interpolation"
] |
615323
|
1
| null | null |
3
|
34
|
We have a rather mathematical/statistical problem and hope to get some help.
We have data from a multicenter randomized clinical trial where we compare single with combined treatments at 4 different time points (baseline, interim visit, final visit, follow up).
We fitted a mixed effects model to predict the outcome, with time point, objective (single vs. combined treatment) and the interaction of time point with objective as fixed effects, and patient ID nested in center as a random intercept. In addition to the unadjusted model we fitted an adjusted model adding fixed effects such as age and gender. We used the lmer function from the lme4 package in R. Here are the model equations:
```
lmer(outcome ~ time_point * objective + (1 | center/subject))
lmer(outcome ~ time_point * objective + age + gender + educational_attainment + hearing_aid_indication + phq9_baseline + (1 | center/subject))
```
This is the intention-to-treat analysis; missing data were imputed with mice before the analysis.
Here is the model output: [](https://i.stack.imgur.com/jo0RZ.jpg)
For time point (= visit_type) and the interaction with time point, the model output from the unadjusted model is identical to that of the adjusted model (up to 7 decimals). We are wondering whether this is mathematically/statistically possible, or whether it is more likely that we have a bug in our code.
In the per protocol analysis, where we don't impute missing values, we do not encounter identical results.
|
Identical output for unadjusted and adjusted mixed effect model with lme4 in R
|
CC BY-SA 4.0
| null |
2023-05-09T11:12:34.757
|
2023-05-09T15:17:08.557
|
2023-05-09T15:17:08.557
|
362671
|
387571
|
[
"r",
"mixed-model",
"lme4-nlme"
] |
615324
|
2
| null |
615266
|
1
| null |
It seems very questionable practice to exclude a study based on some automatic criterion flagging it as "an outlier". It may simply be variation that happens despite all studies addressing the same question in the same way; it may be that the "outlying" study actually measured what you really want while there is something wrong with the other studies; there might be genuine differences that are perfectly okay and all studies answer slightly different questions; or any kind of mixture of the above.
| null |
CC BY-SA 4.0
| null |
2023-05-09T11:19:39.150
|
2023-05-09T11:19:39.150
| null | null |
86652
| null |
615325
|
2
| null |
615322
|
3
| null |
We never have access to the exact moment an event occurred: even if we record time to event in days, we do not discern the time of day, and therefore the time intervals are days. However, with contextual reasoning, you might say a difference of less than a day is negligible for a specific analysis (although it might not be: for example, hours of survival in an intensive care unit). It is all about what time unit is acceptable to you.
If you believe that the interval you measured is sufficient to capture differences, then there might not be a problem. If you believe it is not sufficient, then your data are lacking and a model might not solve that.
I am not aware of any way to model the specific occurrence of the events within your time intervals, but if any such methods exist, they are likely assumption-heavy and unlikely to give reliable results without meeting those assumptions.
| null |
CC BY-SA 4.0
| null |
2023-05-09T11:23:05.440
|
2023-05-09T11:23:05.440
| null | null |
385890
| null |
615326
|
1
| null | null |
2
|
50
|
I'm looking at the association between two categorical variables in a genus of birds. The variables are 'Conservation Concern' (Yes/No) and another binary variable (Yes/No) and I have these for every species in the genus. I know there may be issues with the variables (i.e. have they been adequately described) but they are what they are. I've calculated odds ratios but I'm not sure if I'm supposed to do a statistical test to show they are different.
When I run a statistical test (e.g. a Fisher's exact test) I get an odds ratio of 6 (agreeing with my hand calculation) and a p-value of 0.11. This means I can't say there is a significant difference in conservation status between the two groups. But the point of tests like Fisher's exact test is to tell you whether the odds ratio is different from 1 because they are usually used to infer things about a larger population; here I'm not trying to estimate a value, as I know what it is, and it's 6. However, I suspect I should instead just be interpreting this as "I can't say this difference isn't due to chance".
I've looked at [this question](https://stats.stackexchange.com/questions/2628/statistical-inference-when-the-sample-is-the-population) but am still unsure, mainly because that example involves a changing population.
I am wary of reporting results without supporting them with statistical analyses. But I also think it is weird to discount a known true value. I therefore don't know whether or not it's appropriate to use a statistical test in this case to make inferences about my data.
|
Null hypothesis significance testing when values are known
|
CC BY-SA 4.0
| null |
2023-05-09T11:28:14.013
|
2023-05-09T18:40:05.437
| null | null |
158612
|
[
"hypothesis-testing",
"sampling",
"p-value",
"sample",
"fishers-exact-test"
] |
615327
|
1
| null | null |
2
|
37
|
I want to follow a similar approach to what was done in the paper of Bornkamp et al. in 2021 (DOI: 10.1002/pst.2104). More specifically they added a link to a R markdown file ([https://oncoestimand.github.io/princ_strat_drug_dev/princ_strat_example.html](https://oncoestimand.github.io/princ_strat_drug_dev/princ_strat_example.html)) with a working example how to implement the principal stratum strategy. One of their approach was modelling the stratum membership by a logistic regression and the predicted probabilities are then used to adjust a Cox-model by weighting. In the end, they compared the estimated coefficients to other Cox-models.
As said, I want to follow this approach, but I want to understand the reasoning behind it. In the paper by Bornkamp & Bermann from 2020 (DOI: 10.1080/19466315.2019.1575280) they give quite a nice justification for the weighting approach. However, they are not quantifying a regression coefficient but rather a difference in probabilities as
$$
\Delta(t)=P(T(1)>t\mid B(1)=1)-P(T(0)>t\mid B(1)=1)
$$
where
$$
\begin{aligned}
P(T(0)>t\mid B(1)=1)=&\int P(T(0)>t\mid B(1)=1,Z)p(Z\mid B(1)=1)dZ\\
&\text{Principal ignorability:}\\
=&\int P(T(0)>t\mid Z)p(Z\mid B(1)=1)dZ\\
&\text{Due to randomization:}\\
=&\int P(T(0)>t\mid X=0,Z)p(Z\mid B(1)=1)dZ\\
&\text{Consistency assumption:}\\
=&\int P(T>t\mid X=0,Z)p(Z\mid X=1,B(1)=1)dZ\\
\end{aligned}
$$
What results from this is according to the paper:
$$
\begin{aligned}
p(T(0)\mid B(1)=1)&=\int p(T\mid X=0,Z)p(Z\mid X=1,B(1)=1)dZ\\
&=\int p(T,Z\mid X=0)\frac{p(Z\mid X=1,B(1)=1)}{p(Z\mid X=0)}dZ\\
&=\int p(T,Z\mid X=0)\frac{p(B=1\mid Z,X=1)p(Z\mid X=1)p(X=1)}{p(Z\mid X=0)p(B=1,X=1)}dZ\\
&\text{Due to randomization }p(Z\mid X=1)=p(Z\mid X=0):\\
&\propto \int p(T,Z\mid X=0)\underbrace{p(B=1\mid Z,X=1)}_{w(Z)}dZ
\end{aligned}
$$
Now, they say in the paper that this result can be implemented via modelling for example a weighted Kaplan-Meier, where the weights $w(z)$ are modelled via logistic regression. However, they do not mention anything about integrating over the covariates $Z$. They call this approach "weighted placebo patients".
My problem is that I do not really see the connection between this "weighted placebo patients" approach to the weighted Kaplan-Meier estimator (and thus also do not see the relation to the weighted Cox-model).
Any help to clarify this is much appreciated!
|
Reasoning weighted regression via causal inference
|
CC BY-SA 4.0
| null |
2023-05-09T11:57:25.167
|
2023-05-09T11:57:25.167
| null | null |
387557
|
[
"causality",
"weighted-regression"
] |
615328
|
1
| null | null |
0
|
17
|
I am studying the effect of employment on population growth using barroregressions. The dependent variable is thus the average log population growth in Norwegian regions between 1995 and 2013.
The controls are log population in 1995 (initial population to see if there were divergence in Norwegian regions in the period 1995-2013),
share of the workforce with a higher education degree and employment in a sector (where a sector is either the production sector, the business sector or the service sector).
In the next regression I change the employment variable to be a so called "shift-share" variable after Bartik (1991). This variable removes the idiosyncratic "shocks" in the employment growth variable, and is constructed from a period before our study period. The variable is constructed for the period 1970-1990.
When we use the employment in the production variable as a control, the coefficient is 0.16 and significant at a 5%-significance level.
When we use the Bartik variable instead, the coefficient is still 0.16, but more significant (at 1%).
What does the higher significance imply, and why do you think the coefficient doesn't change?
|
Help on interpreting the results when the significance level changes, but not the coefficient
|
CC BY-SA 4.0
| null |
2023-05-09T12:02:55.237
|
2023-05-09T12:15:17.303
| null | null |
387576
|
[
"regression"
] |
615329
|
2
| null |
615328
|
1
| null |
Possibly, by removing the idiosyncratic shocks in employment growth, you remove noise in the independent variable that is unrelated to the outcome. The fitted model might give the same line with the same slope (and therefore the same coefficient), but the fit is more precise because noise in the predictor has been removed. Therefore, the p-value decreases: the independent variable is more informative for the outcome.
| null |
CC BY-SA 4.0
| null |
2023-05-09T12:15:17.303
|
2023-05-09T12:15:17.303
| null | null |
385890
| null |
615330
|
2
| null |
615313
|
1
| null |
When $x\sim\mathcal N_p(0,\sigma^2I_p)$, its density is
$$\varphi(x)=\frac{1}{(2\pi\sigma^2)^{p/2}}\,\exp\left(-\frac{\sigma^{-2}}{2}\mathbf{x}^T\mathbf{x}\right)=\frac{1}{(2\pi\sigma^2)^{p/2}}\,\exp\left(-\frac{\vert\vert \mathbf x\vert\vert^2}{2\sigma^{2}}\right)$$
The [change to Cartesian](https://en.wikipedia.org/wiki/Polar_coordinate_system#Converting_between_polar_and_Cartesian_coordinates) $\mathbf x=(x_1,\ldots,x_p)$ from [spherical coordinates](https://en.wikipedia.org/wiki/Polar_coordinate_system#Converting_between_polar_and_Cartesian_coordinates) $(\varrho,\theta_1,\ldots,\theta_{p-1})$ is given by
$$
\mathbf x=\left(\begin{matrix}
\varrho\sin\theta_1\\
\varrho\cos\theta_1\sin\theta_2\\
\quad\vdots\\
\varrho\cos\theta_1\cdots\cos\theta_{p-1}
\end{matrix}\right)
$$
and its Jacobian is
$$\varrho^{p-1}\prod_{i=1}^{p-1} \sin^{p-i-1}\theta_i$$
Hence
$$p(\varrho,\boldsymbol{\theta})= \exp\{-\varrho^2/2\sigma^2\}\,\varrho^{p-1}\prod_{i=1}^{p-1} \sin^{p-i-1}\theta_i$$
and
$$p(\varrho)= \exp\{-\varrho^2/2\sigma^2\}\,\varrho^{p-1}\int\prod_{i=1}^{p-1} \sin^{p-i-1}\theta_i\,\text d\boldsymbol{\theta}$$
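As a quick numerical sanity check of this radial density (a sketch in R; the choices $p=5$ and $\sigma=2$ are arbitrary), the norm of a simulated $\mathcal N_p(0,\sigma^2I_p)$ vector should follow $p(\varrho)\propto \varrho^{p-1}\exp\{-\varrho^2/2\sigma^2\}$, i.e. a $\sigma$-scaled chi distribution with $p$ degrees of freedom:
```
set.seed(1)
p     <- 5
sigma <- 2
xmat  <- matrix(rnorm(1e5 * p, sd = sigma), ncol = p)
rho   <- sqrt(rowSums(xmat^2))

hist(rho, breaks = 100, freq = FALSE, xlab = expression(rho), main = "")
curve(x^(p - 1) * exp(-x^2 / (2 * sigma^2)) /
        (2^(p / 2 - 1) * sigma^p * gamma(p / 2)),
      add = TRUE, col = "red", lwd = 2)
```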
| null |
CC BY-SA 4.0
| null |
2023-05-09T12:35:55.290
|
2023-05-09T13:13:37.933
|
2023-05-09T13:13:37.933
|
7224
|
7224
| null |
615331
|
1
| null | null |
4
|
52
|
I am trying to assess the effects of an experimental treatment on the insect fauna of artificial ponds. The treatment is applied to the entire pond.
I was able to sample each pond four times, each sample collected from a different spatial location within the pond, but at the same moment in time. So I have replicated similar samples from each pond.
I have used a mixed effects model approach using each sample as my lowest-level experimental unit (rows in a matrix), treatment as a fixed effect, and pond identification as a random effect.
However, recently I was advised to pool all samples from the same pond together and use the pond as my lowest-level experimental unit and treatment as a fixed effect. I feel that this approach is the simplest one, but that I would be losing statistical power just for the sake of simplicity.
Which approach should I use?
|
Should I pool multiple observations from the same experimental unit, or use mixed effects models
|
CC BY-SA 4.0
| null |
2023-05-09T12:58:34.847
|
2023-05-12T19:10:20.640
| null | null |
171805
|
[
"mixed-model",
"mixed-type-data"
] |
615332
|
1
| null | null |
0
|
29
|
I am performing a Bayesian calibration of a computer model and wondering if I am setting up my likelihood correctly. I have model output generated using Monte Carlo sampling of prior distributions of model parameters. I then assign a score ("likelihood") to each simulation in my ensemble using model-observation misfit:
\begin{equation}
s_j = \exp \left[ -\frac{1}{ 2\sigma^2 } \sum_i{ \left(f_i^j - z_i\right)^2 } \right]
\end{equation}
where $f$ is the modeled quantity, $z$ is the observed quantity, $\sigma^2$ is the misfit variance, $j$ is the index of the ensemble members, and $i$ is the index of the observations. The assumptions here are that the misfits are i.i.d. and normally distributed. I then go on to normalize the scores, $s_j$, to obtain a set of weights that are used to scale my prior distributions and obtain posteriors.
My confusion stems from the fact that, if there are many misfits that are used in the calculation of $s_j$ (i.e., there are many $i$'s in the above equation), the score, $s_j$, that's calculated for every ensemble member will be driven to zero. In general, for any value of $\sigma$, if there is a large enough sample of misfits, all of the $s_j$'s will be, essentially, zero.
What am I missing here? Am I violating some underlying assumption in how I'm constructing my likelihood and calculating $s_j$?
As a little side test, I made a Python script that generates random samples of the misfit, $\left(f_i^j - z_i\right)$, as i.i.d. samples from a normal distribution and then calculates $s_j$. With this setup, where I have 100 misfits, $s_j$ is already really small (1e-19). In my real case, I have even more misfits and the $s_j$'s are even smaller.
```
import numpy as np

mu = 0.       # mean of the misfits
sigma = 50.   # misfit standard deviation
n = 100       # number of misfits
r = np.random.normal(loc=mu, scale=sigma, size=(n,))   # i.i.d. misfit samples
s_j = np.exp( (-1. / (2. * sigma**2)) * np.nansum( r**2 ) )
```
|
Likelihood being driven to zero because of large number of model-observation misfits
|
CC BY-SA 4.0
| null |
2023-05-09T12:59:18.233
|
2023-05-09T13:02:55.647
|
2023-05-09T13:02:55.647
|
387569
|
387569
|
[
"bayesian",
"residuals",
"likelihood"
] |
615333
|
2
| null |
552262
|
0
| null |
There's insufficient information here to assess this distribution well or understand the skewness. It's possible that the data is zero-inflated, with the positive values reasonably symmetric. A more finely-resolved histogram and more information about the data would help understand it better.
| null |
CC BY-SA 4.0
| null |
2023-05-09T13:10:55.327
|
2023-05-09T13:10:55.327
| null | null |
121522
| null |
615334
|
1
| null | null |
3
|
36
|
Instrumental variable regression is estimated with `ivreg` in R. `ivreg` provides diagnostics with the `diagnostics` option in `summary`. The first stage is similar to an OLS regression where the endogenous variable is regressed on all instruments and exogenous variables to check instrument relevance. The weak instrument test is often said to be higher than 10 or 20 to indicate good instruments. It is `17.254` in the example below.
My issue is that I often get weak-instrument test statistics of around 100,000. I am looking for advice on possible causes of, and recommendations for dealing with, such "very high" weak-instrument statistics.
```
library(ivreg)
data(mtcars)
summary(ivreg(mpg ~ vs + am | disp | cyl, data = mtcars), diagnostics = TRUE)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 29.19516 5.30464 5.504 7.01e-06 ***
disp -0.04204 0.01506 -2.791 0.00936 **
vs 0.43059 2.61832 0.164 0.87056
am 0.99878 2.18173 0.458 0.65063
Diagnostic tests:
df1 df2 statistic p-value
Weak instruments 1 28 17.254 0.000278 ***
Wu-Hausman 1 27 2.346 0.137213
Sargan 0 NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.312 on 28 degrees of freedom
Multiple R-Squared: 0.7272, Adjusted R-squared: 0.698
Wald test: 26.07 on 3 and 28 DF, p-value: 2.948e-08
```
I found that this F-test is calculated as the square of the coefficient's t-statistic. Thus my question can be rephrased as: what are potential causes of very high t-statistics in the first stage?
```
summary(lm(disp ~ cyl + vs + am, data=mtcars))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -55.34 97.95 -0.565 0.576616
cyl 50.85 12.24 4.154 0.000278 ***
vs -20.55 37.52 -0.548 0.588146
am -48.24 26.02 -1.854 0.074294 .
4.154^2 = 17.25572
```
|
When is an instrument too strong in the first stage?
|
CC BY-SA 4.0
| null |
2023-05-09T12:29:09.357
|
2023-05-09T13:47:22.360
|
2023-05-09T13:47:22.360
|
184220
|
184220
|
[
"r",
"instrumental-variables",
"2sls"
] |
615335
|
2
| null |
615205
|
8
| null |
'Mean ± SD' is notation. Once you define it in a manner visible to the reader, you can use it in that manner regardless of the values.
When your statistics are skewed enough that they are positive with a standard deviation larger than the mean, the question is whether describing them in terms of mean and standard deviation is really sensible because cumulants other than mean and variance will be highly relevant for the distribution, making it significantly different from a normal distribution (for which mean and variance are the only non-zero cumulants).
Chances are that the logarithm of your positive random variable is much closer to a normal distribution, and parameterising that makes more sense.
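For example (a sketch; `x` stands for a hypothetical vector of the positive values being summarised), you can report the geometric mean together with a multiplicative standard deviation obtained on the log scale:
```
x_log <- log(x)            # x: hypothetical vector of positive values
gm    <- exp(mean(x_log))  # geometric mean
gsd   <- exp(sd(x_log))    # multiplicative ("times/divide") standard deviation
c(geometric_mean = gm, multiplicative_sd = gsd)
```
A summary of the form "geometric mean, times/divide by the multiplicative SD" then cannot suggest impossible negative values.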
| null |
CC BY-SA 4.0
| null |
2023-05-09T13:24:25.087
|
2023-05-10T14:01:55.537
|
2023-05-10T14:01:55.537
|
22047
| null | null |
615337
|
1
|
615346
| null |
0
|
69
|
We have a dataset on cancer patients who have consented to join a study after their diagnosis, which could be months or even years later. After some follow-up, an event occurs. We can fit this data using the following code in R:
```
fit <- survfit(Surv(time, status) ~ group, data = data)
```
Here, the variable time represents the time between the event and the diagnosis, while status indicates whether the event occurred or not.
However, it is important to note that any patient who enters the study must have survived from the time of diagnosis until the time of entry. If they had died during this period, they would not have been included in the study, and we would not have any information about them. This phenomenon is known as "immortal time bias."
To avoid this bias, we need to define the time period during which we are "watching" each subject. In other words, we need to set the origin as the date of diagnosis and the entry time as the date of consent. In Stata, this can be done using the `stset` command with the `enter()` and `origin()` options. What is the equivalent in R?
In short, Stata's `stset` command has the options 'start', 'stop' and 'origin'. We have a cohort where the start dates come after the origin date. What is the equivalent in R?
|
How to address 'immortal time bias' using R - equivalent to Stata stset?
|
CC BY-SA 4.0
| null |
2023-05-09T13:48:09.410
|
2023-05-10T16:37:11.930
| null | null |
6454
|
[
"r",
"survival",
"stata",
"bias-correction"
] |
615338
|
2
| null |
615322
|
3
| null |
>
... we know the amount of alive, and dead agents at time step t-1, and t. But we can't model how the survival function had progress between that interval
What you describe is a discrete-time survival model, for which there are well defined methods of analysis. Obviously the details of the survival function within each time period can't be determined, but there is no need to resign yourself to "biased" results that depend "heavily" on assumptions if your primary interest is in the associations between predictor variables and outcome.
Such survival models can be handled as binomial regressions with "long-form" "person-period" data. For each time period there is a separate row of data for each at-risk individual, specifying the time period, the individual's covariate values in place for that period, and whether or not the individual experienced the event during that period. An individual no longer at risk after a last observation time has no data rows after the last observation time, which handles right-censoring in a very straightforward way. Search this site for pages discussing discrete-time survival; [this page](https://stats.stackexchange.com/q/614270/28500) is one of many, with some links to further reading.
In particular, if you use a complementary log-log link in the binomial regression, the results are those of a "grouped proportional hazards model." You get regression coefficients for associations of covariates with outcome just as you do for a Cox proportional hazards regression in continuous time (which, as another answer notes, is seldom truly "continuous" in practice, anyway), under the same proportional hazards assumption. See [this page](https://stats.stackexchange.com/q/429266/28500) for an outline of the approach, with a link to a detailed text on discrete-time survival models.
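As a minimal sketch (the long-format data frame `person_period`, with one row per individual per at-risk period, an `event` indicator for that period, a `period` identifier and covariates `x1`, `x2`, is hypothetical), the grouped proportional hazards model is just a binomial GLM:
```
## complementary log-log link gives the "grouped proportional hazards" model;
## the covariate coefficients are log hazard ratios, as in a Cox model
fit <- glm(event ~ factor(period) + x1 + x2,
           family = binomial(link = "cloglog"),
           data = person_period)
summary(fit)
```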
| null |
CC BY-SA 4.0
| null |
2023-05-09T13:49:15.243
|
2023-05-09T13:49:15.243
| null | null |
28500
| null |
615339
|
1
| null | null |
0
|
19
|
I am trying to build my first VAR model, consisting of three time series, for forecasting, and have gotten quite far. I have run all the tests and compared models including different lags, different adjustments, inclusion of dummy variables, and different orders of differencing of the variables. But I really do not know how to weigh the different measures of strength or validity that have been suggested to me. They sometimes seem to point in different directions.
For example, after choosing a model we can check for Granger causality and we can look for residual correlation (autocorrelation). While plotting and playing around with the parameters I have noticed the following:
- Two of the series have strongly correlated errors and are likely to be cointegrated according to a Johansen test. For this I intend to look further into a VECM. However, the first difference of one of the series shows apparent heteroskedasticity. I have tried to get rid of this using different values of lambda in a Box-Cox transformation of this first difference, and also using ordinary normalization (using the function normalize() from the BETS package in R). This has been unsuccessful in two ways: 1. The transformed series looks awkward - the heteroskedasticity is gone, but the left half is everywhere above zero and the right half everywhere below zero using the recommended lambda, and no other value of lambda yields any better results; and 2. When I use this transformed variable, the Granger causality that was earlier very clear between the two series has now completely disappeared (the p-value went from 0.004 to 0.7). At the same time, the residual correlation that was earlier 0.5 is now completely gone. This seems intuitive given that the Granger causality has disappeared. However, which measure should I prioritize here? Is Granger causality more important than reducing the residual correlation?
- Another issue is that when I include a time dummy for one series (the third one) where there is an apparent structural break in levels, this clearly removes all the residual correlation between this variable and my main variable. But (!) only when I use the original, non-differenced variables. When I use the differenced series, the dummy doesn't reduce the correlation as much. So which model would you prefer here: the VAR in levels (non-differenced), where the model including the dummy yields the lowest residual correlation of all models, or the VAR with first-differenced variables, chosen simply to follow the recommendation to use differenced variables?
Which measures of the validity of a VAR model for forecasting are most important? How would you rank the different measures? Here I only mean the measures that are important in model selection, not forecast-evaluation measures such as RMSE, MAPE, etc.
Any input is appreciated!
Edit: More concisely, what is not clear to me is what measures to trust the most when building a VAR model. Is the existence of Granger causality more important than the non-existence of residual correlation?
I ask because I noticed that, after Box-Cox transforming one of the variables, residual correlation was reduced but so was Granger causality.
|
Trade-offs when building VAR models
|
CC BY-SA 4.0
| null |
2023-05-09T14:00:23.447
|
2023-05-09T21:41:22.647
|
2023-05-09T21:41:22.647
|
377742
|
377742
|
[
"forecasting",
"model-selection",
"vector-autoregression",
"cointegration",
"granger-causality"
] |
615341
|
2
| null |
348160
|
1
| null |
You can compute an exact distribution by using a repeated convolution.
Here is an example with r code:
```
n = 10
### a vector of probabilities for the number of rupees
p = rep(0,n*5+5+1)
p[1+c(0,1,3,5)] = c(11,3,1,1)/16
### loop some number of times for the number of shrubs
for (i in 2:n) {
p_new = p*0
### go through all values of the vector of probabilities and compute the new probabilities
for (j in 1:(n*5+1)) {
p_new[j+c(0,1,3,5)] = p_new[j+c(0,1,3,5)] + p[j]*c(11,3,1,1)/16
}
p = p_new
}
plot(0:(n*5), cumsum(p[1:(n*5+1)]), xlab = "number of rupees", ylab = "cumulative probability")
```
[](https://i.stack.imgur.com/H8f7A.png)
| null |
CC BY-SA 4.0
| null |
2023-05-09T14:06:52.403
|
2023-05-09T14:06:52.403
| null | null |
164061
| null |
615342
|
2
| null |
614983
|
2
| null |
This all seems sensible to me. I would explain the surprising result of a significant (at $\alpha = 0.05$) $p$-value by noting that your plot shows the whole population. It's not unreasonable that average effect of going from left to right leg (0.02 * 2 = 0.04 in whatever units your response has), while small, is still significant across the population since your data set is reasonably large. Squinting at the picture, I can convince myself that more of the lines are decreasing from left to right than increasing (since you are using sum-to-zero contrasts, a parameter estimate of 0.02 means the response for left legs is 0.02 units higher than the population mean on average).
Also note that these effects, while significant, are small relative to the standard deviations at all levels (ranging from 0.08 to 0.36), consistent with the noisy-looking picture.
To help convince yourself, try drawing boxplots of (left-right) for each subject, subdivided by age, and including the boxplot notches that indicate approximate 95% CIs for the median. Computing (left-right) will eliminate the variation in the intercept; showing the notches will contrast the inference on the average difference between groups (analogous to a standard error of the mean) with the overall variation shown by the boxes ($\approx$ IQR, analogous to the standard deviation).
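For example, a sketch of that plot, assuming a long-format data frame `dd` with one row per measured leg and columns `sbjID`, `leg` ("L"/"R"), `ageGroup` and the response `y`:
```
## per-subject left-right differences
wide <- reshape(dd[, c("sbjID", "ageGroup", "leg", "y")],
                idvar = c("sbjID", "ageGroup"), timevar = "leg",
                direction = "wide")
wide$diff <- wide$y.L - wide$y.R

## notches approximate 95% CIs for the median difference in each age group
boxplot(diff ~ ageGroup, data = wide, notch = TRUE,
        xlab = "age group", ylab = "left - right")
abline(h = 0, lty = 2)
```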
| null |
CC BY-SA 4.0
| null |
2023-05-09T14:11:25.757
|
2023-05-09T14:11:25.757
| null | null |
2126
| null |
615343
|
1
|
615402
| null |
1
|
26
|
I am trying to generate random parameters for the lognormal distribution in order to test parameter sensitivity and for simulating (predicting) future survival rates, using the `lung` dataset. I realize the optimal fit for `lung` is Weibull, but I am trying to get my arms around lognormal in a survival environment (I have other data that does fit lognormal). I am apparently misusing either, or both, of the `mvrnorm()` and `plnorm()` functions in the `### lognormal distribution ###` section of the code presented at the bottom of this post.
The `### weibull distribution ###` section of the code, which works well (output illustrated in the first frame of the image below), is from the answer provided in [How to generate multiple forecast simulation paths for survival analysis?](https://stats.stackexchange.com/questions/614198/how-to-generate-multiple-forecast-simulation-paths-for-survival-analysis;) I tried adapting that for lognormal and get the error shown and explained in the second frame of the image below.
Please, could someone offer guidance for correctly using the lognormal distribution in this context, in the manner the Weibull distribution is (correctly) used? Later, after lognormal, I will similarly experiment with gamma and exponential distributions for survival analysis.
[](https://i.stack.imgur.com/mhQ8K.png)
Code:
```
library(MASS)
library(survival)
### weibull distribution ###
weibCurve <- function(time, survregCoefs) {
exp(-(time/exp(survregCoefs[1]))^exp(-survregCoefs[2]))
}
# fit Weibull
fit1 <- survreg(Surv(time, status) ~ 1, data = lung)
# plot raw data as censored
plot(survfit(Surv(time, status) ~ 1, data = lung),
xlim = c(0, 1000), ylim = c(0, 1), bty = "n",
xlab = "Time", ylab = "Fraction surviving")
# overlay Weibull fit
curve(weibCurve(x, fit1$icoef), from = 0, to = 1000, add = TRUE, col = "red")
# repeat the following to add randomized predictions for periods >= 500
newCoef <- MASS::mvrnorm(n = 1, fit1$icoef, vcov(fit1))
curve(weibCurve(x, newCoef), from = 500, to = 1000, add = TRUE, col = "blue", lty = 2)
### lognormal distribution ###
# fit lognormal
fit2 <- survreg(Surv(time, status) ~ 1, data = lung, dist = "lognormal")
mu <- fit2$coef
sigma <- fit2$scale
# plot raw data as censored
plot(survfit(Surv(time, status) ~ 1, data = lung),
xlim = c(0, 1000), ylim = c(0, 1), bty = "n",
xlab = "Time", ylab = "Fraction surviving")
# overlay lognormal fit
x = seq(from = 1, to = 1000, by = 1)
curve(plnorm(x,meanlog=mu,sdlog=sigma,lower.tail=FALSE),from=0,to=1000,add=TRUE,col="red")
# repeat the following as needed to add randomized predictions for periods >= 500
newCoef <- MASS::mvrnorm(n = 1, fit2$icoef, vcov(fit2))
x = seq(from = 500, to = 1000, by = 1)
curve(plnorm(x,meanlog=newCoef[1],sdlog=newCoef[2],lower.tail=FALSE),
from = 500, to = 1000, add = TRUE, col = "blue", lty = 2)
```
|
How to generate random values representing lognormal parameters for simulating the lognormal distribution using survival data in R?
|
CC BY-SA 4.0
| null |
2023-05-09T14:12:14.467
|
2023-05-10T06:59:21.767
| null | null |
378347
|
[
"r",
"survival",
"multivariate-normal-distribution",
"lognormal-distribution",
"weibull-distribution"
] |
615344
|
1
| null | null |
1
|
164
|
Textbook literature often denotes the estimated weight matrix of a linear regression model $y = Wx + \epsilon, \epsilon \sim \mathcal{N}(0,\sigma^2)$ by $\hat W$ because of the inherent variability of estimating it from a finite sample. Limited access to all possible instances hinders our ability to find the true matrix $W$. Additionally, the relationship between the inputs ($x$) and the output ($y$) might not be linear, resulting in uncertainty represented by the random variable $\epsilon$. Incorporating both sources of randomness into the estimate of $W$, we can see that $\epsilon$ is random $\rightarrow y$ is random $\rightarrow \hat W$ is random. Therefore, $\hat W | data \sim \mathcal{N}(W,\sigma^2(X^TX)^{-1})$, where $X \in \mathbb{R}^{n \times p}$, and $p$ is the number of features while $n$ is the sample size.
- Do you agree with me that in this model, we have two sources of randomness/uncertainty: the sample size variation and the uncertainty induced by the input-output relationship?
- As $n \rightarrow \infty$, $(X^TX)^{-1}$ approaches the true population inverse covariance matrix of the feature vector, if the features are naturally random. However, if the features are deterministic (hence the randomness of the input is due to the sample size, not to inherent randomness in the input), then I guess each element of the covariance matrix of the estimate of $W$ should go to zero, indicating
that the uncertainty about the model weight matrix becomes
negligible. However, I think it will converge to constant values, but what do those values mean? In this case, I think the model projects spurious randomness, and this should be appropriately treated.
- If the distribution of $\hat W$ accounts for all sources of randomness, Why do we need a Bayesian approach to estimate the weight then? There is already a way to estimate the uncertainty when predicting a new instance: $p(y_{test} | Data) = \int p(y_{test} | \hat W,x_{test})p(\hat W|data) d \hat W$, where $p(\hat W|data)\sim \mathcal{N}(W,\sigma^2(X^TX)^{-1}$) (assuming that I do have the model $p(y_{test} | \hat W,x_{test})$. I understand that the original assumption in this modelling is that we have true value of $W$ which is the true mean of the distribution of $\hat W$ and based on this assumption, we should only use that value. However, I am a bit confused of why we can't model the uncertainty this way.
|
The uncertainity about the weight matrix in linear model
|
CC BY-SA 4.0
| null |
2023-05-09T14:19:57.187
|
2023-05-11T20:03:12.667
|
2023-05-11T20:03:12.667
|
296047
|
296047
|
[
"linear-model",
"uncertainty"
] |
615345
|
2
| null |
615205
|
1
| null |
You use mean ± SD to summarize the distribution of your data and mean ± SE to indicate the uncertainty of your estimate of the mean. However, mean ± SD might provide a bad summary of the distribution, as seems to be the case for your data. Then you must look around for other descriptors to provide the shorthand summary. If space is not an issue, show the distribution with a histogram, density plot or whatever. It might be worth the effort to identify the distribution of your data (negative binomial, Poisson, or whatever) and provide the distribution parameters as a summary.
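For example (a sketch; `x` is a hypothetical vector of counts), `MASS::fitdistr()` will fit a candidate distribution and return its parameters, which can then serve as the summary:
```
library(MASS)

hist(x, breaks = 30)                       # look at the shape first
fit <- fitdistr(x, "negative binomial")    # or "Poisson", "geometric", ...
fit$estimate                               # report these parameters instead of mean ± SD
```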
| null |
CC BY-SA 4.0
| null |
2023-05-09T14:22:28.260
|
2023-05-09T14:22:28.260
| null | null |
344547
| null |
615346
|
2
| null |
615337
|
2
| null |
You are correct that this needs to be taken into account. Anyone entering the study at some time after diagnosis provides no information about what might have happened during the intervening time. If diagnosis date is the `time=0` reference for the survival model, then time to study entry needs to be treated as a left truncation of survival time.
This situation can be handled with the counting-process data format for survival. The simplest way to proceed is to set `time=0` for each individual to the date of diagnosis and express other times relative to that time. That sets the time origin for the observations. If calendar date of diagnosis is a potential predictor, that calendar date can be coded separately.
Then you can specify the outcome variable for the regression model as `Surv(startTime, stopTime, event)`. That is interpreted as left truncation at `startTime` and either right censoring or an event at `stopTime`, depending on the value of `event`. This counting-process data format also simplifies many other types of survival analysis, as explained in Section 3.7 of [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842). Section 3.7.3 in particular discusses a situation similar to what you describe.
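A minimal sketch of that setup (the data frame `dat`, with per-patient dates of diagnosis, consent and last follow-up plus an event indicator `status` and a `group` variable, is hypothetical):
```
library(survival)

## time origin = date of diagnosis; all times in days since diagnosis
dat$entry <- as.numeric(dat$consent_date - dat$diagnosis_date)
dat$exit  <- as.numeric(dat$last_fu_date - dat$diagnosis_date)

## left truncation at 'entry'; event or right censoring at 'exit'
fit_km  <- survfit(Surv(entry, exit, status) ~ group, data = dat)
fit_cox <- coxph(Surv(entry, exit, status) ~ group, data = dat)
```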
| null |
CC BY-SA 4.0
| null |
2023-05-09T14:24:26.640
|
2023-05-09T14:24:26.640
| null | null |
28500
| null |
615347
|
2
| null |
615331
|
4
| null |
Murtaugh (2007) makes the point that if you have a pooled (all samples within a group share the same predictor variables), balanced (all groups are sampled at the same intensity) design and a linear (responses treated as Gaussian) model, then you will get exactly the same inferences at the population level (i.e. about differences between treatments) whether you average the samples within each group or fit a multi-level model. He recommends averaging because you're less likely to make a mistake in the analysis.
For example:
```
library(lmerTest)
## simulate data
dd <- expand.grid(pond = factor(1:10),
sample = factor(1:4))
dd$treat <- ifelse(as.numeric(dd$pond) <= 5, "C", "T")
dd$y <- simulate(~ treat + (1|pond),
seed = 101,
newdata = dd,
newparams = list(beta = c(1, 2),
theta = 1,
sigma = 1))[[1]]
## aggregate data to pond level
dd2 <- aggregate(y ~ pond, data = dd, FUN = mean)
dd2 <- merge(dd2, unique(dd[c("pond", "treat")]))
## fit both models (could also use nlme::lme())
mod1 <- lmer(y ~ treat + (1|pond), dd)
mod2 <- lm(y ~ treat, dd2)
## results
printCoefmat(coef(summary(mod1)), digits = 3)
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 0.899 0.330 8.000 2.72 0.0262 *
treatT 2.260 0.467 8.000 4.84 0.0013 **
---
printCoefmat(coef(summary(mod2)), digits = 3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.899 0.330 2.72 0.0262 *
treatT 2.260 0.467 4.84 0.0013 **
```
- If your responses are pooled but not balanced you may be able to use weights to fit the aggregated data
- If your design is not pooled (e.g. a randomized complete block design) but you only have two levels of the treatment you may be able to analyze within-block differences
- If your responses are non-Gaussian things get messy
- If you're actually interested in quantifying the variability at different levels (between-pond vs. within-pond) then you need the multilevel model
Murtaugh, Paul A. 2007. “Simplicity and Complexity in Ecological Data Analysis.” Ecology 88 (1): 56–62.
| null |
CC BY-SA 4.0
| null |
2023-05-09T14:28:56.963
|
2023-05-09T14:37:55.760
|
2023-05-09T14:37:55.760
|
2126
|
2126
| null |
615348
|
1
| null | null |
1
|
52
|
Problem Setting/Context:
I have feedback (each feedback has multiple sentences) associated with different products (you can safely assume that a feedback talks about one single product), and I need to assign a continuous score to each feedback. For simplicity, let's assume that this score can lie between 0 and 1 (on a continuous scale), where 0 is negative, 0.5 is neutral and 1 indicates positive.
Note that none of the feedback has any labels associated with it, so I will have to build a model for any sort of prediction. This makes it challenging, since assigning continuous scores to a feedback is difficult even if I were to label some data for fine-tuning/training a model to output continuous scores.
Need:
The reason for moving towards a continuous score is to make the scoring more granular instead of just saying positive, negative or neutral. Additionally, this opens up the option of adding your own threshold to categorize the feedback if needed at a later point, or of using these scores to identify how sentiments are trending over time.
Some Ideas I have explored:
- One possible way is to train a model to categorize each sentence in the feedback discretely as positive (2), negative (0) or neutral (1) and then average these to get a continuous value for the feedback.
Sample Feedback:
This Product is amazing. This product is not so good in some areas.
Then the score for the feedback would be (2+0)/2 = 1
---
- Another possible approach which extends from the above approach could be using Prediction Probability and then taking the mean.
Sample Feedback:
This Product is amazing but underperforms in some areas. The durability of the product is okay.
The scoring would be as follows:
[](https://i.stack.imgur.com/ucZaK.png)
Question:
- Are there any ways to directly get continuous scores out of models on Hugging Face or any other machine learning libraries? I already know that VADER provides a continuous score, but it is dictionary-based, which comes with its own set of problems.
- Please provide your input on the approaches above: specifically, from a conceptual standpoint (machine learning concepts), do these make sense, or is something incorrect? Feel free to list any drawbacks/problems you see.
FYI: I understand that for most systems a 5-point rating system works fine, and that moving to a continuous scale might raise a few questions about the interpretation of these scores, but this is where we are at. Please feel free to add any suggestions you might have on this.
|
Sentiment Analysis with Continuous Output Labels
|
CC BY-SA 4.0
| null |
2023-05-09T14:57:53.857
|
2023-05-12T12:01:20.783
|
2023-05-12T12:01:20.783
|
307980
|
307980
|
[
"classification",
"natural-language",
"sentiment-analysis"
] |
615349
|
1
| null | null |
0
|
52
|
I am a masterstudent in behavioural sciences, currently writing my thesis. Unfortunately statistics isn't my biggest talent, so I was hoping someone out here could help me with an issue I am facing.
I have a dataset in SPSS with missing values for different variables. I ran a Missing Value Analysis using EM but SPSS computed impossible values for the missing data in some cases. For example: -4 on a variable that has a range of answer-options from 1 to 7 or -10 on a variable that is a percentage. How do I make sure SPSS computes values that are fitting to my dataset?
Thank you!
|
Issue: SPSS computes impossible data with Expectation Maximization for missing values
|
CC BY-SA 4.0
| null |
2023-05-09T15:10:43.480
|
2023-05-09T15:10:43.480
| null | null |
387587
|
[
"spss",
"missing-data",
"expectation-maximization"
] |
615350
|
1
| null | null |
0
|
14
|
I can estimate a linear probability model on about 126,000 firm-quarter observations, but estimating the same model using logistic regression drops the usable sample to only about 75,000 observations because of complete separation of data points (certain state fixed effects have no variation in the dependent variable). Do the 51,000 observations that cannot be used in the logistic regression still add value to the estimation of my model under LPM? My question is partially motivated by the fact that the LPM produces statistically significant results, but results under logistic regression are not quite significant, and I'm wondering if it could be a power issue.
|
LPM vs. logistic regression with complete separation of data points
|
CC BY-SA 4.0
| null |
2023-05-09T15:11:57.550
|
2023-05-09T15:11:57.550
| null | null |
387586
|
[
"logistic",
"nonlinear-regression"
] |
615351
|
1
| null | null |
0
|
22
|
I'm looking to understand this simple model in R:
m1 <- lmer(Y ~ exam_type + (1 + exam_type | subject ) )
Assume we have a dataset where:
- Y is numeric
- exam_type is a factor with 5 levels, let's say
- subject is also a factor, with many levels ( you can interpret this as a student id )
Specifically, could someone please explain:
- What are the assumptions of m1 above, compared, for example, with the following linear model:
---- m2 <- lm(Y ~ exam_type + subject + prev_test_score) ( assume prev_test_score is some other numerical variable).
I understand the assumptions of m2 : we assume there's a linear relationship between Y and all the features involved in the prediction, we assume errors are iid gaussian, and we simply estimate coefficients
- What does m1 estimate? It looks like it estimates coefficients, but just for exam_type. I'm confused because I thought that different subjects would have different coefficients for exam_type, in this formulation.
Thanks a lot for the help! Most important for me would be a clear formulation of the assumptions of the m1 formulation above, and what exactly we are estimating. Thanks a lot!
|
Interpreting a simple linear mixed error model in R lmer
|
CC BY-SA 4.0
| null |
2023-05-09T15:24:39.743
|
2023-05-09T15:24:39.743
| null | null |
74056
|
[
"regression",
"mixed-model",
"multiple-regression",
"lme4-nlme"
] |
615352
|
2
| null |
614983
|
1
| null |
It seems like you have only one measurement for each leg of every patient. Hence the `leg %in% sbjID` matches with the residual. Use only `(1|sbjID)` as random effect.
The fixed effect structure assumes that there might be a systematic difference between the right and left legs. Is there any biological reason why you would expect that?
| null |
CC BY-SA 4.0
| null |
2023-05-09T15:27:20.037
|
2023-05-09T15:27:20.037
| null | null |
12360
| null |
615353
|
2
| null |
615344
|
2
| null |
- Notice that the sample covariance is $S_n = \frac{1}{n}\sum_i x_i x_i^T=\frac{1}{n}X^TX$. If the features are drawn from a distribution with covariance matrix $\Sigma$, then $S_n$ converges in probability to $\Sigma$ as $n$ approaches infinity. In other words, $(X^TX)^{-1} = \frac{1}{n}S_n^{-1} \approx \frac{1}{n}\Sigma^{-1} \to 0$ as $n \to \infty$ (see the sketch after this list).
- In linear regression the features are usually considered as fixed. In particular, the distribution of the estimator $\hat W \sim \mathcal N(W,\sigma^2(X^TX)^{-1})$ is due to the randomness of $y$ only. (Intuitively, it is the distribution you will get by sampling $y$ over and over with the same features). If you would like to consider how $\hat W$ varies due to randomness of the features, you will have to additionally model the distribution of $x$, which will result in a much more complicated distribution of $\hat W$ (and probably impossible to find in closed form).
- The estimator $\hat W$ is fixed by the data, so the expression $p(\hat W|data)$ doesn't make a lot of sense (it is just equal to 1). To make sense of it you would have to treat $W$ itself as the random variable, which is exactly the Bayesian approach.
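A quick numerical illustration of the first point (a sketch; the 3-dimensional standard normal features are an arbitrary choice):
```
set.seed(1)
max_entry_inv <- function(n, p = 3) {
  X <- matrix(rnorm(n * p), ncol = p)   # features drawn i.i.d. from N(0, 1)
  max(abs(solve(t(X) %*% X)))           # largest entry of (X'X)^{-1}
}
sapply(c(1e2, 1e3, 1e4, 1e5), max_entry_inv)   # shrinks roughly like 1/n
```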
| null |
CC BY-SA 4.0
| null |
2023-05-09T15:59:36.653
|
2023-05-09T15:59:36.653
| null | null |
348492
| null |
615354
|
1
| null | null |
1
|
22
|
I am using the MLMED macro to test a 2-1-1 moderated mediation model where repeated measures (Level 1) are nested within individuals (Level 2). Theoretically, my moderator is on Level 1 as it changes with time, but I want to model it as a Level 2 moderator in order to be able to use MLMED (MLMED cannot handle level 1 moderators).
My understanding is that they did something similar in the following article:
[https://doi.org/10.1002/cncr.33850](https://doi.org/10.1002/cncr.33850)
This is, the moderators are on Level 1 (ie., fatigue, depression, physical activity change with time), but MLMED should have treated them as being on Level 2.
How should I implement this in my analysis? Should I create aggregate scores for the moderator variables (the average for each participant across time points) before running the analysis or should I use participants' scores at all time points? Originally, I thought that it wouldn't make a difference on the grounds that MLMED was using the average score of each participant across time points (since it's a between group effect on level 2). However, I have tried out both ways with my data and I get different results.
Practically, in which of the following ways should my dataset be structured?
Participant# - Time - Moderator
1 - 1 – score of participant 1 at T1
1 - 2 – score of participant 1 at T2
1 - 3 – score of participant 1 at T3
2 - 1 – score of participant 2 at T1
2 - 2 – score of participant 2 at T2
2 - 3 – score of participant 2 at T3
OR
Participant# - Time - Moderator
1 - 1 – average score of participant 1 across T1-T3
1 - 2 – average score of participant 1 across T1-T3
1 - 3 – average score of participant 1 across T1-T3
2 - 1 – average score of participant 2 across T1-T3
2 - 2 – average score of participant 2 across T1-T3
2 - 3 – average score of participant 2 across T1-T3
I hope my question is clear enough. Your help is very welcome!
|
2-1-1 moderated mediation model with MLMED macro
|
CC BY-SA 4.0
| null |
2023-05-09T16:09:53.567
|
2023-05-09T16:09:53.567
| null | null |
387589
|
[
"mixed-model",
"interaction",
"spss",
"multilevel-analysis"
] |
615355
|
1
| null | null |
0
|
25
|
I have a dataset with a dependent categorical variable N = {1,2,3}, and five independent continuous variables. I am running multinomial logistic regressions for all independent variables together and alone, but I also want to find an optimal model (similar to minimum AIC). However, I appear to be unable to find a way perform model selection in Stata which is compatible with multinomial logistic regression. Is there any way to do this in Stata?
|
Model selection for multinomial logistic regression in Stata
|
CC BY-SA 4.0
| null |
2023-05-09T16:27:47.700
|
2023-05-09T16:27:47.700
| null | null |
372002
|
[
"model-selection",
"stata",
"multinomial-logit"
] |
615356
|
2
| null |
615337
|
2
| null |
This is an interesting problem. As an R question it would, of course, be off topic for the site. You already have one solution from EdM regarding setting the `starttime` and `stoptime` variables in the `Surv` command: this effectively defines the time-dependent risk set treating diagnosis date as Day 0 or Time 0. What you are modeling with this approach is a flexible non-parametric baseline hazard function as a function of time from diagnosis and not time from entry into study.
The problem is that there are many "times" which can be predictive of outcomes: there is for instance the patient's age, or the calendar year, or time from entry into study (either initiation of treatment or signing informed consent). These are broadly referred to as the age, period, and cohort effects.
You might be surprised to learn that left truncation as a solution to immortal time bias is less common than you expect. For instance, in every cancer study I have supported, not one has modeled event time from diagnosis or time from last treatment. The truth is, if the "gap" exists for every patient in the sample, the associated Nelson-Aalen curve is very misleading because the incidence is, by definition, 0 per person-time from time 0 until the first observed failure. Note: you can't even estimate a Kaplan-Meier because not all subjects are at risk (for a measurable event) at time 0. Adjusting the start and stop times affects whom you compare to whom, but this approach still cannot account for subjects who died prior to inclusion in the sample. If you are lucky enough to sample a few subjects in the early time period, it's important to handle these events appropriately, and in this case, EdM's approach is a valid --but not the only-- approach to get a stable estimate of incidence over time. The approach should also be justified by the proportionality of the hazard ratios as assessed by plots and tests if need be.
I actually disagree with the way most time-to-event analyses are conducted for cancer studies, but not for that reason! As you know, a Cox model is a very flexible model in that it is semi-parametric in how it handles an arbitrary baseline hazard function (as a default or in multiple groups via stratification), and it handles censoring. A Cox model also allows for adjustments like in a regression model. I actually believe time from entry into study should be the time 0 because this time is the most "artificially" constructed time; that is to say, when generalizing the results there is no need to "predict" time from entry into study for a patient (not subject) who's not actually in a study. For generalizable times like time from diagnosis, calendar year, etc. you can simply adjust for these as variables in the model. Depending on the sample size, you can use more and more sophisticated approaches like splines to estimate other time-dependent incidence parameters.
Stephen Senn has written extensively on the benefits of adjusting for prognostic variables, even in randomized ITT analyses. I agree with him and would suggest, depending on the analysis, and the nature of the functional shape of the hazard function, entry-into-study is a reasonable choice of time 0, and other predictive "times" can be handled as variables in the model. Note: these would not be time varying functions since, from study entry, age, cohort, and period all move in unison, so the typical adjustments would be "age at study entry" or "years since initial diagnosis at study entry" or "drug-free interval" all measured in clinically relevant units.
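Concretely, that modelling choice might look like the following sketch (the data frame and variable names are hypothetical):
```
library(survival)

## time 0 = entry into the study; other "times" enter as baseline covariates
fit <- coxph(Surv(time_from_entry, status) ~ treat + age_at_entry +
               years_since_dx + drug_free_interval,
             data = dat)
```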
| null |
CC BY-SA 4.0
| null |
2023-05-09T16:30:03.733
|
2023-05-10T16:37:11.930
|
2023-05-10T16:37:11.930
|
8013
|
8013
| null |
615358
|
1
| null | null |
1
|
16
|
I have repeated measures (two assessments) of an independent variable (binomial/two different groups), but the dependent variables (continuous) were measured at only one assessment - what analysis approach would be appropriate?
I checked if there are significant differences regarding those variables between the groups. I also did logistic regression analysis.
But is there another approach which could analyze the relationship between these variables, or whether the dependent variables influence the group membership? Would ANOVA or ANCOVA be appropriate?
|
Repeated measurment - dependent variable only at one assessment point
|
CC BY-SA 4.0
| null |
2023-05-09T16:52:35.717
|
2023-05-13T14:39:00.047
|
2023-05-13T14:39:00.047
|
387590
|
387590
|
[
"repeated-measures"
] |
615359
|
1
| null | null |
0
|
50
|
I have a generalized additive mixed model (GAMM) I'm using for modelling fish counts and many covariates to test (with different proportions of NA's in each covariate), but not enough data to include them all in the same model at once. I'm aware of, but don't fully understand, the `select=TRUE` method and have also been taught to remove variables when the EDF is at or slightly below 1, and clearly not significant (or a parametric estimate is very close to zero). I'd like to know what are some things to keep in mind when removing variables in this way and how it differs from `select=TRUE`. Is the 2nd method OK if the sample size isn't large enough to accommodate more than a few covariates (in addition to random effects)?
My goal is to explain the data with the minimum number of predictor variables.
EDIT:
Select=FALSE:
```
mod <- bam(num ~
CYR.std * Season +
sed_depth * ave_hw +
total_ave_ma +
s(ave_tt) +
s(temp) +
s(DO) +
s(bottom_canopy) +
# Structural components
s(Site, bs = "re") +
s(Site, fCYR, bs = "re") +
s(Site, CYR.std, bs = "re") +
s(Season, fCYR, bs = "re") +
offset(log(area_sampled)),
data = toad,
method = 'fREML',
# method = "ML",
discrete = TRUE, # discretization only available with fREML
select = FALSE,
family = poisson,
control = list(trace = TRUE))
> summary(mod)
Family: poisson
Link function: log
Formula:
num ~ CYR.std * Season + sed_depth * ave_hw + total_ave_ma +
s(ave_tt) + s(temp) + s(DO) + s(bottom_canopy) + s(Site,
bs = "re") + s(Site, fCYR, bs = "re") + s(Site, CYR.std,
bs = "re") + s(Season, fCYR, bs = "re") + offset(log(area_sampled))
Parametric coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.6363582 1.3406977 -3.458 0.000544 ***
CYR.std 0.0233992 0.1144859 0.204 0.838053
SeasonWET 0.9901602 1.5486474 0.639 0.522582
sed_depth 0.0137553 0.0046723 2.944 0.003240 **
ave_hw 0.0331158 0.0149016 2.222 0.026263 *
total_ave_ma 0.0222740 0.0062046 3.590 0.000331 ***
CYR.std:SeasonWET -0.0498692 0.1380378 -0.361 0.717896
sed_depth:ave_hw -0.0006050 0.0002818 -2.147 0.031788 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Approximate significance of smooth terms:
edf Ref.df Chi.sq p-value
s(ave_tt) 1.9033 2.294 15.164 0.000946 ***
s(temp) 1.0000 1.000 0.010 0.922106
s(DO) 2.1938 2.736 3.398 0.250328
s(bottom_canopy) 1.0000 1.000 0.043 0.836254
s(Site) 0.9325 47.000 1.355 0.506944
s(fCYR,Site) 90.5698 511.000 347.037 < 2e-16 ***
s(CYR.std,Site) 21.0667 46.000 147.907 0.001109 **
s(Season,fCYR) 11.8387 16.000 90.683 0.000468 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-sq.(adj) = 0.764 Deviance explained = 70.3%
fREML = 1132.4 Scale est. = 1 n = 856
```
|
GAM selection via EDF vs. adding penalties
|
CC BY-SA 4.0
| null |
2023-05-09T17:06:33.593
|
2023-05-09T20:54:47.310
|
2023-05-09T20:54:47.310
|
337106
|
337106
|
[
"r",
"regression",
"model-selection",
"regularization",
"generalized-additive-model"
] |
615360
|
1
| null | null |
4
|
60
|
I am looking to develop a neural network in which the output from one set of nodes is "orthogonal" to the output from another set of nodes in the sense that if the first set of nodes is used to predict A then the second set of nodes is uninformative of A. For example:
```
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input, Lambda
from tensorflow.keras import backend as K
# Define the size of the input
input_size = 10 # Replace this with the size of your input
# Input layer
inputs = Input(shape=(input_size,))
# Dense layer
dense = Dense(8, activation='relu')(inputs)
# Split the dense layer into two halves
dense1, dense2 = Lambda(lambda x: tf.split(x, num_or_size_splits=2, axis=1))(dense)
# Classification layers
output1 = Dense(1, activation='sigmoid')(dense1)
output2 = Dense(1, activation='sigmoid')(dense2)
# Create the model
model = Model(inputs=inputs, outputs=[output1, output2])
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Print the model summary
model.summary()
```
I am looking for a way to ensure that `dense1` is uninformative of `output2` and `dense2` is uninformative of `output1`.
Any ideas on how this can be achieved?
|
Orthogonal embeddings
|
CC BY-SA 4.0
| null |
2023-05-09T17:10:12.633
|
2023-05-10T02:02:29.383
|
2023-05-09T20:54:35.993
|
204868
|
204868
|
[
"neural-networks"
] |
615361
|
1
| null | null |
1
|
49
|
Show that for testing the hypothesis $H_{0}: \theta_{1} \leq \theta \leq \theta_{2}$ versus $H_{1}: \theta<\theta_{1}$ or $\theta>\theta_{2}$, or the hypothesis $H_{0}:\theta=\theta_{0}$ versus $H_{1}: \theta\ne\theta_{0}$, in the one-parameter exponential family, the UMP tests do not exist.
We can find specific examples showing that UMP tests fail to exist. In general, how can one prove theoretically that the UMP tests do not exist? I think a proof by contradiction will come into play, as in the specific cases; are there any suggestions for how to construct the contradiction?
|
The existence of a UMP test in one-parameter exponential families
|
CC BY-SA 4.0
| null |
2023-05-09T17:12:35.743
|
2023-05-09T17:12:35.743
| null | null |
371966
|
[
"hypothesis-testing",
"mathematical-statistics",
"inference"
] |
615362
|
1
| null | null |
0
|
35
|
I'm trying to solve the following: Given a standard one-variable linear regression model $$y = \beta_1 x + \beta_0 + \epsilon$$ where $\beta_1, \beta_0\in\mathbb{R}$ and $\epsilon\sim N(0, \sigma^2)$ with $n$ sample data points $X\in\mathbb{R}^n$, how to calculate the expected value of $R^2$? Thanks!
WLOG, assume the model has no bias. $$y = \beta x + \epsilon$$
$$\hat{\beta} = \frac{X^T Y}{X^T X} = \frac{\beta ||X||^2 + X^T\epsilon}{||X||^2}$$
$$\hat{Y} = \hat{\beta}X = \beta X + \frac{X^T\epsilon X}{||X||^2}$$
$$SST = ||Y - \bar{Y}||^2$$
$$SSR = ||\hat{Y} - \bar{Y}||^2$$
$$R^2 = \frac{SSR}{SST}$$
But the denominator also has a random part, so I can't simply use linearity of expectation to solve this problem. How should I proceed?
|
Expected value of $R^2$ for univariate regression
|
CC BY-SA 4.0
| null |
2023-05-09T17:46:45.573
|
2023-05-09T18:57:38.193
|
2023-05-09T18:57:38.193
|
387591
|
387591
|
[
"regression",
"univariate"
] |
615363
|
1
|
615365
| null |
1
|
30
|
I have three times series (X, Y and Z) with a length of 2000 observations. I am fitting three different models:
- $Y_t = \beta_0 + \beta_1 Y_{t-1} + \epsilon_t$;
- $Y_t = \beta_0 + \beta_1 Y_{t-1} + \beta_2 X_{t-1} + \epsilon_t$;
- $Y_t = \beta_0 + \beta_1Y_{t-1} + \beta_2 Z_{t-1} + \epsilon_t$.
The adjusted $R^2$ of the second model is identical to that of the first model ($0.607)$, while the adjusted $R^2$ from the third is higher ($0.638$). However, when I employ a rolling window of a thousand observations and compute the one-step-ahead forecasts for the three models, comparing them by $MSE$, there is no significant difference in performance. This is counterintuitive to me, since the in-sample $R^2$ from the third model is significantly higher than that of the second, while no regressor is being added (hence there is no over-fitting).
|
Best model in-sample is the worst out-of-sample
|
CC BY-SA 4.0
| null |
2023-05-09T17:49:15.710
|
2023-05-09T18:36:08.043
| null | null |
313581
|
[
"regression",
"forecasting",
"r-squared"
] |
615364
|
1
| null | null |
0
|
22
|
I know that given a moving average process in the form $$X_t= \Theta(B) \varepsilon_t$$then $X_t$ is invertible if the roots of $\Theta(B)$ are outside the unit circle. But what about something in the form
$$X_t= a + \Theta(B) \varepsilon_t ?$$
It seems to me that $X_t -a $ is invertible if the roots of $\Theta(B)$ are outside the unit circle but intuitively nothing guarantees that $X_t$ is invertible too.
|
Invertibility of "sum" of processes
|
CC BY-SA 4.0
| null |
2023-05-09T18:11:38.347
|
2023-05-09T18:11:38.347
| null | null |
361672
|
[
"time-series",
"moving-average"
] |
615365
|
2
| null |
615363
|
1
| null |
>
no regressor is being added (hence there is no over-fitting)
This is where you are going wrong. Model B can be overfit compared to model A even if the models are non-nested. The argument that "overfitting can only occur when predictors are added" is incorrect.
As an illustration, imagine that you are randomly generating a time series $(Z_t)$ and fitting your third model... and that you are doing this many times, and keeping the one realization of $(Z_t)$ that gave you the highest $R^2$ in-sample. Since the $(Z_t)$ is random, its future realizations will not improve your forecasts, and the high $R^2$ is simply due to overfitting compared to any other model.
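Here is a small simulation sketch of that thought experiment: an AR(1) series is fitted with 200 purely random candidate regressors, the candidate with the best in-sample adjusted $R^2$ is kept, and its apparent in-sample advantage does not translate into a lower out-of-sample MSE.
```
set.seed(42)
n   <- 2000
y   <- arima.sim(list(ar = 0.7), n = n + 1)
dat <- data.frame(y = y[-1], ylag = y[-(n + 1)])
train <- 1:1000
test  <- 1001:n

best_r2 <- -Inf
for (k in 1:200) {                      # try 200 random candidate regressors
  z   <- rnorm(n)
  fit <- lm(y ~ ylag + z, data = cbind(dat, z = z), subset = train)
  r2  <- summary(fit)$adj.r.squared
  if (r2 > best_r2) { best_r2 <- r2; best_z <- z; best_fit <- fit }
}

base_fit <- lm(y ~ ylag, data = dat, subset = train)
mse <- function(f, newdat) mean((newdat$y - predict(f, newdat))^2)
c(in_sample_r2_gain = best_r2 - summary(base_fit)$adj.r.squared,
  mse_base = mse(base_fit, dat[test, ]),
  mse_best = mse(best_fit, cbind(dat, z = best_z)[test, ]))
```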
| null |
CC BY-SA 4.0
| null |
2023-05-09T18:36:08.043
|
2023-05-09T18:36:08.043
| null | null |
1352
| null |
615366
|
2
| null |
615326
|
1
| null |
Hypothesis testing is only appropriate if we estimate a population parameter, i.e., if there is a "true" population parameter (e.g., a group mean, or a coefficient in a linear relationship), which we don't know, but can estimate based on a random sample of data and a hypothesized relationship. The null hypothesis then is that the true parameter is equal to some specific value, or less than/greater than a particular value. The test tells us whether the estimate is consistent with the null hypothesis or not.
In your case, you know the true value, so you don't need to hypothesize about it. If you want to analyze whether it differs from some other value, all you have to do is look at whether there is mathematical equality or not.
| null |
CC BY-SA 4.0
| null |
2023-05-09T18:40:05.437
|
2023-05-09T18:40:05.437
| null | null |
1352
| null |
615367
|
1
| null | null |
1
|
17
|
Deeply sorry, as this wasn't really covered in my statistics classes. I am currently comparing datasets for people divorcing vs. dissolving their marriages. I have two variables, length of marriage and age of the parties, that are not normally distributed. Length is positively skewed; age is a little flatter than a bell curve. I can get an approximately normal distribution using a Box-Cox transformation, but I want to compare the means of the two populations to see if there's a difference. Do I need to pick a lambda to compare them, and if so, how?
The ranges and distributions of the variables are similar for both populations; they just aren't normal.
It's entirely possible I am going about this the wrong way and should do a different test altogether, but I only have access to Excel with the XLSTAT add-on.
|
Comparing means in Box-Cox transformed data
|
CC BY-SA 4.0
| null |
2023-05-09T18:55:56.653
|
2023-05-09T18:55:56.653
| null | null |
387593
|
[
"t-test",
"normalization",
"two-sample",
"boxcox-transformation"
] |
615368
|
1
|
615446
| null |
1
|
59
|
I am doing the nearest neighbour matching from the package MatchIt.
For example
```
data("lalonde")
m.out1 <- matchit(
treat ~ age + educ,
data = lalonde,
method = "nearest",
distance = "glm",
replace = FALSE
)
```
But this algorithm is sensitive to the order in which the treated units are matched. As in
The Effect: An Introduction to Research Design and Causality, a book by Nick Huntington-Klein
[](https://i.stack.imgur.com/0MwX1.png)
I wish to see the effect of this order-dependence on my results. For example, without replacement, I could generate 100 different matching results and compare how different they are, or derive the variation in the matching results. How can I get this kind of variety of matching results with matchit? I tried set.seed() in R and it returns the same results.
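For concreteness, the kind of loop I have in mind is sketched below (I am assuming that `m.order = "random"` is the intended way to expose this order-dependence; please correct me if that is the wrong tool):
```
library(MatchIt)
data("lalonde", package = "MatchIt")

set.seed(123)
runs <- lapply(1:100, function(i) {
  matchit(treat ~ age + educ, data = lalonde,
          method = "nearest", distance = "glm",
          replace = FALSE, m.order = "random")
})

## how many distinct sets of matched controls arise across the 100 runs?
matched_controls <- sapply(runs, function(m)
  paste(sort(which(m$weights > 0 & lalonde$treat == 0)), collapse = ","))
length(unique(matched_controls))
```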
Thank you a ton in advance.
|
Nearest neighbor matching without replacement with MatchIt
|
CC BY-SA 4.0
| null |
2023-05-09T19:00:14.670
|
2023-05-15T18:55:11.850
| null | null |
327954
|
[
"r",
"matching"
] |
615369
|
2
| null |
614842
|
1
| null |
As others have pointed out, ROC curve does not display threshold values, unless additionally annotated. For the purpose of determining the best decision threshold, classification plot may be more convenient.
[](https://i.stack.imgur.com/RvqXw.png)
To determine the threshold, draw a horizontal line from the desired sensitivity (90% in the figure) to the sensitivity curve (blue), then draw a vertical line to the x-axis. The ordinate of the point at which the vertical line intersects the specificity curve (red) is the matching specificity, and the abscissa is the corresponding decision threshold. This is similar to choosing the best combination of (sensitivity, specificity) on the ROC graph, but in addition it directly provides the decision threshold.
| null |
CC BY-SA 4.0
| null |
2023-05-09T19:51:22.920
|
2023-05-10T19:18:36.767
|
2023-05-10T19:18:36.767
|
134310
|
134310
| null |
615371
|
1
| null | null |
2
|
84
|
As background, Mitochondrial Eve is the name given to our matrilineal common ancestor who lived roughly 200,000 years ago. I was asking about this on the Biology Stack Exchange, but it seems like more of a math question.
Mitochondrial DNA is passed from mother to child. All of a woman's daughters would have her mitochondrial DNA, as would all of her daughters' daughters. However, her son's daughters would not have her mitochondrial DNA. (Her son married some other woman, and their children would have that woman's mitochondrial DNA.)
Given this framework, and assuming no evolutionary forces and no fitness advantage of one DNA over another, should the descendants end up with a random mitochondrial DNA? If you had a thousand women, should the DNA of the descendants thousands of years later be equally distributed across groups representing the mitochondrial descendants of each woman?
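To make the question concrete, this is the kind of neutral simulation I have in mind (a toy mother-to-daughter model with no fitness differences; the population size and number of generations are arbitrary):
```
# Neutral matrilineal inheritance: each woman in the next generation gets her
# mitochondrial lineage from a mother drawn uniformly at random.
set.seed(42)
n_women <- 1000
n_generations <- 5000

lineage <- 1:n_women              # each founder starts her own mtDNA lineage
for (g in 1:n_generations) {
  mothers <- sample(n_women, n_women, replace = TRUE)
  lineage <- lineage[mothers]     # daughters inherit their mother's lineage
}
table(lineage)                    # how evenly are the founder lineages represented?
```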
Thanks
Edit --
I changed the wording a bit. Basically, I think the initial population should be equally represented in the final population if there is no selection bias.
|
Does Mitochondrial Eve suggest evolutionary fitness of her DNA?
|
CC BY-SA 4.0
| null |
2023-05-09T20:37:37.467
|
2023-05-11T14:02:42.740
|
2023-05-09T23:03:32.670
|
387600
|
387600
|
[
"distributions"
] |
615372
|
2
| null |
524832
|
2
| null |
## Preamble
As explained in the comment, CUPED stands for Controlled Experiment Using Pre-Experiment Data. It is used to increase the power of A/B tests and is very popular with tech companies like Microsoft, Netflix, Facebook, etc.
Loosely speaking, the idea is that you use some suitable data (usually pre-experiment) to explain away some of the variance in the data from your experiment. The result is that you can detect smaller effects or conclude your experiment sooner. At big tech companies even small effect sizes or putting a change into production even a week sooner can correspond to large amounts of money.
## The question
I will walk through that section of the linked [article](https://Controlled%20Experiment%20Using%20Pre-Experiment%20Data) (article title in case the link dies, Improving the Sensitivity of Online Controlled Experiments
by Utilizing Pre-Experiment Data, Deng, Xu, Kohavi, Walker).
We start at equation $(3)$.
$$\hat{Y} = \bar{Y} - \theta \bar{X} + \theta E[X] \hspace{1cm}(3)$$
In the article they note the following, which I will label $(\star)$
$$-\theta E[\bar{X}] + \theta E[X]=0 \hspace{1cm} (\star)$$
First, let's check we understand all the pieces here. The sample mean $\bar{X}$ is an estimate for the true, population mean $\mu$. The sample mean is an unbiased estimator, so $E[\bar{X}]=\mu$. Also, we have that $E[X]=\mu$ by definition. This explains the observation $(\star)$.
Now in equation $(3)$ I think you may have misread it. Notice that we have $-\theta \bar{X}$ and not $-\theta E[\bar{X}]$, and so the last two terms in $(3)$ do not cancel. I think this is the issue confusing the OP.
However I will run through the first part of section 3.2.1 of the paper just to explain the details.
In the paper, they then claim $\hat{Y}$ is an unbiased estimator of $E[Y]$. This means that $E[\hat{Y}]=E[Y]$
We can see this,
$$E[\hat{Y}]=E[\bar{Y} - \theta \bar{X} + \theta E[X]] = E[\bar{Y}]-\theta E[\bar{X}]+\theta E[E[X]]$$
by the linearity property of expectation. Then also note the fact that $E[E[X]]=E[X]$ and so we have
$$ E[\bar{Y}]-\theta E[\bar{X}]+\theta E[X]$$
finally, using our observation $(\star)$, this becomes
$$E[\bar{Y}]$$
and so we have
$$E[\hat{Y}] = E[\bar{Y}] = E[Y]$$
## Why care?
The reason this matters is that we are showing we can use $\hat{Y}$ instead of $\bar{Y}$ for our experiment. Why would we do that? Well, the next steps in the paper show that
$var(\hat{Y}) = var(\bar{Y})(1-\rho^2)$
So $var(\hat{Y})$ is no bigger than $var(\bar{Y})$. If $X$ is chosen well then we get a large value of $\rho$ and we can significantly reduce our variance.
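As a quick illustration of the variance reduction (a toy simulation of my own, not the setup from the paper; all numbers are arbitrary):
```
# Toy CUPED illustration: X is a pre-experiment metric correlated with the
# experiment metric Y, and theta = cov(X, Y) / var(X) is the usual choice.
set.seed(1)
n_sim <- 2000
n     <- 500

est_plain <- est_cuped <- numeric(n_sim)
for (s in 1:n_sim) {
  x <- rnorm(n, mean = 10, sd = 2)        # pre-experiment data
  y <- 5 + 0.8 * x + rnorm(n, sd = 1)     # experiment metric, correlated with x
  theta <- cov(x, y) / var(x)
  est_plain[s] <- mean(y)
  est_cuped[s] <- mean(y) - theta * (mean(x) - 10)   # 10 plays the role of E[X]
}
c(var_plain = var(est_plain), var_cuped = var(est_cuped))
```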
| null |
CC BY-SA 4.0
| null |
2023-05-09T20:57:05.940
|
2023-05-09T20:57:05.940
| null | null |
358991
| null |
615373
|
2
| null |
615190
|
1
| null |
If treated stores are (truly) randomly selected, this is an experimental setup, and you can derive ATT straightforwardly in a number of ways. Random selection with an appropriate sample size does a better job of eliminating the heterogeneity than other methods that use pre-treatment data to build a synthetic counterfactual to derive the ATT. Additional variables related to confounding effects might help reveal nuanced relationships and help ensure that the relationships are not spurious (a sample of 100 of 1,000 stores is reasonably safe in this regard, IMO), but the randomization is doing the heavy lifting. You would not need pre-treatment sales data to get a reasonable estimate of the ATT.
With knowledge (that I don't have) about the expected duration of the coupon's effect on sales, you can pick an appropriate post-treatment time period/cutoff (or several, like short-, medium-, and long-term); aggregate the sales or number sold per store for the item among the treatment group and control group members, respectively, during the period; and use an appropriate model to get the difference in means. If the distributions of sales are appropriate/normal, a simple t-test or regression model would do the trick. Otherwise, you could take the natural log of the sales or use an appropriate GLM, like Poisson or negative binomial.
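As a rough sketch of what those comparisons might look like (the `store_sales` data frame and its columns are hypothetical, simulated here only so the code runs):
```
library(MASS)

# Hypothetical data: one row per store, post-period units sold, and a 0/1
# indicator for whether the store was randomly assigned the coupon.
set.seed(1)
treated <- rep(c(1, 0), times = c(100, 900))
store_sales <- data.frame(
  treated    = treated,
  post_sales = rnbinom(1000, mu = 50 + 5 * treated, size = 5)
)

# Simple difference in means (fine if per-store sales are roughly normal):
t.test(post_sales ~ treated, data = store_sales)

# Or, treating sales as counts, a negative binomial model:
fit <- glm.nb(post_sales ~ treated, data = store_sales)
summary(fit)   # exp(coef(fit)["treated"]) is the multiplicative effect
```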
| null |
CC BY-SA 4.0
| null |
2023-05-09T21:10:12.177
|
2023-05-09T21:10:12.177
| null | null |
186886
| null |
615375
|
1
| null | null |
1
|
15
|
For an integer $k > 0$, let $\mu_i \in \mathbb{R}$ and $\zeta_i \sim \mathcal{N}(0, 1)$, $1 \leq i \leq k$. In the background we are taking $k \to \infty$.
Then the random variable $T$ has a scaled [Noncentral $\chi^2 (k , \lambda)$ distribution](https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution):
$$
\begin{align}
T &= \frac{1}{k}\sum_{i = 1}^k (\mu_i + \zeta_i)^2, \\
\lambda &= \sum_{i=1}^k \mu_i^2.
\end{align}
$$
In other words, $T = T^{\lambda, k}$ is a two-parameter family:
- with mean $\mu = 1 + \lambda/k$
- with variance $2(1 + 2\lambda/k) = 2(2\mu - 1)$
Note the strong (linear) dependence between mean and variance. Applying a [variance-stabilizing transformation](https://en.wikipedia.org/wiki/Variance-stabilizing_transformation), we recognize the variance function of $T^{\lambda, k}$ as $h(\mu) = 2(2\mu - 1)$ and therefore the delta-method-approved transformation is $f(\cdot; \lambda)$ given by
$$
\begin{align}
f(t; \lambda)
&\propto \int_{t_0}^t \frac{d\mu}{\sqrt{h(\mu)}} \\
&= C_1 \sqrt{t - 1/2} + C_0
\end{align}
$$
for arbitrary $C_1$ and $C_0$.
But obviously $\mathbb{P}(T < 1/2) > 0$, so $f$ can't be applied to all inputs.
So is $T^{\lambda, k}$ not strictly variance-stabilizable?
- Finally, I am aware of transformations that make $T$ more normal, such as a cube root and others.
- I understand that if we approximate $f(t) = 2\sqrt t$ then $\operatorname{var} f(t) \approx 4 - 1/(1 + \lambda/k)$ which is already not bad. But I, of course, would like to do better.
- I also understand that by Chebyshev's inequality, $\mathbb{P}(T < 1/2) \to 0$ as $k\to\infty$. Indeed, the delta method is completely local in $f$ as long as $f(T)$ is a well-behaved random variable, so we could just as well define (a quick numerical check of the first, truncated version is sketched after this list)
\begin{align}
f(t) &= \begin{cases} \sqrt{t - 1/2}, & t \geq 1/2 \\
0, &t < 1/2
\end{cases}
\end{align}
or even something creative like
\begin{align}
f(t) &= \begin{cases} \sqrt{t - 1/2}, & t \geq 1/2 \\
-\sqrt{1/2 - t}, &t < 1/2
\end{cases}
\end{align}
or even branch cut to
\begin{align}
f(t) &= \begin{cases} \sqrt{t - 1/2}, & t \geq 1/2 \\
i\sqrt{1/2 - t}, &t < 1/2
\end{cases}
\end{align}
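As a quick numerical check of the first, truncated version (entirely my own sanity check; the value of $k$ and the $\lambda$ grid are arbitrary):
```
# Simulate T = (1/k) * noncentral chi-squared(k, lambda) and compare the
# variance of T with the variance of the truncated transform sqrt(t - 1/2).
set.seed(1)
k <- 50
f <- function(t) sqrt(pmax(t - 1/2, 0))

for (lambda in c(0, 10, 50, 200)) {
  T <- rchisq(1e5, df = k, ncp = lambda) / k
  cat(sprintf("lambda = %3d: var(T) = %.4f, var(f(T)) = %.4f\n",
              lambda, var(T), var(f(T))))
}
```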
|
Variance stabilization of Scaled Noncentral Chi-squared
|
CC BY-SA 4.0
| null |
2023-05-09T21:25:49.760
|
2023-05-09T21:25:49.760
| null | null |
35280
|
[
"variance",
"heteroscedasticity",
"variance-stabilizing"
] |
615377
|
1
| null | null |
1
|
28
|
I am using Statsmodels SARIMAX to make forward-looking forecasts and I want to be able to reproduce my forecasts at a later time. Is there any way to ensure that forecasts will be the same between different runs and is there any functionality for setting a random seed?
|
Statsmodels SARIMAX reproducible forecasts
|
CC BY-SA 4.0
| null |
2023-05-09T21:27:25.853
|
2023-05-09T21:27:25.853
| null | null |
387602
|
[
"forecasting",
"statsmodels"
] |
615378
|
1
| null | null |
1
|
25
|
Should I be doing a paired or unpaired 2 sample t-test for my data?
I have 5 age groups (e.g. 0-44, 45-65, and so forth), and I have data on deaths per age group for a 7-year period. Should I be doing a paired 2-sample t-test, as the deaths are directly related to the ages?
|
Unpaired vs Paired t-test
|
CC BY-SA 4.0
| null |
2023-05-09T21:42:13.447
|
2023-05-09T23:58:59.417
| null | null |
387604
|
[
"t-test"
] |
615379
|
1
| null | null |
0
|
44
|
I would like to train a neural network to learn a probability distribution. (Density estimation of a model-independent "true" probability density function of a given output.) That is, I have a set of training examples where a real-valued output is drawn from a certain (unknown, non-parametric) probability distribution, conditioned on a given set of input features. Given the set of input features of an unknown, I would like the neural network to output information about the probability distribution (for that set of input features), such that I could potentially do further processing on it. (e.g. randomly sample from it; note that I don't just want the neural network to predict the optimal expected value of the output, but rather the full probability density function).
Given that there's a known range and effective minimum "resolution" of the output value (e.g. a difference of 1 unit is meaningful, but a difference of 0.1 unit is not), my general thoughts are to discretize the output space, then effectively do a multi-class classification of each of the output value bins, using softmax to normalize to probabilities. My understanding of the standard multiclass classification training process is that the predicted outputs should (ideally) converge to the underlying probability distribution, conditioned on the input features. (Please correct me if I'm wrong.)
The hangup I have with the process is that there is no ordinal coordination between the various bins. That is, each bin is considered independently of the others in such a multiclass classification process, so that the probabilities of bins 5 and 6 are no more related than those of bins 5 and 25. Due to the nature of the problem, I would expect there to be some correlation/smoothing of probability values at scales larger than that of the "resolution". (The probability density function is smooth and continuous, without major swings.)
I'm wondering if there are any accepted loss functions or regularization procedures which would help impose such a constraint on the functional form of the predicted probability distribution. (Or whether there's an alternate formulation of the training which means my initial discretization idea is misguided.) I could probably hack together an additional ad hoc loss term, but I was wondering if there might be something with a more well-supported theoretical justification.
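To make the "ad hoc loss term" idea concrete, here is a minimal sketch of the kind of penalty I have in mind (plain R for illustration only; the bin count, `lambda`, and the penalty form are all arbitrary choices of mine, not an established method I'm aware of):
```
# Sketch of a smoothness-penalized cross-entropy for one example:
# the usual multiclass cross-entropy plus a penalty on squared differences
# of adjacent bin probabilities, which pushes neighbouring bins to agree.
smooth_ce_loss <- function(logits, true_bin, lambda = 0.1) {
  p <- exp(logits - max(logits))
  p <- p / sum(p)                  # softmax over the discretized bins
  ce <- -log(p[true_bin])          # standard cross-entropy term
  roughness <- sum(diff(p)^2)      # penalize jumps between adjacent bins
  ce + lambda * roughness
}

# Toy usage: 20 bins, true value falls in bin 7
smooth_ce_loss(logits = rnorm(20), true_bin = 7)
```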
|
Learning a smooth, real-valued non-parametric probability distribution
|
CC BY-SA 4.0
| null |
2023-05-09T22:23:16.790
|
2023-05-11T15:20:57.367
|
2023-05-11T15:20:57.367
|
69382
|
69382
|
[
"neural-networks",
"regularization",
"loss-functions"
] |
615381
|
1
| null | null |
3
|
34
|
I got some idea about GAM (generalized additive model) plots from a course by Noam Ross. However, I have several points of confusion and am stuck.
I fit a logistic GAM and can interpret the plots on the response scale by using `trans=plogis`. But I don't know how to interpret them on the link scale. The plot shows a y-axis range that includes zero even when there are no zeros in the data.
Also, the y-axis range is different from that of a plot built from the predict function.
Again, the predicted values from the predict function using `type="terms"` and `type="link"` are also different, so plots using these two have different y-axis values as well.
What could be the reason behind this? Is there any way to get the actual y-axis values on the link scale?
This is my code:
```
library(mgcv)
library(gss)
data(wesdr)
attach(wesdr)
head(wesdr)
g = gam(ret~s(dur)+s(gly)+s(bmi),family=binomial,method="REML")
p= predict(g, type="terms") # predicted value for three variables using "terms"
p.s=predict(g,exclude=c("gly","bmi"), type="link") # predicted value for first variable using "link"
par(mfrow=c(1,3))
plot(g,select=1) #smooth plot for first variable from model
plot(dur,p[,1]) #smooth plot for first variable using type 'terms'
plot(dur,p.s) #smooth plot for first variable using type 'link'
```
All these plots have different y-axes! [](https://i.stack.imgur.com/f0dJH.png)
|
GAM plot y-axis range differs between the model plot and the predict function
|
CC BY-SA 4.0
| null |
2023-05-09T23:26:23.567
|
2023-05-10T12:38:33.723
|
2023-05-09T23:41:01.300
|
387609
|
387609
|
[
"logistic",
"data-visualization",
"generalized-additive-model",
"mgcv",
"smoothing"
] |
615382
|
1
|
615385
| null |
1
|
36
|
Given an even number of sample points in a plane, I want to compute the sum of squared distances from the sample center as part of estimating the Rayleigh parameter. One way of doing it is to compute the sample center ($\bar{x}, \bar{y}$), then $r_i = \sqrt{(x_i - \bar{x})^2 + (y_i - \bar{y})^2}$ and then I compute $\sum{r_i^2}$.
It would be very convenient if I could instead just draw the sample points 2 at a time (without replacement), measure their distance $d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$, and find a function f that gives $\sum{r_i^2} = f({d})$.
I've tested a few sets and it appears that $\sum{r^2} = \sum_{i=1, j=n/2+1}^{i=n/2, j=n}\sqrt{\frac{(x_i-x_j)^2+(y_i-y_j)^2}{2}}$ works, but I have not been able to prove this. Is that equation correct?
|
Computing sum squared distances without computing center
|
CC BY-SA 4.0
| null |
2023-05-09T23:40:31.627
|
2023-05-10T01:46:27.470
|
2023-05-10T01:46:27.470
|
34792
|
34792
|
[
"sampling",
"sums-of-squares",
"rayleigh-distribution"
] |
615383
|
1
| null | null |
1
|
31
|
I am studying lethality in cardiovascular events. My outcome is binary (death at 1 month) for each event. My patients can have multiple cardiovascular events, but obviously only one of them can lead to death.
So, for example, a patient can have ten cardiovascular events, with the last one leading to death.
I have explanatory variables at the event, patient, and county levels.
Therefore I used a mixed-model logistic regression.
However, I wonder whether the fact that among my repeated events only one can lead to death (you obviously can only die once) is an issue.
|
GLMM mixed models. Binary data with only one positive outcome possible
|
CC BY-SA 4.0
| null |
2023-05-09T23:46:41.977
|
2023-05-11T21:02:27.443
|
2023-05-11T21:02:27.443
|
354185
|
354185
|
[
"mixed-model",
"multilevel-analysis",
"glmm"
] |
615384
|
2
| null |
615378
|
1
| null |
Based solely on what you said, you shouldn't use a paired test unless you are sure that your samples are paired across the time points at which the data were measured.
| null |
CC BY-SA 4.0
| null |
2023-05-09T23:58:59.417
|
2023-05-09T23:58:59.417
| null | null |
387599
| null |
615385
|
2
| null |
615382
|
2
| null |
By viewing $\{x_1, \ldots, x_n\}$ and $\{y_1, \ldots, y_n\}$ as two separate samples, $\sum r_i^2 = (n - 1)S_x^2 + (n - 1)S_y^2$, where $S_x^2 = (n - 1)^{-1}\sum_{i = 1}^n (x_i - \bar{x})^2$ is the well-known sample variance. So your problem reduces to how to re-express the sample variance as the sum of squared paired differences.
In non-parametric statistics, this formulation is called [U-statistic](https://en.wikipedia.org/wiki/U-statistic#:%7E:text=In%20statistical%20theory%2C%20a%20U,producing%20minimum%2Dvariance%20unbiased%20estimators.). For sample variance, it is easy to verify that the sample variance is a U-statistic of order-2:
\begin{align}
S_x^2 = \frac{1}{n - 1}\sum_{i = 1}^n (x_i - \bar{x})^2 = \frac{1}{n(n - 1)}\sum_{1 \leq i \neq j \leq n}\frac{(x_i - x_j)^2}{2}. \tag{1}
\end{align}
If you prefer increasing sum indices (as opposed to distinct sum indices), $(1)$ can be clearly rewritten as:
\begin{align}
S_x^2 = \frac{2}{n(n - 1)}\sum_{1 \leq i < j \leq n}\frac{(x_i - x_j)^2}{2}. \tag{2}
\end{align}
Therefore, the quantity of your interest can be written as
\begin{align}
\sum_{i = 1}^nr_i^2 &= \frac{1}{2n}\sum_{1 \leq i \neq j \leq n}[(x_i - x_j)^2 + (y_i - y_j)^2] \\
&= \frac{1}{n}\sum_{1 \leq i < j \leq n}[(x_i - x_j)^2 + (y_i - y_j)^2].
\end{align}
---
Proof of (1). Since $x_i - x_j = 0$ when $i = j$, it follows that
\begin{align}
& \sum_{1 \leq i \neq j \leq n}(x_i - x_j)^2 \\
=& \sum_{i = 1}^n\sum_{j = 1}^n(x_i - x_j)^2 \\
=& \sum_{i = 1}^n\sum_{j = 1}^n[(x_i - \bar{x}) - (x_j - \bar{x})]^2 \\
=& \sum_{i = 1}^n\sum_{j = 1}^n[(x_i - \bar{x})^2 - 2(x_i - \bar{x})(x_j - \bar{x}) + (x_j - \bar{x})^2] \\
=& n\sum_{i = 1}^n(x_i - \bar{x})^2 + 0 + n\sum_{j = 1}^n(x_j - \bar{x})^2 \\
=& 2n(n - 1)S_x^2.
\end{align}
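A quick numerical check of the identity (arbitrary simulated points):
```
# Verify sum(r_i^2) = (1/(2n)) * sum over i != j of squared coordinate differences.
set.seed(1)
n <- 25
x <- rnorm(n); y <- rnorm(n)

direct   <- sum((x - mean(x))^2 + (y - mean(y))^2)
pairwise <- (sum(outer(x, x, "-")^2) + sum(outer(y, y, "-")^2)) / (2 * n)

c(direct = direct, pairwise = pairwise)   # the two numbers should agree
```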
| null |
CC BY-SA 4.0
| null |
2023-05-10T00:06:47.603
|
2023-05-10T01:26:13.050
|
2023-05-10T01:26:13.050
|
20519
|
20519
| null |
615386
|
1
| null | null |
1
|
14
|
I'm interested in adding basis functions to a Gaussian Process. In particular, following [Section 2](https://gaussianprocess.org/gpml/chapters/RW2.pdf) of Rasmussen's book, I have
$$g(x)=f(x) + h(x)^\top\beta,\qquad f(x)\sim\mathcal{GP}\left(0,k\left(x,x'\right)\right).$$
In the book, $\beta\sim\mathcal N\left(b,B\right)$ and therefore, everything can be nicely derived by completing squares.
Is there a robust way to estimate this specification with a non-negativity constraint on the parameters? In other words, with $\beta\ge 0$?
|
Gaussian Process with non-negative basis coefficients
|
CC BY-SA 4.0
| null |
2023-05-10T00:23:48.987
|
2023-05-10T00:23:48.987
| null | null |
195019
|
[
"gaussian-process"
] |
615387
|
2
| null |
288674
|
3
| null |
No. Linear regression is also BUE.
Source: [https://www.ssc.wisc.edu/~bhansen/papers/gauss.pdf](https://www.ssc.wisc.edu/%7Ebhansen/papers/gauss.pdf)
| null |
CC BY-SA 4.0
| null |
2023-05-10T00:35:56.247
|
2023-05-10T00:35:56.247
| null | null |
387611
| null |
615388
|
1
| null | null |
2
|
13
|
My question arises from an assignment rule which gives monetary transfers to cities with a population of less than 25,000 inhabitants. However, the transfer varies between cities depending on their respective population and poverty levels. With that context, is it possible to run an RDD? If so, which literature would you suggest reading? Thanks in advance, and forgive my poor English.
|
Is it possible to do regression discontinuity (RDD) with different treatment intensities?
|
CC BY-SA 4.0
| null |
2023-05-10T01:28:00.063
|
2023-05-10T01:28:00.063
| null | null |
355269
|
[
"econometrics",
"regression-discontinuity"
] |
615389
|
2
| null |
615360
|
2
| null |
There are many possible solutions to this issue.
- The use of GANs, as always, comes with the caveat that the model is difficult to train. In my application, each of the two classes has many labels. This was a deterrent to the use of discriminators.
- I considered the use of the loss identified in Supervised Contrastive Learning (https://arxiv.org/abs/2004.11362). The issue here is that we need the many labels of each class type to be orthogonal to each other. The adoption of this loss function was part of the final solution but proved to be insufficient in and of itself.
- I used the results in https://jmlr.org/papers/volume16/qiu15a/qiu15a.pdf. The addition to the loss function uses the dot product (similar to Supervised Constrastive Learning) but with a min and no logit-like formulation.
The combination of 2 and 3 proved sufficient in my application. They did lead to a need for higher-dimensional subspaces, likely because the subspaces are forced to be orthogonal, but that was an easy compromise to make for my data.
| null |
CC BY-SA 4.0
| null |
2023-05-10T02:02:29.383
|
2023-05-10T02:02:29.383
| null | null |
204868
| null |
615391
|
1
|
615550
| null |
0
|
23
|
I need some help/advice for tackling the following problem: I am interested in exploring the association between various covariates and the reason for termination from a data registry. The hypothesis/suspicion is that a lower proportion of ethnic/racial minorities will get a kidney transplant rather than dialysis at the time of end-stage kidney disease onset (i.e., time of termination from the registry).
The outcome is the reason for termination (the dataset has this as a categorical variable with numerous categories, which I narrowed down to only 4: transplant, dialysis, death, other). The "other" category is the most numerous, as it contains everybody terminated for any reason other than transplant/dialysis/death, plus over 300 cases who had a missing value for reason for termination (these patients might still be in the registry for all we know; there's no way of knowing what happened with them), followed by transplant, dialysis, and death (just a couple of cases of death).
Covariates: some are time-independent (sex, race/ethnicity) while others are measured repeatedly at 6-months visits so time-dependent (such as lab values, hypertension status, eGFR).
I'm thinking that either option A or B might work here:
A. repeated measures multinomial logistic regression analysis, given the outcome with more than 2 categories, and the time-dependent nature of some of the covariates, or
B. repeated measures competing risks/cause-specific hazards analysis
However, I am not sure if I even need to consider repeated measures here, as the outcome itself (reason for termination) is fixed. Nor am I sure if I even have competing risks per se - I think the idea is that we will see some racial effects where minorities get the transplant less frequently and are more often receiving dialysis but I don't really know if we need competing risks for this?
Now, if I do need to account for the repeated-measures nature of some covariates, I have no experience with how to do this for multinomial or competing risks models, and I haven't been able to find many resources that are easy to understand and provide specific examples for implementing in R or SAS.
Can somebody please help advise what the appropriate type of analysis would be given the study context/research hypothesis?
Thank you kindly!
|
repeated measures multinomial logit or competing risks?
|
CC BY-SA 4.0
| null |
2023-05-10T02:50:40.727
|
2023-05-11T12:18:46.093
| null | null |
262514
|
[
"r",
"survival",
"repeated-measures",
"time-varying-covariate"
] |
615392
|
1
| null | null |
2
|
22
|
My understanding is that a discriminative classifier such as a CNN that takes an input $x$ and produces a discrete output label $y$ is typically trained to predict the best value of $y$, and would not give accurate probabilities for the various possible values of $y$. So it's an example of a deep neural net that cannot be used to reliably estimate the probability of a prediction.
If we compare a CNN to a generative model such as an autoregressive language model that is trained to estimate $p(\mathbf{w})$, where $\mathbf{w} = [w_1 ... w_n]$ is a sequence of words and $p(\mathbf{w}) = p(w_1) \prod_{i=2}^n p(w_i | w_{1..i-1})$, the underlying components of the discriminative CNN classifier vs. the generative autoregressive language model are similar (i.e., convolutional networks and transformers are both constructed using matrix multiplications and activations) and both are trained in similar ways (via supervised vs. self-supervised learning).
So would $p(\mathbf{w})$ yield an accurate estimate of the probability of an input sequence of words?
|
Accuracy of probability estimate from generative autoregressive language model
|
CC BY-SA 4.0
| null |
2023-05-10T04:04:30.573
|
2023-05-10T05:58:40.613
| null | null |
385459
|
[
"autoregressive",
"generative-models",
"llm"
] |
615393
|
1
|
615398
| null |
0
|
24
|
Suppose there is a dataset of 100 patients including the item "Blood pressure".
45 patients belong to Group A, and the remaining 55 belong to Group B.
As a consequence, there will be a 30 vs. 30 patient (60 patients in total) pair matching after using propensity score matching.
I'm trying to find the p-value for "Blood pressure" with a t-test if it is normally distributed, or with the Wilcoxon rank-sum test if it is non-normally distributed.
To determine which method to use, I have to use the Shapiro-Wilk normality test for "Blood pressure" to find out whether it's normally distributed or not.
Am I supposed to perform the Shapiro-Wilk normality test on the dataset after it's pair matched (calculated separately for Group A (30 patients) and Group B (30 patients), i.e., 2 calculations in total)?
Or on the original dataset (Group A with 45 patients + Group B with 55 patients, again 2 calculations)?
|
When I try to use PSM, Should I calculate for normal distribution with Shapiro-Wilk normality test before pair matching or after pair matching?
|
CC BY-SA 4.0
| null |
2023-05-10T04:41:18.860
|
2023-05-10T06:32:04.477
| null | null |
386592
|
[
"regression",
"logistic",
"t-test",
"propensity-scores"
] |
615395
|
1
| null | null |
1
|
30
|
I have been using the JASP software for a month or so, and I like it. But what's been bothering me is that my professor at university doesn't know this software and doubts its credibility.
I would appreciate suggestions for scholarly literature testing its credibility.
|
Statistical software JASP and its credibility for academic research
|
CC BY-SA 4.0
| null |
2023-05-10T05:44:36.417
|
2023-05-10T07:41:57.127
|
2023-05-10T06:46:10.550
|
380492
|
380492
|
[
"references",
"jasp"
] |
615396
|
2
| null |
615392
|
1
| null |
Both CNNs and autoregressive language models are classifiers and both return probabilities. What you are asking is whether those probabilities are well [calibrated](https://stats.stackexchange.com/questions/418766/what-is-probability-estimates-calibration). The answer is, as you could have guessed, "it depends". Unfortunately, there are no clear answers or conclusive results. For example, research by [Minderer et al (2021)](https://arxiv.org/abs/2106.07998) has shown that different deep learning models differ in how well they are calibrated, but there is no clear explanation of when that is the case.
| null |
CC BY-SA 4.0
| null |
2023-05-10T05:58:40.613
|
2023-05-10T05:58:40.613
| null | null |
35989
| null |
615397
|
1
|
615400
| null |
3
|
281
|
My question concerns the Bonferroni correction of p-values. In my analysis, I would like to multiply the obtained p-values by N, where N is the number of comparisons chosen for the Bonferroni correction, such as 2, 3, 4, ..., N. Hence, I would not like to divide the significance thresholds by N, as is often done.
However, the multiplication of p-values by a certain factor can lead to p-values that exceed 1.
I have seen people debating p-values that exceed 1 due to the multiplication of the obtained p-values, but there is no consensus on whether that makes sense or whether p-values > 1 are always and in every case meaningless. Furthermore, I read that software packages, like SPSS, automatically limit the p-values to 1 in this case.
Could somebody please explain whether p-values should always be limited to 1 even though they can mathematically exceed 1 when multiplied by a certain factor due to the Bonferroni correction?
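For what it's worth, this is the capping behaviour I mean (the p-values below are made up; in R, for example, `p.adjust` does the multiplication and then truncates at 1):
```
p <- c(0.012, 0.30, 0.40)            # made-up raw p-values
N <- 3                               # number of comparisons
pmin(1, p * N)                       # "manual" Bonferroni, truncated at 1
p.adjust(p, method = "bonferroni")   # gives the same truncated values
```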
|
Bonferroni corrected p-values that exceed 1: possible or non-sense?
|
CC BY-SA 4.0
| null |
2023-05-10T06:28:21.650
|
2023-05-10T06:55:03.830
| null | null |
374910
|
[
"p-value",
"bonferroni"
] |
615398
|
2
| null |
615393
|
2
| null |
You should never use a hypothesis test to assess balance. Balance is achieved by ensuring that the full distribution of covariates is the same between treatment groups. You should compare multiple features of the covariate distributions to determine whether you have achieved balance. Numerical statistics like standardized mean differences and Kolmogorov-Smirnov statistics can be supplemented with visual inspections of the distributions. All this is explained in great detail in the [cobalt vignette](https://cran.r-project.org/web/packages/cobalt/vignettes/cobalt.html), including why you should not be using hypothesis tests to assess balance.
Given that you will not be using a hypothesis test to assess balance, there is no reason to use a test to decide whether a covariate is normally distributed. Whether it is or isn't has no bearing on how you should proceed.
| null |
CC BY-SA 4.0
| null |
2023-05-10T06:32:04.477
|
2023-05-10T06:32:04.477
| null | null |
116195
| null |