Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
613193
|
1
|
613201
| null |
2
|
41
|
I want to use an ARIMA-based or Prophet-based model to train on my data. I have split my data into train and test sets, and the model is trained on the train set only and forecasts for the test set.
The further the forecasts extend into the test set, the worse the error, since the model has been trained on the train set and may err more the further it gets from it. In that case, when evaluating the test-set forecasts for the error metrics, shouldn't we take a weighted average, using proximity to the end of the train data as the weight, since the closer forecasts would be more accurate?
Would love your thoughts on this, thanks for the help in advance :)
|
Time series statistical model evaluation: weighted average across different forecast horizons
|
CC BY-SA 4.0
| null |
2023-04-17T09:54:57.660
|
2023-04-17T14:21:47.873
|
2023-04-17T14:21:47.873
|
53690
|
380399
|
[
"machine-learning",
"time-series",
"arima",
"model-evaluation"
] |
613194
|
1
| null | null |
1
|
74
|
Problem: there are $N$ machines and probability $P$ that a machine is not working. Task: calculate the probability that more than $X$ machines are going to be broken if $N$ is increased. In addition, I have to plot a distribution of the broken machines number for a specific $N$.
Here is what I did.
The probability that more than $X$ machines are broken is:
```
from scipy.stats import poisson

m = N*P
1-poisson.cdf(x, m)
```
And in order to plot a distribution for this variable I have to generate random variables with `poisson.rvs(m, N)` and then plot a histogram.
Am I correct in my speculations here?
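For concreteness, here is a minimal sketch of the calculation described above, with made-up values for $N$, $P$ and $x$ (these are placeholders, not values from the problem); note that in `scipy` the number of draws goes into the `size=` keyword of `rvs`:
```
import numpy as np
from scipy.stats import binom, poisson

N, P, x = 200, 0.02, 6            # made-up: number of machines, failure probability, threshold
m = N * P                         # Poisson mean used in the approximation

p_exact = 1 - binom.cdf(x, N, P)      # exact binomial tail: P(more than x machines broken)
p_approx = 1 - poisson.cdf(x, m)      # Poisson approximation of the same tail

samples = poisson.rvs(m, size=10_000) # draws for a histogram of the number of broken machines
```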
|
Poisson approximation for the probability calculation
|
CC BY-SA 4.0
| null |
2023-04-17T09:57:56.320
|
2023-05-02T19:10:56.157
|
2023-05-02T19:10:56.157
|
919
|
310856
|
[
"probability",
"self-study",
"python",
"binomial-distribution"
] |
613196
|
2
| null |
613128
|
5
| null |
>
In this case there should be the equality as an alternative hypothesis and therefore the difference as an null hypothesis?
Hypothesis testing works well when a particular hypothesis makes a precise prediction, e.g. that the observed value is likely to be equal to, or above/below, some value. Hypothesis testing is about making predictions based on a theory and observing whether those predictions come true.
If your hypothesis were that drugs X and Y are unequal, then you would have an untestable theory. Whatever the difference between the populations of X and Y, whether that difference is zero or not, in nearly any circumstance the observed samples from X and Y will be different anyway, independently of whether or not X and Y have similar distributions.
(And even when samples from X and Y are observed to be equal, this might not be significant, as the difference can be as small as we like, making the observation of equal samples nothing special that falsifies the hypothesis.)
### Test of equivalence
However, what would be possible is a [test of equivalence](https://en.m.wikipedia.org/wiki/Equivalence_test), which relates to a hypothesis that the mean difference between X and Y lies within some small range. Then, observing a larger difference could falsify that hypothesis.
So the 'alternative hypothesis' can be used as the null hypothesis. But it needs to be expressed in a form that restricts the observations. The hypothesis 'X ≠ Y' doesn't do that. However, a limit on the difference '|X-Y| < a' is a testable theory/hypothesis.
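One common way such an equivalence test is carried out is the two-one-sided-tests (TOST) procedure. A minimal sketch, with made-up samples and an assumed equivalence margin `a` (not values from the question):
```
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, size=50)   # hypothetical sample from X
y = rng.normal(10.2, 2.0, size=50)   # hypothetical sample from Y
a = 1.0                              # equivalence margin: H1 is |mean(X) - mean(Y)| < a

# TOST: two one-sided tests of the composite null |mean difference| >= a
p_lower = stats.ttest_ind(x + a, y, alternative="greater").pvalue  # tests mean diff > -a
p_upper = stats.ttest_ind(x - a, y, alternative="less").pvalue     # tests mean diff < +a
p_tost = max(p_lower, p_upper)       # reject "non-equivalence" if this is below alpha
```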
### Confidence interval
Another popular alternative to null hypothesis testing is to present confidence intervals. The confidence interval can be seen as [the range of hypothetical parameter values that pass a null hypothesis test](https://stats.stackexchange.com/questions/351320/) where the hypothetical value is the null hypothesis.
Related:
[Why are standard frequentist hypotheses so uninteresting?](https://stats.stackexchange.com/questions/594774/why-are-standard-frequentist-hypotheses-so-uninteresting)
| null |
CC BY-SA 4.0
| null |
2023-04-17T10:21:25.103
|
2023-04-17T10:41:08.130
|
2023-04-17T10:41:08.130
|
164061
|
164061
| null |
613197
|
1
|
613217
| null |
2
|
35
|
I'm trying to test whether ski skill level has an impact on an athlete's motivation to visit a resort. I have three skill-level categories and three motivation categories.
I have calculated my chi-squared value ($53.03$) and my $p$ value is $1.065E-08$, which is much lower than the critical value of $15.51$ at $0.05$ with $8$ degrees of freedom. I think that means I cannot reject my null hypothesis (that there is no relationship). Or is that wrong?
But my Cramer's V result is $0.287$, stating that there is a limited association between the categories. Which do I believe? Or what do these conflicting results mean?
[](https://i.stack.imgur.com/SjJji.png)
[](https://i.stack.imgur.com/Eulvd.png)
|
Conflicting Chi Squared and Cramers V result
|
CC BY-SA 4.0
| null |
2023-04-17T10:38:12.387
|
2023-04-17T12:53:46.590
|
2023-04-17T12:53:46.590
|
345611
|
385896
|
[
"hypothesis-testing",
"statistical-significance",
"categorical-data",
"chi-squared-test",
"cramers-v"
] |
613198
|
1
|
613264
| null |
0
|
35
|
I'm trying to generate simulation paths across time in R for survival and hazard functions using the Weibull distribution. Here are the steps I'm taking so far. I need direction because I'm not able to generate simulation paths with the below:
First: Here's an example of a single basic hazard curve generated in R which works fine (code followed by image):
```
# function for hazard:
weibHaz <- function(x, shape, scale) dweibull(x, shape=shape,
scale=scale)/pweibull(x, shape=shape, scale=scale,lower.tail=F)
# plot hazard:
curve(weibHaz(x, shape=1.5, scale=1/0.03), from=0, to=80, ylab='Hazard', xlab='Time', col='red')
```
[](https://i.stack.imgur.com/NQncT.png)
Second: Generate random numbers from Weibull distribution and store them in `simulated_data`, using same parameters as in the above hazard curve:
```
set.seed(123) # set a seed for reproducibility
simulated_data <- rweibull(n = 20, shape=1.5, scale=1/0.03)
```
Third: plot the simulation results (code followed by image):
```
# ts() creates a time series object from the simulated data
# frequency = 1 indicates the time series data is not periodic
ts_data <- ts(simulated_data, start = 1, frequency = 1)
plot(ts_data, main = "Simulated Time Series Data", xlab = "Time", ylab = "Data")
```
[](https://i.stack.imgur.com/U0FyE.png)
The third step above obviously shows incorrect results. I need to see 20 curves over 80 time periods. The image below shows the type of simulation paths I've generated before, using other data and autoregressive time-series models. This is the form of output data and plot I'm looking for, for the hazard curve generated with the Weibull distribution. Any recommendations for doing this correctly?
[](https://i.stack.imgur.com/UeoN5.png)
|
How to generate simulation paths for Weibull distribution in R?
|
CC BY-SA 4.0
| null |
2023-04-17T10:46:55.650
|
2023-04-17T21:33:55.753
| null | null |
378347
|
[
"r",
"survival",
"simulation",
"hazard"
] |
613200
|
1
| null | null |
0
|
12
|
I want to estimate the standard error of an estimate.
It depends on the rows and columns of a dataframe.
Two possibilities:
- Make a bootstrap on the rows to estimate $se_{row}$.
Do the same with the columns to estimate $se_{col}$.
Then take the overall standard error to be $se_{tot} = \sqrt{se_{row}^2 + se_{col}^2}$ (a small sketch of this option follows below).
- Do a double bootstrap on the rows and columns, and get the standard error this way
Which one is more sound?
For info, the rows are items' difficulties and columns are judges.
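For reference, a minimal sketch of the first option (separate row and column bootstraps combined in quadrature). The matrix and the `estimate()` statistic are made-up placeholders for whatever is actually computed from the data frame:
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                # hypothetical items x judges matrix

def estimate(mat):
    # placeholder statistic; replace with the actual estimator of interest
    return mat.mean()

def boot_se(mat, axis, n_boot=2000):
    n = mat.shape[axis]
    stats_ = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)    # resample rows or columns with replacement
        stats_.append(estimate(np.take(mat, idx, axis=axis)))
    return np.std(stats_, ddof=1)

se_row = boot_se(X, axis=0)                 # bootstrap over items (rows)
se_col = boot_se(X, axis=1)                 # bootstrap over judges (columns)
se_tot = np.sqrt(se_row**2 + se_col**2)     # option 1: combine in quadrature
```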
|
Double bootstrap or 2 separated bootstraps to estimate the standard error?
|
CC BY-SA 4.0
| null |
2023-04-17T11:08:28.590
|
2023-04-17T15:06:45.643
|
2023-04-17T15:06:45.643
|
333075
|
333075
|
[
"bootstrap",
"standard-error",
"psychometrics"
] |
613201
|
2
| null |
613193
|
1
| null |
That depends on what exactly you want to evaluate. If you want to evaluate how well the model predicts $h$ steps ahead, instead of a single split into training and test you could use rolling or expanding windows (see e.g. [section 5.10](https://otexts.com/fpp3/tscv.html) of Hyndman & Athanasopoulos "Forecasting: Principles and Practice"). You would apply a loss function on the prediction errors (or more generally, the predictions and the actual values) and would obtain the distribution of losses over the test cases. You could then take some summary metric of the loss distribution such as mean loss.
If you want to evaluate how well the model predicts for several different horizons $h$, you could do the same for each horizon and then construct a weighted average of mean losses (or some other combination) depending on how important the accuracy of the predictions for each horizon is. E.g. if one step ahead is twice as important as two steps ahead, you could either reflect it in the loss function (double the loss for one step ahead relative to two steps ahead) or by weighting the mean losses of the different horizons accordingly (double weight vs. unit weight).
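A minimal sketch of this kind of expanding-window evaluation with per-horizon losses and a weighted combination. `fit_and_forecast` is a hypothetical stand-in for an ARIMA/Prophet wrapper, and the weights are arbitrary examples:
```
import numpy as np

def rolling_origin_losses(y, fit_and_forecast, min_train=50, max_h=4):
    """Collect absolute errors per horizon h = 1..max_h over expanding windows.

    `fit_and_forecast(train, max_h)` is a placeholder that should return an
    array of max_h forecasts made from the end of `train`.
    """
    losses = {h: [] for h in range(1, max_h + 1)}
    for origin in range(min_train, len(y) - max_h):
        preds = fit_and_forecast(y[:origin], max_h)
        for h in range(1, max_h + 1):
            losses[h].append(abs(y[origin + h - 1] - preds[h - 1]))
    return {h: np.mean(v) for h, v in losses.items()}   # mean absolute error per horizon

# Example weighted combination across horizons (weights chosen by importance):
# weights = {1: 2.0, 2: 1.0, 3: 1.0, 4: 0.5}
# overall = sum(weights[h] * mae for h, mae in per_horizon.items()) / sum(weights.values())
```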
| null |
CC BY-SA 4.0
| null |
2023-04-17T11:08:55.833
|
2023-04-17T13:35:26.107
|
2023-04-17T13:35:26.107
|
53690
|
53690
| null |
613202
|
2
| null |
613128
|
6
| null |
The principle of statistical hypothesis tests, by definition, treats the null hypothesis H0 and the alternative H1 asymmetrically. This always needs to be taken into account. A test is able to tell you whether there is evidence against the null hypothesis in the direction of the alternative.
It will never tell you that there is evidence against the alternative.
The choice of the H0 determines what the test can do; it determines what the test can indicate against.
I share @Michael Lew's reservations against a formal use of the term "proposed hypothesis", however let's assume for the following that you can translate your scientific research hypothesis into certain parameter values for a specified statistical model. Let's call this R.
If you choose R as H0, you can find evidence against it, but not in its favour. This may not be what you want - although it isn't out of the question. You may well wonder whether certain data contradict your R, in which case you can use it as H0; however, this has no potential, even in the case of non-rejection, to convince other people that R is correct.
There is however a very reasonable scientific justification for using R as H0, which is that according to Popper, in order to corroborate a scientific theory, you should try to falsify it, and the best corroboration comes from repeated attempts to falsify it (in a way in which it seems likely that the theory will be rejected if it is in fact false, which is what Mayo's "severity" concept is about). Apart from statistical error probabilities, this is what testing R as H0 actually allows you to do, so there is a good reason for using R as H0.
If you choose R as H1, you can find evidence against the H0, which is not normally quite what you want, unless you interpret evidence against H0 as evidence in favour of your H1, which isn't necessarily granted (model assumptions may be violated for both H0 and H1, so they may both technically be wrong, and rejecting H0 doesn't "statistically prove" H1), although many would interpret a test in this way. It has value only to the extent that somebody who opposes your R argues that H0 might be true (as in "your hypothesised real effect does not exist, it's all just due to random variation"). In this case a test with R as H1 has at least the potential to indicate strongly against that H0. You can even go on and say it'll give you evidence that H0 is violated "in the direction of H1", but as said before there may be other explanations for this than that H1 is actually true. Also, "the direction of H1" is rather imprecise and doesn't amount to any specific parameter value or its surroundings. It may depend on the application area how important that is. A homeopath may be happy enough to significantly show that homeopathy does something good rather than having its effect explained by random variation only, regardless of how much good it actually does, whereas precise numerical theories in physics/engineering, say, can hardly be backed up by just rejecting a random variation alternative.
The "equivalence testing" idea would amount to specifying a rather precise R (specific parameter value and small neighbourhood) as H1 and potentially rejecting a much bigger part of the parameter space on both sides of R. This would then be more informative, but has still the same issue with model assumptions, i.e., H0 and H1 may both be wrong. (Obviously model assumption diagnoses may help to some extent. Also even if neither H0 nor H1 is true, arguably some more distributions can be seen as "interpretatively equivalent" with them, e.g., two equal non-normal distributions in a two-sample setup where a normality-based test is applied, and actually may work well due to the Central Limit Theorem even for many non-normal distributions.)
So basically you need to choose what kind of statement you want to allow your test to back up. Choose R as H0 and the data can only reject it. Choose R as H1 and the data can reject the H0, and how valuable that is depends on the situation (particularly on how realistic the H0 looks as a competitor; i.e., how informative it actually is to reject it). The equivalence test setup is special by allowing you to use a rather precise R as H1 and reject a big H0, so the difference between this and rejecting a "random variation/no effect" H0 regards the precise or imprecise nature of the research hypothesis R to be tested.
| null |
CC BY-SA 4.0
| null |
2023-04-17T11:25:08.847
|
2023-04-17T20:32:06.800
|
2023-04-17T20:32:06.800
|
247165
|
247165
| null |
613203
|
1
|
613235
| null |
1
|
65
|
I've fit a linear regression model to determine the effect of personality (position on a scale) on changes in reaction time. I adjusted for age as a confounding factor. I've got the output, but there are so many p-values that I don't know what each of them means.
Here is the output:
[](https://i.stack.imgur.com/vj5kL.png)
More importantly, what does the p-value 0.03158 at the end signify?
|
Interpreting output from a linear regression model?
|
CC BY-SA 4.0
| null |
2023-04-17T11:15:38.023
|
2023-04-17T16:12:12.363
|
2023-04-17T15:53:14.613
|
345611
|
374911
|
[
"r",
"regression",
"statistical-significance",
"p-value",
"interpretation"
] |
613204
|
1
| null | null |
1
|
11
|
A single class of students was given a session on a subject. This subject was split into example A (taught using an active method) and example B (taught using a passive method). Students were examined on the topic prior to the session, immediately afterward, and one week afterward. The test was identical each time. The test was anonymous, so I am unable to assess individuals' test scores.
I'm trying to find the appropriate tests to assess the following hypotheses:
- Students' marks will be higher for example A than example B (taking into account prior knowledge).
- Long-term retention (after one week) will be better for example A than example B.
I'm a little confused about which statistical tests would be appropriate, as I don't have paired values, so I am just assessing the class in general.
Any ideas? TIA
|
Appropriate statistical test for comparing student test scores
|
CC BY-SA 4.0
| null |
2023-04-17T11:45:13.393
|
2023-04-17T11:45:13.393
| null | null |
385898
|
[
"hypothesis-testing",
"statistical-significance"
] |
613205
|
1
| null | null |
1
|
21
|
Is there any method I can use to estimate the probability of each component for each model, using a finite mixture model in R?
|
Method to estimate probability of each component for each model using finite mixture model
|
CC BY-SA 4.0
| null |
2023-04-17T11:48:44.493
|
2023-04-17T11:48:44.493
| null | null |
385902
|
[
"probability"
] |
613207
|
1
| null | null |
0
|
20
|
When applying ARDL, I used the Dickey-Fuller unit root test and found variables integrated of order I(1) and I(0).
For running the ARDL model, do I use my original data (not differenced) or do I use the differenced data?
|
ARDL Unit root df
|
CC BY-SA 4.0
| null |
2023-04-17T12:10:02.953
|
2023-04-17T16:20:22.737
|
2023-04-17T16:20:22.737
|
53690
|
383188
|
[
"unit-root",
"differencing",
"ardl"
] |
613209
|
1
| null | null |
0
|
17
|
I'm measuring the effect of tree genus and location on the % change in diameter at breast height of trees (converted to a 0-100 scale rather than a %). I ran a two-way ANOVA using the following code:
```
swp_model <- lm(percentage ~ Species + Ward + Species : Ward, data = trees)
```
However, I was diagnosing the model and was unsure whether these results might indicate a non-parametric test would be better suited: [](https://i.stack.imgur.com/wnnLX.png)
[](https://i.stack.imgur.com/Pj6eJ.png)
[](https://i.stack.imgur.com/DHZ9f.png)
The data has a strong positive skew as seen here:
[](https://i.stack.imgur.com/uSNaU.png)
I tried a few transformations of the percentage variable to see if that made the data a better fit for the test. Taking the square root seemed to have the best result, as this mostly fixed the normal-probability assumption, but not the other two. At this stage, would it be better to use a non-parametric test for this data? If so, which would be the most appropriate?
|
Is this data appropriate for a 2-way ANOVA? If not is there a non-parametric alternative?
|
CC BY-SA 4.0
| null |
2023-04-17T12:14:27.997
|
2023-04-17T12:25:56.803
|
2023-04-17T12:25:56.803
|
345611
|
385904
|
[
"anova",
"nonparametric",
"residuals",
"normality-assumption"
] |
613210
|
2
| null |
315705
|
0
| null |
This recent [article](https://arxiv.org/abs/2212.12478) presents some benchmarks for image classification on small datasets. It is a clear review with many references on state-of-the-art methods and presents many comparisons on small datasets such as medical image sets.
One of the key points is that, especially for small datasets, careful hyperparameter optimization is essential to obtain an objective comparison between models. The authors also present a benchmark to enable a consistent comparison between different approaches. This benchmark consists of 5 datasets spanning various domains.
If the link breaks in the future, the article is "Image Classification with Small Datasets: Overview and Benchmark" by Lorenzo Brigato et al., 2022. You can find it on arXiv, a repository of preprint scientific articles.
| null |
CC BY-SA 4.0
| null |
2023-04-17T12:20:11.440
|
2023-04-17T13:06:43.010
|
2023-04-17T13:06:43.010
|
379875
|
379875
| null |
613211
|
1
| null | null |
0
|
12
|
I'm attempting to implement an extended Kalman filter (EKF) to estimate an angle $\alpha$ given measurements of two scalars $x$ and $y$, where the measurement function is $\alpha=\arctan\left(\frac{y}{x}\right)$. With the measurement function in the form $y=h(x)$, this gives the following
$$\frac{y}{x}=\tan(\alpha)$$
This is a case of requiring two measurements for the scalar state to be observable, but not being able to reconstruct two measurements from the a priori or predicted state. Is it possible to implement a filter for a measurement function of this form and, if so, how would one approach it?
Given that the measurement covariances for both $x$ and $y$ are known, could I treat $\frac{y}{x}$ as a measurement with an associated covariance?
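To make this last idea concrete, here is a first-order (delta-method) sketch of propagating an assumed covariance of $(x, y)$ to the derived angle measurement $\alpha = \operatorname{atan2}(y, x)$; the numbers are made up and this is not a full EKF implementation:
```
import numpy as np

x, y = 3.0, 1.5                      # hypothetical measured scalars
R = np.array([[0.04, 0.0],           # hypothetical measurement covariance of (x, y)
              [0.0, 0.09]])

alpha_meas = np.arctan2(y, x)        # pseudo-measurement of the angle

# Jacobian of atan2(y, x) with respect to (x, y)
r2 = x**2 + y**2
J = np.array([[-y / r2, x / r2]])

R_alpha = J @ R @ J.T                # 1x1 covariance of the pseudo-measurement
# R_alpha could then serve as the measurement noise in a standard (E)KF update
# with measurement function h(state) = alpha.
```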
Thanks in advance!
|
Extended Kalman Filter for estimating angle using tan measurement function and two measurements
|
CC BY-SA 4.0
| null |
2023-04-17T12:23:44.610
|
2023-04-17T12:23:44.610
| null | null |
382694
|
[
"estimation",
"covariance",
"kalman-filter",
"nonlinear"
] |
613213
|
1
| null | null |
0
|
51
|
## Questions
I trained two models by using two different data sets.
```
model = pmd.auto_arima(data, trend='ct')
```
- What is the intercept in the output of `print(model)`?
I don't understand why model 1 doesn't show an intercept but model 2 does, given that both intercepts' p-values are small enough.
- In print(model.summary()), sometimes results are like SARIMAX(0, 1, 0)x(0, 0, [1], 12), while other times results are like SARIMAX(0, 0, 1)x(0, 0, 1, 12). What does [] mean?
## Results
### Model 1
```
# print(model)
ARIMA(0,1,0)(0,0,1)[12]
# print(model.summary())
SARIMAX Results
============================================================================================
Dep. Variable: y No. Observations: 49
Model: SARIMAX(0, 1, 0)x(0, 0, [1], 12) Log Likelihood -1457.746
Date: Sun, 16 Apr 2023 AIC 2924.292
Time: 11:40:50 BIC 2934.662
Sample: 12-01-2018 HQIC 2926.494
- 12-01-2023
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept 8.857e+06 2769.690 3197.669 0.000 8.85e+06 8.86e+06
drift -1.344e+05 1.72e+05 -0.783 0.434 -4.71e+05 2.02e+05
ma.S.L12 0.2522 0.147 1.715 0.086 -0.036 0.540
sigma2 4.65e+15 2.85e-05 1.63e+20 0.000 4.65e+15 4.65e+15
===================================================================================
Ljung-Box (L1) (Q): 1.35 Jarque-Bera (JB): 59.46
Prob(Q): 0.24 Prob(JB): 0.00
Heteroskedasticity (H): 5.93 Skew: -0.83
Prob(H) (two-sided): 0.00 Kurtosis: 7.04
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.39e+45. Standard errors may be unstable.
```
### Model 2
```
# print(model)
ARIMA(0,0,1)(0,0,1)[12] intercept
# print(model.summary())
SARIMAX Results
==========================================================================================
Dep. Variable: y No. Observations: 56
Model: SARIMAX(0, 0, 1)x(0, 0, 1, 12) Log Likelihood -1422.761
Date: Sun, 16 Apr 2023 AIC 2834.512
Time: 10:43:18 BIC 2845.111
Sample: 06-01-2018 HQIC 2842.139
- 01-01-2023
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept -5.499e+06 2509.045 -2191.829 0.000 -5.5e+06 -5.49e+06
drift 3.894e+05 1.59e+05 2.456 0.014 7.87e+04 7e+05
ma.L1 -0.3393 0.089 -3.827 0.000 -0.513 -0.166
ma.S.L12 0.5483 0.119 4.625 0.000 0.316 0.781
sigma2 1.585e+15 4.69e-05 3.38e+19 0.000 1.58e+15 1.58e+15
===================================================================================
Ljung-Box (L1) (Q): 0.07 Jarque-Bera (JB): 678.34
Prob(Q): 0.79 Prob(JB): 0.00
Heteroskedasticity (H): 169.83 Skew: 2.79
Prob(H) (two-sided): 0.00 Kurtosis: 16.63
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.22e+35. Standard errors may be unstable.
```
|
Questions about the summary of auto ARIMA (Python pmdarima)
|
CC BY-SA 4.0
| null |
2023-04-17T12:44:00.020
|
2023-04-17T12:44:00.020
| null | null |
365006
|
[
"machine-learning",
"python",
"arima",
"intercept"
] |
613214
|
1
| null | null |
0
|
20
|
What is the formula for calculating the minimum sample size needed to achieve a specified relative precision for a given sample statistic's standard error, at a specified level of significance?
Mostly I find formulas specific for the [sample](https://www.itl.nist.gov/div898/handbook/prc/section2/prc222.htm) [mean](https://stats.stackexchange.com/questions/429398/determining-sample-size-for-given-confidence-interval-and-margin-of-error). But I'm looking for the text-book description of the general case. Ideally this should already take into account the [case](https://stats.stackexchange.com/questions/7004/calculating-required-sample-size-precision-of-variance-estimate) where the distribution of the sample statistic is unknown and one needs to find the standard error via bootstrapping.
## My approach
What I tried by reverse engineering is this relation for the standard error of a sample statistic $SE(\theta)$, the relative precision $\delta$ and significance level $\alpha$ $(\delta, \alpha \in [0,1])$.
$$
\begin{align}
SE(\theta) \overset{!}{≥} \frac{\delta \cdot \theta}{z(\alpha)} \, , \label{eq:one}\tag{1}
\end{align}
$$
with the critical value $z$ of a [one-/two-tailed hypothesis test](https://en.wikipedia.org/wiki/One-_and_two-tailed_tests) for a given probability distribution. Since in general $\text{SE}(\theta) \sim \frac{1}{\sqrt{n}}$, one can solve (1) for $n$ to compute the appropriate sample size.
### Example
This [article](https://web.eecs.umich.edu/%7Efessler/papers/files/tr/stderr.pdf) summarizes the formulas for $SE(\theta)$ of sample mean $\bar{X}$, sample variance $S^2$ and standard deviation $S$ in the case of a normally distributed data set:
$$
\begin{align*}
\sigma_{\bar{X}} &= \frac{\sigma}{\sqrt{n}} \label{eq:two} \tag{2} \\
\sigma_{S^2} &= \sigma^2\sqrt{\frac{2}{n-1}} \label{eq:three} \tag{3} \\
\sigma_{S} &\approx \frac{1}{\sqrt{2(n-1)}} \quad \text{(for n > 10) } \, .\label{eq:four} \tag{4}
\end{align*}
$$
This leads to the minimum number of required samples $n$:
$$
\begin{align}
n_{\bar{X}} &= \left(\frac{\sigma}{\bar{X}} \cdot \frac{z_{1-\alpha/2}}{\delta}\right)^2 \label{eq:five} \tag{5} \\
n_{S^2} &= 2\left(\frac{z_{1-\alpha/2}}{\delta}\right)^2 +1 \label{eq:six} \tag{6} \\
n_{S} &= \frac{1}{2}\left(\frac{z_{1-\alpha/2}}{\delta}\right)^2 +1 \label{eq:seven} \tag{7}
\end{align}
$$
The calculation implemented in Python:
```
import numpy as np
from scipy import stats


def get_n(
    stat: str,
    alpha: float,
    delta: float,
    n_sides: int = 2,
    loc: float = 0.0,
    scale: float = 1.0,
) -> int:
    """
    Computes the required number of samples to draw from a normal distribution
    in order to estimate the mean, variance or standard deviation with
    specified precision and significance.

    Parameters
    ----------
    stat : str
        Specifies the statistic for which to compute the number of samples
    alpha : float
        Significance level used to compute the critical value of the
        tailed hypothesis test
    delta : float
        Required relative precision of the parameter estimation
    n_sides : int, optional
        Specifies whether a one- or two-tailed hypothesis is used to compute
        the critical value. Default is 2.
    loc : float, optional
        Position of the normal distribution. Default is 0.0
    scale : float, optional
        Width scale of the normal distribution. Default is 1.0

    Returns
    -------
    n : int
        Number of samples
    """
    # Critical value for tailed hypothesis test
    z: float = stats.norm.ppf(1 - alpha / n_sides)
    # Number of samples
    n: float
    if stat == "mean":
        n = ((scale * z) / (loc * delta)) ** 2
    elif stat == "var":
        n = 2 * (z / delta) ** 2 + 1
    elif stat == "std":
        n = 1.0 / 2 * (z / delta) ** 2 + 1
    else:
        raise ValueError(
            "Expected string from ['mean', 'var', 'std'] "
            f"for parameter 'stat', got '{stat}' ({type(stat).__name__})"
        )
    # Return integer
    return int(np.ceil(n))
```
For example, running this in the case of the sample mean, $\delta = 0.01$, $\alpha=0.05$ (two-tailed test) and normal distribution centered around 42 with width 2.3:
```
loc = 42 # known population mean
scale = 2.3 # known population standard deviation
alpha = 0.05 # significance in %
delta = 0.01 # relative accuracy in %
stat = "mean"
n = get_n(
stat=stat,
alpha=alpha,
delta=delta,
loc=loc,
scale=scale,
)
```
`--> n = 116 `
A simple simulation to do the validation confirms it (using `np.random.normal(loc=loc, scale=scale, size=n)` and comparing the sample mean `np.mean()` to the population mean `loc`):
[](https://i.stack.imgur.com/b9tqV.png)
[](https://i.stack.imgur.com/etJz2.png)
|
General method for determining the sample size given the standard error
|
CC BY-SA 4.0
| null |
2023-04-17T12:48:55.370
|
2023-04-17T12:59:33.313
|
2023-04-17T12:59:33.313
|
385649
|
385649
|
[
"statistical-significance",
"sample-size",
"standard-error"
] |
613215
|
1
| null | null |
1
|
9
|
I have developed a DL model to classify images and I am trying to optimize the hyperparameters. Reading the literature, I found the ASHA algorithm and its implementation in SHERPA, as reported [here](https://github.com/sherpa-ai/sherpa/blob/master/examples/keras_mnist_mlp_successive_halving.ipynb). In the example there is something that I cannot understand. I have a maximum resource $R=9$, a reduction factor $\eta = 3$, a minimal resource $r=1$ and an early-stopping rate $s=0$. To have a finished configuration, as reported in the link, I need to train 9 configurations for 1 epoch, then select $3$ of them and train them for $3$ more epochs, and finally train the surviving configuration for an additional $9$ epochs. Therefore, the number of trials should be $9+3+1 = 13$, but in the link it is $12$. I do not understand why.
|
Hyperparameter Optimization with ASHA algorithm in Sherpa library: strange number of trials
|
CC BY-SA 4.0
| null |
2023-04-17T12:49:59.993
|
2023-04-17T12:49:59.993
| null | null |
379875
|
[
"python",
"hyperparameter"
] |
613217
|
2
| null |
613197
|
1
| null |
I think you have misinterpreted how you calculate the significance threshold for a chi-squared test. See for example from [this link about chi-squared independence tests,](https://www.scribbr.com/statistics/chi-square-test-of-independence/) which shows that the chi-square value exceeds the cutoff and is significant.
[](https://i.stack.imgur.com/xT8w0.png)
Your total chi-square value of $53.03$ is really high. You are correct that this test requires $8$ degrees of freedom and consequently a cutoff of $15.507$. But your interpretation is wrong: your chi-squared value has far exceeded the cutoff, which is shown by your incredibly low p-value (which is in scientific notation to denote how low it is). Therefore it is no surprise that your test has a Cramér's V coefficient like yours. There is clearly some association between your two factors.
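If it helps, both numbers can be checked quickly; a small sketch in Python (the equivalent `qchisq`/`pchisq` calls in R behave the same way):
```
from scipy.stats import chi2

df = 8
crit = chi2.ppf(0.95, df)     # critical value at alpha = 0.05 -> about 15.51
p = chi2.sf(53.03, df)        # p-value for the observed statistic -> about 1e-8
# 53.03 > 15.51 and p < 0.05, so the null hypothesis of independence is rejected
```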
Just as a side note, it appears your first row total is incorrect (it should be $63$ if I'm not mistaken). I would adjust that so your test statistics are completely accurate.
| null |
CC BY-SA 4.0
| null |
2023-04-17T12:51:49.440
|
2023-04-17T12:51:49.440
| null | null |
345611
| null |
613218
|
1
| null | null |
0
|
19
|
I've been collecting real estate transactions in my neighborhood for a while and have data about resales since 2017. The neighborhood was established in 2005, so much longer than what I have data for.
I'm looking to estimate average residence time (time between sales of a single home) in two ways:
- (# of total homes) / (# sales in the last 12 months)
This is more or less a proxy estimate since it doesn't include an actual sale of a single home twice, as if it were a longitudinal sample. The average residence time that I get out of this is very comparable to the expected (10-12 years).
- actual longitudinal tracking of sales
This method (for now) is severely flawed because the length of the dataset is much shorter than the age of the neighborhood. I cannot seriously state that homes are being sold every 22 months based on the small fraction of homes that I have longitudinal data for.
The figure shows the CDF for (time of sale)-(time of next listing) for the homes I have data for.
[](https://i.stack.imgur.com/uTeOk.png)
Besides collecting data for the next 20 years or so, is there a way to get an estimate out of the data I already have?
|
Real estate: estimating average residence time based on limited time data
|
CC BY-SA 4.0
| null |
2023-04-17T12:52:44.413
|
2023-04-17T12:52:44.413
| null | null |
334797
|
[
"estimation"
] |
613219
|
2
| null |
574135
|
0
| null |
Depending on the model you are using, rescaling the images can help improve performance. It means going from the [0, 255] range to the [0, 1] or [-1, 1] range for pixel values; which one depends on the network you are using. In TensorFlow (the module that I use for DL computations) you can use [this](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Rescaling) function. You could also use data augmentation in your preprocessing layers, as described [here](https://www.tensorflow.org/tutorials/images/data_augmentation). It can help improve performance and avoid overfitting.
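A minimal sketch of what such a preprocessing stack might look like with recent Keras versions (the layer names are the standard ones from the linked docs; the rest of the model is omitted):
```
import tensorflow as tf

preprocessing = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),        # map pixel values from [0, 255] to [0, 1]
    tf.keras.layers.RandomFlip("horizontal"),    # simple data augmentation
    tf.keras.layers.RandomRotation(0.1),
])
# These layers can be placed at the start of the model so that augmentation
# is applied on the fly during training.
```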
| null |
CC BY-SA 4.0
| null |
2023-04-17T12:59:14.517
|
2023-04-17T12:59:14.517
| null | null |
379875
| null |
613222
|
1
| null | null |
2
|
20
|
I'm working on a regression model that predicts the age of clients. The problem is that there aren't many variables to work with. So my question is: is it better to study correlations and contingency tables for the variables and then select the ones to use for regression, OR is it better to use all of the variables and then see which ones contribute more to the model through t-statistics and likelihood ratios?
|
Is it better to select variables before regression or performing regression and then performing tests to select the variables?
|
CC BY-SA 4.0
| null |
2023-04-17T14:26:54.400
|
2023-04-17T14:26:54.400
| null | null |
306573
|
[
"regression",
"statistical-significance",
"multiple-regression",
"t-test",
"likelihood-ratio"
] |
613223
|
1
| null | null |
1
|
54
|
I'm trying to perform quantile regression in R on a multivariate data set, where I want to visualize the 10% and 90% bounds. I also intend to calculate PICP - Prediction interval coverage probability according to
[](https://i.stack.imgur.com/6lBhj.png)
Where
[](https://i.stack.imgur.com/Zf0CA.png)
$LPI_i$ and $UPI_i$ are respectively the lower and upper bounds of the prediction interval constructed for the $i$th sample, and $t_i$ is the $i$th target value. I make predictions with another model - in the example I use a linear model, but my intention is also to include other models such as a GP.
Currently, I'm a bit stuck on the visualization part, as I'm not sure how to visualize the lines for the upper and lower bound, when I have multiple covariates.
I've tried to create a reproducible example below which highlights what I want to do. I'm using the `quantreg` package:
```
# Create a covariate
set.seed(123456)
x1 <- rnorm(1000, mean = 75, sd = 5)
# Create a dummy encoding for the hours of the day and remove first column to avoid dummy trap.
dummies <- sample(1:24, 1000, replace = TRUE)
dummies <- model.matrix(~factor(dummies))[,-1]
colnames(dummies) <- paste0("hour", seq(2:24))
#Create a response variable
set.seed(123456)
y <- rnorm(1000, mean = 50 + x1 + dummies %*% (10 * 1:23), sd = 10)
# Combine into a data frame
df <- cbind.data.frame(y, x1, dummies)
# Split into some train and test data
set.seed(123456)
id <- sample(1000, 900) # 90-10 ratio
train <- df[id, ]
test <- df[-id,]
# Load Quantile Regression package
library(quantreg)
# Fit Quantile model for tau = 0.1 & tau = 0.9
QR_fit_1 <- rq(y ~., data = train, tau = 0.1)
QR_fit_9 <- rq(y ~., data = train, tau = 0.9)
#### Results in: Error in rq.fit.br(x, y, tau = tau, ...) : Singular design matrix
#### Add small jitter according to https://stats.stackexchange.com/questions/78022/cause-of-singularity-in-matrix-for-quantile-regression
jittered_y <- jitter(df[, 1])
train_jit <- train
test_jit <- test
train_jit$y <- jittered_y[id]
test_jit$y <- jittered_y[-id]
# Fit Quantile models again - still issues
QR_fit_1 <- rq(y ~., data = train_jit, tau = 0.1)
QR_fit_9 <- rq(y ~., data = test_jit, tau = 0.9)
# From the documentation, I try the suggested approach of changing method on page 5
# https://cran.r-project.org/web/packages/quantreg/vignettes/rq.pdf
QR_fit_1 <- rq(y ~., data = train, tau = 0.1, method ="fn")
QR_fit_9 <- rq(y ~., data = train, tau = 0.9, method ="fn")
# Visualize y with the upper & lower quantiles
plot(c(train$y), xlim = c(0, 1000))
points(QR_fit_1$fitted.values, col=2)
points(QR_fit_9$fitted.values, col=4)
abline(QR_fit_1, col=2)
abline(QR_fit_9, col=4)
axis(1, xlim = c(0, 1100))
# Make predictions with another model, i.e linear model
fit_mod <- lm(y ~., data = train)
y_pred <- predict(fit_mod, test)
# Add predicted data points to graph
test_fit_1 <- predict(QR_fit_1, test)
test_fit_9 <- predict(QR_fit_9, test)
points(c(901:1000), y_pred, col = 3)
## Compute PICP
???
```
Any suggestions on how I could do this would be greatly appreciated. As the ??? indicates, I'm a bit unsure of how to calculate the PICP, as I'm not sure how to obtain the lower and upper bounds.
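For reference, once the lower and upper quantile predictions for the test set are available, the PICP is just the fraction of targets that fall inside their interval. A minimal sketch with hypothetical arrays (in Python, but the same one-line comparison works in R with the outputs of `predict(QR_fit_1, test)` and `predict(QR_fit_9, test)`):
```
import numpy as np

rng = np.random.default_rng(1)
y_test = rng.normal(size=100)                       # hypothetical test targets
lower = y_test - rng.uniform(0.5, 1.5, 100)         # stand-in for the 10% quantile predictions
upper = y_test + rng.uniform(0.5, 1.5, 100)         # stand-in for the 90% quantile predictions

picp = np.mean((y_test >= lower) & (y_test <= upper))   # coverage of the 10%-90% band
```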
|
Quantile regression visualization and evaluation
|
CC BY-SA 4.0
| null |
2023-04-17T14:28:03.247
|
2023-04-17T14:28:03.247
| null | null |
320876
|
[
"r",
"machine-learning",
"data-visualization",
"prediction-interval",
"quantile-regression"
] |
613224
|
1
| null | null |
1
|
45
|
Let's say we have a univariate dataset x that follows a Gaussian with parameters (m, s).
Under a frequentist methodology, m and s are estimated using MLE and x is modeled as N(m_hat, s_hat).
Using the Bayesian approach, let's say we put a prior on m. Then the model for x becomes f(x|m) ~ N(m, s). Computing the joint distribution f(x, m) and then the marginal distribution f(x), wouldn't we get a fatter-tailed marginal distribution f(x) than the N(m_hat, s_hat) obtained from the frequentist approach?
If x is truly normal, why are we modeling it using a fat-tail distribution? Or am I missing anything?
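For reference, in the known-variance case the marginal can be written out directly; with a normal prior on the mean it stays normal, just wider:
$$
m \sim N(m_0, \tau^2), \qquad x \mid m \sim N(m, s^2) \;\Longrightarrow\; x \sim N(m_0,\, s^2 + \tau^2).
$$
Genuinely heavier-than-normal (Student-t) tails only arise once the variance $s^2$ is also given a prior, e.g. in the conjugate normal-inverse-gamma setup.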
|
Does Bayesian modeling result in fat-tail distributions?
|
CC BY-SA 4.0
| null |
2023-04-17T14:42:12.383
|
2023-04-17T14:42:12.383
| null | null |
81308
|
[
"bayesian"
] |
613225
|
1
| null | null |
0
|
20
|
I've read that partial pooling (multilevel / hierarchical models) can balance the extremes: on one hand, zero pooling, where every group receives its own parameters, not influenced by other groups; and on the other, full pooling, where if one group has more data (or outliers) it can dominate the posterior distribution.
I'm curious how partial pooling avoids devolving into the full-pooling scenario.
For example, consider that one group produces 5x the data that other groups produce. As I understand it, this group would not dominate the posterior distribution to the extent that would be realized in a fully pooled model.
However, I don’t understand why. Each group has its own parameters and they’re drawn from common prior distributions.
But wouldn’t the observations with more data points dominate these inferred prior distributions, devolving into the fully pooled model?
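For intuition, in the simplest normal-normal case the partially pooled estimate of group $j$'s mean is a precision-weighted compromise between that group's own mean and the overall mean:
$$
\hat\theta_j \approx \frac{n_j/\sigma^2}{n_j/\sigma^2 + 1/\tau^2}\,\bar y_j \;+\; \frac{1/\tau^2}{n_j/\sigma^2 + 1/\tau^2}\,\mu,
$$
where $n_j$ is the group's sample size, $\sigma^2$ the within-group variance, $\tau^2$ the between-group variance and $\mu$ the population mean. A data-rich group mostly keeps its own estimate (its weight on $\bar y_j$ approaches 1) rather than pulling every other group toward it, which is one way to see why partial pooling does not collapse into full pooling.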
|
How multilevel Bayesian models handle group imbalance
|
CC BY-SA 4.0
| null |
2023-04-17T14:56:22.533
|
2023-04-17T14:56:22.533
| null | null |
288172
|
[
"bayesian",
"hierarchical-bayesian"
] |
613226
|
2
| null |
612991
|
7
| null |
The idea is clear in a picture: if we were to sketch the graph of the distribution function $F_X$ of $X,$ we may conceive of $g$ as locally shifting all horizontal points various amounts (but always consistently, never allowing any overlapping), thereby distorting the graph of $F_X$ into the graph of $F_Y$. The condition $F_X=F_Y$ means that this distortion is purely horizontal: the height $F_X(t)$ at any point $t$ must remain the same as the height $F_Y(t).$ Thus, if $g(t)\ne t,$ $(t,F_X(t))$ and $(g(t),F_X(g(t))$ are always part of a horizontal line segment in the graph of $F_X$: but over that segment, $X$ has zero probability (because its distribution function $F$ does not change over that segment).
The only real issue is showing that it's legitimate to sum these zero probabilities over potentially infinitely many such segments. This is related to a basic property of real numbers.
---
Let's reason from the definitions.
- $X$ is a real-valued random variable. Let $F_X(x) = \Pr(X\le x)$ be
its distribution function.
- We are given $g:\mathbb R\to \mathbb R$ where $s\lt t$ implies
$g(s)\lt g(t)$ ($g$ is increasing) and $Y = g\circ X$ is also a
random variable.
The condition in the statement, that $F_X = F_Y,$ therefore means that for all numbers $t,$
$$\Pr(X\le t) = F_X(t) = F_Y(t) = \Pr(Y\le t) = \Pr(g(X)\le t).\tag{*}$$
To adopt an economical notation, when $a$ and $b$ are real numbers, $(a,b)$ is the open interval with endpoints at $a$ and $b$ (even when $ b\lt a$). When $a=b,$ this is the empty set. Condition $(*)$ implies that for all numbers $t$ where $g(t)\le t,$
$$\Pr(X\in (g(t), t]) = \Pr(X\in (-\infty, t] \setminus (-\infty, g(t)]) = F_X(t) - F_Y(t) = 0.\tag{**}$$
The same result obtains when $g(t)\gt t.$ Thus, writing $\mathcal A$ for the event $X\ne Y,$ we may characterize it as
$$X\ne Y:\ X\in \mathcal A = \bigcup_{t\in\mathbb R}\, (t, g(t))$$
This is an uncountable union of open intervals: no axioms or theorems of probability permit us to draw any conclusion about its probability directly. The key is to recall that $\mathbb R$ is locally compact: this implies that on any bounded closed interval, such as $[-n,n]$ for positive integers $n,$ a finite number of real numbers $t_{n,i},$ $i=1,2,\ldots, N(n),$ can be found for which
$$\mathcal A \cap [-n,n] = \bigcup_{i=1}^{N(n)}\, (t_{n,i}, g(t_{n,i})).$$
(See the [Heine-Borel theorem](https://en.wikipedia.org/wiki/Heine%E2%80%93Borel_theorem).)
Therefore the probability of this event is not greater than the sum of the probabilities of the intervals of which it is comprised and since by $(**)$ each of those intervals is contained within a zero-probability event,
$$\Pr(\mathcal A \cap [-n,n]) = 0.$$
Taking the countable union of these sets for $n=1,2,3,\ldots,$ and applying the sigma-additivity property of probability shows
$$\Pr(\mathcal A) = 0 = \Pr(X\ne Y),$$
QED.
| null |
CC BY-SA 4.0
| null |
2023-04-17T14:59:56.427
|
2023-04-17T18:20:26.843
|
2023-04-17T18:20:26.843
|
919
|
919
| null |
613228
|
1
| null | null |
0
|
63
|
I want to do some basic OLS regression analysis but I am confused about the output given by the `margins()` function from the [margins package](https://cran.r-project.org/package=margins). If I calculate the results by hand, I get completely different results and now I don't know what result to trust.
Here is an example code:
```
tab <- lm(populism_avg_std ~ eligibility_dummy *
core_globalizationwinners + male + east_europe,
data = tomodel_pid_full_pop)
margins <- lm(populism_avg_std ~ eligibility_dummy *
core_globalizationwinners + male + east_europe,
data = tomodel_pid_full_pop) %>%
margins(at = list(core_globalizationwinners = c("0", "1"))) %>%
summary() %>%
filter(factor == "eligibility_dummy")
```
In the regression I get the following (relevant) coefficients:
```
eligibility_dummy = -0.24
core_globalizationwinners = -0.27
eligibility_dummy:core_globalizationwinners = 0.22
```
Both `eligibility_dummy` and `core_globalizationwinners` are dummy variables. Calculating the marginal effect by hand for `eligibility_dummy = 1` and `core_globalizationwinners = 0` yields:
```
-0.24*1 + (-0.27)*0 + 0.22*0 = -0.24
```
and I get exactly that result from the `margins()` output. Doing it for `core_globalizationwinners = 1`, I expect to get about
```
-0.24*1 + (-0.27)*1 + 0.22*1 = -0.29.
```
However, the output of the `margins()` function yields an estimate of -0.015.
As far as I understood, and to [quote the author](https://cran.r-project.org/web/packages/margins/vignettes/Introduction.html):
>
The at argument allows you to calculate marginal effects at representative cases (sometimes “MERs”) or marginal effects at means - or any other statistic - (sometimes “MEMs”), which are marginal effects for particularly interesting (sets of) observations in a dataset. This differs from marginal effects on subsets of the original data (see the next section for a demonstration of that) in that it operates on a modified set of the full dataset wherein particular variables have been replaced by specified values. This is helpful because it allows for calculation of marginal effects for counterfactual datasets (e.g., what if all women were instead men? what if all democracies were instead autocracies? what if all foreign cars were instead domestic?).
This seems possible because the "at = " creates counterfactuals. But why does only one coefficient change (the one for `core_globalizationwinner = 1`) and not the other one?
I also tried to subset the data:
```
margins_sub <- lm(populism_avg_std ~ eligibility_dummy *
core_globalizationwinners + male + east_europe,
data = tomodel_pid_full_pop) %>%
margins(data =
tomodel_pid_full_pop[to_model_pid_full_pop$core_globalizationwinners
== 1, ]) %>%
summary() %>%
filter(factor == "eligibility_dummy")
```
but this also gave me the same result as in the "at =" configuration?
Any clarification on why this is the case would be highly appreciated.
|
Understanding the output of margins() in R?
|
CC BY-SA 4.0
| null |
2023-04-17T15:02:05.707
|
2023-05-24T13:54:32.227
|
2023-05-24T13:54:32.227
|
11887
|
385917
|
[
"r",
"interaction",
"interpretation"
] |
613229
|
1
| null | null |
0
|
21
|
I have a dataset with several variables containing average prices from week to week over many years. The problem is that each variable is heavily autocorrelated with itself, and I want to decorrelate each variable by itself so that I can use different statistical tests and actually get meaningful answers for each variable. But I am having trouble finding what type of technique or method I can use to achieve this kind of decorrelation, since each variable autocorrelates differently depending on the lag: it autocorrelates heavily from week to week, but the autocorrelation decays as the lag grows. I first looked at PCA, but that doesn't do this. Mahalanobis decorrelation was mentioned to me, but I am having a hard time finding how to perform that decorrelation in Excel, Stata or Python. If anyone has any technique or method to share, that would be greatly appreciated.
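One standard technique for this kind of problem (not necessarily the only or best one) is to prewhiten each series: fit a simple time-series model to it and work with the residuals, which are approximately uncorrelated. A minimal sketch with statsmodels, using a synthetic AR(1) series as a stand-in for a weekly price column:
```
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic stand-in for one weekly price series with strong autocorrelation
rng = np.random.default_rng(0)
e = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.9 * y[t - 1] + e[t]
prices = pd.Series(y)

# Prewhitening: fit a low-order ARIMA and keep the residuals,
# which are approximately uncorrelated and can be used in further tests.
fit = ARIMA(prices, order=(1, 0, 0)).fit()
decorrelated = fit.resid
```
Simple first differencing (week-over-week changes) is a cruder alternative that often removes much of the autocorrelation as well.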
|
Decorrelation of a singular autocorrelated variable
|
CC BY-SA 4.0
| null |
2023-04-17T15:17:48.060
|
2023-04-17T15:17:48.060
| null | null |
385918
|
[
"autocorrelation"
] |
613230
|
2
| null |
612800
|
0
| null |
You have two predictors that seem to fail the proportional hazards (PH) assumption, one of which is time-varying. I'll describe other ways that might handle the PH problems better, then end with some suggestions for the approach you describe if that's still necessary.
Much can be learned from the [time-dependence vignette](https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf) of the R [survival package](https://cran.r-project.org/web/packages/survival/index.html), and from Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/). It's also worth getting access to the classic text by [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842), which goes into detail about many applications of Cox models. In particular, these references discuss different ways to handle violations of PH.
### Model continuous predictors flexibly
It sounds like you tried a single linear term for `age` in your model, did a PH test, and found a PH violation. If `age` has a more complicated association than that with outcome, however, then the improper modeling of `age` can show up as an apparent violation of PH. Specifying the functional form of the association properly might [fix the PH problem](https://stats.stackexchange.com/q/379416/28500) on its own. A regression spline for `age` is a good way to let the data tell you the appropriate functional form. I find the `rcs()` function in the R [rms package](https://cran.r-project.org/package=rms) to have more useful defaults than the `ns()` function in the `splines` package.
The same approach could fix the problems with your time-varying covariate, if it's a continuous variable. There is no fundamental difference between time-fixed and time-varying covariates in terms of how the regression coefficients are estimated in a Cox model, except that with a time-varying covariate the algorithm picks out, at each event time, the covariate values that happen to be in place for each at-risk individual at that specific event time. Fitting the proper functional form for the association with outcome thus might also fix the PH issue for your time-varying covariate.
### Does lack of PH matter?
With a large data set it's quite possible to have a "statistically significant" violation of PH that doesn't matter in practice. That's a judgment call, to be made based on your understanding of the subject matter. Even if PH is violated, you end up with a type of event-averaged coefficient estimate that might be adequate for some purposes.
### Handling time-varying covariates
As noted above, at each event time the algorithm looks only at the covariate values in place at that specific time for all individuals still at risk. There is no consideration of past covariate history, just the present values. That might not adequately describe the association of a covariate with outcome. For example, current blood glucose levels might not be associated so strongly with cardiovascular events as are hemoglobin A1C levels, which represent time-averaged blood glucose. Think very carefully about the biology underlying the time-varying covariate, to see if using only instantaneous values makes sense. In some circumstances you might want to model both the trajectory of the time-varying covariate and the time to event; see the [survival task view](https://cran.r-project.org/web/views/Survival.html) for suggestions about joint modeling.
### Your approach
There is no problem (in principle) building a Cox model with time-varying covariates and adjusting those covariates via a function of time to handle a violation of PH. Adjusting time-varying covariates as a function of time requires specifying separate time-adjusted covariate values for each individual at risk at each event time in the data set. The footnote to [this answer](https://stats.stackexchange.com/a/478957/28500) mentions 2 ways to start to do that, although you might need to do some data manipulation yourself to ensure the correct format. You have to make sure that, at each event time, the algorithm can find the correct time-adjusted value of the time-varying covariate for each individual still at risk. The potential problem in practice is that you can end up with extremely large data sets, as a single individual will have one row for each event time in the entire data set during which the individual is at risk.
Stratification by another variable (`ageGroup` here) adds no additional problem; you simply set `ageGroup` as a multi-level categorical predictor and specify a term `strata(ageGroup)` in the `coxph()` function of the `survival` package (or `strat(ageGroup)` if you use the `cph()` function of the [rms package](https://cran.r-project.org/package=rms)). At each event time, the comparisons among covariate values are restricted to individuals within the same stratum as the individual having the event. Sometimes having a large number of strata can lead to practical problems arising from small numbers of individuals within a stratum. Thus, if a spline doesn't fix the PH problem for `age`, I'd recommend modeling a time-varying coefficient for `age` instead as described in the [time-dependence vignette](https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf) rather than breaking `age` up into multiple strata.
### In response to comments
The hazard in a Cox model for an individual $i$ with time-varying covariate values $X_i(t)$ can be written:
$$h_i(t)= h_0(t) \exp(X_i(t) \beta) ,$$
where $h_0(t)$ is the baseline hazard and $\beta$ is the vector of regression coefficients (coefficients assumed for now to be constant in time). That is the form handled directly by the `coxph()` function in the R `survival` package via the counting-process format, with outcomes coded as `Surv(startTime, stopTime, status)`. That form allows for time-fixed covariates too; you just code the same value of a time-fixed covariate into each data row for an individual.
For stratification of such a model, you have two choices. You could assume that only the baseline hazards are different among strata, but the $\beta$ coefficients are shared among strata. Then the above equation, for individual $i$ in stratum $s$, becomes:
$$h_i(t | s)= h_{0,s}(t) \exp(X_i(t) \beta) ,$$
where $h_{0,s}(t)$ is the baseline hazard for stratum $s$. In the `coxph()` function you specify such a fit for strata defined by `ageGroup` by adding a term `+strata(ageGroup)` to the predictors. That's how stratification is usually handled. Again, there is no problem with incorporating time-varying covariates via the counting-process data format, or specifying a time-constant covariate by simply repeating the same value for each data row corresponding to an individual.
It's possible also to allow one or more $\beta$ coefficients to differ among strata. For that, you add an interaction term between the predictor of interest and the strata. For example, if you want the coefficient for `cov1` to vary among age-group strata, include a predictor term `+cov1*strata(ageGroup)`. The statements above for incorporating both time-fixed and time-varying covariates into the model still hold.
| null |
CC BY-SA 4.0
| null |
2023-04-17T15:25:16.517
|
2023-05-06T22:03:06.090
|
2023-05-06T22:03:06.090
|
28500
|
28500
| null |
613231
|
2
| null |
613101
|
2
| null |
You could use the [generalized Normal distribution](https://en.wikipedia.org/wiki/Generalized_normal_distribution)
$$
P(x) \propto \exp\left(-\left|\frac{x-\mu}{s}\right|^\theta\right)
$$
for $\theta>2$.
You could also use a scaled Beta distribution, which would have no probability outside of the range, e.g. $\textrm{Beta}(3,3)$ scaled by a factor of 100.
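For what it's worth, both suggestions are available directly in scipy; a small sketch (the shape, location and scale values are just examples):
```
from scipy import stats

# Generalized normal with shape beta > 2: flatter peak, thinner tails than the normal
gn = stats.gennorm(beta=4, loc=50, scale=20)

# Beta(3, 3) rescaled to the interval [0, 100]: no probability outside that range
sb = stats.beta(3, 3, loc=0, scale=100)

gn_samples = gn.rvs(size=1000)
sb_samples = sb.rvs(size=1000)
```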
| null |
CC BY-SA 4.0
| null |
2023-04-17T15:26:03.117
|
2023-04-18T12:42:05.563
|
2023-04-18T12:42:05.563
|
2126
|
2126
| null |
613232
|
1
| null | null |
0
|
14
|
I'm attempting to compare some economic forecast errors across institutions. I want to see if the mean absolute error is different from one institution to the other. Since the forecast errors apply to the same year for each institution, I was thinking about using a paired t-test, but I'm not sure if I should here. Is it appropriate to use in this scenario, or would a two sample t-test assuming unequal variance be better?
|
Paired t-test or two sample t-test with unequal variance?
|
CC BY-SA 4.0
| null |
2023-04-17T16:01:06.640
|
2023-04-17T16:01:06.640
| null | null |
385921
|
[
"hypothesis-testing",
"t-test",
"paired-data"
] |
613233
|
1
| null | null |
0
|
12
|
When constructing an ADF unit root test, is it OK if the input data is a growth variable? I was thinking about the fact that when you compute the growth rate $\left(\frac{Y_t - Y_{t-1}}{Y_{t-1}}\right)$, the numerator is the same expression you get when differencing ($Y_t - Y_{t-1}$).
|
Unit root test on growth variable?
|
CC BY-SA 4.0
| null |
2023-04-17T16:04:01.903
|
2023-04-17T18:36:29.337
|
2023-04-17T18:36:29.337
|
44269
|
383188
|
[
"unit-root",
"augmented-dickey-fuller"
] |
613234
|
1
|
613258
| null |
0
|
22
|
I am currently unsure about a three-level variable in an linear mixed effects analysis in R (lme4) - I should add that I am new to LMEs and would very much appreciate some very basic advice!
I am trying to compare the results of an ANOVA with a corresponding LME analysis.
Regarding the 3x2x2 ANOVA, I was interested in how the 2x2 interaction might differ across the three levels of a third variable, where the three-way interaction was significant (outcome variable was response time in a behavioural experiment).
Regarding the LME, I sum-to-zero-coded all categorical predictors (since I was told that this would be preferable when assessing such interactions).
In the LME output (table obtained through the summary function/lmerTest package), there are now always two lines provided for the three-level predictor (predictor1.1 and predictor1.2), rather than one (as in the ANOVA analysis).
My question is whether this is correct or whether I made a mistake? Also, in case this is correct, how would one interpret such an outcome? For instance, if I was trying to assess the three-way interaction originally observed in the ANOVA, and the interaction predictor1.1 : predictor2 : predictor3 would turn out significant, whilst the interaction predictor1.2 : predictor2 : predictor3 would not reach significance, how would I report/interpret this finding?
My apologies in advance in case this is a question with a very obvious answer; as I said, I am used to ANOVAs and don't really have anyone around who is familiar with LMEs. I would really appreciate your advice!
Kind regards!
|
Three-level variable in LME analysis - output
|
CC BY-SA 4.0
| null |
2023-04-17T16:10:22.737
|
2023-04-17T20:39:00.450
| null | null |
385920
|
[
"r",
"mixed-model",
"lme4-nlme",
"interaction",
"contrasts"
] |
613235
|
2
| null |
613203
|
0
| null |
Let's go through each part one-by-one.
#### Call
The call section simply spits the formula back at you that was used when modeling your regression. There isn't really much to say about that other than it gives you what you already know.
#### Residuals
Recall that residuals are the errors in your model, or how badly your model guesses. These are standard for a regression like the one you have. The minimum is the furthest your residuals fall below the regression line, in other words how much your regression line overshoots its prediction. The maximum is the opposite: the value that undershoots the prediction the most. 1Q and 3Q are just the quartiles of these residuals. The median should usually be somewhere close to zero (rarely is it exactly zero), as the residuals should on average lie close to the fitted regression line.
#### Coefficients
These are simply all of the terms entered in the model. They usually include the intercept and your slopes (from the predictors you entered). The intercept for a Gaussian-family regression such as this is generally understood as a conditional mean which is contingent upon the other terms in the model. We normally don't invest a lot of time in the point estimate in this row, as it usually doesn't provide a lot of value in this case.
Your slopes, however, tell you whether or not your predictors have some utility. For the "personality" predictor here, the estimate is the slope. For every $1$ unit increase in this predictor, we get a $.005$ decrease in your outcome of "change RT". The standard error and t value here are both used to derive the significance of the slope, which is indicated by the last output in the row. For "personality" the p value is $.7329$ and consequently is not significant. Age however has a low p value and can be deemed "significant" (though this doesn't say much about how strong that effect is).
These coefficients can also be used to build a linear equation. Now that we have all of our coefficient values, we simply put them together into an equation like so:
$$
RT = .074 + (-.005 * Personality) + (.006 * Age)
$$
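To make that concrete, you can plug values into this equation by hand or in R; the predictor values 10 and 30 below are arbitrary placeholders (the coefficients are the rounded ones from the equation above):
```
personality <- 10
age <- 30

## predicted change RT from the fitted equation
rt_hat <- 0.074 + (-0.005 * personality) + (0.006 * age)
rt_hat
#> [1] 0.204
```
In practice you would let `predict()` do this for you on the fitted model object.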
#### Signif. Codes
You can ignore these, as they simply designate which values are significant and are self-evident.
#### The Rest...
I imagine your confusion is the bottom output. The residual standard error is essentially how "precise" the model’s predictions are. A lower RSE is generally more precise, indicating your model is good at predicting. Multiple $R^2$ is the proportion of variance in the outcome explained by the predictors in your model. We typically use the adjusted $R^2$ instead if we have multiple predictors, as the first value can be inflated by simply adding a bunch of predictors in the same model. Here we can see in either case that the model predicts very little variance in the outcome. The bottom values are the model significance terms, which show the f value and degrees of freedom, which are used to test overall model significance. As indicated by Dimitri's link:
>
"The F-test of overall significance indicates whether your linear
regression model provides a better fit to the data than a model that
contains no independent variables."
So this should give you an idea of what your output here is saying.
| null |
CC BY-SA 4.0
| null |
2023-04-17T16:12:12.363
|
2023-04-17T16:12:12.363
| null | null |
345611
| null |
613236
|
1
| null | null |
0
|
13
|
My question is very very similiar to the on asked here: [How much variance is explained by a given subset of variables?](https://stats.stackexchange.com/questions/279843/how-much-variance-is-explained-by-a-given-subset-of-variables)
However, I have mixed data, i.e. both categorical variables (e.g. location) and numerical variables (e.g. team size). This applies to both the complete dataset and the subset.
With qualitative data in the dataset, is multivariate regression as suggested in the previous question still applicable? I'm doubtful, but thankful for your thoughts.
If multivariate regression isn't applicable, do you know how to calculate the explained variance of the subset?
Thanks!
|
How much variance is explained by a given subset of variables (mixed data, both quantitative and qualitative)
|
CC BY-SA 4.0
| null |
2023-04-17T16:17:06.460
|
2023-04-17T16:17:50.580
|
2023-04-17T16:17:50.580
|
385923
|
385923
|
[
"variance",
"categorical-data",
"numerics"
] |
613237
|
1
| null | null |
0
|
11
|
I'm building linear mixed effect models with lmer. At first I was writing the formula like this:
```
model_1 <- lmer(response ~ A + B + C + A:B + A:C + B:C + A:B:C + (1|Subject), data)
```
But I have been adding more terms to the fixed-effects parts so I tried this way I saw:
```
model_2 <- lmer(response ~ A*B*C + (1|Subject), data)
```
I had read that using * would provide the same result as combining + and :, but I have been getting different models and also different results.
Could anyone point me to some resources where I can learn how to do this properly? Or explain the differences between the two models. Thanks!
PS: It's my first time using SE. I tried searching for something similar but could not find it. Sorry if it has been asked before.
|
Difference in operators (lmer)
|
CC BY-SA 4.0
| null |
2023-04-17T16:50:39.400
|
2023-04-17T16:50:39.400
| null | null |
385926
|
[
"r",
"lme4-nlme",
"modeling"
] |
613238
|
1
| null | null |
0
|
95
|
I am modeling treatment effect in a hypothetical case where only a subset of the sample has disease-related impairment on the outcome of interest. I only expect treatment effects in this subset, but due to variability in the outcome (natural variability + measurement noise), the baseline value of the outcome measure gives me imperfect information about which subjects are impaired (but lower values reflect higher probability of impairment).
My model is `val ~ TRT*baseline`, and fitting this model gives 1) a positive main effect of treatment, 2) a negative main effect of baseline, and 3) a negative TRT:baseline interaction (larger effect of TRT for those with low baseline scores).
Using this model, I would like to run a post hoc test that gives me a single effect estimate (& hypothesis test) for individuals that are impaired. I think that would be `TRT + -1*TRT:baseline`, which would be interpreted as the estimated effect size for a subject with a baseline score of -1. Is that right?
I'm open to suggestions for very different ways to model this scenario, but I'm primarily interested to know that I am not misinterpreting this model.
Here is a simulation in R:
```
library(MASS)
library(multcomp)
library(tidyverse)
# Parameters
n = 10000
trt_efct = 0.5
impairment_prob = 0.5
impairment_ef_sz = 1
res_corr = matrix(
c(
1.0, 0.8,
0.8, 1.0
),
2,2
)
# Simulate data
df = data.frame(MASS::mvrnorm(
n,
mu=c(0,0),
Sigma=res_corr
)) %>%
dplyr::mutate(
subject = 1:n,
arm = c(rep("TRT", n/2), rep("PBO", n/2)),
im.flag = stats::runif(n, min = 0, max = 1) >= impairment_prob,
baseline = dplyr::if_else(im.flag, X1 - impairment_ef_sz, X1),
val = dplyr::case_when(
im.flag == 1 & arm == "TRT" ~ (X2 - impairment_ef_sz*(1 - trt_efct)),
im.flag == 1 & arm == "PBO" ~ (X2 - impairment_ef_sz),
im.flag == 0 ~ X2
)
)
# Fit baseline interaction model: arm * baseline
model_fit = stats::lm(
val ~ arm*baseline,
data = df,
)
summary(model_fit)
summary(multcomp::glht(model_fit, linfct = matrix(c(0,1,0,-1), nrow=1)))
```
|
Subgroup analysis: post hoc test interpretation
|
CC BY-SA 4.0
| null |
2023-04-17T17:02:33.657
|
2023-04-24T14:18:47.133
|
2023-04-24T14:18:47.133
|
85888
|
85888
|
[
"interaction",
"linear-model",
"post-hoc",
"clinical-trials"
] |
613239
|
2
| null |
612829
|
4
| null |
To me, this seems to come from an over-reliance on interpreting the output of regression models, in contrast to estimating and interpreting specific contrasts and relationships from a model. By this I mean that a researcher should specify the contrasts they want to make (which could be all pairwise comparisons between race categories) and then compute those from the model in a model-agnostic way. Identifying estimands (i.e., quantities to estimate) that are functions of the predicted values of the model means that the model parameters themselves do not need to be interpreted.
Some examples of model-agnostic estimands include the following:
- $E[Y|\text{race} = B, X = x]$ (the predicted value of the outcome for those with $\text{race} = B$ and covariates $X$ set to a specific profile $x$)
- $E[Y|\text{race} = B, X = x] - E[Y|\text{race} = W, X = x]$ (the contrast between the predicted value of the outcome for those with $\text{race} = B$ and $\text{race} = W$ with $X$ set to a specific profile $x$)
- $E\left[E[Y|\text{race} = B, X]\right]$ (the average predicted outcome for those with $\text{race} = B$)
- $E\left[E[Y|\text{race} = B, X]\right] - E\left[E[Y|\text{race} = W, X]\right]$ (the contrast in average predicted outcomes between those with $\text{race} = B$ and $\text{race} = W$)
None of these quantities reference the type of model being fit or the parameterization of the model. In that sense, they are model-agnostic. Depending on the parameterization of the model, they may be equal to familiar model outputs, as I demonstrate below.
We'll use the `lalonde` dataset from `MatchIt`, which contains observations from a study of the effect of a job training program (`treat`) on 1978 earnings (`re78`) and includes several covariates, including `race` (with three categories, `"black"`, `"hispan"`, and `"white"`). We'll consider the relationships between the racial categories and the outcome in the control group.
```
library("marginaleffects")
data("lalonde", package = "MatchIt")
lalonde_c <- subset(lalonde, treat == 0)
```
First, let's fit a model for the outcome given race and the covariates in the control group. We'll set the reference category of `race` to `"white"` to match your example.
```
lalonde_c <- transform(lalonde_c,
race = relevel(race, "white"))
fit <- lm(re78 ~ race + age + educ + married + I(re74/1000),
data = lalonde_c)
summary(fit)
#>
#> Call:
#> lm(formula = re78 ~ race + age + educ + married + I(re74/1000),
#> data = lalonde_c)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -15538 -4674 -1305 4277 18486
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 2795.48 1705.87 1.639 0.1020
#> raceblack -1054.18 832.24 -1.267 0.2060
#> racehispan 685.42 949.68 0.722 0.4709
#> age -35.51 33.59 -1.057 0.2911
#> educ 248.43 118.86 2.090 0.0372 *
#> married 353.07 744.98 0.474 0.6358
#> I(re74/1000) 458.51 55.38 8.279 1.65e-15 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 6524 on 422 degrees of freedom
#> Multiple R-squared: 0.2113, Adjusted R-squared: 0.2001
#> F-statistic: 18.84 on 6 and 422 DF, p-value: < 2.2e-16
```
We get coefficients comparing the expected values of the outcome between `black` and `white` and between `hispan` and `white`. One might ask whether the expected values of the outcome differ between `hispan` and `black`; the model doesn't answer this question. Failing to report this contrast is precisely the bias the reviewer is identifying. Instead, we can compute the expected value of the outcome for an average covariate profile (i.e., for units that are average on the other variables) and contrast them.
```
predictions(fit, newdata = datagrid(race = levels)) |>
summary(by = "race")
#>
#> race Estimate Std. Error z Pr(>|z|) 2.5 % 97.5 %
#> white 7043 400 17.62 <0.001 6260 7826
#> black 5989 718 8.34 <0.001 4582 7396
#> hispan 7728 848 9.12 <0.001 6067 9390
#>
#> Columns: race, estimate, std.error, statistic, p.value, conf.low, conf.high
comparisons(fit, variables = list(race = "pairwise"),
newdata = "mean")
#>
#> Term Contrast Estimate Std. Error z Pr(>|z|) 2.5 % 97.5 %
#> race black - white -1054 832 -1.267 0.205 -2685 577
#> race hispan - black 1740 1107 1.572 0.116 -430 3909
#> race hispan - white 685 950 0.722 0.470 -1176 2547
#>
#> Columns: rowid, term, contrast, estimate, std.error, statistic, p.value, conf.low, conf.high, predicted, predicted_hi, predicted_lo, re78, race, age, educ, married, re74
```
Note that no matter which contrast coding scheme you use or which category you choose to be the reference category, you will get exactly the same predicted values and contrasts between categories. For example, below we set the reference category to `"black"` and fit the same model and compute the same quantities.
```
fit <- lm(re78 ~ race + age + educ + married + I(re74/1000),
data = transform(lalonde_c, race = relevel(race, "black")))
predictions(fit, newdata = datagrid(race = levels)) |>
summary(by = "race")
#>
#> race Estimate Std. Error z Pr(>|z|) 2.5 % 97.5 %
#> black 5989 718 8.34 <0.001 4582 7396
#> white 7043 400 17.62 <0.001 6260 7826
#> hispan 7728 848 9.12 <0.001 6067 9390
#>
#> Columns: race, estimate, std.error, statistic, p.value, conf.low, conf.high
comparisons(fit, variables = list(race = "pairwise"),
newdata = "mean")
#>
#> Term Contrast Estimate Std. Error z Pr(>|z|) 2.5 % 97.5 %
#> race hispan - black 1740 1107 1.572 0.116 -430 3909
#> race hispan - white 685 950 0.722 0.470 -1176 2547
#> race white - black 1054 832 1.267 0.205 -577 2685
#>
#> Columns: rowid, term, contrast, estimate, std.error, statistic, p.value, conf.low, conf.high, predicted, predicted_hi, predicted_lo, re78, race, age, educ, married, re74
```
You might notice that the contrasts between `black` and `white` and between `hispan` and `white` estimated from `comparisons()` are the same as those computed from the model coefficients. So this method isn't necessarily doing anything new when the model is simple like this one. But another benefit of this approach is that you can compute quantities that have the same interpretations as the ones above but come from models that are completely uninterpretable. For example, let's say we fit the model below, which contains interactions and polynomial terms:
```
fit <- lm(re78 ~ race * (poly(age, 2) + educ * married + I(re74/1000)),
data = lalonde_c)
```
The coefficients are uninterpretable, so would you present them in a regression table? Would you give the reader a deluge of meaningless coefficient values with meaningless tests associated with them with the hopes they would find a way to interpret them correctly? I wouldn't. And yet this model is more likely to capture the true relationship between the predictors and the outcome because it is more flexible and makes fewer restrictions (i.e., it doesn't assume the relationships are purely linear and additive). Still, though, we can compute quantities with exactly the same interpretations as those above:
```
predictions(fit, newdata = datagrid(race = levels)) |>
summary(by = "race")
#>
#> race Estimate Std. Error z Pr(>|z|) 2.5 % 97.5 %
#> white 7176 665 10.79 <0.001 5872 8479
#> black 5313 1237 4.29 <0.001 2889 7738
#> hispan 5960 1261 4.73 <0.001 3489 8430
#>
#> Columns: race, estimate, std.error, statistic, p.value, conf.low, conf.high
comparisons(fit, variables = list(race = "pairwise"),
newdata = "mean")
#>
#> Term Contrast Estimate Std. Error z Pr(>|z|) 2.5 % 97.5 %
#> race black - white -1862 1404 -1.326 0.185 -4615 890
#> race hispan - black 647 1766 0.366 0.714 -2815 4108
#> race hispan - white -1216 1425 -0.853 0.394 -4009 1578
#>
#> Columns: rowid, term, contrast, estimate, std.error, statistic, p.value, conf.low, conf.high, predicted, predicted_hi, predicted_lo, re78, race, age, educ, married, re74
```
Note that this strategy also obviates the need to decide on centered vs. uncentered, standardized vs. unstandardized, raw polynomial vs. orthogonal polynomial, and any other regression adjustments that are made to enhance the interpretability of the model but don't change its fit. The model simply isn't meant to be interpreted; rather, it should be probed for interpretable quantities that could be computed no matter how the model is parameterized.
So, to summarize, you (and your colleague) should do the following:
- Fit a model that is likely to capture the true outcome-generating process, regardless of whether it is interpretable or not. In fact, it would be better to choose an uninterpretable model (e.g., one with many interactions and polynomials) to fit the data better.
- Do not report the model coefficients and tests, except possibly as a table deep in the appendix. Nothing in this table should be interpreted, but it might be useful for those seeking to replicate your results to see what values your estimated coefficients take. Note that it doesn't matter how nominal variables are coded; all codings yield the same model fit and predicted values. Fitting an uninterpretable model can help sell the decision not to rely on model coefficients.
- Report model-agnostic quantities that answer the specific substantive questions you want to answer, e.g., whether there are disparities between race groups. Because these quantities are model-agnostic, it doesn't matter which category is the baseline category in the model. You would report all comparisons that make sense to make, whether between the majority category and minority categories or between minority categories. Only if you selectively report comparisons will bias show through; the choice of how the model is parameterized reveals no bias because it has no effect on how the reported quantities are estimated.
This last point is the key point. The reviewer is commenting that how the model is parameterized reveals bias by the analyst when prioritizing a majority category as "baseline" or "normal". But this only occurs when the analyst only reports and interprets the comparisons that involve the majority category because those happen to correspond to coefficients in the model when parameterized in a specific way. Severing the relationship between the model parameters and the interpretation of the model results by choosing model-agnostic estimands eliminates this bias.
Using simple models because they are more interpretable is no excuse for bad statistics practices, so don't prioritize fitting interpretable models. Fit good models, and interpret quantities that are model-agnostic in a just way.
| null |
CC BY-SA 4.0
| null |
2023-04-17T17:06:57.873
|
2023-04-17T17:06:57.873
| null | null |
116195
| null |
613240
|
1
| null | null |
0
|
27
|
I have a dataset of images $X$ and numerical values $Y$, corresponding to them. Each numerical value comes with a reliability $r$ flag (either 0 or 1). The dataset is imbalanced: for each unreliable value there are over 30 reliable ones.
I want to train a model, that would be able to predict the values $y$ and return the reliability $r$ (or confidence) of the answer. Naturally, I want to make as many reliable predictions $y$ as possible.
However, since the dataset is so imbalanced, the model does not predict reliability $r$ of the answers $y$ well. I considered adding synthetic unreliable values to the dataset, but then I realized that this would lower the model's performance when it comes to predicting the values $y$. I also tried penalizing the model for predicting the answer as "reliable", but I do not think it affects the result performance much.
Is there a way to achieve it?
|
Learning classification on an imbalanced dataset alongside regression
|
CC BY-SA 4.0
| null |
2023-04-17T17:10:59.727
|
2023-04-18T12:35:48.770
|
2023-04-18T12:35:48.770
|
384379
|
384379
|
[
"regression",
"machine-learning",
"unbalanced-classes"
] |
613241
|
1
| null | null |
0
|
90
|
Can anyone explain in simple terms why the variance of a Bernoulli distribution is: Var[X] = p(1 – p)? More generally, why is the binomial distribution's variance the product of the number of trials, probability of success, and probability of failure, i.e. Var[X] = npq?
Thank you for explaining this in the most pragmatic way possible. It's very appreciated
|
Why is the variance of a Bernoulli distribution the product of its probability of 'success' and 'failure'?
|
CC BY-SA 4.0
| null |
2023-04-17T17:15:52.963
|
2023-04-19T15:31:36.920
| null | null |
302349
|
[
"variance",
"binomial-distribution",
"bernoulli-distribution",
"variability"
] |
613242
|
1
| null | null |
0
|
22
|
I hope this isn't too simple of a question, but I am having difficultly choosing which test to use in my analysis.
As for background, I am studying the migration patterns of different species of birds and relating that information to the average temperature in the area to see if the migration patterns have been affected by increasing temperatures.
My basis for mapping out the migration pattern is observed sightings of the bird. So, if there are more sightings in a certain area, more of the population is in that area.
Currently, my data is split into three data sets, each for a different time period (1955-1975, 1975-1995, and 1995-2015). Each data set includes all of the regions in which a particular species is found for each month of the year, the number of sightings in each region per month, and the average temperature in each region and month.
I have used t-tests just to confirm that the average temperature in each region and month is increasing as you progress through each twenty-year time period; however, I'm struggling to determine whether that would be an appropriate test to determine if the migration pattern is changing due to the increasing temperatures.
Any recommendations are greatly appreciated.
|
When to use what statistical test: 1 qualitative 2 quanitiative variables
|
CC BY-SA 4.0
| null |
2023-04-17T17:56:43.347
|
2023-04-17T17:56:43.347
| null | null |
385931
|
[
"hypothesis-testing",
"anova",
"t-test",
"data-mining"
] |
613243
|
2
| null |
613241
|
2
| null |
It is intuitive that the expectation of the variable is $p$, so that the deviations from $0$ and $1$ are linear functions of $p$. As the variance is the weighted sum of the squared deviations, its expression is at most a cubic polynomial in $p$.
On the other hand, when the probability is $0$, the distribution degenerates to a constant and the variance vanishes. Also, the expression must be symmetrical around $p=\frac12$. So you can expect the variance to be proportional to $pq=p(1-p)$.
Then again, by symmetry the polynomial cannot actually be cubic. Taking $p=\frac12$, the variance is computed to be $\frac1{2^2}$, so the proportionality constant is $1$. Finally,
$$\text{Var}[X]=pq$$
is the only possibility.
---
A Binomial variable is the sum of $n$ i.i.d. Bernoulli variables. As the variance is additive over independent variables, that of the Binomial is
$$\text{Var}[Y]=n\,\text{Var}[X]=npq.$$
| null |
CC BY-SA 4.0
| null |
2023-04-17T18:03:20.957
|
2023-04-19T15:31:36.920
|
2023-04-19T15:31:36.920
|
37306
|
37306
| null |
613244
|
1
| null | null |
1
|
15
|
I have measurements of resin production of pine, which are taken by tapping the tree, that is, making a physical wound and collecting the resin in a pot. When the pot is full we replace it with an empty one and weigh the produced resin. As the rate of resin production decreases with time after wounding, the period between measurements gets increasingly longer.
I would like to model resin production as a function of time from wounding (and treatments, which correspond to different tapping methods) in a generalized linear mixed models framework (ideally with lme4 in R), but I realize that what I really have are estimates of the cumulative distribution of resin production, not good estimates of daily resin production.
Is there a way to model that in a GLMM framework?
I've been looking at several distribution functions, but none of them seem to approach the general shape of a cumulative function. I would really need, for example, to fit the model to the CDF of a Gaussian or Gamma distribution, instead of fitting the model to the PDF of a Gaussian or Gamma distribution. Can that be accomplished in any way? Or am I wrong in trying to do that?
Thanks!!
Asier
|
Fit generalized linear mixed model (with lme4 or other) to cumulative data of a continuous variable
|
CC BY-SA 4.0
| null |
2023-04-17T18:26:24.120
|
2023-04-17T18:26:24.120
| null | null |
385936
|
[
"lme4-nlme",
"glmm",
"fitting",
"cumulative-distribution-function"
] |
613245
|
1
| null | null |
0
|
11
|
I'm training a text similarity model using a two-tower approach. The data I'm dealing with has a lot of unusual words that are important (names of people, places) that also don't appear in any pre-trained corpus. I'm finding that using a pre-trained word embedding such as GloVe is outperforming my own trained embedding layer (trained on about 500k short texts), but I would also like to incorporate information about those new words since they are important. What would be the best way to go about doing this? Gensim allows loading pre-trained models, but not updating their vocabulary.
|
Training a text model for similarity
|
CC BY-SA 4.0
| null |
2023-04-17T18:36:41.953
|
2023-04-17T18:36:41.953
| null | null |
322418
|
[
"machine-learning",
"neural-networks",
"natural-language",
"word2vec"
] |
613246
|
1
|
613319
| null |
2
|
54
|
What do the p-values beside the edf in a generalized additive model (GAM) mean?
Does that mean that the non-linearity is significant if p < 0.05, given that I have an edf bigger than 2? The package used is `mgcv`.
```
Family: Beta regression(1.35)
Link function: logit
Formula:
outcome ~ s(week, k = 4, fx = TRUE, by = food) + food
Parametric coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.72943 0.08153 8.947 <2e-16 ***
typeFruits -1.56222 0.10075 -15.507 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Approximate significance of smooth terms:
edf Ref.df Chi.sq p-value
s(week):typeFruits 3 3 6.504 0.089509 .
s(week):typeVege 3 3 16.802 0.000776 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-sq.(adj) = 0.233 Deviance explained = 38.2%
-REML = -312.88 Scale est. = 1 n = 612
```
|
What do the p-values beside edf in a Generalized Additive Model mean?
|
CC BY-SA 4.0
| null |
2023-04-17T18:38:16.740
|
2023-04-18T11:37:37.240
|
2023-04-17T22:04:26.387
|
362150
|
362150
|
[
"r",
"regression",
"generalized-additive-model",
"mgcv"
] |
613247
|
1
| null | null |
0
|
26
|
Have that
$$
\text{Corr}(X_t,X_{t+h}) = \frac{\text{Cov}(X_t,X_{t+h})}{\sqrt{\text{Var}(X_t)\text{Var}(X_{t+h})}}
$$
and
$$
\rho(h) = \frac{\gamma_X(h)}{\gamma_X(0)}.
$$
Those are both ways to express the ACF, so they're equivalent, right? And that would mean the denominators are the same for both, so
$$
\sqrt{\text{Var}(X_t)\text{Var}(X_{t+h})} = \gamma_X(0).
$$
Is this equality here true always, even if the process is not stationary? Also, I'm not completely understanding where this equality comes from.
|
Equivalence between two expressions for autocorrelation
|
CC BY-SA 4.0
| null |
2023-04-17T18:40:26.187
|
2023-04-17T18:53:39.737
|
2023-04-17T18:53:39.737
|
53690
|
365933
|
[
"time-series",
"correlation",
"variance",
"covariance",
"autocorrelation"
] |
613248
|
1
| null | null |
0
|
56
|
I wanted to implement linear regression with gradient descent from scratch and demonstrate how you can overfit when using too many polynomial features. Unfortunately my model does not really overfit the data. Can someone explain why?
```
import numpy as np
import matplotlib.pyplot as plt
# make example reproducible
np.random.seed(42)
# Create data set
n = 100  # sample size; n was not defined in the original snippet, so a value is assumed here
X = np.linspace(-5, 5, n).reshape(-1, 1)
y = 2 * X**3 + 5 * X**2 + 10 * X + np.random.randn(n,1) * 30
plt.scatter(X, y, s=15)
plt.xlabel('X')
plt.ylabel('y')
plt.title('Nonlinear Data')
plt.show()
# Define polynomial features function
def polynomial_features(X, degree):
"""Add polynomial features to X up to the specified degree."""
X_poly = np.ones((len(X), 1))
for d in range(1, degree+1):
X_poly = np.hstack((X_poly, X**d))
return X_poly
# Feature scaling
def feature_scaling(X):
"""Normalize input features by scaling."""
# mean = np.min(X, axis=0)
# std = np.std(X, axis=0)
min_ = np.min(X, axis=0)
max_ = np.max(X, axis=0)
range_ = min_ - max_
range_[range_ == 0] = 1e-8
# Replace zero standard deviation with a small constant value
# std[std == 0] = 1e-8
# X_scaled = (X - mean) / std
X_scaled = (X - min_) / range_
return X_scaled
def gradient_descent(X, y, theta, alpha, num_iterations):
"""Perform gradient descent to optimize theta parameters."""
m = len(y)
J_history = np.zeros((num_iterations, 1))
for i in range(num_iterations):
# Compute predicted values
y_hat = np.dot(X, theta)
# Compute derivative
derivative = np.dot(X.T, (y_hat - y)) / m
# Update theta using gradient descent
theta = theta - alpha * derivative
# Compute MSE (cost function)
J = (1 / (2 * m)) * np.sum(np.square(y_hat - y))
J_history[i] = J
return theta, J_history
degree = 50
X_poly_train = polynomial_features(X, degree)
X_poly_train_scaled = feature_scaling(X_poly_train)
alpha = 0.1 # The learning Rate
num_iterations = 5000 # The number of iterations to perform gradient descent
theta = np.zeros((degree + 1, 1))
theta_fitted, J_history = gradient_descent(X_poly_train_scaled, y, theta, alpha, num_iterations)
plt.plot(np.arange(num_iterations), J_history)
plt.xlabel('Iteration')
plt.ylabel('Cost (MSE)')
plt.title('Gradient Descent: Cost (MSE) vs. Iteration')
plt.show()
y_train_pred = np.dot(X_poly_train_scaled, theta_fitted)
m = len(y)
in_smpl_fit = np.round((1/m) * np.sum((y - y_train_pred)**2), 0)
plt.figure(figsize=(10,5))
plt.scatter(X, y_train_pred, color='red', label='Fitted Values', s=15) # Specify label for legend
plt.scatter(X, y,s=15)
plt.title(f'in_smpl_fit = {in_smpl_fit}',fontsize=16)
plt.xlabel('Predictor',fontsize=16)
plt.ylabel('Target',fontsize=16)
# Add legend
plt.legend()
plt.show()
```
The result looks like this:
[](https://i.stack.imgur.com/AItga.png)
Whereas I would expect something like this (this example is taken from a book):
[](https://i.stack.imgur.com/1jSgD.png)
Can someone please enlighten me?
|
Why is my polynomial regression with gradient descent not overfitting?
|
CC BY-SA 4.0
| null |
2023-04-17T18:43:30.227
|
2023-04-17T21:23:56.520
|
2023-04-17T19:57:52.643
|
22311
|
40101
|
[
"linear-model",
"gradient-descent",
"overfitting",
"polynomial"
] |
613249
|
1
| null | null |
5
|
172
|
## Background
Under certain assumptions we know that being given the posterior mean and a family of conditional distributions, we can uniquely determine the joint distribution. I quote one of the theorems specifying appropriate conditions below. The theorem comes from [this book](https://link.springer.com/book/10.1007/b97592) and provides the conditions for power series distributions $X|Y \sim PSD(Y)$.
>
Theorem 7.2
Let $(X,Y)$ be a random vector such that either:
(a) $S(X) = \{0,1,2,...,n\}$ for some integer $n$ and the cardinality of $S(Y)$ is $\leq n+2$; or
(b) $S(X)= \{0,1,2,...\}$ and $S(Y) \subseteq \{0,1,2,...\}$.
Assume that the supports $S(X)$ and $S(Y)$ are known and that for any $x \in S(X), y\in S(Y)$ we have
$$P(X=x|Y=y) = c(x)y^x/c^*(y),$$
where $c$ and $c^*$ are known. In addition, if $S(Y)$ is not bounded assume that
$$\sum_{x \in S(X)} \sqrt[2x]{c(x)} = \infty.$$
Then the distribution of $(X,Y)$ is uniquely determined by $\mathbb{E}(Y|X=x) = \psi(x), x\in S(x)$.
## Example
Let $X|Y=y \sim Pois(\lambda y)$. Then $X|Y \sim PSD(Y)$ with $c(x) = \lambda^x/x!$ and $c^*(y) = \exp(\lambda y)$. Since $\sum_{x=1}^{\infty} \sqrt[2x]{c(x)} \geq \sqrt{\lambda} \sum_{x=1}^{\infty} \frac{1}{x} = \infty$, the conditional expectation of $Y$ given $X$ uniquely determines the joint distribution.
We don't know anything about the joint distribution other than it exists and is unique. The proof is not constructive. I would like to find $Y|X$ by simulation methods.
In general let's assume we have the following setting - tractable likelihood $X|\theta$ and the functional form of the posterior mean $\mathbb{E}(\theta|X) = \phi(X)$.
## Question
Is it possible to simulate observations from the posterior and utilize the information about the posterior mean in an MCMC setting?
|
Sampling from the posterior with a constraint on the posterior mean
|
CC BY-SA 4.0
| null |
2023-04-17T18:53:17.770
|
2023-05-10T21:36:55.370
|
2023-04-21T07:37:20.257
|
255290
|
255290
|
[
"bayesian",
"simulation",
"markov-chain-montecarlo",
"monte-carlo",
"metropolis-hastings"
] |
613250
|
1
| null | null |
1
|
38
|
I have an issue with a regression problem. Indeed, I need to fit a linear regression to this data. The problem is that more than 50% of the data points are located at the origin (0,0) of the graph (because the experiment produces a lot of points at this specific location). I have shown the histogram of X.
It is for a medical publication. In our field, everyone uses the Passing-Bablok regression, but it fails to be computed in this case (indeed the slope is equal to 0).
[](https://i.stack.imgur.com/wxC1z.png)
I can't use the Deming regression because my data is unfortunately not normal.
The classical linear regression is not accepted either because it is not a "robust regression".
Thank you very much
Have a nice day
|
Robust regression (Passing-Bablok) with more than 50% of the points at the coordinate (0,0)
|
CC BY-SA 4.0
| null |
2023-04-17T19:03:09.380
|
2023-04-19T14:48:35.533
| null | null |
311434
|
[
"regression",
"robust"
] |
613251
|
1
| null | null |
2
|
55
|
I have a bunch of samples that, empirically speaking, appear to be drawn from an unknown log-normal distribution, with an unknown constant offset applied to the data points. That is, $\ln(X_{i} + C) \sim \mathcal{N}(\mu, \sigma^2)$, assuming I remember my terminology.
(Actually, I have a bunch of bunches of samples.)
Given the method this data was collected with, this model appears plausible. And a naive empirical fit of said model appears to fit the data well enough.
The above being said, I am not a statistician.
Is this a sane model, generally speaking? (I am aware that a specific answer can't be had without details of the data & methods used.) If it is, is there an MLE for this? If there isn't, is there a more appropriate model to use (if so, which?)
Ditto, is there a name for this sort of model?
An example sample (inline, because I don't like link rot):
```
-8.21 -7.87 -0.56 -1.73 -9.36 -4.99 -6.97 -7.37 -8.79 -9.12 -7.01 -4.82 -9.63 -8.22 -3.75 -6.49 -7.85 -5.38 -5.79 -2.01 -7.41 -1.29 -6.44 -3.34 -6.11 -9.10 -6.54 -8.09 -2.41 -4.39 -4.99 -7.07 -5.34 -4.13 -4.86 2.07 -5.07 -6.90 -6.53 -6.46 -2.16 -5.85 -6.02 -0.96 -3.75 -7.58 -8.93 -8.53 -5.95 -4.16 -5.95 -9.24 -6.59 -6.04 -7.47 -6.73 -7.17 -8.76 -6.57 -8.42 -8.17 -6.01 2.88 0.88 -5.70 -5.66 -5.46 -7.26 -0.98 -6.15 -8.73 -4.11 -0.91 -9.54 -3.64 -8.67 -6.19 -7.25 -6.28 -6.74 -8.99 -5.29 -7.55 -6.85 -7.39 -8.65 -6.33 -8.77 -7.19 -4.37 -8.14 -6.47 -7.57 -7.51 -6.39 -9.55 -6.90 -4.00 -5.36 -8.94 -6.88 -6.74 -5.80 -5.19 -5.94 -7.82 -6.36 -1.77 -1.52 -6.29 -7.27 -9.28 -7.65 -7.64 -4.81 -1.44 -4.63 -4.84 -8.19 -7.65 -6.57 -4.07 -8.40 -8.06 -7.25 -3.48 -9.15 -7.43 -6.92 -6.81 -6.94 -6.11 -4.88 -6.81 -4.14 -8.63 -1.97 -7.40 -5.80 -8.37 -2.96 0.57 -3.54 -7.81 -5.54 0.06 -6.33 -8.39 -6.94 -7.16 -0.83 -5.77 -5.40 -6.82 -4.03 -8.25 -9.62 -7.33 -8.54 -5.07 -8.46 -5.21 -7.21 -7.52 -2.47 -8.46 -3.51 -5.95 -3.08 -8.48 -7.86 -3.15 -3.41 -7.08 -6.95 -0.23 -6.91 -7.82 -8.73 -1.39 -8.51 -6.21 -4.23 2.42 -5.79 -6.46 -6.88 -6.65 -9.76 -7.20 -3.33 -7.57 -7.67 -4.74 -4.85 -7.16 -4.80 -9.08 -8.41 -5.71 -5.89 -7.81 -7.97 -9.14 -9.71 -7.37 -6.95 -5.27 -2.37 -3.78 -6.68 -4.44 -7.67 -6.96 -8.46 -7.21 -2.81 -2.23 -5.23 -6.39 -5.42 -7.67 -5.71 -8.08 -3.54 -9.46 -8.82 -9.79 -6.84 -5.89 -8.65 -5.76 -3.44 -9.95 1.25 -8.33 -4.32 4.66 -5.86 -6.66 -8.02 -8.12 -7.89 -6.64 -7.25 -7.88 -6.31 -7.53 -7.56 -7.02 -7.80 -7.44 -5.48 -5.79 -6.82 -1.94 -5.11 -8.51 -4.49 -8.91 -7.31 -5.82 -6.62 -8.43 -3.52 -6.19 -2.72 -5.88 -7.27 -8.41 -7.48 -8.49 -8.44 -8.33 -4.24 -6.77 -7.82 -5.55 -9.73 -4.70 -7.22 -2.33 -7.13 -3.83 -4.32 -6.13 -5.63 -6.91 -7.49 -7.58 -7.02 -4.74 -7.75 -9.31 -5.07 -7.80 -9.56 -0.48 1.02 -5.54 -2.71 -6.07 -2.01 -7.23 -5.10 -9.09 -8.18 -7.89 -9.19 -3.04 -8.16 -2.04 -8.51 -7.25 3.20 -3.20 -6.45 -8.70 -9.08 -5.90 -5.43 -7.17 -7.31 -3.88 -8.46 -5.64 -1.60 -6.01 -6.81 -6.25 -4.81 -8.34 -7.32 -9.03 -5.94 -5.78 -8.74 -6.12 -7.51 -3.92 -8.89 -8.78 -2.85 -6.76 -4.94 -4.33 -5.48 -4.86 -5.62 -5.62 -4.93 -9.58 -8.75 -6.95 -2.97 -4.76 -8.04 -7.82 -3.07 -6.52 -8.31 -8.93 -7.03 -9.06 -8.87 -8.83 -6.57 0.13 -8.42 -5.90 -7.15 -6.14 -8.39 -3.66 -6.50 -8.81 -6.74 -4.71 -7.31 -9.42 -8.53 -5.17 -4.68 -9.29 -7.29 -9.27 -4.87 -2.81 -7.25 -2.31 -7.29 -6.43 -7.34 2.41 -8.26 -3.70 -7.01 -8.57 -4.25 -6.03 -7.87 -7.80 -2.97 -5.61 -6.94 -7.11 -8.92 -6.79 -0.67 -7.78 -4.02 -7.81 -6.16 -7.40 -9.46 -8.62 -7.14 -4.12 -8.40 -2.20 -8.58 -3.16 -8.44 -8.98 -9.16 -6.11 -7.83 -4.60 -7.04 -6.52 -7.38 -6.03 -6.18 -5.90 -4.99 -9.09 1.53 -0.42 -7.54 -8.41 -3.17 -4.66 -4.00 -8.88 -10.21 -8.54 -8.11 1.33 -6.36 -5.63 -6.51 -2.17 -8.79 -7.75 -5.83 -1.42 -4.55 -8.11 -6.70 -8.37 -8.44 -5.79 -6.46 -6.19 -7.57 -8.09 -7.44 -7.43 -7.77 -6.19 -8.19 -9.84 -3.91 -5.36 -8.35 -6.74 -5.33 -7.97 -2.41 -8.15 -7.61 -7.09 -7.88 -5.54 -3.84 -5.81 -10.10 -7.84 -5.44 -2.94 -5.44 -8.82 -5.61 -7.79 0.12 -6.90 -0.89 -8.12 -8.07 -2.20 0.28 -6.24 -6.19 -4.92 -7.77 -8.82 -6.79 -5.12 -3.95 -6.43 -7.89 -6.41 -7.04 -8.68 -5.18 -1.56 -7.29 -8.59 -8.01 -9.01 -9.12 -0.86 -4.11 -7.18 -6.91 -4.85 -8.30 -6.79 -6.83 -8.09 -7.58 -5.93 -7.54 -5.60 -7.58 -7.49 -3.96 7.04 -6.38 -7.44 -7.18 -6.05 -7.15 -6.03 -5.45 -6.14 -7.75 -6.99 -4.98 -5.96 -9.38 -8.28 -5.68 -7.48 -8.41 -7.79 -2.81 -6.65 -4.30 -5.57 -0.90 -6.06 -6.88 -7.34 -9.71 -5.64 -5.18 -7.05 -7.96 -6.47 -3.15 -3.53 -2.52 -4.59 -2.24 0.70 -6.99 -1.38 -6.39 -3.80 -8.43 -5.27 -8.30 -3.20 -3.99 -4.05 -5.76 -4.03 -8.15 -3.09 -6.98 -9.70 -4.14 -8.54 -5.74 
0.80 -3.33 -3.25 -6.13 -6.68 -1.82 -1.42 -8.15 -6.07 -7.89 -7.41 -9.22 -8.01 -4.02 -6.13 1.24 -5.72 -6.53 -5.00 -4.51 -6.36 -7.19 -8.03 0.75 -6.03 -6.65 -5.62 -4.82 -1.91 -8.05 -8.45 -2.02 -5.90 -7.33 -7.89 -7.95 -4.97 -3.30 -8.62 -7.40 -7.29 3.15 -2.97 -5.95 -3.61 -4.66 -4.52 -7.18 -8.67 -7.28 -3.95 -5.24 -8.34 -9.45 -5.46 -8.92 -3.54 -5.30 -7.85 -9.30 -6.69 -5.54 -7.46 -7.15 -6.34 -8.73 -5.20 -8.84 -9.63 0.44 -9.00 -5.02 -6.04 -4.15 -7.90 -7.24 -4.32 -2.50 -4.70 -6.87 -6.54 2.20 -8.62 -5.95 -3.48 -1.95 -7.18 -7.71 -6.25 -8.56 -3.42 -4.95 -7.14 -8.42 -7.53 -2.72 -7.10 -6.68 -8.17 -2.78 -6.92 -7.63 -7.58 0.49 -6.64 -4.53 -4.79 -7.40 -8.21 -6.98 -7.38 -8.08 -6.26 -6.93 -6.91 -8.55 -7.14 -9.87 -8.45 -4.15 -7.87 -6.14 -6.69 -5.73 0.59 -5.08 -8.38 -7.24 -1.38 -8.39 -5.40 -6.52 -3.56 -4.52 -6.08 -7.65 -7.34 -3.02 -6.62 -2.22 -0.01 -6.13 -5.93 -8.09 -9.20 -4.36 -7.58 -5.88 -2.92 -7.39 -3.54 -7.98 -7.88 -0.42 2.88 -5.54 -8.06 -5.76 -8.76 -5.93 -8.29 -7.52 -8.24 -7.96 -8.99 -8.82 -5.07 -6.45 0.95 -0.38 -4.01 -7.63 -9.12 -8.05 -4.00 -6.92 -8.37 -5.98 -6.41 -7.32 -7.63 -7.33 -5.97 -7.70 -4.09 -3.25 -0.68 -0.91 -7.69 -4.52 -8.27 -6.96 -8.32 -2.93 -3.52 -5.71 -9.13 -6.99 1.62 -5.20 -8.83 -5.03 -7.14 -8.67 -4.48 -6.94 -1.49 -4.99 -8.00 -7.12 -8.71 -8.11 -2.68 -9.02 -7.46 -0.22 -7.82 -6.28 -10.01 -4.82 -5.25 -4.52 -8.61 -8.30 -5.92 -4.22 -8.99 -6.96 -3.36 -9.76 -8.52 -7.33 -8.25 -0.32 -8.36 -5.59 -5.61 -7.67 -8.42 -6.64 -8.86 -8.55 -7.55 -4.13 -8.14 -7.68 -9.57 -6.68 -0.05 -5.59 -4.42 -1.79 -4.50 -7.68 -7.16 -5.39 -4.57 -5.32 -8.37 -6.55 -6.52 -8.41 -6.75 -3.39 -5.58 -7.97 -8.42 -9.11 -5.90 -8.43 -6.74 -6.29 -1.15 -8.35 -7.66 -7.87 -8.01 -6.61 -3.04 -9.08 -8.72 -2.59 -4.60 -3.37 -8.20 -0.93 -7.53 -3.85 -5.35 -7.44 -4.61 -7.25 -8.16 -8.44 -6.91 -8.39 0.03 -7.54 -6.05 -5.04 -7.85 -7.37 -8.53 -7.65 -9.35 -7.99 -8.27 -7.03 3.24 -4.82 -7.87 -7.18 -5.19 -8.71 -8.46 0.39 -5.95 -4.92 -7.38 -9.33 -4.05 -4.58 -4.74 -7.11 -6.39 -8.06 -7.61 -4.06 -4.70 -4.04 -7.28 -3.62 -6.89 -8.28 -2.61 -3.13 -8.76 -6.88 -6.62 -3.92 -8.24 -3.82 -0.91 -7.27 -7.82 -7.82 -7.14 -4.34 -7.99 -4.38 -5.67 -4.93 -6.21 -3.30 -4.22 -9.50 -6.71 -5.75 -6.76 -7.44 4.76 -7.00 -5.50 -7.48 -2.91 -8.52 -4.86 -8.09 -4.40 -7.82 -4.15 -7.03 -4.97 -4.71 -5.82 -1.23 -6.44 -3.95 -6.67 -2.81 -7.04 -2.39 -6.96 -6.24 -5.93 -3.56 -7.14 -7.97 -6.84 -6.34 0.24 -4.16 -8.59 -4.72 -8.66 -7.31 -6.74 -4.97 -6.07 -4.78 -6.31 -5.47 -3.42 -5.28 0.52
```
|
Log-normal model of data with unknown offset
|
CC BY-SA 4.0
| null |
2023-04-17T19:13:44.187
|
2023-05-02T20:23:23.783
|
2023-04-22T23:05:23.613
|
100205
|
100205
|
[
"maximum-likelihood",
"fitting",
"lognormal-distribution"
] |
613253
|
2
| null |
613248
|
1
| null |
To eliminate any potential coding bugs, I've rewritten the script to use standard python libraries for machine learning.
I also changed the data generation process to have a gap in the middle, similar to the diagram in OP's question.
The data in OP's "expected" diagram has a big gap in the middle, but OP's code uses `linspace`, so there are no big gaps in OP's data.
Also, the gaps are the places where the polynomial interpolation swings wildly, but OP has systematically excluded some of the swings by only plotting the function at the training points. So I generate random data and also plot the true function at regular intervals.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression
np.random.seed(42)
n = 500
# Create data set
# X = np.linspace(-5, 5, n).reshape(-1, 1)
X_small = np.random.randn(5) + 4
X_large = np.random.randn(n - X_small.size) - 3
X = np.hstack((X_small, X_large))
def true_fn(x):
return 2.0 * x**3.0 + 5.0 * x**2.0 + 10.0 * x
def noisy_fn(x):
return true_fn(x) + 30.0 * np.random.randn(x.size)
y = noisy_fn(X).reshape((-1, 1))
X = X.reshape((-1, 1))
print(X.shape)
print(y.shape)
plt.scatter(X, y, s=15, alpha=0.8)
plt.xlabel("X")
plt.ylabel("y")
plt.title("Nonlinear Data")
plt.show()
degree = 13
X_poly = PolynomialFeatures(degree=degree, include_bias=False)
X_scaler = StandardScaler()
X_poly_train = X_poly.fit_transform(X)
X_poly_train = X_scaler.fit_transform(X_poly_train)
lingreg = LinearRegression()
lingreg.fit(X=X_poly_train, y=y)
theta_fitted = lingreg.coef_
print(f"rank: {lingreg.rank_}")
print(f"singular: {lingreg.singular_}")
print(theta_fitted)
x_test = np.linspace(-5, 5, 1000).reshape((-1,1))
x_test_poly =X_poly.transform(x_test)
x_test_poly = X_scaler.transform(x_test_poly)
y_test_pred = lingreg.predict(X=x_test_poly)
plt.figure(figsize=(10, 5))
plt.plot(
x_test, y_test_pred, color="red", label="Fitted Values"
) # Specify label for legend
plt.scatter(X, y, s=15)
plt.xlabel("Predictor", fontsize=16)
plt.ylabel("Target", fontsize=16)
# Add legend
plt.legend()
plt.show()
```
The data looks like this.
[](https://i.stack.imgur.com/sjTX4.png)
We can get the desired result with a degree 13 polynomial.
[](https://i.stack.imgur.com/YhzW1.png)
The nice thing about this plot is that it also shows that the polynomial is fine in the places where the data are most dense. But the interpolation behavior in sparse regions oscillates.
I was never able to get the GD to work -- it always diverged. However, I suspect that it does not produce an optimal solution.
You can take a moment to compare the coefficients that you get from gradient descent to a reference implementation, such as `sklearn`'s `LinearRegression`. If your GD results have a larger RSS than the reference implementation, then you know you haven't found the optimal fit to the training data!
Relatedly, stopping optimization before the optimizer has found the exact minimum is called [early-stopping](/questions/tagged/early-stopping), and can have a regularizing effect similar to penalizing the $L^2$ norm of the coefficient vector.
| null |
CC BY-SA 4.0
| null |
2023-04-17T19:40:40.143
|
2023-04-17T21:23:56.520
|
2023-04-17T21:23:56.520
|
22311
|
22311
| null |
613254
|
1
| null | null |
5
|
58
|
I'm building an app to help players improve the accuracy of their shots in pool (pocket billiards). Someone might miss shots to either side of the centerline (overcut or undercut), and that's shown on a scatterplot as positive or negative degrees. I want to overlay the scatterplot with a line of moving average error to show progression.
[](https://i.stack.imgur.com/iTRIo.png)
Naively plotting a moving average won't produce good results because a player who misses wildly on either side will have an average close to zero even though they are very inaccurate.
Ideally, I'd want the visualization to show some sort of moving average of the magnitude of error as well as the side the player tends to miss on.
|
How should I visualize errors that can be on either side of the mean
|
CC BY-SA 4.0
| null |
2023-04-17T19:41:09.620
|
2023-04-18T18:54:40.153
| null | null |
206527
|
[
"data-visualization",
"error"
] |
613255
|
1
| null | null |
0
|
27
|
I am currently working on a statistical analysis project where I implemented a linear mixed model with only a random intercept and several fixed effects. I am considering visualizing the results by plotting both predicted values and estimated marginal means (along with their confidence intervals ) on the same graph. However, I am not sure if this is a valid and acceptable approach.
Therefore, I wanted to ask for some insights and opinions on this matter. Would it make sense to display both predicted values and estimated marginal means on the same graph? Are there any issues or limitations that I should be aware of?
Any advice or suggestions would be greatly appreciated.
|
Predicted values and estimated marginal means
|
CC BY-SA 4.0
| null |
2023-04-17T19:44:28.967
|
2023-04-17T19:44:28.967
| null | null |
375245
|
[
"mixed-model",
"data-visualization",
"lsmeans"
] |
613256
|
1
| null | null |
2
|
55
|
The survival function for the Weibull distribution is:
$S(t) = \exp(-\lambda t^\alpha)$, $\lambda$ = scale parameter, $\alpha$ = shape parameter
If I wanted to calculate the median survival time, I would set $S(t) = 0.5$. Rearranging terms, this means the median survival time would be calculated as follows:
$t_{median} = \left(\frac{-\log(0.5)}{\lambda}\right)^{1/\alpha}$
Let's say I set $\lambda = 3$ and $\alpha = 2$. Then the median survival time is clearly:
$t_{median} = \left(\frac{-\log(0.5)}{3}\right)^{1/2} = 0.481$
However, when I do a sanity check on this and run this in R, the median survival for a Weibull distribution with scale = 3 and shape = 2 is clearly not 0.481 and is more like 2.5:
```
Time <- sort(rweibull(1000, shape = 2, scale = 3))
Time[500]
```
Any time I run this code, I am getting somewhere right around 2.5, +/- 0.1 or so. But definitely nowhere near 0.481.
So what do I have wrong here? Why is the actual median time for simulated data nowhere near the theoretical median time?
|
Why am I not able to correctly calculate the median survival time for the Weibull distribution?
|
CC BY-SA 4.0
| null |
2023-04-17T19:59:40.450
|
2023-04-21T19:49:41.230
| null | null |
347818
|
[
"survival",
"median",
"weibull-distribution"
] |
613258
|
2
| null |
613234
|
0
| null |
For a categorical predictor, the summary of a regression model like this will report a number of lowest-level coefficients one less than its number of levels, and also for higher-level interactions of that predictor with other predictors.
It's risky to evaluate any single coefficient associated with that predictor (whether at the individual level or in an interaction), as the difference of its value from 0 (and thus its apparent "significance") depends on how the predictor was coded: sum-to-zero versus treatment/dummy coding, or the choice of reference level in treatment coding. Trying to interpret individual coefficients tends to lead to extensive confusion, often represented in questions on this site.
To compare against the ANOVA output you need a joint test on all the levels of the 3-level categorical predictor. The `Anova()` function (note the capital "A") in the R [car package](https://cran.r-project.org/package=car) is a common choice for that.
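A minimal sketch of what that could look like (the model formula and variable names are placeholders standing in for your own):
```
library(lmerTest)
library(car)

## sum-to-zero coding, as in your setup, so that type-III tests are sensible
fit <- lmer(rt ~ predictor1 * predictor2 * predictor3 + (1 | subject),
            data = dat,
            contrasts = list(predictor1 = contr.sum,
                             predictor2 = contr.sum,
                             predictor3 = contr.sum))

## joint Wald chi-square tests: one row per term, pooling the two
## coefficients belonging to the 3-level predictor (and its interactions)
Anova(fit, type = 3)
```
Each row of that table tests all coefficients for a term jointly, which is the analogue of the single line per effect you are used to from the ANOVA output.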
| null |
CC BY-SA 4.0
| null |
2023-04-17T20:39:00.450
|
2023-04-17T20:39:00.450
| null | null |
28500
| null |
613260
|
1
| null | null |
2
|
35
|
As titled, I have some confusion related to a statistical power calculation.
The calculation is done using
```
from statsmodels.stats.power import tt_ind_solve_power as SolPower
np.round((SolPower(effect_size=4000/30000, # normalized difference between Group 1 & 2: min effect size / standard deviation
nobs1=30, # size of Group 1 (i.e. Treatment)
alpha=0.05, # significance: prob. of Type I error
power=None, # left as None so the function solves for power
ratio=1, # ratio: Group 2 size/Group 1 size
alternative='two-sided'))*100
,2)
```
this function in python.
The question given to us is as follows:
There are 60 stores in an area and divided into treatment & control group equivalently.
The Average Treatment Effect is \$4000. Each store has the same standard deviation of \$30000 and mean of \$200000.
The first part asks us to calculate statistical power. I plugged in effect_size = 4000/30000 and nobs1 = 30 (as shown above) and got 14.39% as a result (which is the same answer the instructor got).
The next question assumes the previous experiment was done in one week. Now, if we want statistical power to reach 50%, how many weeks do we need to run the experiment?
The answer given by the instructor simply assumes the number of observations = 30*n, where n = number of weeks, and that the effect size doesn't change. I don't think this is rigorous, as that would assume the observations are i.i.d., but apparently they are not (because they are repeated observations over time from the same stores). I think this question should be done by calculating a new effect size, holding the number of observations fixed.
Or maybe this question doesn't make sense in a statistical way. If anyone could help me verify how I should solve this problem, I would really appreciate it!
|
Statistical power of an experiment calculation
|
CC BY-SA 4.0
| null |
2023-04-17T20:49:42.937
|
2023-04-17T21:16:32.010
| null | null |
277148
|
[
"experiment-design",
"statistical-power",
"effect-size",
"iid"
] |
613261
|
2
| null |
613132
|
0
| null |
A survival model is important when there is censoring of times to events. You don't have that here.
What you have is a count outcome for each observation day, and associated predictor variables. A count-based model such as a Poisson or negative-binomial generalized linear model would be a useful choice.
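A minimal sketch in R (the outcome and predictor names are placeholders, since I don't know your exact variables):
```
## Poisson GLM for a daily count outcome
fit_pois <- glm(count ~ temperature + rainfall + weekday,
                family = poisson(link = "log"), data = daily_data)

## negative binomial version, useful if the counts are overdispersed
library(MASS)
fit_nb <- glm.nb(count ~ temperature + rainfall + weekday, data = daily_data)

summary(fit_pois)
summary(fit_nb)
```
Comparing the two (e.g., by AIC or by checking the dispersion) indicates whether the extra negative-binomial parameter is needed.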
| null |
CC BY-SA 4.0
| null |
2023-04-17T21:02:07.707
|
2023-04-17T21:02:07.707
| null | null |
28500
| null |
613262
|
2
| null |
613260
|
1
| null |
On one hand your objection makes sense to me. On the other hand, I don't see how dependence can be credibly modelled based on the given information so that the model could be used to estimate power, so the idea may just be to assume independence for the sake of simplicity. Note that models are always idealisations and often simplify situations. Sometimes this is troublesome and something should be done about it, sometimes not so much, or it isn't at all clear what to do about it (as here, based on very limited information).
In fact, when adding further weeks of observations from the same stores, even if there is dependence, the effective sample size will grow rather than stay fixed, and by how much depends on the strength of the dependence (of which we have no idea). Note also that in reality there may even be dependence between stores, but as said before, we need to ignore some issues (particularly if they can't be identified from existing information) to be able to use a model that is simple enough to arrive at something.
| null |
CC BY-SA 4.0
| null |
2023-04-17T21:16:32.010
|
2023-04-17T21:16:32.010
| null | null |
247165
| null |
613263
|
1
| null | null |
0
|
14
|
Say I want to estimate $$I=\int f\:{\rm d}\lambda+\int g\:{\rm d}\mu.$$ Now, I'm using Metropolis-Hastings to sample from $\lambda$ and from $\mu$, separately. Assume $\mu$ is defined on a product space and whenever $(X,Y)\sim\mu$, then $X\sim\lambda$.
My question is: Instead of constructing Markov chains $(X_n)_{n\in\mathbb N_0}$ and $(Z_n)_{n\in\mathbb N_0}$ using the Metropolis-Hastings algorithm with target distribution $\lambda$ and $\mu$, respectively, and use these chains to build independent estimators for both integrals on the right-hand side of the definition of $I$, would there be any drawback (for example, higher variance) if I only compute $(Z_n)_{n\in\mathbb N_0}$, which is of the form $((X_n,Y_n))_{n\in\mathbb N_0}$, and use $$\frac1n\sum_{i=0}^{n-1}(f(X_i)+g(X_i,Y_i))$$ as an estimator for $I$?
|
Estimate $\int f\:{\rm d}\lambda$ and $\int g\:{\rm d}\mu$, where whenever $(X,Y)\sim\mu$, then $X\sim\lambda$, by a single Markov chain
|
CC BY-SA 4.0
| null |
2023-04-17T21:23:53.277
|
2023-04-17T21:23:53.277
| null | null |
222528
|
[
"markov-chain-montecarlo",
"unbiased-estimator",
"metropolis-hastings"
] |
613264
|
2
| null |
613198
|
1
| null |
Your `simulated_data` are just 20 draws from a single fixed Weibull distribution having the specified `shape` and `scale` values. They are just samples of individual survival times drawn from that single distribution. This shows the associated survival function:
```
plot(1:70,1-pweibull(1:70,shape=1.5, scale=1/0.03),type="l",bty="n",xlab="Time",ylab="Survival probability")
```
If the Weibull distribution is fixed, then the survival function is also fixed and there would be no variability in survival curves.
If you want to evaluate uncertainty in survival-curve estimates based on modeled data, then you want to draw from distributions of the `shape` and `scale` values consistent with the variance-covariance matrix of their modeled estimates, for example with the `mvrnorm()` function of the R [MASS package](https://cran.r-project.org/package=MASS), and display the set of survival curves in the way indicated above.
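A rough sketch of that idea (the point estimates below match the `shape` and `scale` used above, but the covariance matrix is made up for illustration; in practice both would come from the fitted model and its `vcov()`):
```
library(MASS)

## illustrative estimates on the log scale (log-shape, log-scale) and an assumed covariance
est <- c(log(1.5), log(1/0.03))
vcv <- matrix(c(0.010, 0.002,
                0.002, 0.020), 2, 2)

draws <- mvrnorm(200, mu = est, Sigma = vcv)

t <- 1:70
plot(t, 1 - pweibull(t, shape = 1.5, scale = 1/0.03), type = "n",
     xlab = "Time", ylab = "Survival probability")
for (i in seq_len(nrow(draws)))
  lines(t, 1 - pweibull(t, shape = exp(draws[i, 1]), scale = exp(draws[i, 2])),
        col = grey(0.85))
lines(t, 1 - pweibull(t, shape = 1.5, scale = 1/0.03), lwd = 2)
```
The grey band of curves then shows the uncertainty in the survival function implied by the parameter estimates, rather than the sampling variability of individual survival times.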
| null |
CC BY-SA 4.0
| null |
2023-04-17T21:33:55.753
|
2023-04-17T21:33:55.753
| null | null |
28500
| null |
613265
|
1
| null | null |
1
|
44
|
I made a Monte Carlo based hypothesis test, which starts comparing N values in M simulations to a confidence interval, giving me a binary N*M matrix. Then I calculate the percent of values equal to 1 for every simulation.
So the final M values I'm going to use for the hypothesis test are between 0 and 1.
However, I can't fit the values to any distributions I know using Kolmogorov-Smirnov, even Box-Cox transformation (car package in R) fails to give me a normal distribution. Below are two pictures of the non-transformed distributions to help visualize.
[](https://i.stack.imgur.com/4gyFa.png)
[](https://i.stack.imgur.com/jaXq2.png)
What are the suggested distributions for my case study? Preferably those I can fit to any single mode or bi-modal data from my procedure and not only the example distribution.
|
Need help finding a corresponding distribution
|
CC BY-SA 4.0
| null |
2023-04-17T21:50:50.147
|
2023-04-17T21:52:50.240
|
2023-04-17T21:52:50.240
|
260956
|
260956
|
[
"distributions",
"data-transformation",
"monte-carlo",
"boxcox-transformation"
] |
613266
|
2
| null |
613241
|
2
| null |
The variance of a random variable is defined to be the expected value of the squared deviation of that variable from the mean. In order to calculate it, we first need to know the mean of a Bernoulli variate with probability $p$:
$$\mathbb{E}(x) = 0\cdot(1-p) + 1\cdot p = p$$
as the probability of seeing a $1$ is $p$ and the probability of seeing a $0$ is therefore $1-p$.
Now for the expected value of the squared deviations from the mean:
$$\begin{eqnarray}
\mathbb{E}(x-p)^2 &=& (0-p)^2\cdot(1-p) + (1-p)^2\cdot p \\
&=& p^2(1-p)+(1-p)^2p\\
&=&p(1-p)\cdot[p+(1-p)]\\
&=&p(1-p)
\end{eqnarray}$$
As for the Binomial case - the Bernoulli case with multiple observations - we know that the variance of the sum of independent variables is equal to the sum of the variances of the independent variables. In this case, all those independent variables have the same variance, so the sum is just $N$ times the variance:
$$\text{Var}\left(\sum_{i=1}^Nx_i\right) = \sum_{i=1}^N\text{Var}(x_i) = \sum_{i=1}^Np(1-p) =Np(1-p) $$
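As a quick numerical sanity check in R (the values of $p$ and $N$ are arbitrary):
```
set.seed(1)
p <- 0.3
N <- 10

var(rbinom(1e6, size = 1, prob = p))   # close to p*(1-p) = 0.21
var(rbinom(1e6, size = N, prob = p))   # close to N*p*(1-p) = 2.1
```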
| null |
CC BY-SA 4.0
| null |
2023-04-17T22:13:49.873
|
2023-04-17T22:13:49.873
| null | null |
7555
| null |
613268
|
1
| null | null |
0
|
51
|
I want to test whether a dispositional risk factor moderates the relation between a situational risk factor and a negative outcome in a regression model (including several control variables).
I found the moderation effect (predictor*moderator) in the expected direction using OLS regression. However, the distribution of the criterion variable is heavily skewed and seems to be better described by a gamma distribution than by the Gaussian form. Moreover, some requirements for OLS regression (homoscedasticity, normal distribution of residuals) are not met.
Therefore, I fit a generalized linear model with the glm function in R, choosing gamma as the family parameter. Since I was unsure whether the moderation effect can be interpreted in a similar way when a non-linear "log" link or "inverse" link is used, I also fit the same model with the "identity" link, assuming that this ensures a "normal" interpretation of the predictor*moderator coefficient.
I printed fit-criteria for all 4 models (see image).
```
Model1: family = gaussian (link = "identity") = OLS
Model2: family = gamma (link = "identity")
Model3: family = gamma (link = "log")
Model4: family = gamma (link = "inverse")
```
[](https://i.stack.imgur.com/qxSV2.png)
The moderation coefficient is only significant in Model 1 and Model 2, but not in Model 3 and Model 4.
- Is it justified to report the results for Model 2, since it seems to have the best fit-criteria? Or are there other important criteria I have to consider? My Moderation-Hypothesis was formulated in a linear way (I did not think about any potential non-linear relationship a priori).
- Does the fact that I found no significant moderation coefficient in Model 3 and 4 weaken the evidence for my moderation-hypothesis? Would one expect consistent results across the different link-functions?
|
GLM: Differing results for Interaction Effects depending on the link-function
|
CC BY-SA 4.0
| null |
2023-04-17T22:42:04.957
|
2023-04-18T07:37:35.560
|
2023-04-18T07:37:35.560
|
362671
|
385930
|
[
"generalized-linear-model",
"interaction",
"gamma-distribution",
"link-function"
] |
613269
|
2
| null |
600614
|
1
| null |
I just read your question. Maintaining a random variant is always interesting because it is a great source of feedback without any presentation bias. I am working in a company as an ML Engineer in which I have developed a MAB based on Thompson Sampling, and I came across a situation very similar to the one you are talking about. It seems logical to think that the random variant is the right one to feed our MAB because of its lack of bias and its pure exploration. But 90% of the MABs I have seen already include some exploration! Many of them are based on the explore-exploit tradeoff. So, to answer your second question, if your MAB has an exploration part, you can use its feedback as input (you probably need to transform or weight the data in some way).
Regarding your first question, if you force the MAB to exploit, it is no longer a MAB. As I said before, these algorithms are based on explore-exploit. If you remove one of those parts, you are doing the same thing you would do if you had a model that maximised the probability of having a favourable event after the MAB output.
I hope you find it helpful or that you have found the solution to your questions!
| null |
CC BY-SA 4.0
| null |
2023-04-17T22:47:45.933
|
2023-04-17T22:47:45.933
| null | null |
385952
| null |
613270
|
2
| null |
613137
|
1
| null |
>
If I have a cross sectional data, is it possible to include fixed effects?
A fixed effects analysis is attempting to exploit variation within a unit, usually over time. Without serial observations for units, there is no within-unit variation to exploit. If you're going to estimate fixed effects with dummy variables for each cross-sectional unit, then these are singletons. In simple terms, the dummies index each unit at a single point in time. Your model would be estimating just as many parameters as there are individual measurements.
>
So is it correct that it's impossible?
Correct.
Even absent the inclusion of additional covariates, you would not have enough residual degrees of freedom to estimate a model.
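As a minimal illustration (a sketch with simulated data and made-up names, not your setting):
```
set.seed(1)
n <- 20
cross_section <- data.frame(
  id = factor(1:n),   # one observation per cross-sectional unit
  x  = rnorm(n),
  y  = rnorm(n)
)

# "fixed effects" via a dummy for every unit
fit <- lm(y ~ x + id, data = cross_section)

df.residual(fit)  # 0: no residual degrees of freedom are left
length(coef(fit)) # 21 coefficients requested for only 20 observations (one comes back NA)
```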
| null |
CC BY-SA 4.0
| null |
2023-04-17T23:07:10.317
|
2023-04-17T23:07:10.317
| null | null |
246835
| null |
613272
|
2
| null |
367539
|
0
| null |
>
What do these two plots mean? Does the same set of assumptions (normality of residuals; homogenity of variance) apply for linear mixed effects model? Am I right in reading that this model is not properly specified as it violates normality assumptions?
The residual plots reflect that the assumptions of residual normality and homogeneity are violated. This affects inference but not the point estimates of the model: the p-values and confidence intervals of each coefficient, as well as confidence and prediction intervals of predicted means, are doubtful, but the point estimates of the fixed-effects coefficients and the predicted random effects are still unbiased. The reason is that the standard error of each fixed-effect coefficient is biased, although it is consistent if the number of groups (country and sector in your case) and the number of customers clustered in each group are large [https://thestatsgeek.com/2014/08/17/robustness-of-linear-mixed-models/](https://thestatsgeek.com/2014/08/17/robustness-of-linear-mixed-models/).
In addition to a limited range of the dependent variable, these violations can be caused by other peculiarities, such as the functional form of the predictors. You should make a plot of each response-predictor pair to help determine the functional form. Try at least logarithm, quadratic and cubic terms of loan amount and interaction terms between gender and loan amount to remove part of the nonnormality and heterogeneity in the error term. Coefficients in a fixed-effect specification of the form `log(y) ~ log(x)` measure elasticities directly. Adding further predictors may substantially improve the residual diagnostics. Check the significance of the fixed-effect predictors with `anova()`, with estimation by ML.
The clustering structure also needs attention, as I don't think that Sector S in Country A is comparable enough with Sector S in Country B to share the same sector random effect. Perhaps clustering by country-specific sector alone is better, which requires coding each country-sector pair as a unique group. Further, the variables with random effects also need investigation. The intercept has random effects in your current model specification. However, this within-country and within-sector variability might differ by gender. And there might be a random slope if the effect of loan amount on time to loan deviates by country-sector pair, and this random slope might be correlated with the random intercept. Therefore, a tentative random-effect specification might be `(0 + borrower.Gender + log(borrowing.Amount) | interaction(borrower.Country, borrower.Sector))`, where `0 + borrower.Gender` moves the random intercepts to the gender indicator, so that the standard deviation of the random intercepts differs by gender. You can compare this random-effect specification with your current crossed clustering by a likelihood ratio test, using `anova()` with estimation by REML, `varCompTest()` in {varTestnlme}, `exactRLRT()` in {RLRsim}, and `PBmodcomp()` in {pbkrtest}.
If your purpose is to make predictions, then there is no need to correct for the assumption violations, as the coefficient point estimates are unbiased. If you need to conduct hypothesis testing, determine predictor significance, and build confidence intervals, then consider modeling heterogeneity with `lme(weights =)` in Package {nlme} and using cluster-robust standard errors through `vcovCR()` in {clubSandwich}. As your sample size is huge in terms of the numbers of customers, countries, and sectors, the difference in standard errors between a standard linear mixed model and heterogeneity-corrected, cluster-robust results might be minimal.
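A rough sketch of that last suggestion (the data-frame name `df` is an assumption; the variable names are taken from the discussion above, and the exact variance function and clustering level would need to be checked against your data):
```
library(nlme)
library(clubSandwich)

# model residual heterogeneity as a power function of the loan amount,
# allowing the variance parameter to differ by gender
m_het <- lme(
  Time.to.obtain.loan ~ borrower.Gender * log(borrowing.Amount),
  random  = ~ 1 | borrower.Country/borrower.Sector,
  weights = varPower(form = ~ borrowing.Amount | borrower.Gender),
  data    = df
)

# cluster-robust (CR2) standard errors, clustered at the country level
coef_test(m_het, vcov = "CR2", cluster = df$borrower.Country)
```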
>
My dataset consists of only those that have received loans. So I assume right censoring does not apply for Time.to.obtain.loan.
If you intend to make any causal inference, such as whether gender and requested amount affect loan processing time, using data of only loan receivers constitutes a sample-selection bias. Instead, you should use data of all loan requesters although the processing time might be only observable on loan receivers. Check package {sampleSelection}. See Toomet, O., & Henningsen, A. (2008). Sample selection models in R: Package sampleSelection. Journal of Statistical Software, 27(7). [https://doi.org/10.18637/jss.v027.i07](https://doi.org/10.18637/jss.v027.i07) and Bushway, S., Johnson, B. D., & Slocum, L. A. (2007). Is the magic still there? The use of the Heckman two-step correction for selection bias in criminology. Journal of Quantitative Criminology, 23(2), 151–178. [https://doi.org/10.1007/s10940-007-9024-4](https://doi.org/10.1007/s10940-007-9024-4)
If you are instead interested in the observational relationship, using records of loan receivers alone is fine. I agree with Dimitris Rizopoulos that the data structure calls for survival analysis, as the dependent variable is a duration. Ideally, you should acquire data on those still under processing, so that the time to obtain a loan is not fully observed for everyone and involves right censoring. Even if the data do not exhibit right censoring, using survival analysis is still a good choice. However, survival analysis has its own assumptions, such as proportional hazards in a Cox model `coxph()`. Package {coxme} may not be necessary, as the regular Package {survival} can deal with clustered errors through `coxph(..., cluster())` and random intercepts through `coxph(..., frailty())`, but I think it can handle only one level of clustering. See an example of mixed-effects Cox regression from UCLA [https://stats.oarc.ucla.edu/r/dae/mixed-effects-cox-regression/](https://stats.oarc.ucla.edu/r/dae/mixed-effects-cox-regression/)
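A minimal sketch of such a frailty model (the event indicator `obtained` and the data-frame name `df` are placeholders, not from your data):
```
library(survival)

# obtained = 1 if the loan was received, 0 if still under processing
fit_cox <- coxph(
  Surv(Time.to.obtain.loan, obtained) ~ borrower.Gender + log(borrowing.Amount) +
    frailty(interaction(borrower.Country, borrower.Sector)),
  data = df
)
summary(fit_cox)
```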
>
Do you perhaps have (many) zeros in the Time.to.obtain.loan? If this is the case, indeed assuming a normal distribution would not be optimal. You could give a try to a Beta mixed effects models.
Beta regression is for proportional response, where the dependent variable is a fraction between zero and one, and there is zero- and one-inflated beta regression. The data structure does not appear to align with that assumed by beta regression. Instead, if the response is positive, gamma regression can be used. The result might be very similar to a linear regression with logarithm-transformed response. See a tutorial of gamma regression with shape and scale parameters' meaning and calculation explained [https://data.library.virginia.edu/getting-started-with-gamma-regression/](https://data.library.virginia.edu/getting-started-with-gamma-regression/). Still, the best option for a duration response is survival analysis.
>
although the Time.to.obtain.loan is not a count variable, can one use poisson model from GLMMAdaptive?
Yes, you may need special packages to achieve it, as the regular Poisson regression model in `glm(family = poisson)` allows only integers in the dependent variable. You can of course round the time to obtain loans to make it integer. Package {glmmTMB} worked best for me so far in count data modeling. It allows a noninteger dependent variable, zero inflation, dispersion effects, and random effects for repeated measurements and clustered errors. You could use clustered standard errors for `glmmTMB()` models though, perhaps through `vcovHC()` in {sandwich} and `vcovCR()` in {clubSandwich}. Again, the best option for a duration response should be survival analysis.
| null |
CC BY-SA 4.0
| null |
2023-04-18T00:55:17.713
|
2023-04-18T01:14:52.547
|
2023-04-18T01:14:52.547
|
284766
|
284766
| null |
613273
|
2
| null |
610541
|
1
| null |
To be clear, my response is assuming you have a single event of what appears to be a known and varying treatment intensity. In some applications, say a series of minimum wage hikes or exposure to different concentrations of particulate matter in the air, we can treat the increases/decreases over time as multiple events, especially when we're dealing with multiple level changes over time. It doesn't appear your data fits this pattern entirely, but there are some similarities. You know the event years and the precise exposure post-event. Thus, you're exploiting variation in treatment timing and intensity within and across individuals.
>
My first question is whether Equation (1) is an appropriate generalized DD equation that I can use to estimate the "treatment" effect?
Yes.
The variable $CT_{it}$ is a policy variable and represents a treatment status change. Include it as you would any other variable.
>
My second question is how can I estimate a dynamic period by period coefficient version of Equation (1)?
Assuming you have a lot of observations with treatment histories as defined above, then individuals $i$ may experience no exposure, constant exposure, and/or fluctuating exposure over time, even before the event starts. For some units, the exposure isn't even permanent; it appears the intensity "dies out" as you suggest with unit 1 so long as spending goes to zero. In the extreme case as with unit 4, the variable is a constantly changing continuous variable pre- and post-event.
To achieve "time-varying" effects, you need to substitute different time configurations into the model. When we lead and/or lag in a setting like this, we a lose a period. Depending upon the context, we can get away with replacing a missing value with 0, but this is only justified in settings where we know a priori that individuals do not have any treatment intensity before the policy (or after it ends); they are essentially untreated pre-event.
In a setting with a continuous policy variable (i.e., constantly changing numeric variable), here is one way to go about estimating time-varying effects:
$$
\begin{array}{ccccccccc}
i & t & CT_{it} & start & CT_{i,t+2} & CT_{i,t+1} & CT_{it} & CT_{i,t-1} & CT_{i,t-2} \\
\hline
1 & 2000 & 0 & 2004 & 0 & 0 & 0 & \text{NA} & \text{NA} \\
1 & 2001 & 0 & 2004 & 0 & 0 & 0 & 0 & \text{NA} \\
1 & 2002 & 0 & 2004 & 0.3 & 0 & 0 & 0 & 0 \\
1 & 2003 & 0 & 2004 & 0.4 & 0.3 & 0 & 0 & 0 \\
1 & 2004 & 0.3 & 2004 & 0.42 & 0.4 & 0.3 & 0 & 0 \\
1 & 2005 & 0.4 & 2004 & 0.2 & 0.42 & 0.4 & 0.3 & 0 \\
1 & 2006 & 0.42 & 2004 & 0 & 0.2 & 0.42 & 0.4 & 0.3 \\
1 & 2007 & 0.2 & 2004 & 0 & 0 & 0.2 & 0.42 & 0.4 \\
1 & 2008 & 0 & 2004 & \text{NA} & 0 & 0 & 0.2 & 0.42 \\
1 & 2009 & 0 & 2004 & \text{NA} & \text{NA} & 0 & 0 & 0.2 \\
\hline
2 & 2000 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2001 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2002 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2003 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2004 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2005 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2006 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2007 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2008 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
2 & 2009 & 0 & \text{Inf} & 0 & 0 & 0 & 0 & 0 \\
\hline
3 & 2000 & 0.1 & 2003 & 0.1 & 0.1 & 0.1 & \text{NA} & \text{NA} \\
3 & 2001 & 0.1 & 2003 & 0.5 & 0.1 & 0.1 & 0.1 & \text{NA} \\
3 & 2002 & 0.1 & 2003 & 0.6 & 0.5 & 0.1 & 0.1 & 0.1 \\
3 & 2003 & 0.5 & 2003 & 0.4 & 0.6 & 0.5 & 0.1 & 0.1 \\
3 & 2004 & 0.6 & 2003 & 0.2 & 0.4 & 0.6 & 0.5 & 0.1 \\
3 & 2005 & 0.4 & 2003 & 0.1 & 0.2 & 0.4 & 0.6 & 0.5 \\
3 & 2006 & 0.2 & 2003 & 0.3 & 0.1 & 0.2 & 0.4 & 0.6 \\
3 & 2007 & 0.1 & 2003 & 0.1 & 0.3 & 0.1 & 0.2 & 0.4 \\
3 & 2008 & 0.3 & 2003 & \text{NA} & 0.1 & 0.3 & 0.1 & 0.2 \\
3 & 2009 & 0.1 & 2003 & \text{NA} & \text{NA} & 0.1 & 0.3 & 0.1 \\
\hline
4 & 2000 & 0.3 & 2006 & 0.4 & 0.2 & 0.3 & \text{NA} & \text{NA} \\
4 & 2001 & 0.2 & 2006 & 0.2 & 0.4 & 0.2 & 0.3 & \text{NA} \\
4 & 2002 & 0.4 & 2006 & 0.3 & 0.2 & 0.4 & 0.2 & 0.3 \\
4 & 2003 & 0.2 & 2006 & 0.5 & 0.3 & 0.2 & 0.4 & 0.2 \\
4 & 2004 & 0.3 & 2006 & 0.1 & 0.5 & 0.3 & 0.2 & 0.4 \\
4 & 2005 & 0.5 & 2006 & 0.12 & 0.1 & 0.5 & 0.3 & 0.2 \\
4 & 2006 & 0.1 & 2006 & 0.13 & 0.12 & 0.1 & 0.5 & 0.3 \\
4 & 2007 & 0.12 & 2006 & 0.14 & 0.13 & 0.12 & 0.1 & 0.5 \\
4 & 2008 & 0.13 & 2006 & \text{NA} & 0.14 & 0.13 & 0.12 & 0.1 \\
4 & 2009 & 0.14 & 2006 & \text{NA} & \text{NA} & 0.14 & 0.13 & 0.12 \\
\hline
\end{array}
$$
Please note the endpoints. If you do not know individual spending beyond these limits, then they should be treated as missing (i.e., $\text{NA}$ = "Not Available"). With more and more leads and/or lags, a researcher will either restrict the effect window by binning at the last estimated lead/lag, or drop unit-time observations beyond the effect window. By "binning" we assume constant treatment effects beyond the effect window in one, or both, directions. For example, in the case with a binary treatment variable, a final binned lag just changes from 0 to 1 in that period and then stays equal to 1 for the remainder of the panel. In the continuous case, a binned lag is forward cumulated (e.g., $CT_{it} = CT_{it} + \Delta CT_{i, t-1}$). I rarely observe researchers explain how they treat the endpoints in their papers, especially in cases with continuous policy variables, so I can't even direct you to a good resource.
In most event study applications, it's not uncommon to simply ignore the estimated effects beyond the effect window. If you're working with a panel that is wider than it is long, then I would consider a shorter effect window. On the other hand, if you're working with a much longer time series, then losing a few periods at either endpoint isn't going to matter much.
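A minimal sketch of how the lead/lag columns in the table above could be built in R (assuming a long panel `df` with columns `i`, `t`, `CT`, and an outcome `y`; the estimation call is only illustrative):
```
library(dplyr)

df_lags <- df %>%
  arrange(i, t) %>%
  group_by(i) %>%
  mutate(
    CT_lead2 = lead(CT, 2),  # CT_{i,t+2}, NA at the end of each unit's series
    CT_lead1 = lead(CT, 1),  # CT_{i,t+1}
    CT_lag1  = lag(CT, 1),   # CT_{i,t-1}
    CT_lag2  = lag(CT, 2)    # CT_{i,t-2}, NA at the start of each unit's series
  ) %>%
  ungroup()

# two-way fixed effects regression with the distributed leads and lags
fit <- lm(y ~ CT_lead2 + CT_lead1 + CT + CT_lag1 + CT_lag2 +
            factor(i) + factor(t), data = df_lags)
```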
| null |
CC BY-SA 4.0
| null |
2023-04-18T00:59:32.707
|
2023-04-18T08:20:06.620
|
2023-04-18T08:20:06.620
|
246835
|
246835
| null |
613274
|
1
| null | null |
0
|
17
|
[](https://i.stack.imgur.com/TOFzT.jpg)
[](https://i.stack.imgur.com/fyaWy.jpg)
Here's the sample data I got from Kaggle. This is a sample of daily temperature data for about 20 years. In my initial attempt I tried to simplify this to monthly data and fit a seasonal ARIMA model. From the ACF and PACF I can see that the series looks like an AR process, with the ACF tailing off and the PACF cutting off. I fitted a model with `sarima(tempts_selected, 0, 0, 1, 7, 0, 1, 12)` and used it to predict the previous year's temperature. The RMSE is about 4, which is a large error, so I plan to improve this.
My questions are:
- It is difficult for me to find the parameters of the seasonal ARIMA model. I found resources suggesting to count the number of spikes in the PACF of the existing data, but I am not sure if my approach is correct. If I use a SARIMA model with a seasonal component around AR(2) or AR(1), then I get a result that is very inaccurate (incorrect temperature pattern). I am not sure if the parameters selected here are the best ones.
- I am considering using a Long Short-Term Memory (LSTM) network on this data to make improvements, so I am asking whether LSTMs typically work well with long daily series like this.
[](https://i.stack.imgur.com/R37XQ.jpg)
|
Time Series Forecasting Improving Accuracy
|
CC BY-SA 4.0
| null |
2023-04-18T01:18:25.053
|
2023-04-18T01:18:25.053
| null | null |
385958
|
[
"machine-learning",
"arima",
"seasonality"
] |
613275
|
1
| null | null |
1
|
15
|
Respected sir/madam,
When we perform QSAR, say for predicting some biological activity, is normalization of the features (the independent variables, which in my case are topological indices) permitted when deriving a QSAR MLR model equation?
|
Scalings that can be used during QSAR
|
CC BY-SA 4.0
| null |
2023-04-18T01:24:05.940
|
2023-04-18T11:55:51.480
| null | null |
325928
|
[
"machine-learning",
"multiple-regression",
"qsar"
] |
613276
|
1
|
613603
| null |
0
|
46
|
I saw a comment here by @gavin-simpson in [y-axis values in plot(gam)](https://stats.stackexchange.com/questions/557804/y-axis-values-in-plotgam), and I don't understand why `trans = exp` is used there instead of `trans = plogis`. How do you decide which one to use?
The code I am using
```
library(mgcv)
b <- gam(outcome ~ s(week, k = 4, fx = TRUE, by = food) + food, data = df1, family = betar(link="logit"), method = "REML")
summary(b)
```
My dependent variable is in proportion from 0 to 1. Week is a form of timeline measure, and food is a categorical variable.
When I do `trans = plogis`, I get the plot below
```
plot(b, pages = 1, trans = plogis, shift = coef(b)[1])
```
How shall I interpret this plot here with `trans = plogis` ?
[](https://i.stack.imgur.com/XPAVp.png)
When I do `trans = exp`, I get the plot below
```
plot(b, pages = 1, trans = exp, shift = coef(b)[1])
```
My question is why am I getting values larger than 1 when I do `trans = exp` ?
[](https://i.stack.imgur.com/6d1mk.png)
I am new to GAM and still learning, any guidance is appreciated.
|
y-axis in plot(gam) , difference between trans = plogis and trans = exp
|
CC BY-SA 4.0
| null |
2023-04-18T01:36:27.403
|
2023-04-20T18:12:15.663
| null | null |
362150
|
[
"r",
"regression",
"generalized-additive-model",
"mgcv"
] |
613278
|
1
| null | null |
1
|
40
|
I was hoping to get thoughts on my logic/thought process here
I'm confused when it comes to running an A/B test on CTR. Experiments are typically allocated on a user-level, while CTR is typically an impression level metric (i.e. # of clicks / # of impressions)
How do we reconcile this?
My thoughts...
Aggregate data on a user-level, meaning each row is now a user_id with columns impressions, clicks, and ctr.
We can then calculate some weighted mean of ctr and weighted variance/stdev of ctr across users.
Does this make sense? Am I overcomplicating things? Would love some thoughts here.
|
A/B Test - User-level CTR
|
CC BY-SA 4.0
| null |
2023-04-18T02:07:02.773
|
2023-04-18T06:01:18.083
| null | null |
326432
|
[
"hypothesis-testing",
"statistical-significance",
"experiment-design"
] |
613279
|
1
|
613309
| null |
2
|
71
|
Here is the question:
>
Suppose $X, Y$ are independent $N(0,1)$ random variables. And take the regression of $Y$ against $X.$ What is the relationship between $R^2$ and sample size approximately?
First I am not sure if "independent" is a typo in the original question. In ordinary least squares (OLS), there is no specific relation between $R^2$ and sample size.
Here $R^2$ is defined as:
$$R^2 = 1 - \dfrac{RSS}{TSS},$$
$$RSS = \sum_i(y_i-\hat{y}_i)^2,\quad TSS = \sum_i(y_i-\bar{y})^2.$$
And in simple linear regression with an intercept, $R^2$ is exactly the squared correlation between $X$ and $Y$.
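A quick numerical check of that last statement (minimal R sketch):
```
set.seed(1)
x <- rnorm(100)
y <- rnorm(100)                  # independent of x

summary(lm(y ~ x))$r.squared     # R^2 from the regression
cor(x, y)^2                      # squared sample correlation -- identical
```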
|
relation between R-squared and sample size
|
CC BY-SA 4.0
| null |
2023-04-18T02:38:35.703
|
2023-04-18T12:25:41.063
|
2023-04-18T12:25:41.063
|
255075
|
255075
|
[
"regression",
"least-squares",
"sample-size",
"r-squared"
] |
613280
|
1
| null | null |
0
|
18
|
Recently I've been learning Partial Least Squares (PLS), and learned that there are some variations of PLS, namely PLS-Canonical, PLS-SVD, PLS2 and PLS1 (mainly from the [User Guide of PLS classes](https://scikit-learn.org/stable/modules/cross_decomposition.html#cross-decomposition) in Python's scikit-learn library). I could understand most of it, and have read the reference that page provides ([A survey of Partial Least Squares (PLS) methods, with emphasis on the two-block case, by JA Wegelin](https://stat.uw.edu/sites/default/files/files/reports/2000/tr371.pdf)). Nevertheless, I still have questions about the details of predicting $Y$ based on new samples.
For example, assume we have two matrix blocks $\mathbf{X}$ and $\mathbf{Y}$, and I want to formulate a regression model based on PLS-Canonical. In PLS-Canonical, we first calculate the first pair of latent scores of $\mathbf{X}$ and $\mathbf{Y}$, using the first pair of left and right singular vectors of $\mathbf{X}^T\mathbf{Y}$ associated with the largest singular value, $\boldsymbol{u}_1$ and $\boldsymbol{v}_1$. The calculated latent scores would be $\boldsymbol{\xi}_1 = \mathbf{X} \boldsymbol{u}_1$ and $\boldsymbol{\omega}_1 = \mathbf{Y} \boldsymbol{v}_1$; the remaining pairs of $\boldsymbol{\xi}_r$ and $\boldsymbol{\omega}_r$ where $r > 1$ would be calculated using the same approach based on $\mathbf{X}^{(r)}$ and $\mathbf{Y}^{(r)}$, where $\mathbf{X}^{(r)}$ is the residual after the deflation of $\mathbf{X}^{(r-1)}$ based on $\boldsymbol{\xi}_{r-1}$, and $\mathbf{Y}^{(r)}$ is obtained similarly, and $\mathbf{X}^{(1)} = \mathbf{X}$, $\mathbf{Y}^{(1)} = \mathbf{Y}$.
No problems so far, but how to regress $\mathbf{Y}$ on $\mathbf{X}$? Based on my understanding (or my guess), we do a one-to-one regression of each of the pairs
$$
\hat{\boldsymbol{\omega}}_r = \frac{\boldsymbol{\xi}_r \boldsymbol{\xi}_r^T \boldsymbol{\omega}_r }{\boldsymbol{\xi}_r^T \boldsymbol{\xi}_r} = \alpha_r \boldsymbol{\xi}_r,
$$
and then
$$
\hat{\mathbf{Y}} = \sum_{r=1}^R \alpha_r \boldsymbol{\xi}_r \boldsymbol{\delta}_r^T,
$$
where $\boldsymbol{\delta}_r$ is the loading of $\boldsymbol{\omega}_r$. In matrix form, it would be
$$
\hat{\mathbf{Y}} = \mathbf{\Xi}
\left[ \begin{matrix}
\alpha_1 & & \\
& \ddots & \\
& & \alpha_R
\end{matrix} \right]
\mathbf{\Delta}^T.
$$
I thought I had been correct until I saw [the source code of scikit-learn's implementation of PLS-Canonical](https://github.com/scikit-learn/scikit-learn/blob/364c77e04/sklearn/cross_decomposition/_pls.py#L667). In the `predict` method, it appears that
```
Ypred = X @ self._coef_.T
```
and `self._coef_` is computed as follows:
```
# here self.x_rotations_ is a transformation matrix such that X @ self.x_rotations_ would be Xi.
self._coef_ = np.dot(self.x_rotations_, self.y_loadings_.T)
self._coef_ = (self._coef_ * self._y_std).T
```
which seems to be
$$
\hat{\mathbf{Y}} = \mathbf{\Xi} \mathbf{\Delta}^T,
$$
where no involvement of $\alpha_r$'s is found.
I expect scikit-learn to be correct, so I am confused by the difference between my understanding and the implementation of sklearn. Have I got anything wrong? Any help would be appreciated!
|
How to predict Y using PLS-Canonical model given the X matrix?
|
CC BY-SA 4.0
| null |
2023-04-18T02:53:15.983
|
2023-04-18T02:53:15.983
| null | null |
267818
|
[
"regression",
"machine-learning",
"partial-least-squares"
] |
613281
|
2
| null |
612396
|
3
| null |
The Wilcoxon test is locally asymptotically optimal for location shift in a logistic distribution. Because it's based on ranks, it is also locally asymptotically optimal whenever a monotone one:one transformation of the data would produce a location shift in a logistic distribution.
One place this is derived is example 13.14 of Asymptotic Statistics by van der Vaart.
| null |
CC BY-SA 4.0
| null |
2023-04-18T03:28:56.700
|
2023-04-18T03:28:56.700
| null | null |
249135
| null |
613282
|
1
|
613411
| null |
0
|
20
|
I know how `glmnet` re-scales the `penalty.factor` with a sum `nvar` as discussed in this [post](https://statisticaloddsandends.wordpress.com/2018/11/13/a-deep-dive-into-glmnet-penalty-factor/).
$$
\underset{\beta}{\operatorname{minimize}} \frac{1}{2} \frac{\mathrm{RSS}}{n}+\lambda \sum_{j=1}^p \frac{c_j}{\bar{c}}\left(\frac{1-\alpha}{2}\left\|\beta_j\right\|_2^2+\alpha\left\|\beta_j\right\|_1\right)
$$
But when `penalty.factor` contains `Inf` values, i.e. when some variables are excluded, how does the re-scaling work?
Does it re-scale the factors so that they sum to the number of included variables, say `nvar.included`?
And how could I verify that?
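One way to check this empirically (a sketch under the assumption that `Inf` entries simply drop the variable before rescaling; this is not authoritative about glmnet's internals):
```
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 4), ncol = 4)
y <- rnorm(100)

# variable 1 excluded via an infinite penalty factor
fit_inf <- glmnet(x, y, penalty.factor = c(Inf, 1, 2, 3))

# same model with variable 1 removed and only the finite factors supplied
fit_sub <- glmnet(x[, -1], y, penalty.factor = c(1, 2, 3))

# if rescaling uses only the included variables, the lambda sequences and the
# coefficient paths of the shared variables should coincide
all.equal(fit_inf$lambda, fit_sub$lambda)
cbind(coef(fit_inf, s = 0.1)[-(1:2)], coef(fit_sub, s = 0.1)[-1])
```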
|
How `glmnet` re-scales `penalty.factor` with `Inf` values
|
CC BY-SA 4.0
| null |
2023-04-18T03:49:16.137
|
2023-04-19T02:16:30.133
| null | null |
204986
|
[
"glmnet"
] |
613283
|
1
| null | null |
6
|
1200
|
Does higher variance usually mean lower probability density, regardless of the type of distribution? Thank you.
Update:
Sorry for confusion. Please allow me to clarify. If I sample the same number of data points from two distributions of the same type, but one distribution with a lower variance and another one with a higher variance, would the former sample have higher likelihood?
An example with Gaussian distributions:
- Sample $\mathbf{X_1}$ from $N(\mu_1, \sigma_1^2)$
- Sample $\mathbf{X_2}$ from $N(\mu_2, \sigma_2^2)$, where $\sigma_1 < \sigma_2$,
is it true that $P(\mathbf{X_1}; N(\mu_1, \sigma_1^2)) < P(\mathbf{X_2}; N(\mu_2, \sigma_2^2))$?
Thank you!
|
Does higher variance usually mean lower probability density?
|
CC BY-SA 4.0
| null |
2023-04-18T04:50:53.770
|
2023-04-19T02:46:25.353
|
2023-04-18T18:20:08.857
|
360550
|
360550
|
[
"probability",
"distributions",
"mathematical-statistics"
] |
613284
|
2
| null |
613283
|
15
| null |
Up to a point. Because the density integrates to 1, the typical value of the density will be higher if the distribution has a lower variance and lower if it has a higher variance. For example, the maximum density of a Normal distribution with variance $\sigma^2$ is $1/(\sigma\sqrt{2\pi})$, which gets lower as $\sigma$ gets higher. On the other hand, because the distribution with larger variance is more spread out, it has higher density out in the tails. For example, here are two Normal distributions, with variance 1 and 4.
[](https://i.stack.imgur.com/PWnrv.png)
The distribution with variance 1 has higher density in the middle, but lower density at the edges.
However, if you have two distributions with different shapes, it's harder to make generalisations.
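A plot along those lines can be sketched in a few lines of R (the range and colours here are arbitrary):
```
curve(dnorm(x, mean = 0, sd = 1), from = -6, to = 6,
      ylab = "density", col = "blue")                       # variance 1
curve(dnorm(x, mean = 0, sd = 2), add = TRUE, col = "red")  # variance 4
legend("topright", legend = c("variance 1", "variance 4"),
       col = c("blue", "red"), lty = 1)
```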
| null |
CC BY-SA 4.0
| null |
2023-04-18T05:13:53.770
|
2023-04-18T05:13:53.770
| null | null |
249135
| null |
613285
|
1
| null | null |
0
|
32
|
To assess the impact of temperature and humidity on CPU performance, we conducted an experiment where we manipulated temperature and humidity levels. There were two within-subjects variables in the experiment: temperature (with high and low levels) and humidity (with high, medium, and low levels). We measured the performance of a large number of CPUs using two metrics: response speed and average speed of long-term calculations.
Then we conducted two repeated measures ANOVAs (2x3) with response speed and average speed of long-term calculations as dependent variables. Both ANOVAs revealed significant interaction effects.
I want to test to what extent these two indicators (response speed and average speed of long-term calculations) can be used interchangeably in detecting the interaction effect between temperature and humidity, in order to conclude that future research can use only one indicator to detect the interaction effect because it predicts the other indicator to a large extent.
How can I do this? Any suggestions would be appreciated.
|
How can I test the relevance between two interaction effects from two repeated measures ANOVAs?
|
CC BY-SA 4.0
| null |
2023-04-18T05:45:17.223
|
2023-04-23T18:33:15.423
|
2023-04-18T13:48:16.487
|
385972
|
385972
|
[
"mathematical-statistics",
"correlation",
"anova",
"repeated-measures",
"interaction"
] |
613286
|
2
| null |
613278
|
0
| null |
One solution is to eschew sessions and instead use a boolean value for whether the user clicked on your CTA/button/whatever within some specified time range from their initial exposure.
For example, you could measure whether users click on the CTA within 24 hours of their initial exposure. Any click would yield a 1 in their row; no click within 24 hours would yield a 0. This yields one row per user, with a treatment status indicator and an indicator for whether the outcome was observed within the desired time. Note that "power users" cannot influence this metric the way they can when analyzing session-level outcomes; the outcome for each user is either a 0 or a 1. There are no outliers in this respect.
The problem is then the comparison of success rates for two binomial random variables and you can use a z test of proportions, or a logistic regression, or whatever you want. Appropriate techniques for statistical power can be taken from there depending on which analysis you intend to perform.
What you lose in this approach is the interpretation. If CTR is defined as # clicks per # impressions, then we are analyzing something different (namely, the risk of observing the outcome within time t). Maybe that doesn't matter, maybe it does, but it is important to know you are answering a slightly different question.
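For the analysis step mentioned above, a minimal sketch with made-up counts:
```
# clicks within 24h and number of exposed users, per arm (made-up numbers)
clicks <- c(control = 430, treatment = 470)
users  <- c(control = 5000, treatment = 5000)

prop.test(clicks, users)          # two-sample test of proportions

# equivalent logistic-regression formulation
arm <- factor(rep(c("control", "treatment"), times = users))
clicked <- c(rep(1:0, c(clicks[1], users[1] - clicks[1])),
             rep(1:0, c(clicks[2], users[2] - clicks[2])))
summary(glm(clicked ~ arm, family = binomial))
```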
| null |
CC BY-SA 4.0
| null |
2023-04-18T06:01:18.083
|
2023-04-18T06:01:18.083
| null | null |
111259
| null |
613288
|
1
| null | null |
3
|
71
|
I am a bit confused by the [Classical CLT section](https://en.wikipedia.org/wiki/Central_limit_theorem#Classical_CLT) of the central limit theorem article on Wikipedia. It basically says that as the sample size gets larger, the difference between the sample mean and the true mean approximately follows a normal distribution. But from the law of large numbers we know that as the sample size approaches infinity, the sample mean becomes the true mean. In other words, it is a single point instead of a distribution. So this result cannot be talking about the limiting case. So at which point can we regard the difference between the sample mean and the true mean as normal?
|
Asymptotic normality in central limit theorem
|
CC BY-SA 4.0
| null |
2023-04-18T06:37:00.677
|
2023-04-18T23:33:44.993
|
2023-04-18T21:16:07.240
|
56940
|
272776
|
[
"probability",
"central-limit-theorem",
"asymptotics",
"law-of-large-numbers"
] |
613289
|
1
| null | null |
1
|
22
|
Somebody asked me for help to design an experiment with animals and its statistical analysis for a medical thesis.
She wants to "calculate" the sample size, and wants to minimize it.
She doesn't have any previous data yet.
The model is going to be a linear regression model with four covariates and repeated measures.
The participants will be randomly selected at the beginning. No further manipulation.
I know that calculating N depends on many subjective things.
I would like to suggest that she try a small N instead, and increase it as needed, with corrections for the p-values.
I don't know what corrections to use, because some of the data will be new but we will already know the old data.
Something like Bonferroni doesn't sound good because the denominator will increase as fast as the number of analyses.
What would be the simplest way to do it? (Without Bayesian analysis, because I don't have experience with it.)
Maybe keep doing new single experiments until she gets the desired effect?
Or maybe double N at every step?
|
What is the simpler experiment design and corrections I can use for an adaptative model?
|
CC BY-SA 4.0
| null |
2023-04-18T07:24:51.967
|
2023-04-21T14:53:04.507
|
2023-04-20T08:08:23.087
|
23802
|
23802
|
[
"regression",
"p-value",
"experiment-design",
"adaptative"
] |
613290
|
1
| null | null |
1
|
36
|
I have a patient group (n=30) who all have a specific kidney disease. I have their kidney function at baseline and their kidney function at follow-up 1 year later, and some confounders: age, gender, and smoking-status (never, former, or yes).
I have the following hypothesis: The patients' decrease in kidney function from baseline to follow-up is caused by a natural/expected decrease, and thus, not by their disease.
I thought I would make a linear regression to show this, but I’m not so sure anymore - or maybe I’m just not sure how to interpret it. I’ve tried doing it like this:
```
regress kidney_function_FU age gender smoking kidney_function_base
```
This gives me the following output:
|kidney_function_FU |Coef. (95%CI) |p-value |
|------------------|-------------|-------|
|age |-0.47 (-0.77; -0.18) |0.003 |
|1.gender |-2.88 (-11.48; 5.73) |0.496 |
|2.smoking |12.45 (2.06; 22.84) |0.021 |
|3.smoking |0.03 (-9.26; 9.33) |0.994 |
|kidney_function_base |0.64 (0.30; 0.98) |0.001 |
|_cons |56.08 (14.3; 20.21) |0.011 |
R-squared = 0.72
Adj. R-squared = 0.68
So, I guess R-squared is quite large, meaning that these variables (confounders + baseline kidney function) explain ≈ 70% of the (decreased) kidney function at follow-up. However, I'm not sure how I could really ever accept my hypothesis - this I could only do, I guess, if R-squared was close to 100%?
Is it my interpretation that is wrong, or my choice of analysis?
|
(Expected) decrease from baseline to follow-up - linear regression?
|
CC BY-SA 4.0
| null |
2023-04-18T07:29:54.473
|
2023-04-18T15:35:07.533
|
2023-04-18T07:36:10.643
|
362671
|
322537
|
[
"regression"
] |
613291
|
1
| null | null |
0
|
12
|
We know that [PAC bounds](https://arxiv.org/pdf/2110.11216.pdf) tell us the empirical risk on training data cannot be too different from the true risk. My question is, does this result imply that the empirical risk minimizer also cannot be too far from the true minimizer in parameter space? Personally, I don't think that is the case, as I don't see anything preventing the minimum of the training loss landscape from being arbitrarily far from the minimum of the loss over the entire data distribution while having a similar loss value.
Edit: [this paper](https://sites.ualberta.ca/%7Eszepesva/papers/GenLinBandits-NeurIPS2010.pdf) states in section 4.2 that it is a standard statistical result that the maximum likelihood estimate converges to a normal distribution. So given a convex loss, if we sample multiple training datasets from the entire dataset and train the neural network to find their respective empirical minimizers, then those empirical minimizers approximately follow a normal distribution? And in the non-convex setting, the distribution is locally approximately normal centered around every minimizer of the loss on the entire data distribution?
|
Does PAC bound or MLE imply the distribution of empirical risk minimizer?
|
CC BY-SA 4.0
| null |
2023-04-18T07:53:23.753
|
2023-04-18T10:46:32.297
|
2023-04-18T10:46:32.297
|
272776
|
272776
|
[
"pac-learning"
] |
613293
|
1
| null | null |
0
|
18
|
To find spatial patterns in my data, I performed a hotspot analysis (Getis-Ord Gi*). In order to explain the distribution of hotspots and coldspots, I want to use the z-scores as the dependent variable in an OLS regression. Is this allowed? (I know z-scores are usually used for independent variables.) And can I also run a GWR (geographically weighted regression) with this dependent variable (z-scores)?
Thanks!
|
Can I use z-scores from hotspot analysis (Getis-Ord Gi*) as dependent variables for regression model?
|
CC BY-SA 4.0
| null |
2023-04-18T08:09:09.120
|
2023-06-02T05:40:50.770
|
2023-06-02T05:40:50.770
|
121522
|
385983
|
[
"regression",
"spatial",
"z-score"
] |
613294
|
1
| null | null |
0
|
13
|
I have a set of data points consisting of a number of parameters and a discrete measure. The data is bifurcated (using decision tree regression on the parameters) to select a sample which has the highest mean. I want to run some hypothesis test on this to determine whether the selected sample actually comes from a different distribution or is simply the tail end of the distribution describing all of the data.
Since the sample is not randomly selected I don't think something like a t-test would be correct to use, but not quite sure if there exists something that is suitable here. The number of possible splits somehow needs to be taken into account as well, but not sure if running some multiple hypothesis correction is enough either. What methods exist for this? Perhaps there is some Bayesian approach?
|
Hypothesis testing for non-randomly sampled data
|
CC BY-SA 4.0
| null |
2023-04-18T08:32:03.833
|
2023-04-18T08:32:03.833
| null | null |
385985
|
[
"hypothesis-testing",
"bayesian",
"multiple-comparisons"
] |
613295
|
1
|
613351
| null |
1
|
21
|
I'm looking for ways of fine tuning my FFN binary classifier, which operates on large, flat vector inputs. The theory behind ResNets seems promising to me, but I'm not confident about whether they are supposed to work with non-image data and without convolutional layers, or not. Even more, I'm unsure how to handle the dimension reduction with identity mapping between dense layers of different sizes. Any advice? Experience?
|
Does it make sense to use Residual Blocks in a simple FFN classifier without convolutions?
|
CC BY-SA 4.0
| null |
2023-04-18T08:43:17.533
|
2023-04-19T12:53:02.917
| null | null |
349552
|
[
"classification",
"residual-networks"
] |
613296
|
2
| null |
612970
|
3
| null |
### Q1
Yes, with the default univariate smooth `s(x)` there will always be one basis function that is a linear function and hence perfectly correlated with `x`. That this is the last function is, I think, implementational; nothing changes if you put this linear basis function first or anywhere in the set of basis functions.
Note, however, that reducing the default size of the penalty null space will remove this linear basis function: with `s(x, m = c(2, 0))`, we are requesting a low-rank thin plate regression spline (TPRS) basis with 2nd order derivative penalty and zero penalty null space. As the linear function is in the penalty null space (it is not affected by the wiggliness penalty as it has second derivative of 0), it will be removed from the basis.
If we have a bivariate low rank TPRS, then there will be a linear plane that is perfectly correlated with $x_1$ and another that is linearly correlated with $x_2$ for low rank TPRS $f(x_1,x_2)$.
### Q2
I'm not going to repeat Wood (2003) — if you want the math behind thin plate splines, read that paper or §5.5 of (the second edition of) Simon's book (2017) for the detail.
The raw basis functions for the univariate thin plate spline are given by
$$
\eta_{md}(r) = \frac{\Gamma(d/2 - m)}{2^{2m}\pi^{d/2}(m-1)!} r^{2m-d}
$$
for $d$ odd and here $d = 1$ as we are speaking about a univariate spline. $m$ is the order of the penalty, so be default $m = 2$. $r = \| \mathbf{x}_i - \mathbf{x}_j \|$, i.e. the Euclidean distance between the data $\mathbf{x}_i$ and the control points or knots $\mathbf{x}_j$, the latter being the unique values of $\mathbf{x}_i$.
These functions look like this
```
# definition from Wood 2017 and Wood 2003 JRSSB
# this is for a 1 dimensional smooth (d) with 2nd order derivative penalty (m)
eta <- function(x, x0) {
d <- 1
m <- 2
r <- sqrt((x - x0)^2)
top <- gamma((d/2) - m)
bot <- (2^(2*m) * pi^(d/2)) * factorial(m-1)
(top / bot) * r^((2*m) - d)
}
tprs <- function(x, knots = NULL, null_space = TRUE) {
require("tidyr")
# want a basis function per unique x if no control points provided
if (is.null(knots)) {
knots <- sort(unique(x))
}
bf <- outer(x, knots, FUN = eta)
if (null_space) {
bf <- cbind(rep(1, length(x)), x, bf)
}
# convert to a tibble for easy plotting in ggplot
n_knots <- length(knots)
n_bf <- ifelse(null_space, n_knots + 2, n_knots)
colnames(bf) <- paste0("bf", seq_len(n_bf))
bf <- bf |>
tidyr::as_tibble() |>
tibble::add_column(x, .before = 1L) |>
tidyr::pivot_longer(!all_of("x"), names_to = "bf", names_prefix = "bf") |>
dplyr::mutate(bf = factor(bf, levels = seq_len(n_bf)))
bf
}
set.seed(1)
x_ref <- seq(0, 1, length = 100)
x <- runif(20)
knots <- sort(unique(x))
bfuns <- tprs(x_ref, knots = x)
library("ggplot2")
bfuns |>
ggplot(aes(y = value, x = x, group = bf, colour = bf)) +
geom_line() +
theme(legend.position = "none") +
facet_wrap(~ bf, scales = "free_y")
```
[](https://i.stack.imgur.com/TM69L.png)
Basis functions 1 and 2 are the functions in the penalty null space for this basis (they have 0 second derivative).
For practical usage, the basis needs to have identifiability constraints applied to it; typically this is a sum-to-zero constraint. As a result the knot-based (1 basis function per $\mathbf{x}_j$) thin plate spline basis looks like this:
```
library("gratia")
x_ref <- seq(0, 1, length = 50)
x <- runif(7)
knots <- sort(unique(x))
bfs <- basis(s(x, k = 7), data = data.frame(x = x), knots = list(x = knots),
at = data.frame(x = x_ref), constraints = FALSE)
draw(bfs) & facet_wrap(~bf)
```
[](https://i.stack.imgur.com/2tx2Q.png)
Here shown for 7 data, hence 7 basis functions. This is achieved in mgcv by passing in the `knots` argument and having it be the same length as `k`. If you want to do this with large `n` and hence `k` you will likely need to read `?tprs` and note the setting `max.knots`.
By default however, mgcv doesn't use this knot-based tprs basis. Instead it uses the low-rank approach of Wood (2004), but applies and eigendecomposition to the full basis, and retains the eigenvectors associated $k$-largest eigenvalues as a new basis. The point of this is that we can retain much of the original, rich basis in the low-rank one, thus providing a close approximation to the ideal spline basis, without needing $n$ basis functions (number of unique data points), i.e. covariates. This low-rank solution requires a computationally costly eigendecomposition, but mgcv uses an algorithm such that it only ever needs to find the eigenvectors for the $k$-largest eigenvalues, not the full set of eigenvectors. Even so, for large $n$ this is still computationally costly and `?tprs` suggests what to do in such cases.
These eigendecomposition-based basis functions, for the same 7 data as above, look like this
```
bfs_eig <- basis(s(x, k = 7), data = data.frame(x = x_ref), constraints = FALSE)
draw(bfs_eig) & facet_wrap(~bf)
```
[](https://i.stack.imgur.com/UkHBd.png)
Note that in these plots I'm showing the basis with the constant function included. In a typical model, the constant function is removed because it is confounded with the intercept.
### Q3
The only other basis in mgcv that has this property is the Duchon spline (`bs = "ds"`). This is not surprising as the thin plate spline is a special case of the more general class of Duchon splines.
This is not to say that other bases do not include the linear function in their span; they do, but this is achieved through a specific weighting of the individual basis functions, and as such there isn't a basis function that is linear in the other bases in mgcv.
### References
Wood, S.N., 2003. Thin plate regression splines. J. R. Stat. Soc. Series B Stat. Methodol. 65, 95–114. [https://doi.org/10.1111/1467-9868.00374](https://doi.org/10.1111/1467-9868.00374)
Wood, S.N., 2017. Generalized Additive Models: An Introduction with R, Second Edition. CRC Press.
| null |
CC BY-SA 4.0
| null |
2023-04-18T08:44:41.030
|
2023-04-18T08:44:41.030
| null | null |
1390
| null |
613297
|
1
| null | null |
0
|
44
|
Do outliers begin at the whisker limit or above it?
In the (Python) example below the calculated upper whisker limit is `64.8125`. Is a value of `64.8125` an outlier (`>= upper_limit`)? Or are outliers only the values higher than the limit (`> upper_limit`)?
```
>>> import seaborn as sns
>>> df = sns.load_dataset('titanic')
>>> df.age.quantile([.25, .5, .75])
0.25 20.125
0.50 28.000
0.75 38.000
Name: age, dtype: float64
>>> q1, q2, q3 = df.age.quantile([.25, .5, .75])
>>> iqr = q3 - q1
>>> whisker_length = iqr * 1.5
>>> upper_limit = q3 + whisker_length
>>> lower_limit = q1 - whisker_length
>>> lower_limit, upper_limit
(-6.6875, 64.8125)
```
|
Do outliers begin from or above the whisker-limit?
|
CC BY-SA 4.0
| null |
2023-04-18T08:45:16.153
|
2023-04-18T13:18:12.067
| null | null |
126967
|
[
"python",
"outliers"
] |
613298
|
2
| null |
613297
|
2
| null |
Under Tukey's approach, the whiskers are specifically drawn in each direction to the furthest observation from the median that is still within (or equal to) the inner fence. If a point is exactly on the inner fence, the whisker includes it.
A marked point must therefore be outside the inner fences.
That is strictly above on the high side (above the upper inner fence) but strictly below on the low side (below the lower inner fence).
Here's an illustration where there's points above the upper inner fence.
[](https://i.stack.imgur.com/NJnB7.png)
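In R, `boxplot.stats()` implements this convention: the whisker runs to the most extreme observation that is no more than `coef` times the box length beyond the hinge (so a point exactly on the fence is covered by the whisker), and only points strictly beyond that are returned in `$out`. A minimal sketch:
```
x <- c(1, 2, 3, 4, 5, 100)
bs <- boxplot.stats(x, coef = 1.5)
bs$stats  # lower whisker end, lower hinge, median, upper hinge, upper whisker end
bs$out    # 100 -- the only point beyond the upper inner fence
```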
| null |
CC BY-SA 4.0
| null |
2023-04-18T08:50:40.837
|
2023-04-18T11:39:42.203
|
2023-04-18T11:39:42.203
|
805
|
805
| null |
613299
|
2
| null |
612778
|
4
| null |
Computational complexity has nothing to do with being a Bayesian model. Bayes theorem is a mathematical concept, it has no computational complexity whatsoever. There are different Bayesian models, where some have [closed-form solutions](https://stats.stackexchange.com/questions/70848/what-does-a-closed-form-solution-mean) so can be solved "instantly", some would need you to take complicated integrals. For the latter, we have many algorithms for finding the, usually approximate, solutions, where each of the algorithms has different computational complexity.
As you can learn from [Who Are The Bayesians?](https://stats.stackexchange.com/questions/167051/who-are-the-bayesians) there is no clear definition of "Bayesianism" but one of the key concepts is the [subjectivist interpretation of probability](https://stats.stackexchange.com/questions/173056/how-exactly-do-bayesians-define-or-interpret-probability). [The Bayesian model is the one defined in terms of priors and likelihood and using Bayes theorem](https://stats.stackexchange.com/questions/129017/what-exactly-is-a-bayesian-model), which is clearly the case of Gaussian processes. Notice however that there are generalizations of the Bayesian approach that would be considered by many still as Bayesian approaches, e.g. ABC (see [approximate-bayesian-computation](/questions/tagged/approximate-bayesian-computation)) that do not even try to directly calculate the Bayes theorem and consider scenarios where this is not possible.
Finally, you seem to be sticking to the idea of Bayesian updating but notice that for some Bayesian models, it would not be possible in practice to do such an update at all. For example, if you are using Markov Chain Monte Carlo for sampling from the posterior distribution to get an approximation of it, there is no simple way of using those samples as a prior for another model.
| null |
CC BY-SA 4.0
| null |
2023-04-18T08:59:14.493
|
2023-04-18T10:43:01.007
|
2023-04-18T10:43:01.007
|
35989
|
35989
| null |
613300
|
2
| null |
588224
|
0
| null |
For qq plots, use standardized and normalized residuals, and compare with a diagonal line. The raw residual type = "r" will not align with the diagonal y = x.
```
qqnorm(residuals(mod1 , type = "p")) # pearson residual
qqline(residuals(mod1 , type = "p")) # add trend line
abline(0, 1)
qqnorm(residuals(mod1 , type = "n")) # standardized residual
qqline(residuals(mod1 , type = "n")) # add trend line
abline(0, 1)
```
Mishra, P., Pandey, C. M., Singh, U., Gupta, A., Sahu, C., & Keshri, A. (2019). Descriptive statistics and normality tests for statistical data. Annals of Cardiac Anaesthesia, 22(1), 67–72. [https://doi.org/10.4103/aca.ACA_157_18](https://doi.org/10.4103/aca.ACA_157_18) says that "Shapiro–Wilk test is more appropriate method for small sample sizes (<50 samples)...while Kolmogorov–Smirnov test is used for n ≥50." Although nonnormality in residuals does not affect unbiased point estimates of coefficients, it does affect inference, such as p value and confidence intervals. For that, consider cluster-robust standard errors to replace those in standard lmer(). See `vcovCR()` in package {clubSandwich}.
Despite the loess curves built by dipetkov, the relationship between SCORE and X1 is approximately linear. However, you should still try other function forms of X1, such as log(X1), I(X1^2), and I(X1^3) and interact with X2 to see which fixed-effect specification is most likely (using ML for maximum likelihood estimator instead of REML). With the correct function form, residual distribution problems should be alleviated.
Both residuals() ~ fitted() plots made by you and dipetkov show that the RAW residual variance increases with the response. However, you should check whether the standardized Pearson residual and normalized residual after error correlation are still heterogeneous.
```
plot(Model_Temp[[1]], resid(., type = "r") ~ fitted(.), abline = 0)
plot(Model_Temp[[1]], resid(., type = "p") ~ fitted(.), abline = 0)
plot(Model_Temp[[1]], resid(., type = "n") ~ fitted(.), abline = 0)
```
If all residual variances increase with the response no matter the type, they should also increase with X1, as the response increases with X1. Therefore, you can model the residual heterogeneity using X1 in nlme::lme(), possibly by different relationship depending on X2. This gives freedom to incorporate random effects, as opposed to Gls() in {rms} that builds upon nlme::gls(). You could also try a random effect of X1 and see if it correlated with random intercept, such as
```
model <- lme(
SCORE ~ (X1 + I(X1^2)) * X2,
data = data,
random = ~ 1 + X1 | PARTICIPANT,
correlation = corSymm(form = ~ 1 | PARTICIPANT),
weights = varPower(form = ~ X1 | X2))
```
You should compare different variance specifications such as `varPower(form = ~ log(X1) | X2)`, `varExp(form = ~ X1 | X2)`, `varConstPower(form = ~ X1 | X2)`, and `varConstProp(form = ~ X1 | X2)`, and see whether the variance parameters differ by X2 through a likelihood ratio test with `anova()`. If the random effects of the intercept and X1 have a nonsignificant correlation (check `intervals(model)` for a CI of the correlation that contains zero), then a restrictive model coercing uncorrelated random effects has `random = list(PARTICIPANT = pdDiag(~ 1 + X1))`.
| null |
CC BY-SA 4.0
| null |
2023-04-18T09:07:07.380
|
2023-04-18T09:07:07.380
| null | null |
284766
| null |
613303
|
1
| null | null |
0
|
22
|
Suppose we have a deck of 52 cards, with 26 red and 26 black cards. We shuffle them into a random order. Then we define a "block" as a run of consecutive cards of the same color; for example, BRRB has 3 blocks and BRRRBBRRRR has 4 blocks. So how many blocks would we have, in the sense of expectation?
My thought is that we set indicator functions $I_i$ indicating that the $i$-th card has a different color from the card to its left, so $I_i=1$ if it is different, and zero if it is the same. Then the final number of blocks would be equal to $\sum_{i=1}^{52}I_i$, and $E\sum_{i=1}^{52}I_i = \sum_{i=1}^{52} P(i)$, where $P(i)$ is the probability that the $i$-th card has a different color from the one on its left. But I got stuck at this step because I don't know how to calculate the probability for the $i$-th term. Is there anybody who can help me?
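A quick Monte Carlo sketch to sanity-check whatever closed form you derive (not a substitute for the analytic argument):
```
set.seed(1)
n_blocks <- replicate(1e5, {
  deck <- sample(rep(c("R", "B"), each = 26))  # random shuffle of 26 red, 26 black
  1 + sum(deck[-1] != deck[-52])               # 1 + number of colour changes
})
mean(n_blocks)  # compare with the value your indicator argument gives
```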
|
Expectation of number of different "blocks"
|
CC BY-SA 4.0
| null |
2023-04-18T09:29:14.833
|
2023-04-18T09:31:46.063
|
2023-04-18T09:31:46.063
|
362671
|
303835
|
[
"probability",
"expected-value",
"indicator-function"
] |
613304
|
2
| null |
552173
|
1
| null |
Here is a link to some code for centering categorical variables. Note in the comments on this blog that this only works for predictors that have 2 levels.
Hope this helps!
[https://hlplab.wordpress.com/2009/04/27/centering-several-variables/](https://hlplab.wordpress.com/2009/04/27/centering-several-variables/)
| null |
CC BY-SA 4.0
| null |
2023-04-18T09:56:29.033
|
2023-04-18T09:56:29.033
| null | null |
379020
| null |
613305
|
1
| null | null |
0
|
41
|
I am trying to run an ANCOVA in R with a binary treatment variable, a continuous covariate, and their interaction (response ~ cont*bin). My understanding has always been that AN(C)OVA and regression are based on the same underlying model. In line with this, if I apply Anova() from the car package to a linear regression object, then the resulting p-values are the same as for the original regression.
However, if I use the anova_test() function from the rstatix package, the p-value and F statistic for the main effect of the continuous variable are different than those obtained using the Anova() function (the interaction and the main effect of the binary treatment are the same in all models). I am using type 3 sums of squares in both cases.
Why do these p-values differ? I am aware that anova_test() in rstatix uses orthogonal contrasts, whereas Anova() uses treatment contrasts (see this post by the rstatix package author: [https://github.com/kassambara/rstatix/issues/74](https://github.com/kassambara/rstatix/issues/74)). However, I do not understand what this means when calculating F statistics for a continuous variable.
Here is a minimal working example:
```
set.seed(123)
bin <- rbinom(100, 1, 0.5) # binary treatment
cont <- rnorm(100, 0, 1) # continuous covariate
response <- 3*cont + 5*bin + 2*cont*bin + rnorm(100, 0, 20)
data <- data.frame(cont, bin = as.factor(bin), response)
reg.model <- lm(response ~ cont*bin, data = data)
summary(reg.model)
library(car)
Anova(reg.model, type = "III")
library(rstatix)
anova_test(response ~ cont*bin, data = data, type = 3)
```
I would be grateful for any help!
|
Inconsistency in ANCOVA results between rstatix and car
|
CC BY-SA 4.0
| null |
2023-04-18T10:03:12.043
|
2023-04-18T12:25:18.233
|
2023-04-18T12:25:18.233
|
385989
|
385989
|
[
"r",
"ancova",
"contrasts"
] |
613306
|
1
| null | null |
0
|
13
|
Is it always necessary to perform a two-way ANOVA for assessing differences between groups in response to a stimulus?
One specific example: we look at the response of a nematode to a stimulus of 3% CO2.
In the assay,
- worms are being exposed to 0% CO2 for three minutes,
- then get challenged with a three minute 3% CO2 stimulus,
- after which CO2 levels return to baseline.
A MATLAB script calculates the speed of these worms. Control worms speed up when exposed to CO2. We want to conclude whether mutant worms that lack specific proteins respond in the same way as control worms.
To me, the response variable is speed, one independent variable is worm strain (mutant or control), the other independent variable is stimulus/baseline.
I would argue that to state that `worm strains respond differently to the stimulus`, you always need to perform a two-way ANOVA and state a significant interaction effect between the strain and stimulus variables: different worm strain respond differently to differing concentrations of CO2, or, a difference in speed increase (i.e. change) of worms depends on the strain.
Colleagues suggested only quantifying differences during the 3% CO2 interval. I would say that a one-way ANOVA allows you to state whether `the speed of different worm strains differs during the time in which worms were exposed to 3% CO2`, but that we cannot conclude anything about the response of the worms to the stimulus, as a response implies a change in behaviour relative to a baseline level, which is only captured by the two-way ANOVA.
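To make the two analyses concrete, here is a minimal R sketch with simulated placeholder data (the column names and numbers are assumptions for illustration only; in practice the same worm contributes both phases, so a repeated-measures or mixed model would be the more rigorous choice):
```
set.seed(1)
# Hypothetical long-format data, one row per worm per phase;
# only control worms speed up under 3% CO2 in this simulation
d <- expand.grid(worm   = 1:20,
                 strain = c("control", "mutant"),
                 phase  = c("baseline", "stimulus"))
d$speed <- rnorm(nrow(d), mean = 100, sd = 10) +
  50 * (d$strain == "control" & d$phase == "stimulus")

# Two-way ANOVA: the strain:phase interaction tests whether the change in
# speed from baseline to 3% CO2 differs between strains
summary(aov(speed ~ strain * phase, data = d))

# One-way ANOVA on the stimulus interval only: compares strains during
# 3% CO2, with no reference to baseline
summary(aov(speed ~ strain, data = subset(d, phase == "stimulus")))
```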
This is a specific example of a general type of experiment that is often performed in the lab. Standard practices seem to differ across assays and fields, which is why I am asking this possibly redundant question. Feel free to simply confirm whether my reasoning is right or wrong; to the best of my ability, I did not find a closely related question on this platform.
|
Is a two-way ANOVA always necessary for stimulus-evoked responses? (with specific example)
|
CC BY-SA 4.0
| null |
2023-04-18T10:15:46.823
|
2023-04-19T14:00:28.637
|
2023-04-19T14:00:28.637
|
385982
|
385982
|
[
"anova",
"two-way"
] |
613308
|
1
|
613311
| null |
4
|
139
|
I proved, using the Borel–Cantelli lemma, that a sequence of Bernoulli distributed random variables $X_n$ with parameter $1/n$ does NOT converge to $X=0$ a.s.
My doubt is: if I take the space $\Omega=[0,1]$ with the Lebesgue measure and a sequence of random variables such that $$X_n(x)=\begin{cases}1 &~ x \in [0,1/n]\\ 0 &\textrm{otherwise}, \end{cases}$$ then this is a sequence of Bernoulli random variables with parameter $1/n$, and by the definition of almost sure convergence they do converge to $X=0$ almost surely. But this contradicts what I said above. Please help me understand where I'm wrong.
|
Almost sure convergence of Bernoulli distributed random variables
|
CC BY-SA 4.0
| null |
2023-04-18T10:57:45.297
|
2023-04-18T11:48:05.347
|
2023-04-18T11:03:26.510
|
362671
|
385990
|
[
"probability",
"random-variable",
"convergence",
"bernoulli-distribution"
] |
613309
|
2
| null |
613279
|
3
| null |
As sample size increases, $R^2$ decreases toward zero.
Since the two variables are independent, they have zero correlation and, thus, zero squared correlation. With a small sample size, some bad luck might result in a high empirical correlation. As the sample size increases, however, the empirical correlation will tend toward the true value of zero, and hence the squared correlation tends toward zero as well.
Let’s look at it in a simulation.
```
library(ggplot2)
set.seed(2023)

Ns <- rep(seq(3, 550, 1), 4)       # four replicates at each sample size
r2 <- numeric(length(Ns))
for (i in 1:length(Ns)){

  # two independent standard normal variables
  x <- rnorm(Ns[i])
  y <- rnorm(Ns[i])

  r2[i] <- (cor(x, y))^2
}
d <- data.frame(
  sample_size = Ns,
  r2 = r2
)
ggplot(d, aes(x = sample_size, y = r2)) +
  geom_point() +
  xlab("Sample Size") +
  ylab("R^2")
```
[](https://i.stack.imgur.com/lAaWc.png)
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:00:34.503
|
2023-04-18T11:00:34.503
| null | null |
247274
| null |
613310
|
1
|
613314
| null |
2
|
41
|
I have some results from a study pre- and post-intervention. It is an underpowered feasibility study, hence I am not performing formal hypothesis testing, but I'd like to calculate the mean difference (with a 95% CI) as per the CONSORT 2010 guidelines.
My question is, what is the best approach for this? I am struggling for a couple of reasons:
- I don't believe my data is normally distributed (it is blood test results and survey scores)
- My data is paired, i.e., it's the same group of participants pre- and post.
An example of the data is as follows:
|Pre-intervention |Post-intervention |
|----------------|-----------------|
|37.14 |33.21 |
|34.69 |31.25 |
|65.52 |43.73 |
|40.56 |38.26 |
|41.32 |41.72 |
|39.34 |38.14 |
|43.95 |41.07 |
|44.26 |39.49 |
|35.28 |28.50 |
|37.12 |32.40 |
|37.82 |35.87 |
|34.71 |33.79 |
|34.08 |30.49 |
|33.08 |36.29 |
|36.89 |33.18 |
|41.38 |37.70 |
|39.29 |33.86 |
|41.62 |39.21 |
|39.36 |35.43 |
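For context, this is the sort of calculation I have in mind (an R sketch using the values above; the bootstrap interval is just one option I am considering, not a method I am committed to):
```
pre  <- c(37.14, 34.69, 65.52, 40.56, 41.32, 39.34, 43.95, 44.26, 35.28, 37.12,
          37.82, 34.71, 34.08, 33.08, 36.89, 41.38, 39.29, 41.62, 39.36)
post <- c(33.21, 31.25, 43.73, 38.26, 41.72, 38.14, 41.07, 39.49, 28.50, 32.40,
          35.87, 33.79, 30.49, 36.29, 33.18, 37.70, 33.86, 39.21, 35.43)
chg <- post - pre

# t-based estimate: mean paired difference with a 95% CI
t.test(chg)

# nonparametric alternative: percentile bootstrap CI for the mean difference
set.seed(1)
boot_means <- replicate(10000, mean(sample(chg, replace = TRUE)))
c(estimate = mean(chg), quantile(boot_means, c(0.025, 0.975)))
```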
|
Mean difference (95% CI) between paired samples
|
CC BY-SA 4.0
| null |
2023-04-18T11:07:23.627
|
2023-04-18T11:41:51.010
|
2023-04-18T11:36:11.593
|
56940
|
385993
|
[
"confidence-interval",
"descriptive-statistics",
"paired-data",
"intervention-analysis",
"standardized-mean-difference"
] |
613311
|
2
| null |
613308
|
6
| null |
If you take the Wikipedia article about the [Borel–Cantelli lemma](https://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli_lemma) then it is actually made up of two different statements.
- The first lemma applies if "the sum of the probabilities of the events $\{E_n\}$ is finite". That is not the case here as the sum is infinite.
- The second lemma applies if "$\sum\limits^{\infty}_{n = 1} \Pr(E_n) = \infty$ and the events $(E_n)^{\infty}_{n = 1}$ are independent". In your example the events are not independent: as an illustration, $X_{n}=1 \implies x\le \frac1n \implies X_{n-k}=1$ for $0<k<n$.
So neither Borel–Cantelli lemma applies to your example and you need to resolve it another way, as you have done.
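To see explicitly that there is no contradiction, take $E_n = \{X_n = 1\} = [0, 1/n]$ in the construction above; then
$$\limsup_{n\to\infty} E_n \;=\; \bigcap_{n=1}^{\infty}\bigcup_{m\ge n}\left[0,\tfrac1m\right] \;=\; \bigcap_{n=1}^{\infty}\left[0,\tfrac1n\right] \;=\; \{0\},$$
which has Lebesgue measure zero, so $X_n \to 0$ almost surely even though $\sum_n \Pr(E_n) = \infty$; the second lemma simply does not apply because the events are dependent.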
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:10:00.133
|
2023-04-18T11:48:05.347
|
2023-04-18T11:48:05.347
|
2958
|
2958
| null |
613312
|
1
| null | null |
0
|
17
|
I have been working with incremental parameter updates, mainly the incremental sample-mean update. That is, at the $k$-th iteration the parameter $\theta^{(k)}$ is updated as
$$
\theta^{(k)} = \theta^{(k-1)} + \alpha^{(k)} (\hat{\theta}^{(k)} - \theta^{(k-1)})
$$
where $\hat{\theta}^{(k)}$ is the estimation target (the $k$-th observation used for the estimate) and $\alpha^{(k)}$ is the stepsize for the $k$-th update.
In most of the literature, $\alpha^{(k)}$ is chosen to satisfy the Robbins–Monro conditions
$$
\lim_{k \to \infty} \alpha^{(k)} = 0 \\
\sum_{k=1}^{\infty} \alpha^{(k)} = \infty \\
\sum_{k=1}^{\infty} \left(\alpha^{(k)}\right)^{2} < \infty
$$
since these conditions ensure convergence. Typically this is implemented as $\alpha^{(k)}=k^{-p}$ with $p \in (0.5, 1]$.
However, in other literature I have seen a constant stepsize used, which is often referred to as an exponentially weighted mean or exponential moving average. Although this clearly violates the Robbins–Monro conditions, it seems to work fine in practice, especially in non-stationary settings.
This is intuitively understandable, since a constant stepsize allows old values to be forgotten; however, I am curious whether there is a mathematical derivation or theoretical study of the validity of the exponential moving average, rather than 'it just works.'
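To illustrate the forgetting behaviour I have in mind, here is a minimal simulation sketch (the target mean shifting halfway through is my own toy assumption, not taken from any reference):
```
set.seed(42)
K <- 2000
theta_true <- c(rep(0, K/2), rep(3, K/2))    # mean shifts halfway (non-stationary)
obs <- rnorm(K, mean = theta_true, sd = 1)   # noisy observations hat{theta}^(k)

theta_rm  <- 0   # estimate with Robbins-Monro stepsize alpha_k = k^(-0.7)
theta_ema <- 0   # estimate with constant stepsize alpha = 0.05 (exponential moving average)
trace_rm <- trace_ema <- numeric(K)

for (k in 1:K) {
  theta_rm  <- theta_rm  + k^(-0.7) * (obs[k] - theta_rm)
  theta_ema <- theta_ema + 0.05     * (obs[k] - theta_ema)
  trace_rm[k]  <- theta_rm
  trace_ema[k] <- theta_ema
}

# After the shift, the constant-stepsize estimate tracks the new mean,
# while the decreasing-stepsize estimate adapts much more slowly
tail(cbind(true = theta_true, rm = trace_rm, ema = trace_ema), 3)
```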
Also, can I assume that it is always better to use a constant stepsize rather than a decreasing one in, for example, deep-learning settings where layer inputs are non-stationary?
Moreover, if the choice of stepsize depends on the situation, is there another widely used option worth considering?
|
Robbins-Monro condition and Exponential Moving Average
|
CC BY-SA 4.0
| null |
2023-04-18T11:14:29.680
|
2023-04-18T11:14:29.680
| null | null |
363043
|
[
"estimation"
] |