Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
614919
|
1
|
614949
| null |
2
|
45
|
Someone I know is using linear regression to estimate a difference in means for parental stress, given a binary explanatory variable $X_i \in \{0,1\}$. They have found a small but significant effect.
They have a sample of 3000 parents, but the sample is very uneven, with 2700 parents with $X_i=0$ and 300 with $X_i=1$, so it's a 90/10 split. They are concerned that this could be a problem and are considering randomly drawing 300 out of the 2700. I have argued against this, as the sample mean of 2700 parents will be closer to the population mean than that of just 300, and the standard error of the mean will be smaller with a larger sample.
However, I have now read a very interesting old thread discussing this topic here on StackExchange, which raises the point that the power of a t-test is stronger for even samples, i.e., the risk of committing a type 2 error is smaller with 50/50 samples:
[How should one interpret the comparison of means from different sample sizes?](https://stats.stackexchange.com/questions/31326/how-should-one-interpret-the-comparison-of-means-from-different-sample-sizes)
The accepted answer to the thread above shows how the power of a t-test can be stronger with a 50/50 sample than with 75/25 or 90/10 splits. The three samples have $N=100$. Increasing the power of the t-test is, as far as I can tell, the only point of insisting on having equal samples. Now, I want to revisit this topic to ask whether or not this result is relevant for larger samples, such as $N=3000$.
The following R-code is lifted from the thread in the link above, with some alterations to include larger samples. [The original post is here](https://stats.stackexchange.com/q/31330).
```
set.seed(9) # To get some of the same numbers
# as in the previous thread
power1090 = vector(length=10000) # Storing the p-values from each
power5050 = vector(length=10000) # simulated test to keep track of how many
power100900 = vector(length=10000) # are 'significant'
power3002700 = vector(length=10000)
for(i in 1:10000){ # Running the following procedure 10k times
n1a = rnorm(10, mean=0, sd=1) # Drawing 2 samples of sizes 90/10 from 2 normal
n2a = rnorm(90, mean=.5, sd=1) # distributions w/ dif means, but equal SDs
n1b = rnorm(50, mean=0, sd=1) # Same, but samples are 50/ 50
n2b = rnorm(50, mean=.5, sd=1)
n1c = rnorm(100, mean=0, sd=1) # A 90/10 sample, with more observations
n2c = rnorm(900, mean=.5, sd=1)
n1d = rnorm(300, mean=0, sd=1) # A 90/10 sample with 3000 total observations
n2d = rnorm(2700, mean=.5, sd=1)
power1090[i] = t.test(n1a, n2a, var.equal=T)$p.value # here t-tests are run &
power5050[i] = t.test(n1b, n2b, var.equal=T)$p.value # the p-values are stored
power100900[i] = t.test(n1c, n2c, var.equal=T)$p.value # for each version
power3002700[i] = t.test(n1d, n2d, var.equal=T)$p.value
}
mean(power1090<.05) # The power for a 90/10 sample is 32%.
[1] 0.3203
mean(power5050<.05) # For the 50/50 sample, the power increases to 70%.
[1] 0.7001 # This is clearly an improvement.
mean(power100900<.05) # But with much larger samples, the power is close
[1] 0.9967 # to 100%, even with uneven samples.
mean(power3002700<.05)
[1] 1
```
The results show how a 50/50 sample is better than 90/10 with $N=100$. But as the number of observations grows, the power of the t-test approaches 100%, even with a 90/10 split.
This leads me back to my initial opinion, that there is no reason to reduce a sample to produce even groups, assuming that we are talking about sufficiently large samples. Does the community agree?
|
The difference in means t-test with unequal sample sizes
|
CC BY-SA 4.0
| null |
2023-05-04T17:49:09.527
|
2023-05-05T07:57:24.027
|
2023-05-04T18:53:33.810
|
366596
|
366596
|
[
"r",
"regression",
"t-test",
"mean",
"sample-size"
] |
614921
|
1
| null | null |
0
|
25
|
I was hoping someone could help me confirm or correct my interpretation of the estimated coefficients of an ARDL model.
All my variables are converted into natural log form (ln), both Y and the Xs.
My interpretation of, say, a coefficient of .726 on the first lag (L1) of Y would be: a 1% change in the first lag of Y is associated with a .73% increase in Y, on average.
Or an L2 of an X variable with a coefficient of -.432 would be interpreted as follows: a 1% change in the second lag of X is associated with a .43% decrease in Y, on average.
|
Interpretation ARDL all variables in log
|
CC BY-SA 4.0
| null |
2023-05-04T18:05:10.273
|
2023-05-05T04:32:43.700
|
2023-05-04T19:54:04.200
|
53690
|
383188
|
[
"interpretation",
"logarithm",
"ardl"
] |
614923
|
1
| null | null |
0
|
12
|
Examples of one-parameter survival distributions with an increasing hazard rate are the Rayleigh and Lindley distributions, where the Rayleigh hazard rate increases linearly and the Lindley hazard rate increases with a concave trend.
The total time on test (TTT) transform procedure is used as a tool to identify the hazard behavior of the data distribution. As far as I know, the TTT transform can only identify constant, increasing, decreasing, bathtub, and inverse bathtub hazard rate.
I was wondering whether there is a more advanced TTT transform that can also identify, for example, whether an increasing hazard rate has a concave or convex trend?
|
Is there any advanced version of Total Time on Test (TTT) Transform?
|
CC BY-SA 4.0
| null |
2023-05-04T18:16:05.900
|
2023-05-04T18:16:05.900
| null | null |
386718
|
[
"hypothesis-testing",
"distributions",
"survival",
"order-statistics",
"hazard"
] |
614924
|
1
| null | null |
0
|
18
|
I am a bit confused with the following problem of probability:
"We select 3 phones of a sample in which we know that the 60% of them contain one mistake and the 10% contain more than one mistake. We define the variables $X$="number of phones with some mistake" and $Y$="number of products with less than two mistakes". Obtain the joint distribution table".
I have conluded from the information that $X$ is given by a binomial distribution of probability $0.7$ and $Y$ also by a binomial but with probability $0.9$. But I think I am not considering cases that are trivial and have probability zero. Maybe because the problem seems a bit ambiguous to me. Any help? Thanks.
|
Joint distribution of mistakes of phones
|
CC BY-SA 4.0
| null |
2023-05-04T18:16:30.677
|
2023-05-04T18:16:30.677
| null | null |
379161
|
[
"probability",
"binomial-distribution",
"joint-distribution"
] |
614925
|
1
| null | null |
0
|
16
|
Suppose I obtained two survey datasets, $D_1$ and $D_2$. (I did not design the surveys and was told to conduct the analysis. The survey sample is not a probability sample but a "census" of convenience sampling.) In $D_1$, the survey collected population characteristics, and I found that some outcomes of interest are related to those characteristics. $D_2$ was collected at a later time. Some people who participated in the $D_1$ survey also participated in the $D_2$ survey. The goal is to compare some outcomes in $D_2$ against those in $D_1$.
Firstly, the data are unbalanced in the sense that some people were measured twice and some were not. Secondly, in $D_2$, the designer forgot to collect the characteristics. Thirdly, some outcomes in $D_2$ have a >50% non-response rate. Lastly, some responses were multiple-choice with multiple responses allowed. From what I saw of the survey questions, it seems that the survey asked too many questions, which discouraged a lot of people from answering. (Mostly, they lost patience with filling out the forms. If this is the case, I do not know whether I can assume the data are MAR or not.) This leads to a natural missing data pattern like that in a longitudinal setting for both $D_1$ and $D_2$. $D_1$ has a very mild missing data pattern (there is <5% missing data for each relevant variable).
$Q:$ From $D_1$, I concluded there is a possible association between characteristics and some related outcomes of $D_1$ to be compared with those in $D_2$. Without knowledge of $D_2$'s characteristics, it seems that I am in no position to conduct multiple imputation. To complicate the story further, the data have repeated measurements for some subjects. (Even those subjects who have two measurements may have missing responses in $D_2$.) Is there a way to multiply impute the data? Currently, the missing data profile differs from the non-missing profile for some characteristics, so I am leaning towards MNAR. My first thought is that I need some proxy for the missing data in $D_2$. However, I can only use $D_2$'s proxy variable to impute missing data, and I do not have that proxy information.
$Q':$ I do not even believe that what I am comparing will reflect population-level comparisons, as the survey sample is not a probability sample. How trustworthy are my analysis results?
|
missing data without baseline characteristics measurements in one survey data
|
CC BY-SA 4.0
| null |
2023-05-04T18:22:30.747
|
2023-05-04T18:32:39.543
|
2023-05-04T18:32:39.543
|
79469
|
79469
|
[
"regression",
"missing-data",
"survey",
"research-design"
] |
614926
|
2
| null |
614897
|
6
| null |
Suppose $(X_1,X_2,\ldots,X_n)$ has a standard multivariate normal distribution with common correlation $\frac12$. Then you have the following equality in distribution:
$$(X_1,X_2,\ldots,X_n) \stackrel{d} = \left(\frac1{\sqrt 2}Z_1+\frac1{\sqrt 2}Z_0,\frac1{\sqrt 2}Z_2+\frac1{\sqrt 2}Z_0,\ldots,\frac1{\sqrt 2}Z_n+\frac1{\sqrt 2}Z_0\right)\,,$$
where $Z_0,Z_1,\ldots,Z_n$ are i.i.d standard normal variables.
Using this, we can calculate the probability that exactly $k$ ($0\le k\le n$) of the variables are positive.
For example,
\begin{align}
P\left(\bigcap_{j=1}^k \{X_j>0\},\bigcap_{j=k+1}^n \{X_j<0\}\right)&=E\left[(P(Z_1>-Z_0\mid Z_0))^k (P(Z_{k+1}<-Z_0\mid Z_0))^{n-k}\right]
\\&=E\left[(1-\Phi(-Z_0))^k \cdot (\Phi(-Z_0))^{n-k}\right]
\\&=E\left[(\Phi(Z_0))^k \cdot (1-\Phi(Z_0))^{n-k}\right]\,,
\end{align}
where $\Phi(\cdot)$ is the standard normal distribution function.
As $\Phi(Z_0)$ is uniform on $(0,1)$, this probability can be written in terms of the Beta function:
$$\int_0^1 y^k(1-y)^{n-k}\,\mathrm dy=B(k+1,n-k+1)$$
The desired probability is therefore $$\binom{n}{k}B(k+1,n-k+1)=\frac1{n+1}\,,$$
as in @Henry's answer.
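As a quick sanity check, here is a small simulation of the representation above (a sketch only; the choice of $n$ and the number of replications is arbitrary):
```
set.seed(1)
n <- 4; reps <- 1e5
k <- replicate(reps, {
  z0 <- rnorm(1)
  x  <- (rnorm(n) + z0) / sqrt(2)   # common correlation 1/2
  sum(x > 0)                        # number of positive components
})
round(table(k) / reps, 3)           # each of k = 0,...,n is close to 1/(n+1) = 0.2
```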
| null |
CC BY-SA 4.0
| null |
2023-05-04T18:26:43.513
|
2023-05-04T20:15:23.690
|
2023-05-04T20:15:23.690
|
119261
|
119261
| null |
614927
|
1
| null | null |
1
|
30
|
Given a multivariate regression of the form below, what would be the minimum number of observations ($n$)?
$$\mathbf{Y}=\mathbf{X}\mathbf{B}+\mathbf{E},$$
where $\mathbf{Y}, \mathbf{X}$ and $\mathbf{E} \in R^{n\times m}$ $(n>m)$ and $\mathbf{B}\in R^{m \times m}$. You can treat each column of $\mathbf{Y}$ and $\mathbf{X}$ as a time series.
Based on Frank Harrell's book, Regression Modeling Strategies, if you expect to be able to detect reasonable-size effects with reasonable power, you need 10-20 observations per parameter (covariate) estimated. I wonder if this would still hold in multivariate rather than univariate regression? In the univariate case the dependent variable $Y$ is one time series (a vector), and in the multivariate case it consists of more than one time series (a matrix).
For more info on multivariate reg: [This](http://users.stat.umn.edu/%7Ehelwig/notes/mvlr-Notes.pdf) starting from p.43.
|
Minimum number of observation in multivariate regression
|
CC BY-SA 4.0
| null |
2023-05-04T18:30:34.887
|
2023-05-04T19:18:54.017
| null | null |
312007
|
[
"regression",
"multiple-regression",
"multivariate-analysis",
"sample-size",
"multivariate-regression"
] |
614929
|
1
| null | null |
0
|
9
|
Suppose you have 3 or more tables with different observations or hypotheses to be merged with an inner join. The tables have partially overlapping columns, and when completely merged they yield a reduced set of combined conditions. What are some algorithms for joining multiple tables (more than two at a time) so that they can be merged in a memory-efficient way? Below is a toy example for three observers with different sets of possible scenarios:
```
import pandas as pd
import itertools
set1 = pd.DataFrame([[who,how] for who,how in itertools.product(['Green','Scarlet','Mustard','Plum'],['Dagger','Candlestick','Rope','Wrench'])],columns=['who','how'])
set2 = pd.DataFrame([[where,how] for where,how in itertools.product(['Ballroom','Conservatory','Billiard Room','Study','Hall','Library'],['Revolver','Rope','Wrench','Pipe'])],columns=['where','how'])
set3 = pd.DataFrame([[where,who] for where,who in itertools.product(['Kitchen','Hall','Lounge','Dining Room','Study'],['Scarlet','Plum','Peacock','White'])],columns=['where','who'])
sets12 = pd.merge(set1,set2)
pd.merge(sets12,set3)
```
Doing an iterative merging of pairs of tables using `pd.merge` will create a temporary table (possibly much) larger than the inputs, even if the final output is small. Though I suspect it is a well-studied problem, I'm unable to find information on ways to efficiently arrive at the final table without creating the temporary table (`sets12` in the example). Is this done automatically with proper database engines but just missing in `pandas`?
|
Efficient merging of multiple tables
|
CC BY-SA 4.0
| null |
2023-05-04T18:40:19.610
|
2023-05-04T18:40:19.610
| null | null |
120229
|
[
"pandas",
"sql"
] |
614930
|
2
| null |
614756
|
0
| null |
It is best not to use those plots for non-OLS regression models (cf.: [Interpretation of plot (glm.model)](https://stats.stackexchange.com/a/139624/7290)). If you want a test of the assumption of linearity (on the transformed scale), you can add polynomial or spline terms beyond the linear term and test them as a set.
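For concreteness, a minimal sketch of that set test (the outcome, predictors, and family below are placeholders, not your model):
```
library(splines)
fit_lin <- glm(y ~ x + z, family = binomial, data = dat)              # linear in x
fit_spl <- glm(y ~ ns(x, df = 4) + z, family = binomial, data = dat)  # spline terms added for x
anova(fit_lin, fit_spl, test = "LRT")   # joint test of the terms beyond the linear one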
| null |
CC BY-SA 4.0
| null |
2023-05-04T18:47:34.103
|
2023-05-04T18:47:34.103
| null | null |
7290
| null |
614931
|
1
| null | null |
0
|
18
|
Can someone direct me to a paper, etc., that states the assumptions of the ARDL model that need to hold for the parameter estimates to be valid?
ARDL is an OLS-based model; does that mean the regular Gauss-Markov assumptions are also assumptions of the ARDL?
|
ARDL OLS assumptions?
|
CC BY-SA 4.0
| null |
2023-05-04T18:57:01.450
|
2023-05-05T06:21:13.677
|
2023-05-05T06:21:13.677
|
383188
|
383188
|
[
"least-squares",
"assumptions",
"ardl"
] |
614932
|
1
| null | null |
0
|
14
|
I have a question about using the anova function and Anova from the car package. I know there are existing questions about Type 1, 2 and 3 sums of squares, but my question is this: I am doing an analysis where I want to compare the average interactions in networks of floral visitors across three treatments (three categories). I am using three covariates to control for residual variation. I have 17 networks (the sampling unit) in two categories and 15 in the third category. However, when I use the anova function, the treatment is significant, but when I use car's Anova, it is not. I am unsure how to understand the difference and decide which is the more correct to use.
I really appreciate the help.
This is my model:
```
gram <- glm(mean_animal_interaction ~ treatment+net_size+flower_family+pollinator_group, family = Gamma, data = data)
```
|
Difference in results between anova and Anova (car)
|
CC BY-SA 4.0
| null |
2023-05-04T19:08:21.827
|
2023-05-04T19:22:41.273
|
2023-05-04T19:22:41.273
|
364724
|
364724
|
[
"r",
"anova",
"p-value",
"type-i-and-ii-errors",
"car"
] |
614933
|
1
| null | null |
1
|
39
|
I'm trying to detect anomalies in a dataset $i \in \{1,2,...,N\}$ where a random variable $y_i$ is expected to be drawn from a normal distribution with mean $\mu_i=0$ and variance $\sigma_i^2 (X_i)$ totally determined by (conditioned on) the multiple features $X_i$.
My hope is that I can use a Z-score threshold such that anomalies are marked by:
$$\text{Anomalies}=\{i \in \{1,2,...,N\} \space | \space |y_i|/\sigma_i > Z_{thresh} \}$$
I am wondering if there is a "good" (e.g. maximum likelihood) way to formulate this as a regression problem in which any machine learning algorithm could be fit on $y^2_i$ given $X_i$ with a suitably-chosen loss function. In this case, presumably the predictions would correspond to estimates of $\sigma_i^2$. But what loss function matches the probabilistic assumption that $y_i$ is drawn from $N(\mu_i=0, \sigma^2_i=f(X_i))$ for some function $f$?
The inspiration for this question is that I was using natural-gradient-based methods like [NGBoost](https://arxiv.org/abs/1910.03225) to simultaneously fit $\mu_i$ and $\sigma_i$. I've also tried quantile loss tree methods that use the quantile loss function. But here, since I only need to fit $\sigma_i$, it seems there should be a way to formulate fitting $\sigma_i$ as a regular regression problem with a suitable loss function.
A similar question has a [response](https://stats.stackexchange.com/a/287327/125259) that states Linex Loss with a chosen parameter yields a prediction corresponding to the sum of a mean and variance, but I'm looking for only the variance. This and other questions don't seem to assume that fitting is being done to the reformulated target $y_i^2$, or don't ask about a suitable loss function for this case. An argument against fitting to $y_i^2$ would also be a helpful answer.
|
Loss function for estimating the conditional variance by fitting $y_i^2$
|
CC BY-SA 4.0
| null |
2023-05-04T19:15:19.363
|
2023-05-05T10:37:45.233
|
2023-05-05T10:23:26.687
|
53690
|
125259
|
[
"likelihood",
"loss-functions",
"supervised-learning",
"density-estimation",
"conditional-variance"
] |
614934
|
2
| null |
614927
|
0
| null |
I believe that it is better to think in terms of power to detect effects of interest, rather than rules of thumb. That said, if you want a quick-and-dirty baseline value, the standard rule of thumb is $N=10$ per parameter. Note that in a multivariate context, the number of parameters is greater than the number of covariates. Consider a multiple regression with one $Y$ and three $X$'s: that would be a minimum of $N=30$. However, if there were three $Y$'s, that would be $N=90$.
Note further that this ignores covariance parameters (people often care primarily about the means). I don't know of an analogous rule of thumb for covariances, but if you wanted reasonable estimates of the covariances, they will grow much more rapidly as the number of $Y$'s increases: with $p$ $Y$'s, the variance-covariance matrix will have $^{p(p+1)}/_2$ parameters to estimate for each $X$ variable. Moreover, since these depend on the means already being stable, you should think of those extra tens being added on top of the original requirement. That is, in our three $Y$, three $X$ example, you would need $N = 60$ (per $X$), times three $X$'s, is $N = 180$, plus the original ninety, is $N=270$. That certainly seems like a lot, but if you care about those parameters, there's a lot of them.
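If it helps, here is a tiny helper that just reproduces the arithmetic above (a rule-of-thumb calculation only, not a power analysis; the function name is mine):
```
rule_of_thumb_n <- function(p_y, p_x, per_param = 10) {
  n_means <- per_param * p_y * p_x                  # one mean parameter per X per Y
  n_cov   <- per_param * p_x * p_y * (p_y + 1) / 2  # p_y(p_y+1)/2 (co)variances per X
  c(means_only = n_means, with_covariances = n_means + n_cov)
}
rule_of_thumb_n(p_y = 3, p_x = 3)   # 90 and 270, as in the example
```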
| null |
CC BY-SA 4.0
| null |
2023-05-04T19:18:54.017
|
2023-05-04T19:18:54.017
| null | null |
7290
| null |
614935
|
1
|
616571
| null |
0
|
82
|
I am running a regression with a sparse rank-deficient matrix where many columns are correlated with others. At the moment, I remove all columns with a correlation over 0.8. The matrix has 12k columns and 35k observations, and I drop about 2k columns.
For example, my text may mention cities and I have three highly correlated n-grams: "New York", "York City", and "New York City". At the moment, I am dropping all three of them and I lose one dimension in the rank of the matrix. I am looking for a package that makes the matrix invertible while keeping its rank as high as possible, and whose transformation of that matrix can be applied to a single row, because I want to predict values when a single new observation arrives.
I found the [PyPi package collinearity](https://pypi.org/project/collinearity/). Converting the sparse matrix to a dense matrix is fast, but the collinearity package on it is prohibitively slow (several hours). This is probably because, from reading the description, it adds features one at a time and recomputes all correlations.
[This thread](https://stats.stackexchange.com/questions/476158/how-to-identify-which-variables-are-collinear-in-a-singular-regression-matrix) links to a paper with a QR-Column Pivoting algorithm to select the most linearly independent columns. Since Singular Value Decompositions and matrix operations are optimized for sparse matrices, does a package exist that already does this conversion of a rank-deficient sparse matrix to a full-rank, invertible matrix?
|
Python package for making a rank-deficient sparse matrix full rank
|
CC BY-SA 4.0
| null |
2023-05-04T19:34:28.927
|
2023-05-22T14:30:47.127
|
2023-05-22T08:02:56.420
|
241968
|
241968
|
[
"python",
"multicollinearity",
"scipy",
"sparse"
] |
614937
|
1
| null | null |
0
|
14
|
Spatial data includes point-pattern data, areal data, and geostatistical data.
I'm thinking about using the K function, Moran's I, and the semivariogram, but I'm not sure these are the right choices to describe the spatial autocorrelation of spatial data.
I'm looking for a way to describe the spatial autocorrelation or spatial dependence of spatial data.
I want to make sure the K function, Moran's I, and the semivariogram are the right tools to implement it.
|
What statistics do I need to use to describe the spatial autocorrelation or spatial dependencies of spatial data
|
CC BY-SA 4.0
| null |
2023-05-04T19:10:48.330
|
2023-05-04T20:06:35.583
| null | null | null |
[
"r"
] |
614940
|
1
| null | null |
0
|
7
|
How can I implement an ADF (Augmented Dickey-Fuller) test to check whether an asset price series is stationary or non-stationary, using pure JavaScript without any library?
|
ADF (augmented Dickey-Fuller) in JavaScript without a library
|
CC BY-SA 4.0
| null |
2023-05-04T20:14:13.270
|
2023-05-04T20:14:13.270
| null | null |
387235
|
[
"time-series",
"augmented-dickey-fuller",
"javascript"
] |
614941
|
2
| null |
614842
|
3
| null |
The ROC curve does not "tell" you what threshold to use by itself. Setting the threshold is a business decision made by the users of the model based on the specifics of their application.
The business people's job is to know how many false positives and false negatives they can accept. The ROC curve tells them the business implications of picking a given threshold. If there is a threshold value that suits their needs, they will use the model.
Determining the threshold involves quantifying several items that depend on the application. Usually false positives carry a cost while false negatives carry a risk. Their impact changes according to what the whole process is like, its goal, and where in the process the model is used (is it after a screening? Is this the first screening? Is the purpose medical? Security? Financial?). All this information needs to be considered in conjunction with the ROC curve to reach a decision.
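As a rough sketch of how those quantities can be turned into a threshold choice (the scores, labels, and cost weights below are made up for illustration):
```
set.seed(1)
scores <- runif(1000); labels <- rbinom(1000, 1, scores)  # toy model scores and outcomes
cost_fp <- 1; cost_fn <- 5                                # business-supplied cost weights
thresholds <- sort(unique(scores))
cost <- sapply(thresholds, function(t) {
  pred <- scores >= t
  cost_fp * sum(pred & labels == 0) + cost_fn * sum(!pred & labels == 1)
})
thresholds[which.min(cost)]   # threshold with the lowest expected total cost
```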
| null |
CC BY-SA 4.0
| null |
2023-05-04T20:20:22.353
|
2023-05-05T19:56:21.737
|
2023-05-05T19:56:21.737
|
134597
|
134597
| null |
614942
|
2
| null |
614916
|
0
| null |
Partial answer:
The distribution of the number of runs is easily justified by induction.
Consider all sequences of $n$ bits. To obtain all sequences of $n+1$ bits, you append $0$ or $1$. This either creates a new run, or lengthens the last one. Hence,
$$C_{n+1,k}=C_{n,k-1}+C_{n,k}$$
with $$C_{1,1}=2$$
which corresponds to Pascal's triangle.
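A short sketch implementing the recursion and checking it against Pascal's triangle (row $n-1$, doubled):
```
runs_counts <- function(n) {
  C <- 2                            # C_{1,1} = 2
  if (n > 1) for (m in 1:(n - 1)) {
    C <- c(C, 0) + c(0, C)          # C_{m+1,k} = C_{m,k-1} + C_{m,k}
  }
  C
}
runs_counts(5)        # 2  8 12  8  2
2 * choose(4, 0:4)    # same: 2 times row n-1 of Pascal's triangle
```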
| null |
CC BY-SA 4.0
| null |
2023-05-04T20:43:07.350
|
2023-05-05T09:43:12.390
|
2023-05-05T09:43:12.390
|
37306
|
37306
| null |
614944
|
1
| null | null |
2
|
69
|
After my previous question ([here](https://stats.stackexchange.com/questions/612405/how-to-generate-from-this-distribution-without-inverse-in-r-python/612688#612688)) I tried to improve my work with this distribution. I'm using the parametrization $$f_X(x) = \frac{\theta^2 x^{\theta-1}(\gamma-\log(x))}{1+\theta\gamma} \mathbb{I}(0<x<1), \theta \in (0, \infty), \gamma \in [0, \infty)$$ with CDF $$F(x) = \dfrac{x^\theta\cdot\left({\theta}{\gamma}-{\theta}\ln\left(x\right)+1\right)}{{\theta}{\gamma}+1}.$$ Using the answers in the post, the following functions can generate random values from this distribution:
```
import numpy as np
from scipy.optimize import root_scalar

def new_cdf(t, g, x):
num = x**(t)* (t*g-t*np.log(x)+1)
den = 1 + t*g
return num/den
def new_myqf(p, t, g):
f = lambda x: new_cdf(t, g, x) - p
res = root_scalar(f, bracket=[np.finfo(float).eps, 1-np.finfo(float).eps])
return res.root
def get_samples(n,t,g):
u_1 = np.random.uniform(size=n)
x_1 = np.array([new_myqf(p, t, g) for p in u_1])
return x_1
```
Since I wasn't able to find the maximum likelihood estimator analytically, I'm trying to do it numerically. The log-likelihood is given by $$\ell(\theta, \gamma| \mathbf{x}) = n\log\left(\frac{\theta^2}{1+\theta\gamma}\right) + (\theta-1) \sum_{i=1}^n \log(x_i) + \sum_{i=1}^n \log(\gamma-\log(x_i)).$$ How can I find the maximum of this (or the minimum, since I was told that this is the most efficient way)?
My attempt is to use the following function:
```
def MLE(n , theta, gamma):
samples = get_samples(n , theta, gamma)
sum_1 = np.log((theta**2)/(1+theta*gamma))
sum_2 = (theta-1)*np.sum(np.log(samples))
sum_3 = np.sum(np.log(gamma - np.log(samples)))
sum = sum_1 + sum_2 + sum_3
return sum
```
But I don't know how to find the maximum or plot it. Any help or content suggestion is appreciated!
|
Generating MLE in Python - Problem with the function
|
CC BY-SA 4.0
| null |
2023-05-04T20:44:41.230
|
2023-05-05T14:15:02.447
|
2023-05-05T14:15:02.447
|
56940
|
334650
|
[
"python",
"maximum-likelihood",
"data-visualization",
"likelihood",
"computational-statistics"
] |
614945
|
1
| null | null |
0
|
12
|
I'm doing a linear regression with data from 1980 to 2018.
Can you help me understand why, when I use GDP in levels, most of the other variables are significant, but when I use the GDP growth rate they are not?
I used the growth rate for all the other variables.
This is the output I have with GDP in levels:[](https://i.stack.imgur.com/9gVBh.jpg)
And this is the output I have with gdp growth rate:[](https://i.stack.imgur.com/AqAPB.jpg)
Is there a way to justify using a variable in levels rather than its growth rate in a regression?
Thank you for your help.
|
Why the significance of other variables change when I put the GDP in levels or growth rate?
|
CC BY-SA 4.0
| null |
2023-05-04T21:21:45.280
|
2023-05-04T21:28:58.317
|
2023-05-04T21:28:58.317
|
387238
|
387238
|
[
"regression",
"eviews"
] |
614946
|
1
| null | null |
0
|
30
|
For a research project, I created a panel dataset that counts violent events in subnational administrative units from 2000 to 2015, so my unit of analysis is the district-year. Out of 20,000 district-years, only 2,000 record violent events. Therefore I decided to proceed with a zero-inflated negative binomial regression model. I understand that this model assumes there is a reason why the population is zero-inflated, which you try to assess with the zero-inflated component of the regression model. Did I understand this correctly?
In my specific case, however, over the period of 15 years, 47% of my units of analysis experienced conflict at least once. This made me doubt if a zero-inflated model is actually necessary. I am grateful for any suggestions.
|
How to deal with zero inflated panel data?
|
CC BY-SA 4.0
| null |
2023-05-04T21:30:00.420
|
2023-05-05T00:32:14.063
| null | null |
305206
|
[
"regression",
"inference",
"negative-binomial-distribution",
"zero-inflation"
] |
614947
|
1
| null | null |
0
|
17
|
I'm looking to train a simple single-layer NN with an N-dimensional softmax output (and a relatively small feature vector, ~2-10 features) in a streaming fashion, accumulating K samples in a buffer and then performing batch gradient descent steps to update the weights. The samples have soft labels as outputs, instead of one-hot encodings, generated by an alternate algorithm.
I would like to prevent overconfident softmax predictions due to random weight initialization when the number of training steps is low. One way I can think of doing this is basically to have an uninformative prior over the softmax outputs in the face of a lack of data (I do not yet know what might constitute a "lack" of data, but it could be as simple as a threshold on the dataset size).
How might I go about enforcing such an uninformative prior on the softmax output? The naive way I thought of approaching this was to generate a bunch of samples with an uninformative prior for all of them and then "pre-train" the NN to get it to that stage. But I imagine there is a more principled approach of getting this to work. Label smoothing is also something I've found but I'm not sure it would fit well with a dataset with soft labels. Any suggestions/techniques to look into would be appreciated.
|
Specifying priors over softmax outputs
|
CC BY-SA 4.0
| null |
2023-05-04T22:16:14.250
|
2023-05-04T22:16:14.250
| null | null |
351985
|
[
"neural-networks",
"prior",
"softmax",
"weight-initialization"
] |
614948
|
1
| null | null |
4
|
67
|
I am trying to use a GAN model to generate N-dimensional samples with a joint probability distribution that looks like some training data. I am having trouble getting the probability distribution of the generated data to match that of the training data.
As a training example, I created a 2-dimensional training dataset from bimodal Gaussian distribution as shown here:
[](https://i.stack.imgur.com/JMAwd.png)
I am using dense layers for both generator and discriminator networks. Like this
```
GenerativeAdversarialNetwork(
(generator): Sequential(
(0): Dropout(p=0.2, inplace=False)
(1): Linear(in_features=100, out_features=100, bias=True)
(2): Tanh()
(3): Linear(in_features=100, out_features=100, bias=True)
(4): Tanh()
(5): Linear(in_features=100, out_features=100, bias=True)
(6): Tanh()
(7): Linear(in_features=100, out_features=100, bias=True)
(8): Tanh()
(9): Linear(in_features=100, out_features=2, bias=True)
(10): Tanh()
(11): Linear(in_features=2, out_features=2, bias=True)
)
(discriminator): Sequential(
(0): Linear(in_features=2, out_features=32, bias=True)
(1): LeakyReLU(negative_slope=0.01)
(2): Linear(in_features=32, out_features=32, bias=True)
(3): LeakyReLU(negative_slope=0.01)
(4): Linear(in_features=32, out_features=32, bias=True)
(5): LeakyReLU(negative_slope=0.01)
(6): Linear(in_features=32, out_features=16, bias=True)
(7): LeakyReLU(negative_slope=0.01)
(8): Dropout(p=0.2, inplace=False)
(9): Linear(in_features=16, out_features=1, bias=True)
(10): Sigmoid()
)
)
```
The closest I could get was a bimodal distribution that is densely concentrated at the two centers of the bimodal Gaussian distribution from the real data.
[](https://i.stack.imgur.com/ngeyx.png)
I have tried using the following recommendations from [here](https://github.com/soumith/ganhacks) and other places without any success.
- Using Adam optimizer
- Using dropout layers
- Playing with bigger networks
- Playing with learning rates
- Playing with different number of iterations or thresholds for training generator and discriminator networks.
My question is twofold:
- Is there anything I am missing?
- I have mainly seen people using GANs for image generation and claiming that the network learns the PDFs. But all the claims about the learned PDFs are qualitative, such as claiming that the generated images look good enough. Is there any demonstration of the PDFs actually matching, such as with 1-D or 2-D data points that can be easily visualized? It is easy for me to imagine that the generated samples come from some areas of the total distribution (like the orange samples that I am generating above) but do not capture the total distribution at all.
Sorry about the long question.
|
Using Generative Adversarial Networks for joint distribution estimation
|
CC BY-SA 4.0
| null |
2023-05-04T22:41:39.883
|
2023-05-05T13:31:49.497
| null | null |
268266
|
[
"machine-learning",
"probability",
"generative-models",
"gan"
] |
614949
|
2
| null |
614919
|
4
| null |
Your conclusion is correct (don't leave out subjects) but I'm concerned that you're considering that other answer in addressing it.
I think you're potentially taking the wrong lesson from the linked post (and if not you, someone else will). It's not at all relevant to the circumstance you're discussing.
There are two different situations. It would be a serious mistake to try to carry the lesson from one situation over to the other.
Situation 1: "I am planning an experiment. I can afford some number of subjects (e.g. 100), how should I split them across two treatment groups for the best power?"
Answer: If the variance in each group and cost per subject for each treatment group are assumed to be the same, split your available subjects equally to the groups.
Note that in no sense does this involve "leaving out data". You're allocating all subjects to groups.
Situation 2: "I have already observed two very different sample sizes. Should I leave out some data from the larger group at random to make the sample sizes more equal?"
Answer: No! Why would leaving out information be any help? You'll lose power, for no obvious gain.
You correctly identified that leaving out data won't help, but the considerations of Situation 1 are a distraction from that conclusion.
---
The answer to Situation 1 (split equally) does not imply you should try to even up sample sizes post hoc in Situation 2. The answer to Situation 2 (don't even up unequal sample sizes) does not imply that you should prefer unequal sample sizes in Situation 1.
Imagine you were in Situation 1 and you allocated 50-50. However, due to circumstances completely unrelated to the values of the experimental response, the observations for 40 of the subjects in treatment group 1 were lost (that is, 80% of the values in that group were missing completely at random). Should we now leave out results for 40 of the subjects in group 2?
No! The happenstance of the missingness came after the experiment, leaving us in situation 2 (we observed $n_1=10$ vs $n_2=50$).
(However if the missingness were potentially related to the outcome, then you would have a problem. Not one solved by leaving out data, though.)
| null |
CC BY-SA 4.0
| null |
2023-05-04T22:51:59.617
|
2023-05-04T23:11:38.023
|
2023-05-04T23:11:38.023
|
805
|
805
| null |
614950
|
2
| null |
614948
|
0
| null |
>
I have mainly seen people using GANs for image generation and claiming that the network learns the PDFs. But all the claims about the learned PDFs are qualitative, such as claiming that the generated images look good enough. Is there any demonstration of the actually matching PDFs. Such as 1D or 2D datapoints that can be easily visualized. It is easy for me to imagine that the generated samples are from some areas in the total distribution (like the orange samples that I am generating above) but it does not capture the total distribution at all.
One of the issues with being strictly quantitative in the comparison of the PDFs is that they aren't the same, and [large sample sizes will catch small differences when they are brought to hypothesis tests](https://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless) (as they should, [I argue](https://stats.stackexchange.com/a/602422/247274)). Thus, how close the distributions have to be for the GAN to be useful comes down to a judgment call. Sure, that can be quantified with something like KL divergence, but KL divergence lacks the intuition that you get from looking at synthetic images that a human cannot distinguish from real images, particularly when the litmus test probably is if a human can tell the difference.
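For what it's worth, here is a rough sketch of that kind of quantitative check for low-dimensional data like yours (written in R for brevity): a binned KL divergence between real and generated 2-D samples. `real` and `fake` are assumed to be n-by-2 matrices of samples, the bin count is arbitrary, and the result is sensitive to it.
```
kl_2d <- function(real, fake, bins = 30, eps = 1e-10) {
  bx <- seq(min(real[, 1], fake[, 1]), max(real[, 1], fake[, 1]), length.out = bins + 1)
  by <- seq(min(real[, 2], fake[, 2]), max(real[, 2], fake[, 2]), length.out = bins + 1)
  bin2d <- function(m) table(cut(m[, 1], bx, include.lowest = TRUE),
                             cut(m[, 2], by, include.lowest = TRUE))
  p <- bin2d(real) + eps; p <- p / sum(p)   # binned "real" density
  q <- bin2d(fake) + eps; q <- q / sum(q)   # binned "generated" density
  sum(p * log(p / q))                       # KL(real || generated)
}
```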
| null |
CC BY-SA 4.0
| null |
2023-05-04T22:53:47.553
|
2023-05-04T22:53:47.553
| null | null |
247274
| null |
614951
|
2
| null |
614821
|
3
| null |
There is no contradiction here. If $l > u$ then the confidence interval for this experiment is empty. This may seem disturbing, but there is nothing in the definition to prevent it.
You can find more examples in the paper "Bayesian Intervals and Confidence Intervals" by Jaynes. Or even do this: after your experiment, take the empty interval with probability 0.01 and [0,1] with probability 0.99.
| null |
CC BY-SA 4.0
| null |
2023-05-04T23:10:50.700
|
2023-05-04T23:10:50.700
| null | null |
13818
| null |
614952
|
1
| null | null |
1
|
28
|
I conducted a survey: I asked 50 participants a question about a device they were using "is this device helpful".
30 said "Yes", 20 said "No".
How do I proceed with entering this into a contingency table, or into an online chi-squared calculator (expected and observed results)?
|
I'm having trouble understanding how to perform a very basic chi squared test
|
CC BY-SA 4.0
| null |
2023-05-04T23:23:35.130
|
2023-05-04T23:23:35.130
| null | null |
387243
|
[
"chi-squared-test",
"binary-data",
"survey"
] |
614953
|
2
| null |
614946
|
0
| null |
Keep in mind, having lots of zeroes does not necessarily mean your data are zero-inflated. For example, a Poisson with mean 0.5 has about 60% zeroes, but it's not zero-inflated - it's just a distribution that has lots of zeroes.
If there truly are excess zeroes (you can check this with R `DHARMa` simulated residual plots and by comparing performance with a non-ZI distribution, e.g. ZANB vs NB), then there are two ways to deal with them, which have different interpretations. One is a zero-inflated model, and the other is the zero-altered (or "hurdle") model.
The hurdle model interpretation assumes there is a process that generates zeroes, and if that's non-zero, there's a second process that is always positive. For example, say we were modelling "number of items bought at the supermarket in a day". If you don't go to the supermarket, you can't buy stuff, so that's the zero-generating process. If you do go, (presumably) you do buy stuff, so that's the positive component (usually modelled with something like a truncated Poisson or truncated NB).
The zero-inflation model assumes there's a process that generates zeroes, and if that's non-zero, there's a second process that could be zero or non-zero. For example, say we were modelling "number of car trips in a day". Owning a car could be the zero-generating process. However, even if you own a car, you may make 0, 1, 2, ... car trips in a day.
Whether or not you need a model to deal with excess zeroes should be informed both by your theory of the behavior and by the empirical data.
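For the empirical side, a hedged sketch of that check (the variable names and formula are placeholders, not your data):
```
library(glmmTMB)
library(DHARMa)
m_nb   <- glmmTMB(events ~ year + district_covars, family = nbinom2, data = d)                # plain NB
m_zinb <- glmmTMB(events ~ year + district_covars, ziformula = ~1, family = nbinom2, data = d) # ZINB
testZeroInflation(simulateResiduals(m_nb))  # more zeroes than the NB fit already implies?
AIC(m_nb, m_zinb)                           # does the ZI component actually improve the fit?
```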
| null |
CC BY-SA 4.0
| null |
2023-05-05T00:32:14.063
|
2023-05-05T00:32:14.063
| null | null |
369002
| null |
614956
|
1
| null | null |
0
|
48
|
For data in a negative binomial distribution, is it possible to derive the dispersion if you only have the mean and shape? I don't have the actual data, just the results summary of mean and shape, but I would like to have dispersion.
I have tried to read up on this but I have not had success.
|
Shape parameter vs dispersion in a negative binomial distribution
|
CC BY-SA 4.0
| null |
2023-05-05T02:33:04.950
|
2023-05-06T23:18:49.000
| null | null |
24404
|
[
"modeling",
"negative-binomial-distribution"
] |
614957
|
2
| null |
322523
|
0
| null |
Adding to the excellent comments already made on this great question, I'd like to highlight the fact that the power spectrum of the 1-dimensional Matérn covariance function is a non-standardized Student's t-distribution, offering another potential viewpoint for interpretation.
| null |
CC BY-SA 4.0
| null |
2023-05-05T02:43:04.087
|
2023-05-05T02:43:04.087
| null | null |
385850
| null |
614958
|
2
| null |
614921
|
0
| null |
If you have lags of $Y$ on the RHS of the ARDL, then the interpretation you describe is not correct, because the ARDL is then dynamic. What you've given is the static regression interpretation, which doesn't hold for the dynamic ARDL.
Let me see if I can find a good reference for you, because it will explain why it's not correct a lot more clearly than I could. The political science community has a couple of researchers who have some insightful papers on the ARDL; I just have to find what I'm thinking of.
Here is a paper (link at the bottom), but it's not the one I was thinking of, so let me explain it with an ARDL(1,0), which is really just an AR(1).
Suppose we have: $Y_t = \alpha Y_{t-1} + \epsilon_t$.
Note that any movement in $Y_{t-1}$ above has an effect that persists indefinitely, because the initial effect is $\alpha$, then, during the next period, it's $\alpha^2$, then $\alpha^3$, and so on and so forth.
Why ? The easiest way to see it is to re-write the AR(1) ( assume observations start at $t= 0$ ) as :
$Y_t = \sum_{i=0}^{\infty} \alpha^{i} \epsilon_{t-i}$.
So, using the rewritten model, a shock $\epsilon_0 = 1$ at time $t=0$ (assuming no more new shocks after that) causes effects that persist as one goes further out in time, but the effect gets smaller and smaller as one goes further out.
Notice that one can use this same argument if the ARDL includes past $X$'s: the effect of any lagged $X$ will persist into the distant future, because the lagged $Y$ carries the effect forward.
Still, the paper below discusses this a little bit (it's not the one I was looking for), so it might be worth looking at.
[https://www.researchgate.net/publication/44885834_Dynamic_Models_for_Dynamic_Theories_The_Ins_and_Outs_of_Lagged_Dependent_Variables](https://www.researchgate.net/publication/44885834_Dynamic_Models_for_Dynamic_Theories_The_Ins_and_Outs_of_Lagged_Dependent_Variables)
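As a tiny numerical illustration of the geometric decay described above (the value of $\alpha$ and the horizon are arbitrary):
```
alpha <- 0.7
h <- 0:20
irf <- alpha^h                      # effect of a unit shock at t = 0 on Y_t
y <- numeric(length(h)); y[1] <- 1  # epsilon_0 = 1, no further shocks
for (t in 2:length(h)) y[t] <- alpha * y[t - 1]
all.equal(y, irf)                   # TRUE: the shock decays geometrically but never vanishes exactly
```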
| null |
CC BY-SA 4.0
| null |
2023-05-05T04:14:00.750
|
2023-05-05T04:32:43.700
|
2023-05-05T04:32:43.700
|
64098
|
64098
| null |
614959
|
2
| null |
614875
|
0
| null |
Got it! I have looked further into the literature and found that it was first described by F. M. Mosteller and R. R. Bush in 1954, rather than by Liptak, T. (1958). First, we know
$$z = \frac{Δx}{SD}$$
Note that it is the standardized $z$ that we want to combine, rather than the unstandardized $\Delta x$, so the SD can be dropped.
When weights are applied to the $\Delta x_i$, the unit changes because of the weights. To convert back to the SD unit, the Euclidean norm of the weights is used.
See the original explanation.
F. M. Mosteller, and R. R. Bush, Selected quantitative techniques, In: G. Lindzey (ed.), Handbook of Social Psychology: Vol. 1. Theory and Method, Addison-Wesley, 1954, 289--334.
[](https://i.stack.imgur.com/0jSKn.jpg)
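A minimal sketch of that combination rule (the usual weighted Stouffer/Liptak form; the z-scores and weights below are made-up numbers):
```
z <- c(1.2, 2.0, 0.5)                        # per-study z-scores
w <- c(2, 1, 1)                              # weights
z_comb <- sum(w * z) / sqrt(sum(w^2))        # divide by the Euclidean norm of the weights
p_comb <- pnorm(z_comb, lower.tail = FALSE)  # combined one-sided p-value
```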
| null |
CC BY-SA 4.0
| null |
2023-05-05T04:54:41.377
|
2023-05-05T04:54:41.377
| null | null |
169706
| null |
614960
|
1
| null | null |
1
|
17
|
Given that $X\sim\text{Bernoulli}(\nu)$ for some $\nu\in(0,1)$, and $Y\sim N(0,1)$ are independent random variables. I want to compute the mutual information $I(X;CX+Y)$, where $C$ is some known non-random constant. Is there a simple way to approximate this numerically? Thanks.
|
How do I numerically compute $I(X;CX+Y)$?
|
CC BY-SA 4.0
| null |
2023-05-05T05:22:20.027
|
2023-05-05T05:30:03.063
|
2023-05-05T05:30:03.063
|
217249
|
217249
|
[
"computational-statistics",
"information-theory",
"mutual-information"
] |
614962
|
1
| null | null |
0
|
11
|
I have 4 linear models, each fitted to a separate sample. They all include the same predictor variables; however, the outcomes were generated by dividing a continuous variable into groups with different means and variances (imagine 15 - 30, 30 - 48, and so on).
I want to compare how well a given predictor variable performs in each model.
|
Comparing the performance of a predictor variable in different outcomes
|
CC BY-SA 4.0
| null |
2023-05-05T06:04:10.607
|
2023-05-05T06:04:10.607
| null | null |
294633
|
[
"regression",
"multiple-regression",
"multiple-comparisons"
] |
614963
|
1
| null | null |
5
|
481
|
Both `dpois` and `dnorm` in the code below sum to 1 (or thereabouts). This appears to confirm my understanding of the `dpois` and `dnorm` functions, which is that they represent proportions.
```
sum(dpois(x = 0:20, lambda = 5))
# 1
sum(dnorm(-10:30, mean = 10, sd = 2, log = FALSE))
# 1
```
But the output of `dbeta` does not sum to 1. Why is this?
```
sum(dbeta(c(0.1, 0.2, 0.3), shape1 = 2, shape2 = 4, ncp = 0, log = FALSE))
# 5.564
```
|
Why does dbeta not sum to 1?
|
CC BY-SA 4.0
| null |
2023-05-05T06:12:31.693
|
2023-05-05T06:32:00.210
| null | null |
12492
|
[
"r",
"distributions",
"normal-distribution",
"poisson-distribution",
"beta-distribution"
] |
614964
|
1
| null | null |
0
|
8
|
I am having some troubles with my scatterplot matrix.
I've already looked at other similar questions, but it seems that I still don't understand how to deal with the situation where you have too many points in your scatterplot matrix.
Here is my code below. Olten is a station in CH with 600 observations (from January 1970 to December 2019) of 12 meteorological variables:
```
Olten = subset(OTL, time >= 197001 & time <= 201912)
pairs(~pva200m0+nto000m0+prestam0+tre200mx+tre200mn+tre200m0+rre150m0+ure200m0+ure200m0+su2000m0+fkl010m0, data = Olten, main = "Scatterplot Matrix")
```
My scatterplot matrix is really dense, with far too many points. Do you know how I could deal with that?
Thank you very much
|
Matrix Scatterplot with too many points
|
CC BY-SA 4.0
| null |
2023-05-05T06:30:05.233
|
2023-05-05T06:30:05.233
| null | null |
387257
|
[
"scatterplot"
] |
614965
|
2
| null |
614963
|
9
| null |
`d*` functions represent proportions only with a discrete response. In fact, your `dnorm` example just happens to sum to one, but
```
> sum(dnorm(seq(-10, 30, by = 0.01), mean = 10, sd = 2, log = FALSE))
[1] 100
```
is 100! The normal and beta distributions are continuous, not discrete. Therefore, instead of summing to 1, they must integrate to 1, which is different.
| null |
CC BY-SA 4.0
| null |
2023-05-05T06:30:05.743
|
2023-05-05T06:30:05.743
| null | null |
369002
| null |
614966
|
2
| null |
614963
|
12
| null |
The relevant property of a probability density is not that it sums (for evaluation on some particular $x$ values) to one, but that it integrates to one.
If you evaluate a density $f$ at $x$ values that form a regular grid with grid width $\Delta x$, then you have very approximately
$$ \int_{-\infty}^\infty f(x)\,dx \approx \sum_{i=1}^n f(x_i)\Delta x, $$
which is why your initial two examples sum to almost one: here we have $\Delta x=1$, so we have approximately
$$ 1=\int_{-\infty}^\infty f(x)\,dx \approx \sum_{i=1}^n f(x_i). $$
Compare a finer grid for the normal case:
```
> sum(dnorm(seq(-10,30,by=0.1), mean = 10, sd = 2, log = FALSE))
[1] 10
```
And of course, if we include $\Delta x$ in your calculation for `dbeta`, then we again approximate the integral correctly:
```
> xx <- seq(0,1,by=0.01)
> sum(dbeta(xx, shape1 = 2, shape2 = 4, ncp = 0, log = FALSE)*mean(diff(xx)))
[1] 0.9998333
```
| null |
CC BY-SA 4.0
| null |
2023-05-05T06:31:27.923
|
2023-05-05T06:31:27.923
| null | null |
1352
| null |
614967
|
2
| null |
614963
|
7
| null |
`dpois` is the [probability mass function (pmf)](https://en.wikipedia.org/wiki/Probability_mass_function) of a discrete Poisson distribution that can take integer values in $\{0, 1, 2, \ldots\}$. If you sum the probabilities over all integers, you should indeed get 1.
`dnorm` is the [probability density function (pdf)](https://en.wikipedia.org/wiki/Probability_density_function) for the normal distribution over the real numbers in $(-\infty, \infty)$ and `dbeta` the pdf for a Beta distribution over $(0, 1)$. These will integrate to 1 over their domains.
pdfs are the continuous distribution equivalent to pmfs, but that `sum(dnorm(-10:30, mean = 10, sd = 2, log = FALSE))` sums to 1 is pure coincidence (or someone giving you a trick question).
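For completeness, the integrate-to-1 point can be checked directly in R:
```
integrate(dbeta, lower = 0, upper = 1, shape1 = 2, shape2 = 4)  # ~1
integrate(dnorm, lower = -Inf, upper = Inf, mean = 10, sd = 2)  # ~1
```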
| null |
CC BY-SA 4.0
| null |
2023-05-05T06:32:00.210
|
2023-05-05T06:32:00.210
| null | null |
86652
| null |
614968
|
1
| null | null |
0
|
18
|
What is the Prediction (s.d.) in the summary output based on? My understanding is that iterations are done to get predictions of the counterfactual in the post-intervention period at each timestamp. So is the s.d. just the standard deviation of those predictions? Can I use the standard deviation to select between models with different control variables?
[](https://i.stack.imgur.com/nL6L4.png)
|
Interpret CausalImpact summary
|
CC BY-SA 4.0
| null |
2023-05-05T06:40:52.943
|
2023-05-05T06:40:52.943
| null | null |
387224
|
[
"causality",
"causalimpact"
] |
614969
|
1
| null | null |
0
|
10
|
I have data with a nesting structure I am unsure how to account for.
I have data on countries, which were rated by two different organizations, once per year each. I would like to know what country-specific factors influence the ratings, and to see whether these predictors have different effects on the rating from each organization.
So I have two outcome values (rows) per country and year, one for each organization, over 10 years. How do I nest that?
My intuition says to nest countries in time and just control for the organization with a fixed effect. I could also use country and time both as random intercepts and use a random intercept for organization at level three.
I do expect that there is variance between organizations and between countries; time is not so much a factor. I need to account for repeated measures, though. I also considered running two separate analyses, one for each organization; then I would not have the problem of having the organization as an additional nesting structure. In that case I would probably just use country and time each with a random intercept.
I have at this point confused myself and need some help.
|
Nesting structure in linear mixed model
|
CC BY-SA 4.0
| null |
2023-05-05T06:47:23.547
|
2023-05-05T06:47:23.547
| null | null |
268195
|
[
"multivariate-analysis",
"multilevel-analysis"
] |
614970
|
2
| null |
614919
|
2
| null |
If you assume normal distributed variables, then power can be computed without the need for simulations.
The $t$-statistic is computed as the ratio of the differences in the sample means divided by an estimate of the standard deviation of that difference.
$$t = \frac{\bar{X}_1 - \bar{X}_2}{\hat{s}}$$
where $$\hat{s} = \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}\sqrt{\frac{(n_1-1)S_{X_1}^2+(n_2-1)S_{X_2}^2}{n_1+n_2-2}}$$
- If the population means are equal then the $t$-statistic will be t-distributed with $n_1+n_2-2$ degrees of freedom.
- If the population means are unequal, say $\delta = (\mu_1-\mu_2)/\sigma \neq 0$, then the $t$-statistic will be non-central t-distributed with $n_1+n_2-2$ degrees of freedom. And the parameter of non-centrality will be $$ncp = \frac{\delta}{\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}$$
Intuitively, the estimate $\hat{s}$ is not influenced by the unequal sample sizes (if the assumption of equal variance is correct). But the numerator is $$\bar{X}_1 - \bar{X}_2 \sim N\left(\delta \sigma, \left(\frac{1}{n_1}+\frac{1}{n_2}\right)\sigma^2\right)$$ and the standardized variable is
$$z = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}\sigma}
\sim N\left(\frac{\delta}{\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}, 1\right)$$
Here you may recognize why the power increases when we increase $n_1$ or $n_2$: with increasing sample size, the standard deviation of the sample means decreases, and the effect size $\delta$ relative to the standard deviation of the means increases.
---
So you can compute the critical value/region for the t-statistic using the central t-distribution and, with that region, compute the power with the non-central t-distribution. Here is R code that does this:
```
ttestpower = function(n1,n2,delta,alpha) {
nu = n1+n2-2
ncp = delta/sqrt(1/n1 + 1/n2)
boundary = qt(alpha/2,nu) ### regions for alpha level t-test
power = 1-pt(-boundary,nu,ncp)+pt(boundary,nu,ncp) ### probability that non-central t-distribution is outside region
return(power)
}
ttestpower(n1 = 10, n2 = 90, delta = 0.5, alpha = 0.05) # 0.3178022
ttestpower(n1 = 50, n2 = 50, delta = 0.5, alpha = 0.05) # 0.6968934
ttestpower(n1 = 100, n2 = 900, delta = 0.5, alpha = 0.05) # 0.9972727
ttestpower(n1 = 300, n2 = 2700, delta = 0.5, alpha = 0.05) # 1
```
>
This leads me back to my initial opinion, that there are no reasons to reduce a sample to produce even groups, assuming that we are talking about sufficiently large samples.
Reducing sample sizes decreases power.
Still, reducing the size of the larger group can make sense, but only if the size of the smaller group is increased at the same time.
For a given fixed sum $n_1 + n_2$, the term $\sqrt{1/n_1 + 1/n_2}$ is smallest when $n_1 = n_2$. This remains true when $n_1 + n_2$ is larger. Compare, for instance, the situation when the effect size is smaller:
```
ttestpower(n1 = 300, n2 = 2700, delta = 0.1, alpha = 0.05) # 0.3756569
ttestpower(n1 = 1500, n2 = 1500, delta = 0.1, alpha = 0.05) # 0.7816494
```
The power will be almost twice as large when we have two groups of 1500 instead of a group of 300 and a group of 2700.
| null |
CC BY-SA 4.0
| null |
2023-05-05T06:50:38.360
|
2023-05-05T07:57:24.027
|
2023-05-05T07:57:24.027
|
164061
|
164061
| null |
614971
|
1
| null | null |
1
|
14
|
I have a set of points whose shape is as below:
[](https://i.stack.imgur.com/6MbtY.png)
Its set of x and y points is as follows: x=[0.14741,0.180288,0.195,0.245342,0.25614,0.289377,0.315789,0.357143,0.431034,1.785714,2,2.323529,2.586207,3,3.894737,4.923077] y=[0.211718,0.408906,0.507768,0.832,0.903626,1.092787,1.368377,1.750082,2.593839,1.72126,1.653633,1.567115,1.502963,1.419437,1.279676,1.153634]
Can anyone fit a distribution with at least three degrees of freedom to these points? Do I need the name of the distribution and its coefficients? Does anyone have an idea of how to write Python code to get the coefficients and the type of distribution? This curve looks most similar to the chi-square and double exponential curves.
I tried to write a polynomial for it by interpolation, which has a high degree.
This code is:
```
import numpy as np
import matplotlib.pyplot as plt
# Given data points
x = [0.14741, 0.180288, 0.195, 0.245342, 0.25614, 0.289377, 0.315789, 0.357143, 0.431034, 1.785714, 2, 2.323529, 2.586207, 3, 3.894737, 4.923077]
y = [0.211718, 0.408906, 0.507768, 0.832, 0.903626, 1.092787, 1.368377, 1.750082, 2.593839, 1.72126, 1.653633, 1.567115, 1.502963, 1.419437, 1.279676, 1.153634]
# Fit a polynomial equation of degree 5
p = np.poly1d(np.polyfit(x, y, 5))
# Generate a range of x values to plot the polynomial curve
x_new = np.linspace(min(x), max(x), 100)
# Plot the data points and the polynomial curve
plt.plot(x, y, 'o', label='Original data')
plt.plot(x_new, p(x_new), '-', label='Polynomial fit')
# Add axis labels and a legend
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
# Show the plot
plt.show()
```
and the result of the code is.
[](https://i.stack.imgur.com/B4OYJ.png)
Can you help me?
|
Fitting a set of points to a distribution by adding up to three degrees of freedom with Python
|
CC BY-SA 4.0
| null |
2023-05-05T06:56:08.453
|
2023-05-05T06:56:08.453
| null | null |
387258
|
[
"gamma-distribution",
"exponential-distribution",
"chi-squared-distribution",
"polynomial"
] |
614972
|
1
|
615084
| null |
2
|
30
|
I have a multiple logistic regression with continuous variables (dist_fs, slope, min_dist), one binary variable (exp), and one interaction between the binary variable and a continuous variable. The response variable is not balanced: I have n=273 for "1" and n=13,847 for "0".
The summary of the model provided these coefficients:
```
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.25942 0.24478 -13.316 < 2e-16 ***
dist_fs -1.95321 0.83374 -2.343 0.01914 *
slope -0.01183 0.42595 -0.028 0.97785
min_dist -0.67566 0.43925 -1.538 0.12399
exp1 -1.25486 0.23607 -5.316 1.06e-07 ***
dist_fs:exp1 3.97196 1.26526 3.139 0.00169 **
```
Using `plot_model()` I obtained this plot for the interaction:
[](https://i.stack.imgur.com/mD7Y7.png)
My question:
I understand that the low predicted probabilities may come from the unbalanced sample size of the response variable. But I see that the difference between the probabilities of the binary variable is very small (~2-3%, when x=0).
If the sample size were balanced, and the trends similar, could we expect the difference in the percentages between the two categories to be different as well (that is, also higher)? Or would this difference be the same, independently of the balanced/unbalanced sample size? Or can we not conclude anything about this without a balanced dataset?
Thanks in advance for any thoughts on this.
|
multiple logistic regression - relative change in the predicted probabilities with unbalanced classes
|
CC BY-SA 4.0
| null |
2023-05-05T07:21:30.153
|
2023-05-06T14:28:06.100
| null | null |
117281
|
[
"probability",
"logistic",
"multiple-regression",
"interaction",
"unbalanced-classes"
] |
614976
|
2
| null |
614944
|
6
| null |
I know `python` but I'm not fluent enough to provide you with a full implementation of this, so I'm showing you an `R` solution instead. I tried to make the code as self-explanatory as possible.
Some comments first.
(1) From what I can see, this seems a tough likelihood surface and for numerical stability I have switched to log-parameters, i.e. I have applied a one-to-one reparametrization. The parameter vector is thus $(\log(\theta), \log(\gamma))$; see the plot of the scaled log-likelihood surface below.
(2) To locate the maximum likelihood estimate I'm using `optim` via the L-BFGS-B method, but I invite you to try other methods as well. In addition, it is also possible to supply the analytical gradient of the log-likelihood to `optim`. This may help to get closer to the peak, but here I'm not using analytical gradients, so `optim` computes the necessary gradients numerically; a sketch of such a gradient is given after the second figure below.
(3) The option `hessian=TRUE` is for delivering the Hessian matrix at the optimum, that is, the observed information at the MLE. This is useful towards building a suitable grid for plotting the likelihood surface.
(4) In the second figure, you see the plot of $w(\tau) = \ell(\hat\tau)-\ell(\tau)$ and the contours can be treated as approximate confidence sets for $\tau = (\log\theta, \log\gamma)$.
```
# using the neg log-likelihood
nlogL <- function(param, x_vec) {
n = length(x_vec)
theta = exp(param[1])
gamma = exp(param[2])
oo1 = n*log(theta**2 /(1 + theta*gamma))
oo2 = (theta - 1)*sum(log(x_vec))
oo3 = sum(log(gamma - log(x_vec)))
return(-(oo1+oo2+oo3))
}
# (your density)
myf <- function(x, a, b) {
num = (a+1)**2 * x**a * log(b*x)
den = 1 - (a+1)*log(b)
return(-num/den)
}
# (your cdf)
myF <- function(x, a, b) {
num = x**(a+1)* ((a+1)*log(b*x)-1)
den = (a+1) * log(b) - 1
return(num/den)
}
# quantile function computed numerically
myqf <- function(p, a, b) {
uniroot(function(x) myF(x,a,b) -p,
lower = .Machine$double.eps,
upper = 1-.Machine$double.eps)$root
}
set.seed(5)
# get some uniform draws
u <- runif(100)
# apply the inverse transform to get the data
xvec <- sapply(u, myqf, a = 5, b=0.99)
# pdf of the true model
plot(function(x) myf(x,a=5, b=0.99), from=0, to=1,
col=2, lwd=2)
```
[](https://i.stack.imgur.com/cYVw2.png)
```
# find min of nlogL (or max of logL), via optim
# note the use of hessian=TRUE
(opt = optim(par = c(0,0), fn = nlogL, x_vec = xvec, hessian=TRUE,
method = "L-BFGS-B",
lower = c(-5,-10), upper = c(20,20)))
#> $par
#> [1]  1.817531 -3.987984
#> $value
#> [1] -55.46771
#> $counts
#> function gradient
#>       40       40
#> $convergence
#> [1] 0
#> $message
#> [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
#> $hessian
#>            [,1]      [,2]
#> [1,] 198.950731 9.1942354
#> [2,]   9.194235 0.9039165
# the approx-variance covariance matrix of the log-params
sqrt(diag(solve(opt$hessian)))
# plot the relative log-Lik
ngrid = 200
ltheta = seq(0.8,2.2, len=ngrid)
lgamma = seq(-15,2, len = ngrid)
grid_lpar = expand.grid(ltheta, lgamma)
nlogLik_val = apply(grid_lpar, 1, nlogL, x_vec = xvec)
nlogLik_val_mat = matrix(nlogLik_val, ncol=ngrid, byrow = F)
image(ltheta, lgamma, (opt$value - nlogLik_val_mat),
xlab = "log(theta)", ylab = "log(gamma)",
main="Plot of max(logL)-logL")
contour(ltheta, lgamma, -(opt$value - nlogLik_val_mat),
levels = 2*c(qchisq(0.50, df=2),qchisq(0.75, df=2),qchisq(0.95, df=2)),
labels = c("50%", "75%", "95%"), add= TRUE)
points(opt$par[1], opt$par[2], pch=20)
```
[](https://i.stack.imgur.com/QgvP1.png)
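As mentioned in point (2), you can also pass the analytical gradient to `optim` via its `gr` argument. A sketch of that gradient for the log-parameterised negative log-likelihood above (my own algebra, so double-check it before relying on it):
```
nlogL_grad <- function(param, x_vec) {
  n <- length(x_vec)
  theta <- exp(param[1]); gamma <- exp(param[2])
  # partial derivatives of the log-likelihood w.r.t. theta and gamma
  dtheta <- n * (2 / theta - gamma / (1 + theta * gamma)) + sum(log(x_vec))
  dgamma <- -n * theta / (1 + theta * gamma) + sum(1 / (gamma - log(x_vec)))
  # chain rule for the log-parameters, negated because nlogL is -logL
  -c(theta * dtheta, gamma * dgamma)
}
# then:
# optim(par = c(0,0), fn = nlogL, gr = nlogL_grad, x_vec = xvec, hessian = TRUE,
#       method = "L-BFGS-B", lower = c(-5,-10), upper = c(20,20))
```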
| null |
CC BY-SA 4.0
| null |
2023-05-05T08:11:28.933
|
2023-05-05T08:37:36.397
|
2023-05-05T08:37:36.397
|
56940
|
56940
| null |
614977
|
1
| null | null |
1
|
22
|
[https://arxiv.org/abs/2304.01933](https://arxiv.org/abs/2304.01933) shows that the best performing adapter-based parameter-efficient fine-tuning depends on the language model being fine-tuned:
[](https://i.stack.imgur.com/T6IzW.jpg)
E.g., LoRA is the best adapter for LLaMA-7B, while the S-adapter is the best adapter for BLOOM-7.1B.
Why does the best performing adapter-based parameter-efficient fine-tuning depend on the language model being fine-tuned?
|
Why does the best performing adapter-based parameter-efficient fine-tuning depend on the language model being fine-tuned?
|
CC BY-SA 4.0
| null |
2023-05-05T08:24:08.070
|
2023-05-05T17:32:32.207
|
2023-05-05T17:32:32.207
|
12359
|
12359
|
[
"natural-language",
"language-models",
"domain-adaptation",
"llm"
] |
614978
|
1
| null | null |
1
|
9
|
I have proven that if $C$, a concept class, is efficiently learnable from statistical queries using $H$, a representation class over $X$, then $C$ is efficiently PAC learnable using $H$ in the presence of classification noise.
How would I extend this result to a statistical query model in which the learning algorithm, in addition to the oracle $STAT(c, D)$, is also given access to unlabeled random draws from the target distribution $D$, and then show that the concept class of axis-aligned rectangles in $R^2$ can be efficiently learned in this paradigm?
I saw this problem online, but it seems to originate in Kearns–Vazirani.
|
Extending efficient PAC learning with classification noise to statistical query model with unlabeled random draws for axis-aligned rectangles in $R^2$
|
CC BY-SA 4.0
| null |
2023-05-05T08:33:05.337
|
2023-05-05T17:17:27.870
|
2023-05-05T17:17:27.870
|
387264
|
387264
|
[
"machine-learning",
"pac-learning"
] |
614980
|
1
| null | null |
0
|
28
|
Both [word2vec](https://www.baeldung.com/cs/nlps-word2vec-negative-sampling) and the transformer model compute a softmax function over the words/tokens on the output side.
For word2vec models, [negative sampling](https://www.baeldung.com/cs/nlps-word2vec-negative-sampling) is used for computational reasons:
- Is negative sampling only used for computational reasons?
- https://stackoverflow.com/a/56401065/1516331
So why don't we use negative sampling for the transformer models as well?
Is it because:
- The transformer model was published later than word2vec, so by now the computational cost of the softmax over the whole vocabulary is not a big concern;
- The transformer is an autoregressive model, so we need the estimate of the probability distribution to be as accurate as possible.
Are these true? Any other reasons?
|
Why the Transformer model does not require negative sampling but word2vec does?
|
CC BY-SA 4.0
| null |
2023-05-05T09:11:11.480
|
2023-05-05T09:11:11.480
| null | null |
14144
|
[
"natural-language",
"softmax",
"word2vec",
"transformers",
"negative-sampling"
] |
614981
|
1
| null | null |
0
|
17
|
I am trying to remove seasonality from a dataset. What I am interested in is the residuals, as trend and seasonal influence do not matter to the problem at hand. I am using the `statsmodels` package in python. Specifically, "[Multiple Seasonal-Trend decomposition using LOESS (MSTL)](https://www.statsmodels.org/dev/examples/notebooks/generated/mstl_decomposition.html)", as there are multiple seasonalities in the dataset. Is there a better way to handle the multiple seasonalities in a dataset than this?
|
Methods of adjusting for multiple seasonality
|
CC BY-SA 4.0
| null |
2023-05-05T09:26:16.433
|
2023-05-05T09:49:50.240
|
2023-05-05T09:49:50.240
|
53690
|
375621
|
[
"regression",
"time-series",
"python",
"seasonality",
"multiple-seasonalities"
] |
614982
|
1
| null | null |
5
|
228
|
This code was kindly recommended to me in my original question. It returned the same parameter estimates as the software called CRAFT by Aon Benfield. I have also managed to replicate it for the Weibull distribution.
I was wondering if anyone could help me replicate it for the Pareto distribution? Based on CRAFT, I would expect a shape of ~0.59 and a scale of ~55.15 (b~93 and q~1.7). Finally, I would also like to do the same but using the SSQ method instead of MLE.
```
x <- c(28.744, 385.714, 20.595, 99.350, 31.864, 77.713, 264.408, 21.204, 31.937, 0.900, 18.762, 173.276, 23.707)  # my data
```
```
constrained_mle <- function(x, p, q) {
lnL <- function(shape) {
# Solving q = qgamma(p, shape)*scale for the necessary scale
scale <- q/qgamma(p, shape)
sum(dgamma(x, shape, scale=scale, log=TRUE))
}
res <- optimise(lnL, lower=0, upper=1e+3, maximum=TRUE, tol=1e-8)
scale <- q/qgamma(p, res$maximum)
c(scale=scale, shape=res$maximum)
}
par <- constrained_mle(x, .95, 500.912)
par
#> scale shape
#> 231.0561574 0.6038756
# checking that the solution is correct
qgamma(.95, scale=par[1], shape=par[2])
#> [1] 500.912
```
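Here is my own (unchecked) attempt at adapting the same constrained-MLE idea to a Pareto type II (Lomax) parameterisation, with the density and quantile function written by hand. I am not sure this is the parameterisation CRAFT uses, so the b and q values above may not map directly onto shape/scale:
```
# Lomax (Pareto II) density and quantile written by hand
dlomax <- function(x, shape, scale, log = FALSE) {
  out <- log(shape) - log(scale) - (shape + 1) * log1p(x / scale)
  if (log) out else exp(out)
}
qlomax <- function(p, shape, scale) scale * ((1 - p)^(-1 / shape) - 1)

constrained_mle_pareto <- function(x, p, q) {
  lnL <- function(shape) {
    # choose the scale so that the p-th quantile equals q exactly
    scale <- q / ((1 - p)^(-1 / shape) - 1)
    sum(dlomax(x, shape, scale, log = TRUE))
  }
  res <- optimise(lnL, lower = 1e-6, upper = 1e3, maximum = TRUE, tol = 1e-8)
  scale <- q / ((1 - p)^(-1 / res$maximum) - 1)
  c(scale = scale, shape = res$maximum)
}

par <- constrained_mle_pareto(x, .95, 500.912)
par
# checking the constraint: should return 500.912 by construction
qlomax(.95, shape = par["shape"], scale = par["scale"])
```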
|
How can I fit a distribution to a dataset while forcing it through an exact point in r?
|
CC BY-SA 4.0
| null |
2023-05-05T09:39:18.290
|
2023-05-12T13:25:24.480
|
2023-05-12T13:25:24.480
|
112912
|
387268
|
[
"r",
"distributions",
"gamma-distribution",
"weibull-distribution",
"constraint"
] |
614983
|
1
|
615342
| null |
1
|
103
|
I am using linear mixed effect models to study the effect of a physical therapy treatment on two different populations of patients (Adolescents and Adults). The treatment is the same for both populations, and for each subject the (same) treatment is applied to both legs (Left/Right). The question is whether the treatment has a different effect on the two populations and the two legs (I am also interested in their potential interaction).
I therefore have two factors as fixed effects: `age` (factor with levels Adolescents and Adults) and `leg` (factor with levels Left and Right), with a possible interaction (`age*leg`). The random effect is `subject-id`. However, I am uncertain about how to design the model: the two populations are independent of each other, but the two legs are not (i.e. the two legs are within the same subject).
So far I used a model where I nested `leg` within `subject-id`:
```
options(contrasts = c("contr.sum","contr.poly"))
lCtr <- lmeControl(maxIter = 1000, niterEM = 500, msVerbose = FALSE, opt = 'optim')
linM <- lme(dat ~ leg*age, random = ~1|sbjID/leg, data=dat_trf, na.action=na.omit, method = "ML", control=lCtr )
```
Is this design correct?
The summary provides the following results:
```
> summary(linM)
Linear mixed-effects model fit by maximum likelihood
Data: dat_trf
AIC BIC logLik
118.2452 142.7825 -52.12259
Random effects:
Formula: ~1 | sbjID
(Intercept)
StdDev: 0.3601248
Formula: ~1 | leg %in% sbjID
(Intercept) Residual
StdDev: 0.144943 0.08273855
Fixed effects: dat ~ leg * age
Value Std.Error DF t-value p-value
(Intercept) 1.6142702 0.03496504 121 46.16813 0.0000
leg1 0.0232176 0.01088832 121 2.13234 0.0350
age1 -0.0514979 0.03496504 121 -1.47284 0.1434
leg:age1 -0.0089302 0.01088832 121 -0.82017 0.4137
Correlation:
(Intr) leg1 age1
leg1 0.000
age1 -0.171 0.000
leg1:age1 0.000 -0.171 0.000
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-0.942245288 -0.249664440 -0.002136358 0.199769397 1.004654898
Number of Observations: 246
Number of Groups:
sbjID leg %in% sbjID
123 246
```
The number of groups seems to be correct: there are indeed 123 subjects (51 adolescents and 72 adults), for a total of 246 measurements (two legs per subject). However, I suspect that something is wrong, because I obtain a relatively small p-value for `leg1` when the data look as in the following boxplots:
[](https://i.stack.imgur.com/iQRd0.png)
This weird result is confirmed by anova:
```
> anova.lme(linM,type="marginal")
numDF denDF F-value p-value
(Intercept) 1 121 2131.4958 <.0001
leg 1 121 4.5469 0.0350
age 1 121 2.1693 0.1434
leg:age 1 121 0.6727 0.4137
```
Is there anything wrong?
The assumptions of normality and constant variance of residuals are met.
Thanks in advance!
|
Mixed model with two fixed effects, with repeated measures for only one of them
|
CC BY-SA 4.0
| null |
2023-05-05T09:39:59.243
|
2023-05-09T15:27:20.037
| null | null |
131246
|
[
"mixed-model",
"lme4-nlme",
"repeated-measures",
"nested-data",
"two-way"
] |
614984
|
1
| null | null |
0
|
25
|
I have been working on prediction models in R studio based on a rather small data set. There is a total of ~ 1200 cases with 150 to 400 positive cases depending on which of the different outcomes is being modelled.
Initially it was done with a 70/30 split that was stratified to have a similar distribution of positive cases in both sets, but after gaining knowledge, and especially after reading some of Frank Harrell's literature (for instance "Prediction models need appropriate internal, internal-external, and external validation" by Steyerberg and Harrell), I am considering a different setup.
My main concern with the work I have done already is with calibration. Brier scores are about 0.10 for the set that has ~10% positive classes. As I understand it, this implies a non-informative model? AUC is approximately 0.80, but the reliability diagram also implies poor calibration. DCA implies a net benefit of using the models.
My main question is concerning when to apply calibration to the models under development (GML, XGB, RF). I have seen some setups where people suggest splitting in train - validate - test which is not possible in this case due to data size and would even further decrease the available amount of data for developing models.
- Can I take the training set of 70% of the data, train models with CV or bootstrapping, take performance metrics from the average over the test folds of the CV, and finally calibrate the models based on those results before applying them to the hold-out set of 30%?
- Is there another way to perform calibration during model development without having access to an external validation set?
Otherwise, if you develop a model with poor calibration but an AUC that seems to imply reasonable discrimination, why even continue with external validation if you don't know whether it could be improved to a well-calibrated model that could have real-world use?
Or do you report results including poor calibration and hope that it can be solved at the external validation stage sometime in the future...? That seems like a poor solution!
Thanks in advance for any input/ advice!
|
At what point during model development can model calibration be applied?
|
CC BY-SA 4.0
| null |
2023-05-05T09:47:19.080
|
2023-05-05T09:47:19.080
| null | null |
385064
|
[
"cross-validation",
"train",
"calibration",
"scoring-rules",
"train-test-split"
] |
614985
|
1
| null | null |
1
|
18
|
I am doing a random forest and a classification tree. I only have numeric variables, no factors, so I have some questions regarding the output.
Background of the variables:
Prob_1 contains values between 0 and 1 (I divided the real values by 100 to get values between 0 and 1); all of the other variables used for the model are between 1 and 100.
First question is regarding this output:
```
Regression tree:
tree(formula = Prob_1 ~ ., data = P14_Q1_2, subset = train)
Variables actually used in tree construction:
[1] "Flowers" "Herbaceous_area" "Woodland"
Number of terminal nodes: 4
Residual mean deviance: 0.06002 = 3.301 / 55
Distribution of residuals:
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.63250 -0.12570 0.03416 0.00000 0.14140 0.59530
```
I can see the residual mean deviance is 0.06002, but when I built the tree with the values on the 1-100 scale (Prob_1 is just the real values divided by 100), I got a residual mean deviance of over 600, which seems a lot. How can I tell if the residual mean deviance is good, and if it is too high, what are the solutions?
I calculated the MSE:
```
> mean((yhat - boston.test)^2)
[1] 0.1234633
```
What are considered to be good values of MSE, and when can I say the tree is predicting correctly?
Lastly:
```
> RF1 <- randomForest(Prob_par ~ ., data = P14_Q1_2,
+ subset = train, ntree=50000, mtry = 1, importance = TRUE)
> RF1
Call:
randomForest(formula = Prob_par ~ ., data = P14_Q1_2, ntree = 50000, mtry = 1, importance = TRUE, subset = train)
Type of random forest: regression
Number of trees: 50000
No. of variables tried at each split: 1
Mean of squared residuals: 0.08347299
% Var explained: 0.69
```
The % Var explained = 0.69, which is quite low, and when changing `mtry`, the value sometimes even has a minus sign (for example -3.22). What are the solutions?
|
Classification tree, random forest output and negative values
|
CC BY-SA 4.0
| null |
2023-05-05T10:08:54.870
|
2023-05-31T00:37:46.917
|
2023-05-31T00:24:29.737
|
247274
|
370528
|
[
"machine-learning",
"classification",
"random-forest",
"interpretation"
] |
614986
|
2
| null |
540460
|
0
| null |
This should be good for your needs. It works well but often needs fine tuning of the control parameters
[https://rdrr.io/cran/trendsegmentR/man/trendsegment.html](https://rdrr.io/cran/trendsegmentR/man/trendsegment.html)
Based on the paper:
H. Maeng and P. Fryzlewicz (2021), Detecting linear trend changes in data sequences
[https://arxiv.org/abs/1906.01939](https://arxiv.org/abs/1906.01939)
| null |
CC BY-SA 4.0
| null |
2023-05-05T10:26:16.453
|
2023-05-05T10:27:21.303
|
2023-05-05T10:27:21.303
|
387275
|
387275
| null |
614987
|
2
| null |
614933
|
0
| null |
>
But what loss function matches the probabilistic assumption that $y_i$ is drawn from $N(\mu_i=0, \sigma^2_i=f(X_i))$ for some function $f$?
The model is
$$
y_i=f(x_i;\theta)\varepsilon_i, \quad \varepsilon_i \stackrel{i.i.d.}{\sim} N(0,1).
$$
I assume $f$ is known up to an unknown parameter (vector) $\theta$. Since the likelihood is normal, the corresponding loss function is quadratic. Nonlinearity of the model w.r.t. its parameter(s) $\theta$ does not affect this basic fact. Thus we should be able to estimate the parameter(s) $\theta$ by nonlinear least squares (as an alternative to doing this by maximum likelihood).
| null |
CC BY-SA 4.0
| null |
2023-05-05T10:37:45.233
|
2023-05-05T10:37:45.233
| null | null |
53690
| null |
614988
|
1
|
615082
| null |
3
|
63
|
The hazard function is commonly used in models using distributions with positive support (gamma, weibull, lognormal, etcetera). However, I have not seen this concept (hazard) being used in the context of models using distributions with support on the entire real line, like the normal, Student-t, logistic, laplace, etcetera.
Is there any use of the hazard function associated with distributions with support on the real line?
EDIT: I just came across the concept of the inverse Mills ratio. It seems like the hazard function of distributions with support on the real line appears, implicitly, through this concept (en.wikipedia.org/wiki/Mills_ratio).
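To make the question concrete, here is a small R sketch of the kind of object I have in mind: the hazard of a standard normal, which is just the reciprocal of the Mills ratio.
```
# hazard of the standard normal: h(x) = f(x) / (1 - F(x)),
# i.e. the reciprocal of the Mills ratio
x <- seq(-4, 4, by = 0.1)
h <- dnorm(x) / (1 - pnorm(x))
plot(x, h, type = "l", xlab = "x", ylab = "hazard",
     main = "Hazard function of the standard normal")
```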
|
Is it useful to define the hazard function of distributions with support in the real line?
|
CC BY-SA 4.0
| null |
2023-05-05T10:44:30.917
|
2023-05-06T16:03:37.463
|
2023-05-06T16:03:37.463
|
387277
|
387277
|
[
"distributions",
"survival",
"hazard"
] |
614989
|
1
| null | null |
1
|
13
|
I am trying to understand the link between (rejection) ABC and rejection sampling. For example, [this paper](https://arxiv.org/pdf/2101.04653.pdf) states:
>
Approximate Bayesian Computation (ABC, Sisson et al., 2018) is centered
around the idea of Monte Carlo rejection sampling (Tavaré et al., 1997; Pritchard et al., 1999). Parameters $\theta$ are sampled from a proposal distribution, simulation
outcomes $x$ are compared with observed data $x_{o}$, and are accepted or rejected depending on a (user-specified) distance function and rejection criterion. While rejection ABC (REJ-ABC) uses the prior as a proposal distribution, [...]
Usually, the comparison of simulated data $x$ and observed data $x_{o}$ happens by choosing a threshold $\epsilon$ and a distance measure $\rho$, and if $\rho(x, x_{o}) < \epsilon$, then $\theta$ is accepted. Hence, I do not see the connection with (or need for) rejection sampling that is stated in the quote. Can anybody please clarify?
|
Rejection ABC: Connection with Rejection Sampling?
|
CC BY-SA 4.0
| null |
2023-05-05T11:30:07.303
|
2023-05-05T11:54:19.450
| null | null |
346180
|
[
"monte-carlo",
"approximate-bayesian-computation",
"accept-reject"
] |
614990
|
1
| null | null |
0
|
48
|
The problem is to distinguish between the definitions of AQL, RQL and reliability in the field of quality control. Let us just pose the context. We wish to calculate the reliability of a manufacturing process and/or the quality of the manufactured pieces by doing attributes testing. The particularity of attributes testing is that, for each piece tested, the outcome of the test will be either 'passed' or 'failed'.
I just remind here the definitions that I have at hand:
AQL
AQL stands for acceptance quality level. The AQL represents the poorest level of quality for the supplier's process that the consumer would consider to be acceptable on average.
RQL
RQL stands for risk quality level, it can be also denominated LTPD that stands for lot tolerance percent defective. The RQL is the poorest level of quality that the consumer is willing to accept in an individual lot.
Reliability
The process reliability is the probability that a process will perform its intended function without failure for a specified time under stated conditions.
The following definitions will also be useful to go on:
$\alpha$ producer's risk:
$\alpha$ is the probability that a good product will be rejected as a bad product by the consumer.
$\beta$ consumer's risk:
$\beta$ is the probability that a product not meeting quality standards will enter the marketplace.
Now, suppose we inspect $n$ manufactured samples out of $N$ and, as soon as a defect is detected, we reject the lot of $N$ items. Let us write $p_1$ for the AQL and $p_2$ for the RQL (the usual notation). Then, taking into account the above, we can link $\alpha$ with $p_1$ and $\beta$ with $p_2$ through the following equations:
$$
1-\alpha = (1-p_1)^n = P(\text{no defects are detected in the } n \text{ samples} \mid \text{AQL} = p_1),
$$
$$
\beta = (1-p_2)^n = P(\text{no defects are detected in the } n \text{ samples} \mid \text{RQL} = p_2).
$$
On the other hand, the reliability $R$ is linked to the confidence level $C$ by the following equation:
$$
1-C = R^n.
$$
I do not really understand what the difference is between the RQL and the reliability, and therefore which formulas I should use to determine how many samples I must inspect in order to claim, with a certain confidence level, that in $p$% of cases my pieces are not defective. In addition, we can note that the formula relating RQL and $\beta$ and the one relating $R$ and $C$ are the same with different parameters.
The thing is that I receive pieces X from a producer and he gives me an AQL. These pieces X will then be merged with other components, and I need to know what the quality of the pieces X is at the end of the process.
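To make my confusion concrete, here is a small numerical sketch of the two sample-size formulas above (the values of $p_2$, $\beta$, $R$ and $C$ are purely illustrative, not from my actual process):
```
# illustrative values only
p2   <- 0.05   # RQL: worst lot quality the consumer tolerates
beta <- 0.10   # consumer's risk
n_rql <- log(beta) / log(1 - p2)      # from beta = (1 - p2)^n
n_rql
#> about 44.9, so 45 defect-free samples

R <- 0.95      # required reliability
C <- 0.90      # required confidence
n_rel <- log(1 - C) / log(R)          # from 1 - C = R^n
n_rel
#> also about 44.9: the two formulas coincide when R = 1 - p2 and C = 1 - beta
```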
|
The link between AQL, RQL and the reliability of a process
|
CC BY-SA 4.0
| null |
2023-05-05T11:31:52.130
|
2023-05-06T13:58:25.757
|
2023-05-05T11:37:27.170
|
383929
|
383929
|
[
"definition",
"quality-control"
] |
614991
|
2
| null |
614821
|
0
| null |
Your main line of thought seems to be about the bounds U and L. Say that the bounds have the properties that $P[p > U] \geq \alpha/2$ and $P[p < L] \geq \alpha/2$ (this is not the same as your statements with the Chebyshev inequality). So what if we combine two independently generated bounds (or slightly related bounds)? Can we use the confidence interval $P[L\leq p \leq U] \leq 1-\alpha$ when L and U are based on different experiments/data?
You can do that, but the consequence will be that on some rare occasions, when $U<L$, you get an empty confidence interval (which obviously cannot be true, because $p$ needs to have at least some value).
The reason that this can occur is that the probability that the confidence interval contains the parameter relates to the experiment as the random unit. Some experiments will result in a confidence interval that contains the parameter; other experiments will not. See also [Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?](https://stats.stackexchange.com/questions/26450/)
| null |
CC BY-SA 4.0
| null |
2023-05-05T11:46:41.420
|
2023-05-05T14:03:18.423
|
2023-05-05T14:03:18.423
|
164061
|
164061
| null |
614992
|
2
| null |
614988
|
0
| null |
The hazard function is a function of time so a non-negative support is required.
In survival analysis, the hazard function is the probability of dying at time $t$ given that you survived until time $t$.
| null |
CC BY-SA 4.0
| null |
2023-05-05T11:49:00.483
|
2023-05-05T11:49:00.483
| null | null |
250675
| null |
614993
|
2
| null |
614989
|
0
| null |
The basic ABC algorithm is based on a `while` loop:
>
While$$\rho(x^\text{obs},x^\text{sim})>\epsilon,\tag{1}$$simulate$$(\theta,x^\text{sim})\sim
\pi(\theta)f(x^\text{sim}\mid\theta)$$
This means that the simulations from an instrumental distribution are rejected while (1) holds. This is a rudimentary form of (acceptance-) rejection sampling that does not require an extra uniform variate since the acceptance probability is either $0$ or $1$.
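For illustration, a toy version of this loop in R (the model, summary statistic and tolerance are chosen arbitrarily): infer the mean of a Normal$(\theta,1)$ sample, using the sample mean as summary statistic and a Normal$(0,10^2)$ prior.
```
set.seed(1)
x_obs  <- rnorm(50, mean = 2, sd = 1)   # "observed" data
s_obs  <- mean(x_obs)                   # summary statistic
eps    <- 0.05                          # tolerance
n_keep <- 1000
post   <- numeric(n_keep)

for (i in seq_len(n_keep)) {
  repeat {                                      # the "while rho > eps" loop
    theta <- rnorm(1, mean = 0, sd = 10)        # draw from the prior
    x_sim <- rnorm(50, mean = theta, sd = 1)    # simulate data given theta
    if (abs(mean(x_sim) - s_obs) <= eps) break  # accept iff rho <= eps
  }
  post[i] <- theta
}
mean(post)   # close to the true mean 2; no extra uniform draw was needed
```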
| null |
CC BY-SA 4.0
| null |
2023-05-05T11:54:19.450
|
2023-05-05T11:54:19.450
| null | null |
7224
| null |
614996
|
1
| null | null |
0
|
13
|
I have a dataset consisting of a case group and a control group, each with 8,000 unique individuals. Among the various independent variables, age is also included for each individual. However, I am not interested in age per se; I simply want to use it as a control variable. I have been advised to control for age using fixed effects instead of performing a regression with age group dummies. How should I proceed with this approach? My data is not panel data, and I have only one observation for each individual. I am using Stata as my software. I have searched online for an answer to my question and reviewed similar queries, but I haven't found enough information that fully addresses my question. Thank you! Best regards, Mijhalov
|
How can I use fixed effects on age without having panel data?
|
CC BY-SA 4.0
| null |
2023-05-05T12:42:09.473
|
2023-05-05T12:42:09.473
| null | null |
387279
|
[
"dataset",
"fixed-effects-model",
"cross-section"
] |
614997
|
2
| null |
610084
|
1
| null |
The default dispersion parameter estimated by DHARMa is essentially observed/expected variance, so 0.8 means that there is 20% lower variance than expected, which I would consider a small to moderate deviation.
Regarding the vignette: I never meant to suggest that 5 is a cut-off for concern. In the vignette, I just provide numerical examples for small / large values of the dispersion parameter, in the spirit of the comment above.
About your question: the main effect of the over/underdispersion is on the CIs / p-values, with overdispersion leading to anti-conservative and underdispersion to conservative bias. Based on this, I wouldn't be super concerned about a small underdispersion such as 0.8, but you might get slightly higher power by fitting a variable dispersion model.
Disclaimer: I'm the developer / maintainer of DHARMa. For technical questions about DHARMa and error reports, please use [https://github.com/florianhartig/DHARMa/issues](https://github.com/florianhartig/DHARMa/issues)
| null |
CC BY-SA 4.0
| null |
2023-05-05T12:51:30.077
|
2023-05-07T12:15:50.320
|
2023-05-07T12:15:50.320
|
48591
|
48591
| null |
614998
|
1
| null | null |
2
|
19
|
Saltelli et al., in ["How to avoid a perfunctory sensitivity analysis"](http://www.nusap.net/spe/Saltelli_and_Annoni_2010.pdf), discourage the use of one-factor-at-a-time (OAT) designs with an example of a sphere in a cube, and say in section 3:
"all the points of the OAT design are by construction internal to the sphere."
They then start with the case of two variables (k=2) and discuss the area of the circle inscribed in the unit square.
I cannot understand why the OAT design should correspond to a circle in the case of two variables (or a sphere for k=3).
For instance, the example from [here](https://doi.org/10.1038/s41598-019-47846-6) does not seem to explore a circle:
[](https://i.stack.imgur.com/Eqxbf.png)
[Here](https://youtu.be/_PnVw2WTZm8?t=1218) is a recording discussing it.
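As a side note, here is a small computation I did of the fraction of the unit hypercube covered by the inscribed sphere, which seems to be the quantity the paper's argument is about (my own numbers, just to see the scale of the effect):
```
# fraction of the unit hypercube occupied by the inscribed sphere of radius 1/2
k <- 2:10
vol_ratio <- pi^(k / 2) / (gamma(k / 2 + 1) * 2^k)
round(setNames(vol_ratio, paste0("k=", k)), 4)
#> k=2 is pi/4 ~ 0.785, k=3 is pi/6 ~ 0.524, and the fraction keeps dropping
```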
|
One-factor-at-a-time dimensionality
|
CC BY-SA 4.0
| null |
2023-05-05T12:58:25.577
|
2023-05-05T14:39:02.253
|
2023-05-05T14:39:02.253
|
162190
|
162190
|
[
"experiment-design"
] |
614999
|
2
| null |
614982
|
4
| null |
You can fit a gamma distribution in R with optim, e.g. by minimizing the sum of squared errors for the mean and the 95th percentile, which actually sets both errors to zero:
```
obs = c(28.744,385.714,20.595,99.350,31.864,77.713,
264.408,21.204,31.937,0.900,18.762,173.276,23.707)
tail = 500.912
error = function(x)(qgamma(0.95, shape=x[1], scale=x[2]) - tail)^2 + (x[1]*x[2]-mean(obs))^2
params = optim(c(1,1), error)$par
params
```
[The code](https://rdrr.io/snippets/embed/?code=obs%20%3D%20c(28.744%2C385.714%2C20.595%2C99.350%2C31.864%2C77.713%2C264.408%2C21.204%2C31.937%2C0.900%2C18.762%2C173.276%2C23.707)%0Atail%20%3D%20500.912%0A%0Aerror%20%3D%20function(x)(qgamma(0.95%2C%20shape%3Dx%5B1%5D%2C%20scale%3Dx%5B2%5D)%20-%20tail)%5E2%20%2B%20(x%5B1%5D*x%5B2%5D-mean(obs))%5E2%0A%0Aoptim(c(1%2C1)%2C%20error)) uses the formula for the gamma’s mean and gets the desired mean and percentile exactly with a shape parameter of 0.147 and a scale parameter of 616.6.
You can then check the quantile with
```
qgamma(0.95, shape=params[1], scale=params[2])
```
You can also check the log-likelihood of the result with
```
sum(log(dgamma(shape = params[1], scale = params[2], obs)))
```
and compare it with the log-likelihood from other two-parameter distributions.
| null |
CC BY-SA 4.0
| null |
2023-05-05T13:14:50.463
|
2023-05-05T18:35:04.823
|
2023-05-05T18:35:04.823
|
225256
|
225256
| null |
615000
|
1
| null | null |
0
|
34
|
I am working on ABT index and I calculated the return series. Also, I intend to fit a `GARCH(1,1)` model to the return series and then calculate the VaR and CVaR as the following image:
[](https://i.stack.imgur.com/i3JKL.png)
I used many approaches and as far as I know, the VaR and CVaR are one value per series. I do not understand the time-varying VaR/CVaR. Any help would be appreciated.
|
How can I calculate time-varying Value at Risk (VaR) and Conditional VaR for return series?
|
CC BY-SA 4.0
| null |
2023-05-05T13:22:35.720
|
2023-05-06T08:25:45.497
| null | null |
123131
|
[
"r",
"time-series",
"garch",
"risk"
] |
615001
|
2
| null |
614948
|
3
| null |
GANs are generally known to be hard to train and to collapse into producing only one "kind" of samples. They work well on images, because the discriminator forces the generator to produce good looking images. But usually (in my experience) you will have a hard time to get a GAN to produce the correct distribution. I think normalizing flows or diffusion models are way more likely to solve that problem.
| null |
CC BY-SA 4.0
| null |
2023-05-05T13:31:49.497
|
2023-05-05T13:31:49.497
| null | null |
365894
| null |
615003
|
2
| null |
614861
|
1
| null |
The most straightforward analysis would be of the time and location of the first recurrence. That would be a competing-risks model, with the locations (and death without recurrence) coded as separate types of event, and allowing for different regression coefficients for the biomarker among the different types of event. The [competing risks vignette](https://cran.r-project.org/web/packages/survival/vignettes/compete.pdf) of the R [survival package](https://cran.r-project.org/package=survival) shows how to set up not only such a simple competing-risks model but also more complicated models that allow for transitions among multiple states.
You could extend that approach to the multiple locations over time, but you would then have to be very specific about your assumptions.
At the simplest level, you could set up a model of all recurrences by having a separate data row for each recurrence within an individual, with the time to recurrence and an event indicator coding the recurrence location or death as above, along with an ID for the individual. You could then model similarly to the competing-risks model, using a cluster term (based on the ID values) to account for within-individual correlations while modeling the different locations in parallel.
That would, however, ignore some very important aspects of cancer biology. In particular, the correlations over time and among tumor sites are likely to be very important. Once there's a recurrence at one location, the probability of metastasis to multiple locations increases greatly. Furthermore, different organs tend to support metastases having different biological properties, so once there is a metastasis to one organ the probability will increase of further metastasis to an organ that supports a biologically similar metastasis.
At the other extreme of modeling, you thus could have a complicated multi-state model that allows for each possible combination of recurrence locations over time. But with 10 possible locations, the number of coefficients to estimate would grow quickly with the number of recurrences: 10 possibilities for the first recurrence location, with each location then having 9 possibilities for the next recurrence (90 combinations at that level), and each combination of second recurrences over time having 8 possibilities for the third recurrence (720 possibilities at that third level).
You might be able to apply your understanding of the subject matter to find a model intermediate in complexity between the completely parallel/independent model and the all-possible-combinations model. I'd recommend getting some experienced statistical consultation if you decide to do anything beyond the competing-risk model for the location and time of the first recurrence.
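As a minimal illustration of that first, simplest approach (with simulated stand-in data and made-up location names, and using separate cause-specific Cox models rather than the exact multi-state coding from the vignette):
```
library(survival)

# simulated stand-in data: time to first event, type of first event, biomarker
set.seed(1)
n <- 300
d <- data.frame(
  biomarker = rnorm(n),
  time      = rexp(n, rate = 0.1),
  event     = sample(c("censored", "liver", "lung", "death"), n,
                     replace = TRUE, prob = c(0.4, 0.25, 0.2, 0.15))
)

# one cause-specific Cox model per event type, treating the others as censored
fit_liver <- coxph(Surv(time, event == "liver") ~ biomarker, data = d)
fit_lung  <- coxph(Surv(time, event == "lung")  ~ biomarker, data = d)
summary(fit_liver)   # biomarker effect on the liver-recurrence-specific hazard
summary(fit_lung)    # biomarker effect on the lung-recurrence-specific hazard
```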
| null |
CC BY-SA 4.0
| null |
2023-05-05T13:33:35.653
|
2023-05-05T13:33:35.653
| null | null |
28500
| null |
615004
|
1
| null | null |
1
|
40
|
Suppose I have a finite data sample $\mathbf{S} = \{ (\mathbf{x}^{(1)}, \mathbf{y}^{(1)}), \dots, (\mathbf{x}^{(N)}, \mathbf{y}^{(N)}) \}$ from an unknown data-generating function of the form
$$ \mathbf{y} = f(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^n, \mathbf{y} \in \mathbb{R}^m $$
I will attempt a non-linear regression:
$$ \hat{\mathbf{w}} = \underset{\mathbf{w}}{\rm arg \ min} \ \ell (\mathbf{y} - \hat{f}(\mathbf{x}; \mathbf{w}))$$
where $\mathbf{w}$ are the parameters of my regression function, and $\ell$ is some standard loss function.
As a consequence of local (rather than global) optimisation, and potential model misspecification, I expect that if I repeat this regression on bootstrapped samples of $\mathbf{S}$, I will observe a sampling distribution for $\hat{\mathbf{w}}$, and hence for the resulting prediction $\hat{f}(\mathbf{x}^{(N+1)}; \hat{\mathbf{w}})$ at $\mathbf{x}^{(N+1)}$.
To quantify the resulting uncertainty in my prediction, I could make a bootstrap estimate of the variance
$$ \mathbf{s}^2_\text{boot} = E [ ( \hat{f}(\mathbf{x}^{(N+1)}; \hat{\mathbf{w}}) - E [ \hat{f}(\mathbf{x}^{(N+1)}; \hat{\mathbf{w}}) ] )^2 ] $$
Now assume that there is some uncertainty in the input $\mathbf{x}^{(N+1)}$.
I believe the simplest way to account for this is to assume a linear approximation (first-order Taylor expansion) for $\hat{f}$, and propagate the uncertainty via
$$\mathbf{K}_{\mathbf{yy}} = \nabla E[\hat{f}(\mathbf{x}^{(N+1)}; \hat{\mathbf{w}})]^\top \mathbf{K}_{\mathbf{xx}} \, \nabla E[\hat{f}(\mathbf{x}^{(N+1)}; \hat{\mathbf{w}})]$$
where $\nabla E[\hat{f}(\mathbf{x}^{(N+1)}; \hat{\mathbf{w}})]$ is the Jacobian of the bootstrap estimator, and $\mathbf{K}_{\mathbf{xx}}$, $\mathbf{K}_{\mathbf{yy}}$ are the covariance matrices of $\mathbf{x}$ and $\mathbf{y}$, respectively.
Then the variance as a result of the input uncertainty is
$$ \mathbf{s}^2_\text{lin} = \mathrm{tr} (\mathbf{K}_{\mathbf{yy}}) $$
How can I combine these two sources of uncertainty? Can I sum the variances as:
$$ \mathbf{s}_\text{tot}^2 = \mathbf{s}_\text{boot}^2 + \mathbf{s}_\text{lin}^2 $$
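For concreteness, this is how I currently compute the linearised term $\mathbf{s}^2_\text{lin}$, with a toy function and an assumed input covariance standing in for my real model (the names are purely illustrative, and the Jacobian is obtained by finite differences):
```
# toy stand-ins, only to illustrate the computation
f_hat <- function(x) c(x[1]^2 + x[2], sin(x[1]) * x[2])  # plays the role of E[f_hat(.; w_hat)]
x_new <- c(1.0, 2.0)
K_xx  <- diag(c(0.01, 0.04))                             # assumed input covariance

# central finite-difference Jacobian, J[i, j] = d f_i / d x_j (so J is m x n)
jac <- function(f, x, h = 1e-6) {
  m <- length(f(x)); n <- length(x)
  J <- matrix(0, m, n)
  for (j in seq_len(n)) {
    e <- numeric(n); e[j] <- h
    J[, j] <- (f(x + e) - f(x - e)) / (2 * h)
  }
  J
}

J      <- jac(f_hat, x_new)
K_yy   <- J %*% K_xx %*% t(J)   # same as the grad^T K grad expression above
s2_lin <- sum(diag(K_yy))       # the trace term
s2_lin
```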
|
How can I combine model parameter uncertainty and input uncertainty?
|
CC BY-SA 4.0
| null |
2023-05-05T13:49:06.090
|
2023-05-05T14:09:28.420
|
2023-05-05T14:09:28.420
|
261121
|
261121
|
[
"bootstrap",
"nonlinear-regression",
"uncertainty",
"multivariate-regression",
"error-propagation"
] |
615005
|
1
| null | null |
3
|
61
|
What are some good resources to learn about image synthesis? What are some of the key concepts or architectures to study?
I understand image synthesis as generating new images with ML techniques.
|
What are the best resources on image synthesis?
|
CC BY-SA 4.0
| null |
2023-05-05T14:06:33.457
|
2023-05-07T21:10:23.247
|
2023-05-07T21:10:23.247
|
22311
|
347904
|
[
"references",
"image-processing",
"computer-vision",
"generative-models"
] |
615007
|
1
| null | null |
0
|
30
|
I am analysing data from a 2^3 factorial design in R. I am using blocking, and I was wondering if my reasoning is correct. I am testing reaction time in milliseconds with three factors (Type of test, Hand temperature, and Dominant/NonDominant Hand).
Here is the output for the model:
[](https://i.stack.imgur.com/sr5Bt.png)
- For the "block" variable, am I correct in saying that we do not observe significant evidence that there is a difference in mean reaction time between the two blocks? And do I just remove this variable from my model?
- For the "temp:hand" interaction, do I remove it from the model as well? Or I retain it and state that the interaction between temperature and hand is not significant?
|
How to best interpret factorial design results?
|
CC BY-SA 4.0
| null |
2023-05-05T14:30:29.097
|
2023-05-05T15:05:48.000
|
2023-05-05T15:05:48.000
|
56940
|
387294
|
[
"r",
"anova",
"experiment-design"
] |
615008
|
1
|
615022
| null |
2
|
42
|
I am comparing 2 groups of sizes 13 and 28, using the Wilcoxon rank-sum test. While I think this method should not be very susceptible to outliers as it relies on ranks, I still want to investigate if the significant differences (p < 0.05) are reliable. I have tried using a leave-one-out cross-validation method, calculating the p-value while omitting one participant, either from the small or the larger group (so I obtain 41 p-values). Sometimes one or more of these 41 p-values are higher than 0.05 (i.e. not significant). What is a reasonable way to proceed, and is this extra test itself reasonable? For each comparison would a mean of all 41 p-values be meaningful? Should I deem any test where removing one participant resulted in a p-value above 0.05 as unreliable (despite removing one participant would reduce the statistical power)? Any advice would be greatly appreciated.
|
Leave-one-out validation after Wilcoxon rank-sum test
|
CC BY-SA 4.0
| null |
2023-05-05T14:32:19.003
|
2023-05-05T16:25:51.903
| null | null |
387295
|
[
"hypothesis-testing",
"cross-validation",
"wilcoxon-mann-whitney-test"
] |
615009
|
2
| null |
615005
|
1
| null |
Not sure if this is what you are looking for, but my preferred resource is the book by Rafael C. Gonzalez and Richard E. Woods titled Digital Image Processing, Pearson, now in its fourth edition. It's a quite comprehensive text on image processing; it covers spatial filtering, filtering in the frequency domain, image restoration, colour image processing, wavelets, etc. The book has twelve chapters, and the last chapter covers ML methods for image classification, e.g. NNets and Deep Learning.
| null |
CC BY-SA 4.0
| null |
2023-05-05T14:33:24.257
|
2023-05-05T14:33:24.257
| null | null |
56940
| null |
615011
|
2
| null |
438533
|
1
| null |
In nonlinear models, the relationship between the response variable and the predictors is not directly linear, making the interpretation of standardized beta coefficients less straightforward than in linear models. Although you can linearize the nonlinear function and use the Jacobian matrix to approximate a linear relationship, the standardized beta coefficients from this linear approximation may not accurately represent the true effect of each predictor in the original nonlinear model.
The idea of standardizing beta coefficients in the linearized version of a nonlinear model using the standard deviation of the i-th column of the Jacobian matrix might provide some insights into the relative importance of the predictors. However, this approach has limitations:
- The linearized version of the model is only an approximation, and the standardized beta coefficients may not represent the true effect of the predictors in the original nonlinear model.
- Standardized beta coefficients are more interpretable in linear models, where the effect of each predictor on the response variable is constant. In nonlinear models, the effect of predictors can change depending on the values of the other predictors.
Instead of relying on standardized beta coefficients, you may want to consider alternative methods to assess the importance of predictors in a nonlinear model, such as:
- Sensitivity analysis: Examine how changes in each predictor affect the model's predictions. This approach can help you understand the relative importance of each predictor and how their effects might change depending on the values of other predictors.
- Variable importance measures: Use techniques like Random Forests, which can handle nonlinear relationships and provide an importance score for each predictor.
In summary, the formulas for standardized beta coefficients in linear models do not directly translate to nonlinear models. Although you can use the linearized version of the model and the Jacobian matrix to approximate the relative importance of predictors, this approach has limitations. You may want to explore alternative methods, such as sensitivity analysis or variable importance measures, to assess the association between predictors and the response variable in a nonlinear model.
| null |
CC BY-SA 4.0
| null |
2023-05-05T14:36:05.417
|
2023-05-05T14:36:05.417
| null | null |
335198
| null |
615012
|
1
| null | null |
0
|
12
|
This is in reference to page 3 of [https://arxiv.org/abs/1911.01373](https://arxiv.org/abs/1911.01373)
In the line following equation (3), the author mentions that the generalised speed measure (a measure of the speed at which a proposal distribution converges to its stationary distribution) balances between high entropy of the proposal and the average acceptance probability for a Markov chain at initial state $x$.
- It is unclear to me what this tradeoff is. Can it not be the case that both the entropy of the proposal and the average acceptance probability are high?
- How does $\beta$ balance the ratio of the entropy and the average acceptance probability?
|
Generalised speed measure balances between entropy of proposal density and average acceptance
|
CC BY-SA 4.0
| null |
2023-05-05T14:37:31.827
|
2023-05-05T14:45:21.890
|
2023-05-05T14:45:21.890
|
109101
|
109101
|
[
"bayesian",
"markov-process",
"convergence",
"entropy"
] |
615013
|
1
| null | null |
0
|
19
|
I know the principle of nested cross-validation, and that it is used for estimating model performance. However, when choosing models and selecting hyperparameters, we still use normal cross-validation (CV) with grid search (or other methods). So what is the use of nested CV? Just to see the model's performance? Because in the end we still need to do a CV, choose hyperparameters and train the model. Or is my understanding wrong?
And if I choose hyperparameters by CV, train the model, and find it has good performance, how do I know whether there is overfitting? And what can I do about it?
|
Why do we need nested cross-validation?
|
CC BY-SA 4.0
| null |
2023-05-05T14:43:01.547
|
2023-05-05T14:49:28.793
|
2023-05-05T14:49:28.793
|
56940
|
383098
|
[
"machine-learning",
"cross-validation"
] |
615014
|
2
| null |
395038
|
0
| null |
"But by marginalising over
, have we not lost the information that would have enabled that?"
I'll try and provide insight into why the answer to this is no.
If we were to consider the parameters $\theta$ as a single point, we could fit a model (for example) via maximum likelihood estimation to yield
$$
p(y|\theta_{\text{ML}})
$$
Realize that we simply ignore the idea that there is randomness in $\theta$. In a Bayesian treatment, we don't ignore this, and we instead seek to find a distribution over $\theta$ rather than a point estimate for $\theta$. The distribution that takes into account the observed dataset we have at hand is called the "posterior" and I will denote it as $p(\theta|\pmb{\mathsf{y}})$.
Then, if we are interested in prediction, we compute the posterior predictive by marginalizing over all possible settings of $\theta$ contained in the posterior $p(\theta|\pmb{\mathsf{y}})$:
$$
p(y|\pmb{\mathsf{y}}) = \int p(y|\theta)p(\theta|\pmb{\mathsf{y}}) \text{d}\theta
$$
The MLE fitted model $p(y| \theta_{\text{ML}})$ considers a fixed $\theta$ and looks at the distribution over $y$ given such a fixed setting of $\theta$.
The Bayesian posterior predictive $p(y|\pmb{\mathsf{y}})$ considers a random $\theta$ and looks at the distribution over $y$ after considering all possible $\theta$ as per the posterior. Intuitively, it should have "more randomness" or "uncertainty" and it does. Namely, it has introduced "model" or "epistemic" uncertainty -- uncertainty coming from many different settings of $\theta$ that induce a model $p(y|\theta)$ that can explain our dataset at hand.
The variance of $p(y|\pmb{\mathsf{y}})$ will necessarily be greater than or equal to that of $p(y|\theta_{\text{ML}})$. Maybe the (very simplified) visualization below can help.
[](https://i.stack.imgur.com/Ot4L7.png)
Also see Pattern Recognition and Machine Learning (Chris M. Bishop) [https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf). Specifically, section 1.2.5.
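If it helps, here is a tiny numerical check of that last claim for a conjugate normal model with known observation variance (the numbers are picked arbitrarily for illustration):
```
set.seed(0)
sigma2 <- 1              # known observation variance
tau2   <- 4              # prior variance of theta
y <- rnorm(20, mean = 1.5, sd = sqrt(sigma2))
n <- length(y)

tau2_n <- 1 / (1 / tau2 + n / sigma2)   # posterior variance of theta

var_plugin <- sigma2              # variance of p(y | theta_hat), theta treated as fixed
var_pp     <- sigma2 + tau2_n     # variance of the posterior predictive p(y | data)
c(plugin = var_plugin, posterior_predictive = var_pp)   # the second is always the larger one
```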
| null |
CC BY-SA 4.0
| null |
2023-05-05T15:04:52.073
|
2023-05-05T15:04:52.073
| null | null |
381061
| null |
615015
|
1
| null | null |
0
|
46
|
I want to test for correlation in a time series model. However, all four independent variables and the dependent variable are non-stationary. I tried taking first, second and third differences but my dependent variable remains non-stationary. I also tried transforming it into a squared variable or logarithm, but the altered variables were still non-stationary according to the augmented Dickey-Fuller test. What else can I do?
Raw data regarding:
My dependent variable the intra-EU trade share (exports)
[](https://i.stack.imgur.com/uXj3j.png)
An independent variable, the number of EU Civil law regulations
[](https://i.stack.imgur.com/H1CzE.png)
An independent variable, the number of EU Healthcare regulations
[](https://i.stack.imgur.com/nMIYH.png)
An independent variable, the number of EU Environment regulations
[](https://i.stack.imgur.com/CjCir.png)
An independent variable, the number of national court judgements in which EU regulations were used.
[](https://i.stack.imgur.com/6n3ou.png)
|
Differencing, taking logs or squaring does not fix nonstationarity. What to do?
|
CC BY-SA 4.0
| null |
2023-05-05T15:07:11.370
|
2023-05-05T22:46:38.777
|
2023-05-05T22:46:38.777
|
387300
|
387300
|
[
"time-series",
"correlation",
"data-transformation",
"stationarity",
"augmented-dickey-fuller"
] |
615016
|
1
|
615026
| null |
0
|
31
|
I need to compare performance of a web-application before and after some code change. We have several pages (call them $P_1$ and $P_2$; in practice there are more than 2), and for each we compare loading times. Let's say samples of $P_i$ loading time before the change are $s_{i1},s_{i2},\ldots$ and after the change $s'_{i1},s'_{i2},\ldots$. Currently we apply the Mann–Whitney U test to these samples separately for each $i$. However, I noticed that quite often the metric changes for all pages in the same direction, but is considered to be insignificant.
Is it a bad idea to take the union of the populations, so that the test is applied to $s_{11},s_{21},s_{12},s_{22},\ldots$ and $s'_{11},s'_{21},s'_{12},s'_{22},\ldots$? At least I think this could make some of the cases described above significant, should not be too likely to produce a false positive if (say) the change actually makes half the pages faster and the other half slower, and does not seem to violate the test's assumptions (as listed e.g. at [https://statistics.laerd.com/statistical-guides/mann-whitney-u-test-assumptions.php](https://statistics.laerd.com/statistical-guides/mann-whitney-u-test-assumptions.php)).
Even if the above is a good idea, is there another, even better way?
|
Can I combine the Mann–Whitney U test results for several distributions?
|
CC BY-SA 4.0
| null |
2023-05-05T15:14:32.783
|
2023-05-05T16:47:02.973
| null | null |
38088
|
[
"wilcoxon-mann-whitney-test"
] |
615018
|
1
| null | null |
1
|
6
|
Suppose that I have training data with dimension $(N,H,F)$, where $N$ represents the number of different datasets, $H$ is the history size and $F$ is the input size. Normalizing each dataset over the history dimension, and using (for example) a standard-scaler, I should be getting a mean and std tensors $\mu,\sigma$, both with shape $(N,F)$. Namely, for each dataset $n$, we normalize each feature $f$ across the entire history $H$.
Now, given a validation set with dimensions $(V,H,F)$, it seems that $\mu$ and $\sigma$ cannot be used for normalization as $N\ne V$ (at least not necessarily)
One naive solution is to discard the dataset association and normalize the training data when it is reshaped as $(N\times H, F)$, generating $\mu,\sigma$ with shape $(1,F)$. This seems problematic as the different datasets do not necessarily represent samples from the same underlying distribution.
Any suggestions?
|
Scaling datasets for multi-dataset time series
|
CC BY-SA 4.0
| null |
2023-05-05T15:36:33.590
|
2023-05-05T15:36:33.590
| null | null |
365891
|
[
"time-series",
"dataset",
"normalization"
] |
615019
|
1
| null | null |
0
|
10
|
I have a dataset predicting disease or not from several different variables. Two of the variables denote the status of pathological lymph nodes - one pertaining to consistency (three levels: A, B and not present) and the other to size in mm (0-10, 10-20, 20+ and not present). The problem arises when the lymph node is not present: the results are all over the place for the not_present category in both consistency and size - presumably due to collinearity. Any good suggestions as to how I can recode this? I'd rather not just code them as missing, since the observations are valuable for the other variables.
|
dummy coding categories in two different variables that are identical
|
CC BY-SA 4.0
| null |
2023-05-05T15:45:51.447
|
2023-05-05T15:45:51.447
| null | null |
1291
|
[
"logistic",
"categorical-encoding"
] |
615020
|
1
| null | null |
0
|
13
|
I'm working on building/selecting a model to predict the result of a sales lead: whether it's "SOLD" or "NOT SOLD".
My dataset consists of past leads with the following data:
- sales rep
- product pitched
- lead source
- date of lead
- zipcode
- result
The issue I'm running into is that I have a various amount of leads per day per rep. Here is some sample data:
|ID |salesrep |product |leadsource |date |zipcode |result |
|--|--------|-------|----------|----|-------|------|
|1 |Bob |A |Website |5-1-2023 |12345 |SOLD |
|2 |Bob |A |Call In |5-1-2023 |12344 |NOT SOLD |
|3 |Alice |A |Website |5-1-2023 |12343 |NOT SOLD |
|4 |Bob |A |Referral |5-1-2023 |12346 |NOT SOLD |
|5 |Alice |A |Call In |5-2-2023 |12345 |SOLD |
|6 |Bob |B |Referral |5-2-2023 |12344 |SOLD |
|7 |Alice |B |Website |5-2-2023 |12342 |NOT SOLD |
|8 |Alice |A |Call In |5-3-2023 |12333 |SOLD |
In this example, I have three dates of data.
- Day 1 (5-1-2023) has Bob with 3 leads, and Alice with 1.
- Day 2 (5-2-2023) has Bob with 1 lead and Alice with 2 leads.
- Day 3 (5-3-2023) has just Alice with 1 lead.
Now let's say I have information for a lead for day 4:
|ID |salesrep |product |leadsource |date |zipcode |result |
|--|--------|-------|----------|----|-------|------|
|9 |Bob |A |Website |5-4-2023 |12345 |??? |
I want to predict the result - specifically, the probability of the result being "SOLD".
Initially I was thinking about using something like a Decision Tree/Random Forest/XG Boost classifier, but realized that this data may have a time-series aspect to it. The business is seasonal and has a "busy season" during the spring and summer months, and I'd also like to account for sales reps that are on "hot streaks".
For this I was thinking about making use of an LSTM RNN model, but I'm not sure if my data's "shape" is appropriate for this model type, as I have a various amount of data points per date, some dates have no data points for specific reps, and some dates have no data points at all (no leads on Sundays, federal holidays, etc).
Thoughts on this? Which model would you use in this scenario?
|
What is the best model to use with a time-series dataset of uneven data?
|
CC BY-SA 4.0
| null |
2023-05-05T15:50:05.683
|
2023-05-05T15:50:05.683
| null | null |
387304
|
[
"machine-learning",
"time-series",
"predictive-models",
"lstm"
] |
615021
|
1
| null | null |
0
|
28
|
I'm very new to statistics. I have data from some field work and need to test for statistically significant relationships using SPSS. I am looking for a relationship between urban park quality and human wellbeing. One variable is the wellbeing score, which varies; the other is the quality score of the park, which is constant. Can anyone advise on which test I should use in SPSS?
|
Which statistical test? - noobie question
|
CC BY-SA 4.0
| null |
2023-05-05T16:16:11.567
|
2023-05-05T16:16:11.567
| null | null |
387305
|
[
"statistical-significance"
] |
615022
|
2
| null |
615008
|
2
| null |
- Note that too much focus on a binary "reject/not reject" decision by a test is problematic. In fact p=0.06 and p=0.04 are very similar regarding their message about the data, so a p-value switching from, say, 0.04 to 0.06 when leaving out an observation doesn't mean that something big and worrying has happened. p-values slightly below 0.05 (and p-values in general) should be interpreted with some care anyway. Note that criticism of overuse and overinterpretation of hypothesis tests and p-values is all over the place these days, and too strong interpretation of being above or below some threshold is one of the major issues.
- The p-value computed on all the data gives you information about the evidence in all the data regarding your null hypothesis. If you leave out a data point (which normally would be good information in case your measurements are valid), you lose information, and obviously as a consequence the p-value will be different, and less useful than computing the p-value on all data.
- Cross-validation (CV) is not normally used with tests. The idea of CV is to leave data points out so that it can be assessed how well these points can be predicted by some kind of prediction method, where the predictions don't involve the predicted observations. As tests do not predict observations, this idea is not applicable to tests. (Although it is not necessarily wrong to take an "experimental" interest in what happens to the p-value if you leave observations out - however it's a nonstandard thing, so don't expect that there is a standard interpretation for whatever the outcome of this exercise is.)
This also means that you won't be in any trouble if you ignore the results of the CV on p-values, as hardly anybody else would do such a thing in the first place.
- In case the null hypothesis is wrong (which some people argue is always the case), the smaller the sample size, the weaker the power. This means that you should expect (slightly) larger p-values if you leave observations out. Consequently, I'd expect the mean p-value from your CV to be slightly larger than the p-value from all data, but the latter one is a valid p-value whereas the former one isn't (as averaging p-values doesn't correspond to a well defined probability).
- Averaging p-values is problematic anyway, as the meaning of p-values is not linear. E.g., 0.12 is arguably much more similar to 0.1 than 0.005 is to 0.025. Averaging treats the p-values implicitly as linear.
| null |
CC BY-SA 4.0
| null |
2023-05-05T16:25:51.903
|
2023-05-05T16:25:51.903
| null | null |
247165
| null |
615023
|
2
| null |
615000
|
0
| null |
A GARCH model specifies the entire conditional distribution of a time series $\{x_t\}$ at each time point. E.g. an ARMA(p,q)-GARCH(r,s) model looks like this:
\begin{aligned}
x_t &= \mu_t + u_t, \\
\mu_t &= c + \varphi_1 x_{t-1} + \dots + \varphi_p x_{t-p} + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q}, \\
u_t &= \sigma_t \varepsilon_t, \\
\sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \dots + \alpha_s u_{t-s}^2 + \beta_1 \sigma_{t-1}^2 + \dots + \beta_r \sigma_{t-r}^2, \\
\varepsilon_t &\stackrel{i.i.d.}{\sim} D(0,1),
\end{aligned}
where $D$ is some probability distribution with zero mean and unit variance. Thus the distribution of $x_t$ is $D(0,1)$ scaled by $\sigma_t$ and shifted by $\mu_t$.
Knowing the distribution, you can extract VaR and CVaR. You can do this analytically or by simulation. In the latter case, you generate a large number of realizations from the distribution of the time point of interest and obtain the empirical VaR and CVaR.
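For instance, a minimal simulation sketch in R: the one-step-ahead conditional mean `mu1` and standard deviation `sigma1` below are made-up placeholders for values you would take from a fitted model, and standard normal innovations are assumed.
```
set.seed(42)
mu1    <- 0.0004   # assumed one-step-ahead conditional mean (placeholder)
sigma1 <- 0.012    # assumed one-step-ahead conditional sd (placeholder)
alpha  <- 0.05     # tail level

# Simulate from the conditional distribution of x_{t+1}
x_sim <- mu1 + sigma1 * rnorm(1e5)

# Losses are negative returns; empirical VaR and CVaR at level alpha
loss <- -x_sim
VaR  <- quantile(loss, 1 - alpha)
CVaR <- mean(loss[loss > VaR])
c(VaR = unname(VaR), CVaR = CVaR)
```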
| null |
CC BY-SA 4.0
| null |
2023-05-05T16:37:59.583
|
2023-05-06T08:25:45.497
|
2023-05-06T08:25:45.497
|
53690
|
53690
| null |
615025
|
1
| null | null |
0
|
54
|
The question is made of 3 parts, 2 of which require the use of Bayes' formula:
>
(a) At a clinic for patients with suspected malaria, 89 out of a total of 300
were found to actually be infected with malaria.
Use this information to estimate P (I = infected), where I represents
the malaria infection status (infected or not infected) of a randomly
selected patient attending this clinic.
a) is straightforward, giving $$P(I)=89\div300\approx0.297$$
>
(b) There are various blood tests that can be used to diagnose malaria. One
particular test is ‘paracheck’, which is a rapid dipstick test. Let
R = paracheck positive for malaria.
For patients at this clinic, it is estimated that
P (R | I = infected) = 0.910 and P (R | I = not infected) = 0.137. Use
your answer to part (a) and Bayes’ theorem to estimate
P (I = infected | R), the probability that when paracheck is positive for
malaria, a patient at this clinic is infected with malaria.
The formula I used for this is $$P(R \mid I=\text{infected}) \times P(I=\text{infected}) \div P(R)$$
To calculate P(R)
$$P(R) = P(R \mid I=\text{infected})\times P(I=\text{infected})+P(R \mid I=\text{not infected})\times P(I=\text{not infected})$$
$$0.910 \times 0.296 + 0.137 \times 0.704 = 0.365808$$
$$(0.910 \times 0.296) \div 0.365808 = 0.7363425622$$
giving approximately a 73.6% chance that a patient who tested positive with Paracheck is infected with malaria.
>
(c) An alternative diagnosis method is blood microscopy. Let
M = microscopy positive for malaria.
For patients at this clinic, it is estimated that
P (M | R, I = infected) = 0.519 and
P (M | R, I = not infected) = 0.207. Calculate the probability that a
patient at this clinic who tested positive for malaria by both paracheck
and microscopy is infected with malaria.
To check if my calculations for b) were correct, I used the same formula and logic for this part
$$0.519 \times 0.296 + 0.207 \times 0.704 = 0.299352$$
$$(0.519 \times 0.296) \div 0.299352 = 0.5131884871$$
Meaning that there's only a 51.3% probability that a patient who tests positive for both paracheck and microscopy also has malaria.
This doesn't make sense to me based on my answer to part b) which makes me think I am making a foolish error.
Can anyone see where my mistake is?
|
Error answering a question using Bayes Theorem - Conditional probability?
|
CC BY-SA 4.0
| null |
2023-05-05T16:42:24.600
|
2023-05-05T18:07:35.860
|
2023-05-05T18:07:35.860
|
20519
|
387306
|
[
"bayesian",
"conditional-probability"
] |
615026
|
2
| null |
615016
|
4
| null |
There are two reasons against your idea.
- Running tests that are chosen conditionally on the data to be tested is invalid. This means that if you see insignificant results and because of this you run a new test on the same data, any significance you then get will not be valid. This problem, however, goes away if you apply the new test to new data other than those that gave you the idea, which I can imagine is possible in your situation.
- Chances are that the different pages are systematically different regarding their loading times. This means that taking the union of the populations will invalidate the i.i.d. assumption, as I'd expect different distributions for different pages.
If you are willing to make a normality assumption, a mixed model with a dummy variable for before/after and a random effect may do the trick. I'm not aware of a combination of Mann-Whitney with a random effect. A "quick and dirty" method would be to put samples together after subtracting an overall pagewise median, but this is nonstandard and may easily be criticised. (I'd probably do it for curiosity's sake but I'd be very careful not to overinterpret it, and would for sure check whether the different pages look similarly distributed after this operation.)
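If you go the mixed-model route, a minimal sketch could look like the following, assuming a long-format data frame `dat` with placeholder columns `load_time`, `period` (coded before/after) and `page`:
```
library(lme4)
# A random intercept per page absorbs systematic between-page differences in level;
# the fixed effect of period gives the overall before/after comparison.
fit <- lmer(load_time ~ period + (1 | page), data = dat)
summary(fit)
```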
| null |
CC BY-SA 4.0
| null |
2023-05-05T16:47:02.973
|
2023-05-05T16:47:02.973
| null | null |
247165
| null |
615027
|
1
|
615085
| null |
0
|
36
|
I am fitting a cox proportional hazards model looking at mortality. I have a number of covariates and a specific predictor of interest (inflammation), and I am also looking at how the effect of inflammation on mortality varies between two groups (Frail participants vs. Non-frail participants). Thus far, I have been stratifying by frailty status. As such, I have three different models:
`overall_model.fit <- coxph(Surv(Follow_Up, Mortality) ~ Age + Sex + Assessment_Center + Smoking + Education + Ethnicity + Education + Inflammation, overall_df)`
`frail_model.fit <- coxph(Surv(Follow_Up, Mortality) ~ Age + Sex + Assessment_Center + Smoking + Education + Ethnicity + Education + Inflammation, frail_df)`
`non_frail_model.fit <- coxph(Surv(Follow_Up, Mortality) ~ Age + Sex + Assessment_Center + Smoking + Education + Ethnicity + Education + Inflammation, non_frail_df)`
The sum of the frail and non-frail groups is the total set of participants, which is the "overall" model. My supervisor has told me that rather than stratify, I should get point estimates for the frail and non-frail "groups" from an interaction model:
`interaction_model.fit <- coxph(Surv(Follow_Up, Mortality) ~ Age + Sex + Assessment_Center + Smoking + Education + Ethnicity + Education + Inflammation + Frailty + Inflammation*Frailty, overall_df)`
Where I assign levels of frailty (i.e. Frail or Non-frail, 1 or 0) and see how the model estimate changes for different levels of inflammation (e.g. -1SD Inflammation, Mean Inflammation, +1SD Inflammation). Thus far, I have been trying to do this using predict.coxph - however, this has been problematic because predict.coxph compares the "newdata" field to the model means. Therefore, the levels that I set for each covariate have an impact on the risk of mortality (e.g. someone with no education has a higher mortality risk than someone with a university degree, but I have to set one of these levels). Worse, since the point estimates are relative to this mean, the results of predict.coxph do not tell me how the risk of mortality changes per unit increase/decrease in my predictor - it only tells me what the point estimate is for a given participant with those values of covariates relative to the mean. How can I get point estimates similar to the output of summary(coxph.model) (i.e. per unit increase/decrease in the predictor, Inflammation) for different levels of another predictor (i.e. frail or non-frail)?
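For reference, the kind of `predict.coxph` call I have been using looks roughly like this (all covariate values below are placeholders):
```
# Placeholder covariate values; Inflammation varied at -1 SD, mean, +1 SD
newdat <- data.frame(Age = 60, Sex = "Female", Assessment_Center = "CentreA",
                     Smoking = "Never", Education = "Degree", Ethnicity = "White",
                     Inflammation = c(-1, 0, 1), Frailty = "Frail")
predict(interaction_model.fit, newdata = newdat, type = "risk")  # relative to model means
```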
I hope that this question makes sense.
|
Stratification vs. point estimates for a cox proportional hazards model?
|
CC BY-SA 4.0
| null |
2023-05-05T16:51:57.290
|
2023-05-06T14:37:13.737
| null | null |
371493
|
[
"r",
"cox-model"
] |
615028
|
1
| null | null |
0
|
24
|
Consider the following additive model
\begin{equation*}y=h(x)+v\end{equation*}
where:
- $y\in\mathbb{R}^p$ is the observation
- $x\in\mathbb{R}^n$ is the estimand
- $v$ is noise, distributed according to a PDF $p_v(\cdot)$
- $h:\mathbb{R}^n\to\mathbb{R}^p$ is a map that is invertible
It is trivial to see that the likelihood function for this model is simply
\begin{equation*}
\ell(y,x) = p_v(y-h(x))
\end{equation*}
so that the maximum likelihood estimate of $x$ given $y$ is
\begin{equation*}
\hat{x}_\text{ML}(y) = \arg \max_{s\in\mathbb{R}^n} p_v(y-h(s))
\end{equation*}
Problem
I've got the strong suspicion that, in the current simple setting, the maximum likelihood estimate is always
\begin{equation*}
\hat{x}_\text{ML}(y) = h^{-1}(y-v^*)
\end{equation*}
where $v^*$ is the noise mode, i.e.
\begin{equation*}
v^* \triangleq \arg \max_{v \in \mathbb{R}^p} p_v(v)
\end{equation*}
The idea behind my suspicion is the following: we are searching for the $s$ that, once mapped into the observation space $\mathbb{R}^p$ and combined with $y$, gives the maximum value of the PDF $p_v(\cdot)$. Hence, we are searching for the $s$ such that $y-h(s)$ is the mode $v^*$ of $p_v(\cdot)$. Thus, we are searching for the solution of
\begin{equation*}
y-h(s)=v^*
\end{equation*}
which (after changing the name of $s$ to $\hat{x}_{\text{ML}}$) is
\begin{equation*}
\hat{x}_{\text{ML}}=h^{-1}(y-v^*)
\end{equation*}
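A quick numerical check of this reasoning (a sketch with $n=p=1$, $h(x)=x^3$ and Gamma(2,1) noise, whose mode is $1$):
```
h      <- function(x) x^3
h_inv  <- function(y) sign(y) * abs(y)^(1/3)
y      <- 2.7                         # an arbitrary observation
v_mode <- 1                           # mode of Gamma(shape = 2, rate = 1)

# Numerical ML estimate: maximise p_v(y - h(s)) over s.
# The interval is chosen so that y - h(s) stays inside the Gamma support.
neg_loglik <- function(s) -dgamma(y - h(s), shape = 2, rate = 1, log = TRUE)
s_hat <- optimize(neg_loglik, interval = c(-2, 1.3))$minimum

c(numerical = s_hat, closed_form = h_inv(y - v_mode))   # should agree
```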
Questions
Is my intuition correct? If not, why? If yes, is my explanation sound?
|
Maximum likelihood for additive models
|
CC BY-SA 4.0
| null |
2023-05-05T17:06:15.107
|
2023-05-05T17:06:15.107
| null | null |
350817
|
[
"maximum-likelihood",
"estimation",
"density-function"
] |
615029
|
1
| null | null |
1
|
150
|
When running refutation tests on two causal models, model 1 has a larger p-value and an estimated effect closer to the new effect (compared to model 2). Neither can be refuted, because both have p-values above 0.05.
Is it safe to assume that model 1 will arguably outperform model 2 when applied to new data?
Basically, what I want to know is if refuting methods can also be used to measure the performance of a model (e.g. the closer p-value is to 1, the better is the expected performance of the model).
|
Causal counterfactual inference model comparison
|
CC BY-SA 4.0
| null |
2023-05-05T17:11:46.800
|
2023-05-13T02:02:38.167
|
2023-05-05T20:48:20.130
|
386911
|
386911
|
[
"causality",
"causal-diagram"
] |
615030
|
1
| null | null |
1
|
29
|
I have a dataset with spatial grid cells. After performing some spatial diagnostics (Moran's I, LaGrange multiplier tests), it seems that there is spatial dependency in the form of spatial error, so a spatial error model seemed fitting. However, my dependent variable is a count variable and is quite skewed (the standard deviation is much higher than the mean). From what I have gathered this violates the assumptions of the spatial error model (which assumes more continuous data) and also seems to lead to heteroskedasticity, biasing the results.
An alternative could be to run a negative binomial model, which seems to fit the data better but leaves out the spatial dependencies, also biasing the results. I know there are some ways to fit spatial count models, but these seem too complex for me to grasp at this stage (or too difficult to implement in R).
My question is: how do I choose between these two evils? Unfortunately, the results are quite different. Is there a good way to compare goodness-of-fit? The AIC and log-likelihood are much better for the negative binomial model, whereas the RMSE is better for the spatial model, but I am not sure whether either of those metrics can be compared across the models.
|
How to choose between a count model and spatial error model
|
CC BY-SA 4.0
| null |
2023-05-05T17:35:36.413
|
2023-05-05T17:35:36.413
| null | null |
387310
|
[
"generalized-linear-model",
"goodness-of-fit",
"negative-binomial-distribution",
"spatial"
] |
615031
|
1
| null | null |
-1
|
12
|
I am building a recommendation system that makes recommendations to a specific user based on their own interactions and other data. In particular, I am using a type of graph neural network to create the embeddings and later do link prediction (but this might be irrelevant for now). As opposed to other systems, I suppose I have to hash or otherwise encode the user IDs in some way to use them as input to my model. It seems weird to use IDs as features, but I suppose that is the way to go in a recommender system. Let's suppose the user IDs are strings and we have a million users. What type of encoding do you suggest?
|
Encoding UserIDs to use in ML model
|
CC BY-SA 4.0
| null |
2023-05-05T17:38:28.690
|
2023-05-05T17:38:28.690
| null | null |
336594
|
[
"categorical-encoding",
"recommender-system"
] |
615033
|
2
| null |
332086
|
0
| null |
What to take from these apparently inconsistent thresholds is that you should judge for yourself if a Cramér's V is "negligible", "weak", "moderate", "important", "strong", or whatever adjective you can imagine.
Even using thresholds defined using the distribution of effect sizes in a specific scientific field is bound to be problematic (Panzarella et al., 2021), so eventually you should use your own judgement and knowledge of your specific research question to decide how strong or important a given Cramér's V is.
Similarly to the two videos you mention, you can find various thresholds in scientific publications, that are relevant to their own research questions. Here is a non-exhaustive list of some publications using very different thresholds, to show how useless it can be to use someone else's benchmarks without exercising your own judgement in your specific context:
- Chanvril-Ligneel and Le Hay (2014) define an "important Cramér's V" as $V > 0.15$.
- Dai et al. (2021) use the following thresholds for Cramér's V:
weak: >0.05; moderate: >0.10; strong: >0.15; and very strong: >0.25.
- Kakudji et al. (2020):
A Cramér's V ≥ 0.1 was deemed as a weak association, Cramér's V ≥ 0.3
was seen as a moderate association and Cramér's V ≥ 0.5 regarded as a large
effect/association
- Lee (2016) uses the following thresholds:
$V < 0.1$: negligible
$0.1 < V < 0.2$: weak
$0.2 < V < 0.4$: moderate
$0.4 < V < 0.6$: relatively strong
$0.6 < V < 0.8$: strong
$V > 0.8$: very strong
- Le Quéau et al. (2017) use the following thresholds:
$V < 0.1$: very weak,
$0.1 \le V < 0.2$: weak,
$0.2 \le V < 0.3$: medium,
$V \ge 0.3$: strong.
- Another method is to convert $V$ to Cohen's $\omega$ (omega), and then interpret the result according to Jacob Cohen's guidelines (Cohen, 1988). (Incidentally, in his book Cohen tends to advise against using these thresholds, and rather suggests them as a kind of last resort).
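For completeness, Cramér's V itself is simple to compute, e.g. in base R (a minimal sketch on a built-in dataset):
```
# Cramér's V from a contingency table: V = sqrt((chi2 / n) / (min(r, c) - 1))
tab  <- table(mtcars$cyl, mtcars$gear)            # example contingency table
chi2 <- suppressWarnings(chisq.test(tab)$statistic)
n    <- sum(tab)
V    <- sqrt(as.numeric(chi2) / (n * (min(dim(tab)) - 1)))
V
```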
---
References
- Chanvril-Ligneel, F., & Hay, V. L. (2014). Méthodes Statistiques pour
les Sciences Sociales. ELLIPSES.
- Cohen, J. (1988). Statistical Power Analysis for the Behavioral
Sciences (2nd edition). Routledge.
- Dai, J., Teng, L., Zhao, L., & Zou, H. (2021). The combined analgesic
effect of pregabalin and morphine in the treatment of pancreatic
cancer pain, a retrospective study. Cancer Medicine, 10(5),
1738–1744. https://doi.org/10.1002/cam4.3779
- Kakudji, B. K., Mwila, P. K., Burger, J. R., & Plessis, J. M. D. (2020). Epidemiological, clinical and diagnostic profile of breast cancer patients treated at Potchefstroom regional hospital, South Africa, 2012-2018: An open-cohort study. Pan African Medical Journal, 36(1), Article 1. https://www.ajol.info/index.php/pamj/article/view/210808
- Lee, D. K. (2016). Alternatives to P value: Confidence interval and effect size. Korean Journal of Anesthesiology, 69(6), 555–562. https://doi.org/10.4097/kjae.2016.69.6.555
- Le Quéau, P., Labarthe F., & Zerbib, O. (2017). Analyse de données quantitatives en sciences humaines et sociales [Mooc]. France Université Numérique. https://www.fun-mooc.fr/fr/cours/analyse-de-donnees-quantitatives-en-sciences-humaines-et-sociales-adshs/
- Panzarella, E., Beribisky, N., & Cribbie, R. A. (2021). Denouncing the use of field-specific effect size distributions to inform magnitude. PeerJ, 9, e11383. https://doi.org/10.7717/peerj.11383
| null |
CC BY-SA 4.0
| null |
2023-05-05T18:17:38.813
|
2023-05-06T10:24:23.613
|
2023-05-06T10:24:23.613
|
164936
|
164936
| null |
615034
|
1
|
615056
| null |
0
|
52
|
I have the following problem: I have log-likelihoods that need to be transformed to probabilities. One thing I have attempted is the following. Define $\kappa_{s} := \log \int p(\theta \mid X_s) \,\text{d} \theta$ and $\nu_{j,s} := \log p(\theta_j \mid X_s)$ for data-generating processes $X_s$ and parameters $\theta_j$.
The log-likelihood $\kappa_{s} \in [-\infty,\infty]$. I have found $\frac{\exp( \kappa_s - b )}{\sum_{s} \exp( \kappa_s - b)}$ for $b= \underset{s}{ \max} \kappa_{s}$.
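In code, this scheme is simply a max-shifted softmax (a sketch with made-up log-likelihood values):
```
kappa <- c(-1200, -1185, -1500, -1190)          # made-up log-likelihoods
b     <- max(kappa)
prob  <- exp(kappa - b) / sum(exp(kappa - b))
prob                                            # nearly all mass on the largest kappa
```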
However, my log-likelihoods are still fairly spread out, so I wind up with lots of probabilities that are very close to zero while the maximum takes essentially all of the mass. Is there a type of outlier-robust transformation, or something slightly more sophisticated than the scheme described above?
For more context, the problem is more carefully described by
$\sum_{j} \exp(\nu_{s,j}) \lambda_{j} \geq \exp (\kappa_{s}) $
which appears as a constraint of a linear programme. The log-likelihoods are fairly large and dispersed.
The reason why I want to transform them is that the LHS of the inequality approximates the integral $\sum_{j} \exp(\nu_{s,j}) \lambda_{j} \approx \int p_{\nu} \, \text{d} \lambda $.
The $\text{d} \lambda$ weights are the variable I am solving for, so logsumexp sadly does not seem to be applicable here.
This is a likelihood ratio test constraint.
Thank you!
|
Numerically stable transformation of log-likelihoods to probability
|
CC BY-SA 4.0
| null |
2023-05-05T18:21:39.793
|
2023-05-05T22:54:12.637
|
2023-05-05T19:46:44.317
|
387242
|
387242
|
[
"data-transformation",
"likelihood",
"logarithm"
] |
615035
|
1
| null | null |
1
|
42
|
How would I prove that the Perceptron mistake bound is tight? Avrim Blum's lecture notes claim that the upper bound on the number of mistakes is $\left(\frac{R}{\gamma}\right)^2$, but I don't understand how to prove that this mistake bound is tight.
|
Proving Perceptron algorithm mistake bound is tight
|
CC BY-SA 4.0
| null |
2023-05-05T18:24:32.573
|
2023-05-05T23:52:38.890
|
2023-05-05T23:52:38.890
|
387312
|
387312
|
[
"machine-learning",
"mathematical-statistics",
"perceptron"
] |
615037
|
2
| null |
328216
|
1
| null |
Just to expand on the directed graphical model point. If you can assume conditional independence of your continuous features $X$ from your categorical features $Y$ given your latent class $Z$, your likelihood will factorize as
$$P(X, Y)=\sum_{z} P(X, Y, Z=z)=\sum_{z} P(X|Z=z)P(Y|Z=z)P(Z=z)$$
meaning you can now freely specify the type of each conditional, e.g. $X|Z=z \sim \mathcal{N}(\mu_z, \Sigma_z)$ and $Y|Z=z \sim \text{Bernoulli}(p_z)$ or $Y|Z=z \sim \text{Multinoulli}(p^z_1, p^z_2, ..., p^z_K)$. This is the approach we used in the [StepMix Python and R package](https://github.com/Labo-Lacourse/stepmix).
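As a toy illustration of this factorization (a base-R sketch with made-up parameters, not the StepMix API), with one Gaussian feature, one Bernoulli feature and two latent classes:
```
pi_z <- c(0.6, 0.4)                      # P(Z = z)
mu_z <- c(-1, 2); sd_z <- c(1, 0.5)      # X | Z = z ~ Normal(mu_z, sd_z^2)
p_z  <- c(0.2, 0.8)                      # Y | Z = z ~ Bernoulli(p_z)

# P(X = x, Y = y) = sum_z P(Z = z) * P(x | Z = z) * P(y | Z = z)
lik_xy <- function(x, y) sum(pi_z * dnorm(x, mu_z, sd_z) * dbinom(y, size = 1, prob = p_z))
lik_xy(x = 0.5, y = 1)
```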
| null |
CC BY-SA 4.0
| null |
2023-05-05T19:11:31.157
|
2023-05-05T19:12:22.787
|
2023-05-05T19:12:22.787
|
387315
|
387315
| null |
615038
|
1
| null | null |
1
|
21
|
I am trying to identify whether three groups of species (like here) overlap or separate out. I tried a popular method to draw ellipses:
```
library(factoextra)  # provides fviz_pca_biplot
data(iris)
res.pca <- prcomp(iris[, -5], scale = TRUE)
fviz_pca_biplot(res.pca, label = "var", habillage=iris$Species,
addEllipses=TRUE, ellipse.level=0.95,
ggtheme = theme_minimal())
fviz_pca_biplot(res.pca, label = "var", habillage=iris$Species,
addEllipses=TRUE, ellipse.level=0.95,ellipse.type = "confidence",
ggtheme = theme_minimal())
```
The first produced:
[](https://i.stack.imgur.com/ItzAI.png)
I think this draws ellipses of some kind around the centroid. I would interpret this as indicating that the blue and green species share common traits, while red uses completely different traits.
This is the second one:
[](https://i.stack.imgur.com/rycDu.png)
I think this draws 95% confidence ellipses of some kind around the centroid. I would interpret this as indicating that all three species (blue, green and red) have completely different traits.
I am trying to understand how to interpret each of these graphs. More importantly, what kind of statistics should I be reporting to describe the extent of overlap? Alternatively, I tried to look up ellipses from the `vegan` package sensu Oksanen, but I only found NMDS plots, not regular PCA.
|
What is the difference in interpretation between adding ellipse.type = "confidence" in PCA biplots?
|
CC BY-SA 4.0
| null |
2023-05-05T19:18:44.467
|
2023-05-05T19:18:44.467
| null | null |
169437
|
[
"confidence-interval",
"pca",
"canonical-correlation",
"ellipse"
] |
615040
|
1
| null | null |
4
|
172
|
I'm curious if there are any common parametric distribution models for mixed discrete/continuous data. For illustration, suppose I have two random vectors, $X_c,X_d$, where $X_c$ is continuous and $X_d$ is discrete. I have data consisting of samples of $(X_c,X_d)$, and I'd like to do some density estimation. Ultimately the distribution of $X_c\vert X_d$ is what I would like, but I need to be able to vary $X_d$. I have a fair amount of data but it may be sparse in some areas, so it seems like a parametric model is a good place to start. But are there any standard parametric models for joint continuous/discrete data?
I could obviously go down the generative NN rabbit hole (GANs or VAEs w/ discrete-to-continuous encoding, for instance), but I'm curious about classical approaches as well.
|
Parametric models for mixed discrete/continuous data
|
CC BY-SA 4.0
| null |
2023-05-05T19:27:45.773
|
2023-05-07T17:57:37.670
| null | null |
28114
|
[
"density-estimation"
] |
615041
|
1
| null | null |
0
|
14
|
Suppose that I have $n\gg 500000$ observations, and I specify
$$\mathbf{y} \sim \text{Normal}(\mathbf{X}\boldsymbol{\beta},\sigma^2_y\boldsymbol{\Sigma}_y + \tau \mathbf{K}\mathbf{K}^T),$$ where $\boldsymbol{\Sigma}_y$ is a diagonal matrix and $\mathbf{K}$ is a mapping matrix where there are two 1's in each row and 0's elsewhere. When attempting to evaluate this normal density for $\mathbf{y}$, the computation is incredibly slow - and that's with me using the symmetric Woodbury matrix identity. So, I'm wondering what type of dimension reduction techniques may be helpful in this situation, where the covariance matrix for $\mathbf{y}$ is sparse.
|
What are some dimension reduction techniques applicable to sparse covariance matrices?
|
CC BY-SA 4.0
| null |
2023-05-05T19:38:49.307
|
2023-05-05T19:38:49.307
| null | null |
257939
|
[
"dimensionality-reduction",
"multivariate-normal-distribution"
] |
615043
|
1
| null | null |
2
|
26
|
As far as I know, for both ROC and PR curves, classifier performance is usually measured by the AUC. This implies that classifiers with equivalent performance might have different ROC/PR curves.
I am interested in modifying the shape of the PR curve without a general improvement in performance. Specifically, is there a way to improve the precision at low recalls (say, recall lower than 0.25) at the cost of a precision reduction at higher recalls?
I have some thoughts about overfitting the model, allowing it to learn in high detail the class I want to detect (this also requires increasing the detected class's weight). Does that make sense?
|
Is there a way to affect the shape of the precision-recall curve?
|
CC BY-SA 4.0
| null |
2023-05-05T19:48:35.867
|
2023-05-07T19:00:53.277
|
2023-05-07T19:00:53.277
|
211655
|
211655
|
[
"precision-recall",
"average-precision"
] |