49,901 | Do examples in the training and test sets have to be independent?
Training on multiple records/observations from the same user/subject is ok, but you want your test data independent of your training data.
For example, you might imagine two approaches to constructing a test set (e.g. for cross-validation):
Record wise: select records at random and assign to test set.
Subject wise: select subjects at random, and assign all their records to a test set.
If the records within a subject aren't independent, the two approaches can give extremely different results, and one should almost certainly do the latter, selecting subjects at random to place in the test set.
What can go wrong with record-wise test set construction?
To take an extreme example, imagine that all the records for each subject were exactly the same and each subject has numerous records. Then with record-wise validation, you'd be training on the test set! If your algorithm overfit the data, you'd get amazing performance on the test set but horrible performance when you actually see new, independent data.
Training and testing on the same set of users can give horribly misleading results that will not predict out of sample performance on new users.
Another example, here's a recent paper that discusses how record-wise cross-validation can go totally wrong in the clinical context: http://biorxiv.org/content/early/2016/06/19/059774.full.pdf+html
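For illustration, here is a minimal base-R sketch of the two splitting schemes on a synthetic data set with several correlated records per subject; all object names are made up for the example.
## Synthetic data: 50 subjects, 10 records each, with a subject-level effect
set.seed(1)
subject <- rep(1:50, each = 10)
records <- data.frame(subject = subject,
                      x = rnorm(500) + rnorm(50)[subject])
## Record-wise split: records from the same subject can land on both sides
test_records <- sample(nrow(records), 100)
## Subject-wise split: pick subjects at random, take all of their records
test_subjects <- sample(unique(records$subject), 10)
test_rows     <- which(records$subject %in% test_subjects)
## How many test-set subjects also appear in the training data?
sum(unique(records$subject[test_records]) %in% records$subject[-test_records])  # usually > 0: leakage
sum(unique(records$subject[test_rows])    %in% records$subject[-test_rows])     # always 0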
49,902 | Are Maximum Likelihood Estimators asymptotically unbiased?
I have presented in this answer the issues surrounding the concept of "asymptotic unbiasedness". In short, the issue is whether it is defined as "convergence of the sequence of first moments to the true value", or as "asymptotic distribution having expected value equal to the true value" (of the parameter under estimation).
Under the second approach (which is the more intuitive in my view, while the first, the one the OP discusses, can be called "unbiasedness in the limit"), we have that asymptotic consistency of an estimator is also sufficient for asymptotic unbiasedness. Then, when the MLE is consistent (and it usually is), it will also be asymptotically unbiased.
And no, asymptotic unbiasedness, as I use the term, does not guarantee "unbiasedness in the limit" (i.e. convergence of the sequence of first moments).
The conditions for the limit of the sequence of moments to equal the corresponding moment of the asymptotic distribution can be found here, and here.
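As a small illustration of the first sense ("unbiasedness in the limit", i.e. convergence of the sequence of first moments), here is a base-R simulation of my own choosing, not from the original question: the MLE of an exponential rate, $1/\bar x$, has expectation $\frac{n}{n-1}\lambda$ for finite $n$, which converges to $\lambda$.
set.seed(2)
lambda <- 2
for (n in c(5, 20, 100, 1000)) {
  est <- replicate(10000, 1 / mean(rexp(n, rate = lambda)))
  cat(sprintf("n = %4d   mean of MLE = %.3f   theory n/(n-1)*lambda = %.3f\n",
              n, mean(est), n / (n - 1) * lambda))
}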
49,903 | Am I introducing bias by assuming birthdate is middle of month?
Is my derived age in days more accurate than calculating age in whole months?
Obviously not. You cannot make a measurement more precise than it was actually measured. Imagine that, besides assuming the middle of the month, you also assumed that the patients were born at 12:30, 30 seconds, 30 milliseconds, etc. - would it make your measurement super precise? It is impossible to de-aggregate aggregated data. Notice that in the long run your procedure would yield the same results as picking a uniformly random day for each patient - would such a procedure make the measurement any more accurate?
49,904 | Am I introducing bias by assuming birthdate is middle of month?
I'm just going to state the obvious and say that I agree with you and I can't evaluate your colleague's explanation without seeing it. For (1), I can't even tell what direction the bias should be expected to be in. Are people substantially more likely to be born near the beginning or end of a month? Not that I know of. For (2), it's obvious that using your method will give you slightly more accurate estimates than computing in whole months. The increase in accuracy may be so slight as to make no difference, but certainly your method won't hurt.
49,905 | Identical variable importance values for different model types
Alright, looking into the code of varImp.train, I see that in the case of a classification problem the variable importance is just computed via the filterVarImp function. So the variables are just ranked by the AUC, as stated in the varImp documentation under model-independent metrics.
I tested it by calling varImp on each of my models and comparing the variable importance values with the ones computed via filterVarImp on the training data.
## Compute variable importance via filter approach
varImps.filtered <- filterVarImp(trainData, trainClasses)
varImps <- list(knn=varImp(models$knn, scale=F),
pda=varImp(models$pda, scale=F))
## Sort variable importance by their average value
## over all classes in decreasing order.
varImps.filtered$Mean <- apply(varImps.filtered, 1, mean)
varImps.filtered <- varImps.filtered[with(varImps.filtered, order(-Mean)), ]
varImps.filtered$Mean <- NULL
... and surprise surprise, it is exactly the same.
> varImps$knn
ROC curve variable importance
variables are sorted by maximum importance across the classes
Class_1 Class_2 Class_3 Class_4 Class_5
V5 0.7094 0.9912 0.9431 0.9231 0.9912
V3 0.3706 0.5631 0.9744 0.9744 0.7831
V9 0.9725 0.9619 0.9725 0.8125 0.8988
V8 0.6887 0.6644 0.8650 0.9531 0.9531
V4 0.9325 0.9194 0.9325 0.6044 0.3138
V10 0.7250 0.8119 0.8544 0.8544 0.8331
V7 0.8169 0.7606 0.8244 0.7025 0.8244
V6 0.3650 0.5775 0.7838 0.8081 0.8081
V11 0.6194 0.7662 0.7662 0.6000 0.6506
V2 0.5138 0.7412 0.7412 0.5938 0.4031
U5 0.5609 0.5731 0.5731 0.4944 0.4834
U4 0.5259 0.5531 0.5531 0.5103 0.5109
U3 0.5134 0.5134 0.5103 0.5384 0.5384
U2 0.5384 0.5203 0.5216 0.5384 0.5219
U1 0.4853 0.5312 0.5312 0.5238 0.4872
> varImps.filtered
Class_1 Class_2 Class_3 Class_4 Class_5
V9 0.9725000 0.9618750 0.9725000 0.8125000 0.8987500
V5 0.7093750 0.9912500 0.9431250 0.9231250 0.9912500
V8 0.6887500 0.6643750 0.8650000 0.9531250 0.9531250
V10 0.7250000 0.8118750 0.8543750 0.8543750 0.8331250
V7 0.8168750 0.7606250 0.8243750 0.7025000 0.8243750
V4 0.9325000 0.9193750 0.9325000 0.6043750 0.3137500
V3 0.3706250 0.5631250 0.9743750 0.9743750 0.7831250
V11 0.6193750 0.7662500 0.7662500 0.6000000 0.6506250
V6 0.3650000 0.5775000 0.7837500 0.8081250 0.8081250
V2 0.5137500 0.7412500 0.7412500 0.5937500 0.4031250
U5 0.5609375 0.5731250 0.5731250 0.4943750 0.4834375
U4 0.5259375 0.5531250 0.5531250 0.5103125 0.5109375
U2 0.5384375 0.5203125 0.5215625 0.5384375 0.5218750
U3 0.5134375 0.5134375 0.5103125 0.5384375 0.5384375
U1 0.4853125 0.5312500 0.5312500 0.5237500 0.4871875
My goal was to come up with a model-specific, stable feature selection method. The only way I see to achieve this now is by utilizing caret's inbuilt feature selection methods like "Recursive Feature Elimination (RFE)" and "Selection by Filter (SBF)". As far as I understand it, however, RFE only supports a handful of models in caret out of the box. So I might be forced to implement the rfeControl$functions myself.
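A possible starting point is sketched below (untested here, so treat it as an assumption about the API rather than a recipe): caret's generic caretFuncs let rfe wrap an arbitrary train model, so subset sizes are chosen by the model's own resampled performance, even though the per-iteration ranking may still fall back to the filter-based importance for models like knn. Object names (trainData, trainClasses) follow the code above.
## Sketch: RFE wrapped around an arbitrary caret model via caretFuncs
library(caret)
rfe_ctrl <- rfeControl(functions = caretFuncs, method = "cv", number = 5)
rfe_knn  <- rfe(trainData, trainClasses,
                sizes = c(2, 4, 8, 12),     # candidate subset sizes
                rfeControl = rfe_ctrl,
                method = "knn")             # passed through to train()
predictors(rfe_knn)                         # variables in the selected subset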
49,906 | Variance computed using Taylor series does not agree with numerical experiment
It does not matter whether the mean squared error is decreasing with $n$. All that matters is that it can be made arbitrarily small. Here is a simple demonstration.
Fixing $\theta$, let $s=\sin(\theta)$ and $c=\cos(\theta)$. The estimator $\hat\theta$ is the slope of the ray through the origin and the point
$$(c + Y_n, s+X_n).$$
Let $\epsilon \gt 0$. Consider the disk of radius $\sin\delta$ around $(c,s)$. Euclidean geometry shows us this disk is contained within the wedge at the origin with angles $\theta-\delta$ to $\theta+\delta$. By choosing $n$ sufficiently large, the chance that $(c+Y_n, s+X_n)$ lies within that disk can be made to exceed $1-\alpha$ for any tiny $\alpha$. In fact, because $n(X_n^2 + Y_n^2)$ follows a $\chi^2_2$ distribution,
$$n = \lceil\frac{(\chi^2_2)^{-1}(1-\alpha)}{(\sin\delta)^2}\rceil$$
will work.
In this sketch, $(c,s)$ is at the red dot, the disk is drawn in black, the wedge in blue, and 10,000 simulated values of $(c+Y_n, s+X_n)$ are shown. $\epsilon$ was equal to $1/50$ here, corresponding to a root mean error in the slope estimate of no more than $\sqrt{1/50}\approx 0.14$. With $\alpha=0.05$, the value of $n$ was $300$, corresponding to a standard deviation of $\sqrt{1/300} \approx 0.058$ in the individual coordinates of the simulated points.
Now, within that disk the angular error does not exceed $\delta$ and outside that disk the angular error cannot exceed $\pi$ under any circumstance (since the inverse tangent always produces values in the range $[-\pi/2, \pi/2]$). This bounds the expected squared error:
$$\mathbb{E}([\hat\theta - \theta]^2) \le (1-\alpha)(\sin(\delta))^2 + \alpha \pi^2.$$
By choosing, say, $\delta = \arcsin{\sqrt{\epsilon}}$ and $\alpha=\epsilon/\pi^2$, the right hand side will be less than $2\epsilon$. Because $\epsilon \gt 0$ was arbitrary, the limiting mean square error is zero.
(In the figure, the mean square error was $0.0033 \ll 0.04 = 2\epsilon$.)
The argument concerning angular error in the disk assumed the inverse tangent was continuous within a neighborhood of $\theta$. That will not be the case for $\theta=\pm \pi/2$, but a slight change in the definition of the inverse tangent in cases where $\theta$ is close to these values will fix the problem.
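For reference, a small base-R simulation of this setup (assuming, as the statement $n(X_n^2+Y_n^2)\sim\chi^2_2$ suggests, that $X_n$ and $Y_n$ are independent $N(0,1/n)$) reproduces the numbers quoted above:
set.seed(1)
theta <- pi / 4; sinth <- sin(theta); costh <- cos(theta)
eps <- 1 / 50; alpha <- 0.05
n <- ceiling(qchisq(1 - alpha, df = 2) / eps)   # = 300, as in the text
X <- rnorm(10000, sd = sqrt(1 / n))
Y <- rnorm(10000, sd = sqrt(1 / n))
theta_hat <- atan2(sinth + X, costh + Y)        # angle of the ray through the origin
mean((theta_hat - theta)^2)                     # mean square error, around 0.0033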
49,907 | What is the best way to detect repetition in xyz data for purposes of splitting data?
It sounds like you have a set of known patterns and want to find places in your signal where these patterns occur. A typical way of doing this is using the cross correlation. In this approach, you'd compute the cross correlation of your pattern with the signal. You can think of this as repeatedly shifting the pattern by some lag to align it with a different portion of the signal, then taking the dot product of the pattern and the local portion of the signal. This gives a measure of the similarity between the pattern and the local signal at each lag. When the signal matches the pattern, this will manifest as a peak in the cross correlation.
Different variants of the cross correlation exist. For example, some versions locally scale and/or normalize the signals. This can be useful if you want your comparison to be shift/scale invariant (e.g. you want the shape of the signal to be the same, but don't care about the actual magnitude; in the case of detecting accelerometer patterns, this might correspond to performing the same motion but more or less vigorously).
The cross correlation will naturally fluctuate, reflecting varying degrees of similarity between the pattern and signal. So, the question is how to distinguish peaks that represent a 'true match' from those that reflect partial similarity. You'll have to define this based on the variant of cross correlation you use. For example, if the pattern exactly matches the signal at some offset, the magnitude of the unnormalized cross correlation will equal the squared $l_2$ norm of the pattern (i.e. the dot product of the pattern with itself). Some normalized versions of the cross correlation will have maximum amplitude 1. Another thing you'd need to define is some tolerance, to account for noise in the signal (you probably don't want to require an exact match).
Another possibility is that you want to use some other measure of similarity (e.g. the euclidean distance). In this case, you could use peaks in the cross correlation to identify candidate matches, then check them using whatever distance metric/similarity function you like.
One of the main reasons to use cross correlation is that it's very computationally efficient. For large signals, you can gain even more speed by computing it in the Fourier domain, using FFTs. Many packages/libraries are available to do this.
The cross correlation approach (and FFT acceleration) will also work for higher dimensional signals (e.g. images).
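Here is a minimal base-R sketch of the FFT-accelerated cross correlation on a synthetic one-dimensional signal; the names and the embedded pattern are purely illustrative.
set.seed(42)
pattern <- sin(seq(0, 2 * pi, length.out = 50))
signal  <- rnorm(1000, sd = 0.2)
signal[301:350] <- signal[301:350] + pattern        # embed the pattern at lag 300
N  <- length(signal)
p0 <- c(pattern, rep(0, N - length(pattern)))       # zero-pad the pattern to the signal length
## Circular cross correlation via the FFT (conjugate of the pattern spectrum)
cc <- Re(fft(fft(signal) * Conj(fft(p0)), inverse = TRUE)) / N
which.max(cc) - 1                                   # lag of the best match, about 300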
49,908 | What is the best way to detect repetition in xyz data for purposes of splitting data?
If you know the patterns ahead of time, you can just make a hash table of the patterns, and pattern detection is just a matter of hashing segments of the input signal and looking for collisions. Conventional hashing will only work for exact, noise-free input signals. Locality-sensitive hashing can be made insensitive to small variations in the input signal by checking that the incoming signal is sufficiently near the target.
49,909 | What is the best way to detect repetition in xyz data for purposes of splitting data?
This is exactly the definition of time series motifs.
Here is a tutorial on the topic
http://www.cs.unm.edu/~mueen/Tutorial/ICDMTutorial3.ppt
eamonn
49,910 | What is the best way to detect repetition in xyz data for purposes of splitting data?
If the activity is sinusoidal within a specific frequency band, you could use the frequency to classify patterns. To achieve this you perform a fast Fourier transform (FFT) on the data and search for global maxima in the resulting power spectrum. Using different band-pass filters you can target specific frequency bands. Alternatively, you need to search for local maxima in the whole-band power spectrum. Just be advised that this method will not take phase into account.
If the activity is characterized by an irregular but predictable pattern, you could search for how many times the signal flips (the amplitude changes direction) and set up different maxima and minima criteria or time range criteria (e.g. a positive flip followed by a negative flip at at least -0.5 within 300 ms).
If you baseline correct the data - for the event at 0 ms - relative to some prior point (e.g. -100 ms to 0 ms) you can count how many times zero was crossed to get the length of the pattern. The baseline correction takes the global maximum and minimum of the baseline range and centers the signal on $$\frac{\max-\min}{2}$$
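A minimal base-R sketch of the FFT approach described above (synthetic signal, illustrative sampling rate):
fs <- 100                                   # sampling rate in Hz (assumed)
t  <- seq(0, 10, by = 1 / fs)
x  <- sin(2 * pi * 7 * t) + rnorm(length(t), sd = 0.3)   # 7 Hz activity plus noise
spec  <- Mod(fft(x))^2                      # power spectrum
freqs <- (seq_along(x) - 1) * fs / length(x)
keep  <- 2:(length(x) %/% 2)                # skip the DC term, keep positive frequencies
freqs[keep][which.max(spec[keep])]          # dominant frequency, about 7 Hz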
49,911 | What is the best way to detect repetition in xyz data for purposes of splitting data?
I agree with @user20160 that cross correlation (CC) will most likely be a valid and fast solution to your problem (in combination with scaling of your reference pattern + selecting the window position where correlation peaks occur).
What I want to point out is: if you run into problems because the pattern you search for is irregularly stretched or condensed in your time series, consider using Dynamic Time Warping (DTW) instead of CC. DTW can compensate for any stretching or condensing of your pattern using its warping property - which will be helpful in such cases.
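A short sketch of the idea, assuming the third-party dtw package for R is installed (the series here are synthetic and only meant to show that warping absorbs a change in duration):
library(dtw)
pattern   <- sin(seq(0, 2 * pi, length.out = 50))
stretched <- sin(seq(0, 2 * pi, length.out = 80))   # same shape, different duration
alignment <- dtw(stretched, pattern)
alignment$distance                                  # small: the warping path absorbs the stretching
## A fixed-lag comparison (plain cross correlation / Euclidean distance) would
## penalise the stretching instead of compensating for it.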
49,912 | Can a classifier trained with oversampled data be used to classify unbalanced data
This is actually an interesting question that comes up a lot with medical data. One way to understand oversampling for classification of unbalanced data is this: because oversampling actively biases how the data are sampled, the results will be biased. When compensating for the minority class, remember that the goal of classification is to identify characteristics that can determine which class an outcome belongs to, and then address how the independent variables interact.
When oversampling data for classification, remember to use cross-validation properly and to oversample within the cross-validation (i.e. inside each training fold) rather than before it. This will give you more honest sensitivity and specificity estimates and limit (though not eliminate) the bias and overfitting that come from combining oversampling and cross-validation improperly.
Here is a good reference using preterm births: http://www.marcoaltini.com/blog/dealing-with-imbalanced-data-undersampling-oversampling-and-proper-cross-validation
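A base-R sketch of the "oversample inside each training fold" idea (synthetic data, a plain logistic regression, and illustrative names; not the procedure from the linked post):
set.seed(7)
n <- 500
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-2 + x))            # imbalanced outcome (~12% positives)
folds <- sample(rep(1:5, length.out = n))    # 5-fold assignment
acc <- numeric(5)
for (k in 1:5) {
  tr <- which(folds != k); te <- which(folds == k)
  ## oversample the minority class within the training fold only
  minority <- tr[y[tr] == 1]
  majority <- tr[y[tr] == 0]
  tr_bal   <- c(majority, sample(minority, length(majority), replace = TRUE))
  fit  <- glm(y ~ x, family = binomial, data = data.frame(x = x[tr_bal], y = y[tr_bal]))
  pred <- predict(fit, newdata = data.frame(x = x[te]), type = "response") > 0.5
  acc[k] <- mean(pred == (y[te] == 1))
}
mean(acc)                                    # CV accuracy on the untouched (imbalanced) test folds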
49,913 | Can a classifier trained with oversampled data be used to classify unbalanced data
If your classifier works well in production, it is legitimate to use it.
You can build it by whatever method you like.
Working on a balanced dataset is a good way to build a model that discriminates between the majority and minority classes. However, as @akash87 and you noted, it might cause a bias.
You might be lucky and, despite the bias, your model will perform well on the production data. In order to know that, evaluate it on the original dataset too. For the use of different datasets for learning and validation, see here
In the more common scenario, the bias hurts performance and you should adapt the model back to the production distribution. You can do this by learning a new model that performs the adaptation. For details see here.
You might be interested in this Editorial: Special Issue on Learning from Imbalanced Data Sets and Learning from Imbalanced Data
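One simple closed-form way to adapt predictions back to the production class distribution (a prior-shift correction, sketched here as an illustration and not necessarily the method the linked posts describe) is to rescale the predicted odds by the ratio of production to training prior odds:
## p: predicted probability from the model trained on rebalanced data
## pi_train, pi_prod: positive-class rates in the (oversampled) training data and in production
adjust_to_production <- function(p, pi_train, pi_prod) {
  r <- (pi_prod / (1 - pi_prod)) / (pi_train / (1 - pi_train))  # ratio of prior odds
  p * r / (p * r + (1 - p))                                     # rescaled odds, mapped back to a probability
}
adjust_to_production(p = 0.6, pi_train = 0.5, pi_prod = 0.1)    # illustrative values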
You can build it in whatever method you like.
Working on a balanced dataset is a good way to build a model that differ be | Can a classifier trained with oversampled data be used to classify unbalanced data
If your classifier works well at production mode, it is legitimate to use it.
You can build it in whatever method you like.
Working on a balanced dataset is a good way to build a model that differ between the majority and minority classes. However, as @akash87 and you noted, it might cause a bias.
You might have been lucky and though the bias your model will perform well on the production data. In order to know that evaluate it on the original dataset too. For the usage of different datasets in order to learn and validate see here
In the more common scenario, the bias hurts the performance and you should adapt the model back to the production distribution. You can adapt your model back to the production distribution by learning a new model that will do this adaptation. For details see here.
You might be interested in this Editorial: Special Issue on Learning from Imbalanced Data Sets and Learning from Imbalanced Data | Can a classifier trained with oversampled data be used to classify unbalanced data
If your classifier works well at production mode, it is legitimate to use it.
You can build it in whatever method you like.
Working on a balanced dataset is a good way to build a model that differ be |
49,914 | Determining proper K value for Elo rating
I briefly discussed this solution with a professional mathematician, and he didn't dismiss it.
K depends on n, the number of competitors:
$$
K = An^2 + Bn + C
$$
Then I worked through various values on my data set and assumed that the best K should minimize the differences between expected results and actual results (makes sense, doesn't it?) data-set wide. It turned out that the best polynomial on my data set had $A=0$ and $C=0$. This means that I could actually treat the whole tournament as a round robin after all.
In my case the optimal value was $B=42$ (I required K to be a natural number), which is nice but only a coincidence.
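A base-R sketch of the selection procedure described above, on synthetic pairwise results (column names and the synthetic data are illustrative): replay the whole history with a candidate K, accumulate the squared gap between actual and expected scores, and keep the K with the smallest total.
set.seed(1)
players <- paste0("P", 1:6)
games <- data.frame(player_a = sample(players, 300, replace = TRUE),
                    player_b = sample(players, 300, replace = TRUE),
                    stringsAsFactors = FALSE)
games <- games[games$player_a != games$player_b, ]
games$score_a <- sample(c(1, 0.5, 0), nrow(games), replace = TRUE)

expected_score <- function(r_a, r_b) 1 / (1 + 10^((r_b - r_a) / 400))

replay_error <- function(K, games, start = 1500) {
  ratings <- setNames(rep(start, length(players)), players)
  err <- 0
  for (i in seq_len(nrow(games))) {
    a <- games$player_a[i]; b <- games$player_b[i]
    e <- expected_score(ratings[a], ratings[b]); s <- games$score_a[i]
    err <- err + (s - e)^2
    ratings[a] <- ratings[a] + K * (s - e)   # standard Elo update
    ratings[b] <- ratings[b] - K * (s - e)
  }
  err
}

Ks <- 1:100
Ks[which.min(sapply(Ks, replay_error, games = games))]   # best integer K on this data set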
49,915 | Determining proper K value for Elo rating
Treating each multi-player game as a group of 1-vs-1 matches between all pairs of players is not the best option, and it leads to the problems you describe.
If you're interested in use (more than in development), you could give rankade, our ranking system, a try (here's a comparison of the best-known ranking systems). Rankade, as a post-Elo ranking system, handles a multiplayer match as a single multiplayer match (and not as many artificial 1-on-1 matches), with a variable $K$ that depends on the number of players (and many other factors, such as playing frequency and group characteristics, and more).
49,916 | How can I experiment with Lagrange multiplier in PCA optimization?
I think I worked out the answer myself, but I hope some experts can confirm it.
The confusion is that in the CVX book we convert an optimization problem with constraints into another optimization problem without constraints and solve the dual problem, but in the PCA optimization we cannot.
For example, on page 227, we convert
$$
\underset{x}{\text{minimize}}~~ x^\top x \\
\text{s.t.}~~~~~~ Ax=b
$$
into maximizing the dual function $g(v)=-(1/4)v^\top A A^\top v -b^\top v$, which is
$$
\underset{v}{\text{maximize}}~~\left(-(1/4)v^\top A A^\top v -b^\top v \right)
$$
The PCA optimization problem has the Lagrangian (for an equality constraint we can use $-\lambda$)
$$
\mathcal{L}(\mathbf w,\lambda)=\mathbf w^\top \mathbf{Cw}-\lambda(\mathbf w^\top \mathbf w-1)
$$
For fixed $\lambda$, we take the partial derivative and set it to $\mathbf 0$:
$$
\frac{\partial \mathcal{L}}{\partial \mathbf w}=\mathbf 0=2\mathbf {Cw}-2\lambda\mathbf w
$$
which is the eigenvector equation
$$
\mathbf {Cw}=\lambda\mathbf w
$$
As pointed out by Matthew Gunn in the comments, in the PCA problem the objective is not convex (see this discussion), so we should not try to solve the original problem through its dual.
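A quick numerical check in R that the stationarity condition above picks out the top eigenvector, i.e. that no unit vector beats it on the objective $\mathbf w^\top \mathbf{Cw}$:
set.seed(1)
X <- matrix(rnorm(200 * 5), 200, 5)
C <- cov(X)
e <- eigen(C)
w <- e$vectors[, 1]                      # top eigenvector, satisfies C w = lambda w
c(objective = drop(t(w) %*% C %*% w), top_eigenvalue = e$values[1])
## no random unit vector should exceed the top eigenvalue
max(replicate(1000, { u <- rnorm(5); u <- u / sqrt(sum(u^2)); drop(t(u) %*% C %*% u) }))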
The confusion is that, in CVX book we are converting one optimization problem with constraints to another optimization problem wit | How can I experiment with Lagrange multiplier in PCA optimization?
I think I got the answer by myself but wish some experts can confirm.
The confusion is that, in CVX book we are converting one optimization problem with constraints to another optimization problem without constraints and solve the dual problem. But in PCA optimization we cannot.
For example, page 227, we convert
$$
\underset{x}{\text{minimize}}~~ x^\top x \\
\text{s.t.}~~~~~~ Ax=b
$$
into maximize the dual function $g(v)=-(1/4)v^\top A A^\top v -b^\top v$, which is
$$
\underset{x}{\text{maximize}}~~\left(-(1/4)v^\top A A^\top v -b^\top v \right)\\
$$
In PCA optimization problem, problem has Lagrangian (for equality constraint we can use $-\lambda$)
$$
\mathcal{L}(\mathbf w,\lambda)=\mathbf w^\top \mathbf{Cw}-\lambda(\mathbf w^\top \mathbf w-1)
$$
For fixed $\lambda$, we get partial derivative and set to $\mathbf 0$.
$$
\frac{\partial \mathcal{L}}{\partial \mathbf w}=\mathbf 0=2\mathbf {Cw}-2\lambda\mathbf w
$$
which is the eigenvector equation
$$
\mathbf {Cw}=\lambda\mathbf w
$$
As pointed out by Matthew Gunn in the comment, PCA problem the objective is not convex see this discussion. Therefore we should not try to minimize dual function to solve the original problem. | How can I experiment with Lagrange multiplier in PCA optimization?
I think I got the answer by myself but wish some experts can confirm.
The confusion is that, in CVX book we are converting one optimization problem with constraints to another optimization problem wit |
49,917 | Why take the gradient of the moments (mean and variance) when using Batch Normalization in a Neural Network?
Derivatives of the moments are used for backpropagation.
Alongside the two derivatives of the moments on page 4, the paper gives the derivative with respect to the input, which makes use of the derivatives of the moments, $$\frac{\partial l}{\partial x}=\frac{\partial l}{\partial \hat{x}}\cdot\frac{1}{\sqrt{\sigma^2+\epsilon}}+\frac{\partial l}{\partial \sigma^2}\cdot\frac{2(x-\mu)}{m}+\frac{\partial l}{\partial \mu}\cdot\frac{1}{m},$$
which will be used for computing the derivatives of the parameters in previous layers by the chain rule.
IMO, the moments are treated as neither parameters nor constants; they can be thought of as intermediate results of computing the output of the layer.
49,918 | Why take the gradient of the moments (mean and variance) when using Batch Normalization in a Neural Network?
There's no reason for the moments to be thought of as constants.
The quote
we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces estimates of the mean and variance of each activation.
doesn't imply that they are constant, it just makes them scalars.
Another way to look at it is that they are constant only for a mini-batch (but so are the input and the output!). Furthermore, the output (and therefore the loss) is dependent on the change from mini-batch to mini-batch, and thus the derivative is logically non-zero.
As an aside, these quantities are only used to calculate the gradient towards inputs, as mentioned by @dontloo .
As for your last question, they're variables, but completely tied to the input variables and the network's parameters. (Not only the input variables, due to batch normalization being applicable (and mostly interesting) in-network.)
The quote
we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces esti | Why take the gradient of the moments (mean and variance) when using Batch Normalization in a Neural Network?
There's no reason for the moments to be thought of as constants.
The quote
we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces estimates of the mean and variance of each activation.
doesn't imply that they are constant, it just makes them scalars.
Another way to look at it is that there are constant only for a mini-batch (but so are the input and the output!). Furthermore, the output (and therefore the loss) is dependant on the change from mini batch to mini batch, and thus the derivative is logically non-zero.
As an aside, these quantities are only used to calculate the gradient towards inputs, as mentioned by @dontloo .
As for your last question, they're variables, but completely tied to the input variables and the network's parameters. (Not only the input variables due to batch normalization being appliable (and mostly interesting) in-network) | Why take the gradient of the moments (mean and variance) when using Batch Normalization in a Neural
There's no reason for the moments to be thought of as constants.
The quote
we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces esti |
49,919 | Why take the gradient of the moments (mean and variance) when using Batch Normalization in a Neural Network?
There is a reason why you should propagate the gradient through the mean and variance.
I will try to show you a simple example.
Let's just start with the mean. Imagine that you have two samples (with one attribute) with values $x_1$ and $x_2$.
Now the mean is $m = \frac{x_1 + x_2}{2}$ and the output after subtracting the mean would be $y_1 = x_1 - m$ and $y_2 = x_2 - m$.
Now consider the backward pass and imagine, that both of those gradients $\frac{\partial L}{\partial y_1}$ and $\frac{\partial L}{\partial y_2}$ are equal to one. If you do not pass gradient through $m$, then also $\frac{\partial L}{\partial x_1}$ and $\frac{\partial L}{\partial x_2}$ would be equal to one. And you will increase both inputs. But this would have zero effect in the next iteration since you are subtracting the mean! It would actually be detrimental since lower layers would get some confusing signal.
Now if you pass gradients through the mean, then $\frac{\partial L}{\partial m} = -\left( \frac{\partial L}{\partial y_1} + \frac{\partial L}{\partial y_2} \right)$,
which is $-2$ in our example.
And $\frac{\partial L}{\partial x_1} = \frac{\partial L}{\partial y_1} + \frac{1}{2} \frac{\partial L}{\partial m}$, which is zero in our example.
Thus we can say that passing the gradient through the mean prevents the update from trying to change the mean of the inputs, which would be meaningless.
And we can say the same thing about passing gradients through the variance, but the demonstration would be much more complicated.
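A quick finite-difference check of the two-sample example in base R, taking the loss to be $L = y_1 + y_2$ so that both upstream gradients equal one:
L_fn <- function(x) sum(x - mean(x))   # y_i = x_i - mean(x), L = y_1 + y_2
x <- c(1.3, -0.7)
h <- 1e-6
grad <- sapply(1:2, function(i) {
  e <- numeric(2); e[i] <- h
  (L_fn(x + e) - L_fn(x - e)) / (2 * h)
})
grad   # both entries are 0, matching dL/dx_i = dL/dy_i + (1/2) * dL/dm = 1 - 1 = 0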
49,920 | Is data visualization a sufficient indication for the separability of the data? What are other indications of data separation?
There is an asymmetry worth noting here.
If a PCA plot shows distinct, separated clusters, then it is clear evidence for the separability of the data. But an absence of this kind of structure in the PCA plot (such as in your example) is not evidence for a lack of separability.
This is because (as pointed out in the comments) your 2-dimensional plot above omits information in the dataset, assuming it contains >2 dimensions. You may simply be looking at the wrong dimensions! There is no rule that says that the data pattern or structure you are interested in must show up in the first two principal components; they are merely the dimensions with the most variation in the dataset. It is entirely possible for dimensions with less variation (i.e. principal components 3, or 4, or whatever) to be the ones on which the data are clearly separable.
If you are interested in separability and identifying the dimensions that contribute to this, PCA might not be the most useful tool. As suggested by @naught101, a clustering approach is likely to be more useful.
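To make the point concrete, here is a small base-R toy example in which the two classes are essentially indistinguishable on PC1 and PC2 but cleanly separated along the low-variance third component:
set.seed(3)
n <- 200
class  <- rep(0:1, each = n / 2)
noise1 <- rnorm(n, sd = 10)              # dominates PC1
noise2 <- rnorm(n, sd = 8)               # dominates PC2
signal <- class + rnorm(n, sd = 0.1)     # low variance, but separates the classes
pc <- prcomp(cbind(noise1, noise2, signal))
## standardised class separation on each component: tiny for PC1/PC2, huge for PC3
sep <- function(s) abs(diff(tapply(s, class, mean))) / sqrt(mean(tapply(s, class, var)))
round(apply(pc$x, 2, sep), 2)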
If a PCA plot shows distinct, separated clusters, then it is clear evidence for the separability of the data. But an absence of this kind of structure in the | Is data visualization a sufficient indication for the separability of the data? What are other indications of data separation?
There is an asymmetry worth noting here.
If a PCA plot shows distinct, separated clusters, then it is clear evidence for the separability of the data. But an absence of this kind of structure in the PCA plot (such as in your example) is not evidence for a lack of separability.
This is because (as pointed out in the comments) your 2-dimensional plot above omits information in the dataset, assuming it contains >2 dimensions. You may simply be looking at the wrong dimensions! There is no rule that says that the data pattern or structure you are interested in must show up in the first two principal components; they are merely the dimensions with the most variation in the dataset. It is entirely possible for dimensions with less variation (i.e. principal components 3, or 4, or whatever) to be the ones on which the data are clearly separable.
If you are interested in separability and identifying the dimensions that contribute to this, PCA might not be the most useful tool. As suggested by @naught101, a clustering approach is likely to be more useful. | Is data visualization a sufficient indication for the separability of the data? What are other indic
There is an asymmetry worth noting here.
If a PCA plot shows distinct, separated clusters, then it is clear evidence for the separability of the data. But an absence of this kind of structure in the |
49,921 | Interpretation of hidden states in HMM in the part-of-speech tagging task | Two cases:
supervised POS tagging: no need for EM, you can simply count to learn the emission and transition probabilities (2). Each hidden state is mapped to one single POS.
unsupervised POS tagging: the Baum-Welch (EM) algorithm is typically used to learn the emission and transition probabilities. To map the hidden states to actual POS, it depends on what information regarding the actual POS you have. You can look at the literature on POS induction to see how they map hidden states with gold POS tags, e.g. see (1) Section 3 Evaluation Measures.
Regarding the case of unsupervised POS tagging, below is a quick overview of typical evaluation strategies, taken from (3) which I was lucky to TA:
The problem
The hidden states of the HMM are supposed to correspond to parts of speech. However, the model is not given any linguistic information. Furthermore, we do not have the correspondence between numeric hidden states and actual POS tags, such as "noun" or "verb". To evaluate the accuracy of the HMM, we can use a matching algorithm, which attempts to construct a correspondence between hidden states and POS tags that maximizes the POS tagging accuracy. The goal of this matching is to see what POS each hidden state corresponds to.
Let's assume that the HMM has two possible hidden states, $y_1$ and $y_2$, and that there are three possible POS tags, N, V, and ADJ. Consider the two example sentences "Strong stocks are good" and "Big weak stocks are bad" (Figure 1). The HMM has assigned a hidden state ($y_1$ or $y_2$) to each word. In addition, we also know the correct POS tag for each word. Our goal is to use a matching algorithm to match each hidden state to the POS tag that would give us the highest score.
Matching
A matching is a bipartite graph between part-of-speech tags and hidden states. Two types of matchings can be performed: a 1-to-many matching or a 1-to-1 matching. The definitions of 1-to-many and 1-to-1 matchings are given below. Both types of matchings have different advantages and disadvantages when they are used to evaluate the performance of an unsupervised model.
1-to-Many Matching: In a 1-to-many matching, each hidden state is assigned the POS tag that gives the most correct matches. Each POS tag can be matched to multiple hidden states, hence the name "1-to-many".
1-to-1 Matching: In contrast, in a 1-to-1 matching, each POS tag can only be matched to a maximum of one hidden state. In this case, if the number of hidden states is greater than the number of tags, then some hidden states will not correspond to any tag. This will decrease the reported accuracy.
Evaluation
Given a matching, the accuracy of the HMM tagging can be computed against a gold-standard corpus, where each word has been assigned the correct POS tag (based on the Penn WSJ Treebank). The correct tag for a word is compared with the tag that is matched with the word's hidden state. If these two tags match, this means that the tagging is correct for this word.
To find a matching, we can create a table that contains the counts of each correct POS tag for each hidden state. Here is an example evaluation table for the example sentences from Figure 1.
The matching between hidden states and POS tags can then be found based on this table for both 1-to-many and 1-to-1 matchings.
Finding a good matching
As we can see, the accuracy of the HMM depends on the type of matching that is chosen (1-to-many or 1-to-1). To find a good matching,
we can use a greedy algorithm that chooses individual state to tag matchings one at a time and picks the new matching that maximally increases the accuracy at each step.
The algorithm terminates when no more matchings can be made. In a 1-to-1 matching, the algorithm terminates when either each POS tag has been assigned a hidden state or each hidden state has received a POS tag (whichever happens first). In a 1-to-many matching, the algorithm terminates when each hidden state has received a POS tag.
Example: 1-to-many Matching:
Consider the table from Figure 4. The highest count we see in the table is 3, which corresponds to $y_1$ being assigned the ADJ tag. Thus, we greedily assign state $y_1$ to ADJ. Since the highest count for $y_2$ is also ADJ, state $y_2$ is also assigned to the ADJ tag. $y_1$ tags 3 words correctly, $y_2$ tags 2 words correctly, and there are 9 words in total, so the best score from a 1-to-many matching is $\frac{5}{9}$. The bipartite matching is illustrated in Figure 5.
Example: 1-to-1 Matching
We know that the highest count for $y_1$ is 3, which corresponds to an adjective, so we again greedily assign state $y_1$ to ADJ. Even though state $y_2$ also has the highest count for adjectives, ADJ has already been matched to state $y_1$ so we cannot use it again. Thus, $y_2$ must be assigned to V, which has the highest count that is not yet assigned to any state. $y_1$ tags 3 words correctly and $y_2$ tags 1 word correctly, so the total score for a 1-to-1 matching is $\frac{4}{9}$. See Figure 8 for the matching.
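Both matchings can be computed mechanically from the evaluation table. The R sketch below uses a hypothetical count table chosen to be consistent with the worked example above (it is not the actual table from Figure 4):
# Hypothetical counts of gold tags per hidden state (rows = states, columns = tags)
counts <- matrix(c(3, 2, 1,
                   2, 0, 1),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("y1", "y2"), c("ADJ", "N", "V")))
total <- sum(counts)                 # 9 words in the two example sentences
# 1-to-many: every hidden state takes its best tag; tags may be reused
sum(apply(counts, 1, max)) / total   # (3 + 2) / 9
# Greedy 1-to-1: repeatedly take the largest remaining cell, then retire its row and column
m <- counts
correct <- 0
while (is.finite(max(m))) {
  best <- which(m == max(m), arr.ind = TRUE)[1, ]
  correct <- correct + m[best["row"], best["col"]]
  m[best["row"], ] <- -Inf
  m[, best["col"]] <- -Inf
}
correct / total                      # (3 + 1) / 9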
Pitfalls of the HMM
Fully unsupervised tagging models generally perform very poorly. The HMM computed with the EM algorithm tends to give a relatively uniform distribution of hidden states, while the empirical distribution for POS tags is highly skewed since some tags are much more common than others. As a result, any 1-to-1 matching will give a POS tag distribution that is relatively uniform, which results in a low accuracy score.
A 1-to-many matching can create a non-uniform distribution by matching
a single POS tag with many hidden states. However, as the number of hidden states increases, the model could overfit by giving each word in the vocabulary its own hidden state. If each word has its own hidden state, we would achieve 100 percent accuracy, but the model would not give us any useful information. For more details on the evaluation of HMM for POS tagging, see Why doesn't EM find good HMM POS-taggers? by Mark Johnson.
Improving Unsupervised Models
There are several modifications we can make to improve the accuracy of unsupervised models.
Supply the model with a dictionary and limit possibilities for each word: For example, we know that the word "train" can only be a verb or a noun. This limits the set of possible tags for each word, which will push the model in the correct direction.
Prototypes: Provide each POS tag with several representative words for the tag. Even just a few words for each tag will help push the model in the right direction.
(1) Christodoulopoulos, Christos, Sharon Goldwater, and Mark Steedman. "Two decades of unsupervised POS induction: How far have we come?." In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 575-584. Association for Computational Linguistics, 2010.
(2) Speech and Language Processing. Daniel Jurafsky & James H. Martin. Draft of February 19, 2015. Chapter 9 Part-of-Speech Tagging
https://web.stanford.edu/~jurafsky/slp3/9.pdf
(3) MIT 6.806/6.864 Advanced Natural Language Processing course. Lecture 9' scribe note. Fall 2015. Instructors: Prof. Regina Barzilay, and Prof. Tommi Jaakkola. TAs: Franck Dernoncourt, Karthik Rajagopal Narasimhan, Tianheng Wang. Scribes: Clare Liu, Evan Pu, Kalki Seksaria https://stellar.mit.edu/S/course/6/fa15/6.864/ | Interpretation of hidden states in HMM in the part-of-speech tagging task | Two cases:
supervised POS tagging: no need for EM, you can simply count to learn the emission and transition probabilities (2). Each hidden state is mapped to one single POS.
unsupervised POS tagging | Interpretation of hidden states in HMM in the part-of-speech tagging task
Two cases:
supervised POS tagging: no need for EM, you can simply count to learn the emission and transition probabilities (2). Each hidden state is mapped to one single POS.
unsupervised POS tagging: the Baum-Welch (EM) algorithm is typically used to learn the emission and transition probabilities. To map the hidden states to actual POS, it depends on what information regarding the actual POS you have. You can look at the literature on POS induction to see how they map hidden states with gold POS tags, e.g. see (1) Section 3 Evaluation Measures.
Regarding the case of unsupervised POS tagging, below is a quick overview of typical evaluation strategies, taken from (3) which I was lucky to TA:
The problem
The hidden states of the HMM are supposed to correspond to parts of speech. However, the model is not given any linguistic information. Furthermore, we do not have the correspondence between numeric hidden states and actual POS tags, such as "noun" or "verb". To evaluate the accuracy of the HMM, we can use a matching algorithm, which attempts to construct a correspondence between hidden states and POS tags that maximizes the POS tagging accuracy. The goal of this matching is to see what POS each hidden state corresponds to.
Let's assume that the HMM has two possible hidden states, $y_1$ and $y_2$, and that there are three possible POS tags, N, V, and ADJ. Consider the two example sentences "Strong stocks are good" and "Big weak stocks are bad" (Figure 1). The HMM has assigned a hidden state ($y_1$ or $y_2$) to each word. In addition, we also know the correct POS tag for each word. Our goal is to use a matching algorithm to match each hidden state to the POS tag that would give us the highest score.
Matching
A matching is a bipartite graph between part-of-speech tags and hidden states. Two types of matchings can be performed: a 1-to-many matching or a 1-to-1 matching. The definitions of 1-to-many and 1-to-1 matchings are given below. Both types of matchings have different advantages and disadvantages when they are used to evaluate the performance of an unsupervised model.
1-to-Many Matching: In a 1-to-many matching, each hidden state is assigned the POS tag that gives the most correct matches. Each POS tag can be matched to multiple hidden states, hence the name "1-to-many".
1-to-1 Matching: In contrast, in a 1-to-1 matching, each POS tag can only be matched to a maximum of one hidden state. In this case, if the number of hidden states is greater than the number of tags, then some hidden states will not correspond to any tag. This will decrease the reported accuracy.
Evaluation
Given a matching, the accuracy of the HMM tagging can be computed against a gold-standard corpus, where each word has been assigned the correct POS tag (based on the Penn WSJ Treebank). The correct tag for a word is compared with the tag that is matched with the word's hidden state. If these two tags match, this means that the tagging is correct for this word.
To find a matching, we can create a table that contains the counts of each correct POS tag for each hidden state. Here is an example evaluation table for the example sentences from Figure 1.
The matching between hidden states and POS tags can then be found based on this table for both 1-to-many and 1-to-1 matchings.
Finding a good matching
As we can see, the accuracy of the HMM depends on the type of matching that is chosen (1-to-many or 1-to-1). To find a good matching,
we can use a greedy algorithm that chooses individual state to tag matchings one at a time and picks the new matching that maximally increases the accuracy at each step.
The algorithm terminates when no more matchings can be made. In a 1-to-1 matching, the algorithm terminates when either each POS tag has been assigned a hidden state or each hidden state has received a POS tag (whichever happens first). In a 1-to-many matching, the algorithm terminates when each hidden state has received a POS tag.
Example: 1-to-many Matching:
Consider the table from Figure 4. The highest count we see in the table is 3, which corresponds to $y_1$ being assigned the ADJ tag. Thus, we greedily assign state $y_1$ to ADJ. Since the highest count for $y_2$ is also ADJ, state $y_2$ is also assigned to the ADJ tag. $y_1$ tags 3 words correctly, $y_2$ tags 2 words correctly, and there are 9 words in total, so the best score from a 1-to-many matching is $\frac{5}{9}$. The bipartite matching is illustrated in Figure 5.
Example: 1-to-1 Matching
We know that the highest count for $y_1$ is 3, which corresponds to an adjective, so we again greedily assign state $y_1$ to ADJ. Even though state $y_2$ also has the highest count for adjectives, ADJ has already been matched to state $y_1$ so we cannot use it again. Thus, $y_2$ must be assigned to V, which has the highest count that is not yet assigned to any state. $y_1$ tags 3 words correctly and $y_2$ tags 1 word correctly, so the total score for a 1-to-1 matching is $\frac{4}{9}$. See Figure 8 for the matching.
Pitfalls of the HMM
Fully unsupervised tagging models generally perform very poorly. The HMM computed with the EM algorithm tends to give a relatively uniform distribution of hidden states, while the empirical distribution for POS tags is highly skewed since some tags are much more common than others. As a result, any 1-to-1 matching will give a POS tag distribution that is relatively uniform, which results in a low accuracy score.
A 1-to-many matching can create a non-uniform distribution by matching
a single POS tag with many hidden states. However, as the number of hidden states increases, the model could overfit by giving each word in the vocabulary its own hidden state. If each word has its own hidden state, we would achieve 100 percent accuracy, but the model would not give us any useful information. For more details on the evaluation of HMM for POS tagging, see Why doesn't EM find good HMM POS-taggers? by Mark Johnson.
Improving Unsupervised Models
There are several modifications we can make to improve the accuracy of unsupervised models.
Supply the model with a dictionary and limit possibilities for each word: For example, we know that the word "train" can only be a verb or a noun. This limits the set of possible tags for each word, which will push the model in the correct direction.
Prototypes: Provide each POS tag with several representative words for the tag. Even just a few words for each tag will help push the model in the right direction.
(1) Christodoulopoulos, Christos, Sharon Goldwater, and Mark Steedman. "Two decades of unsupervised POS induction: How far have we come?." In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 575-584. Association for Computational Linguistics, 2010.
(2) Speech and Language Processing. Daniel Jurafsky & James H. Martin. Draft of February 19, 2015. Chapter 9 Part-of-Speech Tagging
https://web.stanford.edu/~jurafsky/slp3/9.pdf
(3) MIT 6.806/6.864 Advanced Natural Language Processing course. Lecture 9' scribe note. Fall 2015. Instructors: Prof. Regina Barzilay, and Prof. Tommi Jaakkola. TAs: Franck Dernoncourt, Karthik Rajagopal Narasimhan, Tianheng Wang. Scribes: Clare Liu, Evan Pu, Kalki Seksaria https://stellar.mit.edu/S/course/6/fa15/6.864/ | Interpretation of hidden states in HMM in the part-of-speech tagging task
Two cases:
supervised POS tagging: no need for EM, you can simply count to learn the emission and transition probabilities (2). Each hidden state is mapped to one single POS.
unsupervised POS tagging |
49,922 | Can logistic regression be used with "years" as a continuous variable? | Yes, you can use years as a continuous variable in your model. But, I would not be estimating a logit model for this problem. Some specific issues:
The way to show your data here is as a plot, where the x-axis shows the years, and the y-axis shows the proportion of jelly beans. Estimating a logit model to do this brings with it the risk that you make an error, but no benefits of any kind in terms of interpretation.
If you are desperate to compute a p-value, you would be better off using Kendall's tau-b, as then you have no assumptions to worry about.
If the plot reveals a non-linear relationship I suppose you could use a logit model with a polynomial effect, using, say, JellyBeans ~ poly(Year, 3) or something similar and a likelihood ratio test for significance of the model. | Can logistic regression be used with "years" as a continuous variable? | Yes, you can use years as a continuous variable in your model. But, I would not be estimating a logit model for this problem. Some specific issues:
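A rough R sketch of these three suggestions (the data frame dat and its columns Year and JellyBeans are hypothetical names):
# Plot the proportion by year
props <- tapply(dat$JellyBeans, dat$Year, mean)
plot(as.numeric(names(props)), props, type = "b",
     xlab = "Year", ylab = "Proportion of jelly beans")
# Rank-based test with essentially no model assumptions
cor.test(dat$Year, dat$JellyBeans, method = "kendall")
# Polynomial logit plus a likelihood ratio test
fit0 <- glm(JellyBeans ~ 1, family = binomial, data = dat)
fit3 <- glm(JellyBeans ~ poly(Year, 3), family = binomial, data = dat)
anova(fit0, fit3, test = "LRT")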
The way to show your data here is as a plot, where | Can logistic regression be used with "years" as a continuous variable?
Yes, you can use years as a continuous variable in your model. But, I would not be estimating a logit model for this problem. Some specific issues:
The way to show your data here is as a plot, where the x-axis shows the years, and the y-axis shows the proportion of jelly beans. Estimating a logit model to do this brings with it the risk that you make an error, but no benefits of any kind in terms of interpretation.
If you are desperate to compute a p-value, you would be better off using Kendall's tau-b, as then you have no assumptions to worry about.
If the plot reveals a non-linear relationship I suppose you could use a logit model with a polynomial effect, using, say, JellyBeans ~ poly(Year, 3) or something similar and a likelihood ratio test for significance of the model. | Can logistic regression be used with "years" as a continuous variable?
Yes, you can use years as a continuous variable in your model. But, I would not be estimating a logit model for this problem. Some specific issues:
The way to show your data here is as a plot, where |
49,923 | Covariance of order statistics | Let $(X_1,\dots,X_n,X_{n+1})$ denote a random sample of size $(n+1)$ drawn on $X$, and let $$Z_n = \min\{X_1,...,X_n\} \quad \text{and} \quad Z_{n+1} = \min\{X_1,...,X_n,X_{n+1}\}$$
By including the extra $X_{n+1}$ term, there are only 2 possibilities:
EITHER CASE A $\rightarrow$ with probability $\frac{n}{n+1}$
$\quad \quad \text{The extra term } X_{n+1}$ does NOT change the sample minimum, i.e. $Z_{n+1} = Z_n$. Then:
$$\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case A} \; = \; \text{Cov}(Z_{n+1}, Z_{n+1}) \; = \; \text{Var}(Z_{n+1})$$
Since Event A occurs with probability $\frac{n}{n+1}$, this immediately explains why your observed unconditional covariance $\text{Cov}(Z_n, Z_{n+1})$ is well approximated by $\text{Var}(Z_{n+1})$, as $n$ increases.
OR CASE B $\rightarrow$ with probability $\frac{1}{n+1}$
$\quad \quad \text{The extra term } X_{n+1}$ DOES change the sample minimum i.e. $Z_{n+1} < Z_n$. Then $Z_{n+1}$ and $Z_n$ must be the $1^{\text{st}}$ and $2^{\text{nd}}$ order statistics from a sample of size $n+1$ i.e.
$$\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case B} \; = \; \text{Cov}\big(X_{(1)}, X_{(2)}\big) \text{ in a sample of size: } n+1$$
In summary:
\begin{align*}\displaystyle \text{Cov}(Z_n, Z_{n+1}) \; &= \frac{n}{n+1}\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case A} \quad + \quad \frac{1}{n+1}\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case B} \\
&= \frac{n}{n+1} \text{Var}(Z_{n+1}) \quad + \quad \frac{1}{n+1} \text{Cov}\big(X_{(1)}, X_{(2)}\big)_{\text{sample size } = n+1}
\end{align*}
This makes it easy to see why the result is similar to $\text{Var}(Z_{n+1})$: because Case A dominates with probability $\frac{n}{n+1}$
Example and Check: Uniform Parent
In the case of $X \sim \text{Uniform}(0,1)$ parent:
Case A: $\text{Var}(Z_{n+1}) = \text{Var}(X_{(1)})_{\text{sample size } = n+1} = \frac{n+1}{(n+2)^2 (n+3)}$
Case B: $\text{Cov}\big(X_{(1)}, X_{(2)}\big)_{\text{sample size } = n+1} = \frac{n}{(n+2)^2 (n+3)}$
Then: $\text{Cov}(Z_n, Z_{n+1}) = \frac{n}{(n+1) (n+2) (n+3)}$
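This closed form is easy to check by simulation; here is a quick R sketch (the choice of $n$ and the number of replications is arbitrary):
set.seed(42)
n <- 5; reps <- 2e5
x <- matrix(runif(reps * (n + 1)), nrow = reps)
z_n  <- apply(x[, 1:n], 1, min)       # minimum of the first n draws
z_n1 <- pmin(z_n, x[, n + 1])         # minimum after adding the (n+1)-th draw
cov(z_n, z_n1)                        # simulated covariance
n / ((n + 1) * (n + 2) * (n + 3))     # exact value: 5/336, about 0.0149
var(z_n1)                             # close to the covariance, as discussed: about 0.0153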
The following diagram compares:
this exact theoretical solution for $\text{Cov}(Z_n, Z_{n+1})$, as $n$ increases from 1 to 30 $\rightarrow$ the red curve
to a Monte Carlo calculation of $\text{Cov}(Z_n, Z_{n+1})$ $\rightarrow$ the blue dots
Looks fine.
The following diagram compares the exact theoretical solution for $\text{Cov}(Z_n, Z_{n+1})$, $\text{Var}(Z_n)$ and $\text{Var}(Z_{n+1})$: as the OP reports, by the time $n = 5$, $\text{Cov}(Z_n, Z_{n+1})$ is well approximated by $\text{Var}(Z_{n+1})$: | Covariance of order statistics | Let $(X_1,\dots,X_n,X_{n+1})$ denote a random sample of size $(n+1)$ drawn on $X$, and let $$Z_n = \min\{X_1,...,X_n\} \quad \text{and} \quad Z_{n+1} = \min\{X_1,...,X_n,X_{n+1}\}$$
By including the | Covariance of order statistics
Let $(X_1,\dots,X_n,X_{n+1})$ denote a random sample of size $(n+1)$ drawn on $X$, and let $$Z_n = \min\{X_1,...,X_n\} \quad \text{and} \quad Z_{n+1} = \min\{X_1,...,X_n,X_{n+1}\}$$
By including the extra $X_{n+1}$ term, there are only 2 possibilities:
EITHER CASE A $\rightarrow$ with probability $\frac{n}{n+1}$
$\quad \quad \text{The extra term } X_{n+1}$ does NOT change the sample minimum, i.e. $Z_{n+1} = Z_n$. Then:
$$\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case A} \; = \; \text{Cov}(Z_{n+1}, Z_{n+1}) \; = \; \text{Var}(Z_{n+1})$$
Since Event A occurs with probability $\frac{n}{n+1}$, this immediately explains why your observed unconditional covariance $\text{Cov}(Z_n, Z_{n+1})$ is well approximated by $\text{Var}(Z_{n+1})$, as $n$ increases.
OR CASE B $\rightarrow$ with probability $\frac{1}{n+1}$
$\quad \quad \text{The extra term } X_{n+1}$ DOES change the sample minimum i.e. $Z_{n+1} < Z_n$. Then $Z_{n+1}$ and $Z_n$ must be the $1^{\text{st}}$ and $2^{\text{nd}}$ order statistics from a sample of size $n+1$ i.e.
$$\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case B} \; = \; \text{Cov}\big(X_{(1)}, X_{(2)}\big) \text{ in a sample of size: } n+1$$
In summary:
\begin{align*}\displaystyle \text{Cov}(Z_n, Z_{n+1}) \; &= \frac{n}{n+1}\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case A} \quad + \quad \frac{1}{n+1}\text{Cov}(Z_n, Z_{n+1})\big|_\text{Case B} \\
&= \frac{n}{n+1} \text{Var}(Z_{n+1}) \quad + \quad \frac{1}{n+1} \text{Cov}\big(X_{(1)}, X_{(2)}\big)_{\text{sample size } = n+1}
\end{align*}
This makes it easy to see why the result is similar to $\text{Var}(Z_{n+1})$: because Case A dominates with probability $\frac{n}{n+1}$
Example and Check: Uniform Parent
In the case of $X \sim \text{Uniform}(0,1)$ parent:
Case A: $\text{Var}(Z_{n+1}) = \text{Var}(X_{(1)})_{\text{sample size } = n+1} = \frac{n+1}{(n+2)^2 (n+3)}$
Case B: $\text{Cov}\big(X_{(1)}, X_{(2)}\big)_{\text{sample size } = n+1} = \frac{n}{(n+2)^2 (n+3)}$
Then: $\text{Cov}(Z_n, Z_{n+1}) = \frac{n}{(n+1) (n+2) (n+3)}$
The following diagram compares:
this exact theoretical solution for $\text{Cov}(Z_n, Z_{n+1})$, as $n$ increases from 1 to 30 $\rightarrow$ the red curve
to a Monte Carlo calculation of $\text{Cov}(Z_n, Z_{n+1})$ $\rightarrow$ the blue dots
Looks fine.
The following diagram compares the exact theoretical solution for $\text{Cov}(Z_n, Z_{n+1})$, $\text{Var}(Z_n)$ and $\text{Var}(Z_{n+1})$: as the OP reports, by the time $n = 5$, $\text{Cov}(Z_n, Z_{n+1})$ is well approximated by $\text{Var}(Z_{n+1})$: | Covariance of order statistics
Let $(X_1,\dots,X_n,X_{n+1})$ denote a random sample of size $(n+1)$ drawn on $X$, and let $$Z_n = \min\{X_1,...,X_n\} \quad \text{and} \quad Z_{n+1} = \min\{X_1,...,X_n,X_{n+1}\}$$
By including the |
49,924 | Covariance of order statistics | This seems to imply that Var(zn)/Var(zn+1) if far from 1 as well. If zi is really the minimum of a sample of size i, then for standard computable examples, like the exponential distribution, the ratio of variances is close to 1 for n reasonably large, for example ((n+1)/n)^2 for the exponential. Perhaps you're simulating the second smallest order statistic? | Covariance of order statistics | This seems to imply that Var(zn)/Var(zn+1) if far from 1 as well. If zi is really the minimum of a sample of size i, then for standard computable examples, like the exponential distribution, the ratio | Covariance of order statistics
This seems to imply that Var(zn)/Var(zn+1) is far from 1 as well. If zi is really the minimum of a sample of size i, then for standard computable examples, like the exponential distribution, the ratio of variances is close to 1 for n reasonably large, for example ((n+1)/n)^2 for the exponential. Perhaps you're simulating the second smallest order statistic? | Covariance of order statistics
This seems to imply that Var(zn)/Var(zn+1) if far from 1 as well. If zi is really the minimum of a sample of size i, then for standard computable examples, like the exponential distribution, the ratio |
49,925 | Double lasso variable selection | A major advantage of the double selection method is that it is heteroskedasticity robust.
Belloni, Chernozhukov and Hansen (ReStud 2014) showed that this is true even if the selection is not perfect.
We propose robust methods for inference about the effect of a treatment variable on a scalar outcome in the presence of very many regressors in a model with possibly non-Gaussian and heteroscedastic disturbances.
[...]
The main attractive feature of our method is that it allows for imperfect selection of the controls and provides confidence intervals that are valid uniformly across a large class of models. In contrast, standard post-model selection estimators fail to provide uniform inference even in simple cases with a small, fixed number of controls. ' | Double lasso variable selection | A major advantage of the double selection method is that it is heteroskedasticity robust.
Belloni, Chernozhukov and Hansen (ReStud 2014) showed that this is true even if the selection is not perfect.
| Double lasso variable selection
A major advantage of the double selection method is that it is heteroskedasticity robust.
Belloni, Chernozhukov and Hansen (ReStud 2014) showed that this is true even if the selection is not perfect.
We propose robust methods for inference about the effect of a treatment variable on a scalar outcome in the presence of very many regressors in a model with possibly non-Gaussian and heteroscedastic disturbances.
[...]
The main attractive feature of our method is that it allows for imperfect selection of the controls and provides confidence intervals that are valid uniformly across a large class of models. In contrast, standard post-model selection estimators fail to provide uniform inference even in simple cases with a small, fixed number of controls. ' | Double lasso variable selection
A major advantage of the double selection method is that it is heteroskedasticity robust.
Belloni, Chernozhukov and Hansen (ReStud 2014) showed that this is true even if the selection is not perfect.
|
49,926 | ergodic theory for markov processes | Say your state space is $\Omega$ and your process is $X_{t}$. Consider now a new state space - $\Omega \times \Omega$. Then $Y_{y} := (X_{t-1}, X_{t})$ is a Markov process on $\Omega \times \Omega$. Now, you can use the ergodic theorem, provided you know the invariant distribution of $Y_t$. This is a distribution of pairs $(X_{t-1}, X_t)$ and we may write it as the joint distribution $\pi( x_{t-1}, x_t)$. By laws of probability,
$$
\pi( x_{t-1}, x_t ) = \pi (x_{t-1} ) p( x_t | x_{t-1} ).
$$
Thus:
\begin{align}
\lim _{T\to \infty} \frac{1}{T} \sum_{t=1}^{T} \log p(x_t | x_{t-1} ) &= \mathbb{E}_{\pi( x, y )} [ \log p( y | x ) ]\\
&= \sum_{(x,y) \in \Omega \times \Omega } \log p( y | x ) \pi(x,y) \\
&= \sum_{(x,y) \in \Omega \times \Omega } \log \frac{\pi(x,y)}{\pi(x)} \pi(x,y) \\
&= \sum_{(x,y) \in \Omega \times \Omega } \log \pi(x,y) \pi(x,y) -\log \pi(x) \pi(x,y) \\
&= \sum_{(x,y) \in \Omega \times \Omega } \log \pi(x,y) \pi(x,y)
-\sum_{x \in \Omega } \log \pi(x) \pi(x) \text{ marginalized in } y \\
&= H(X_{t-1}) - H(X_{t-1},X_t) \\
&= -H(X_t | X_{t-1} ).
\end{align}
$H$ is the entropy function(al) and $H(X|Y)$ is the conditional entropy. According to Wikipedia: conditional entropy (or equivocation) quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known.
So I think maybe you should consider the negative of the above quantity.
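As a sanity check, here is a small R illustration with an arbitrary two-state chain, comparing the long-run time average of $\log p(x_t | x_{t-1})$ with $-H(X_t | X_{t-1})$ computed from the stationary distribution:
set.seed(1)
P <- matrix(c(0.9, 0.1,
              0.3, 0.7), nrow = 2, byrow = TRUE)   # transition matrix (arbitrary choice)
pstat <- c(0.75, 0.25)                             # stationary distribution: pstat %*% P = pstat
Tn <- 1e5
x <- numeric(Tn); x[1] <- 1
for (t in 2:Tn) x[t] <- sample(1:2, 1, prob = P[x[t - 1], ])
mean(log(P[cbind(x[-Tn], x[-1])]))                 # time average of log p(x_t | x_{t-1})
sum(pstat * rowSums(P * log(P)))                   # -H(X_t | X_{t-1}), the ergodic limit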
Regarding your last question - you can apply the same trick from above to $(X_{t-L} ,..., x_{t})$. | ergodic theory for markov processes | Say your state space is $\Omega$ and your process is $X_{t}$. Consider now a new state space - $\Omega \times \Omega$. Then $Y_{y} := (X_{t-1}, X_{t})$ is a Markov process on $\Omega \times \Omega$. | ergodic theory for markov processes
Say your state space is $\Omega$ and your process is $X_{t}$. Consider now a new state space - $\Omega \times \Omega$. Then $Y_{y} := (X_{t-1}, X_{t})$ is a Markov process on $\Omega \times \Omega$. Now, you can use the ergodic theorem, provided you know the invariant distribution of $Y_t$. This is a distribution of pairs $(X_{t-1}, X_t)$ and we may write it as the joint distribution $\pi( x_{t-1}, x_t)$. By laws of probability,
$$
\pi( x_{t-1}, x_t ) = \pi (x_{t-1} ) p( x_t | x_{t-1} ).
$$
Thus:
\begin{align}
\lim _{T\to \infty} \frac{1}{T} \sum_{t=1}^{T} \log p(x_t | x_{t-1} ) &= \mathbb{E}_{\pi( x, y )} [ \log p( y | x ) ]\\
&= \sum_{(x,y) \in \Omega \times \Omega } \log p( y | x ) \pi(x,y) \\
&= \sum_{(x,y) \in \Omega \times \Omega } \log \frac{\pi(x,y)}{\pi(x)} \pi(x,y) \\
&= \sum_{(x,y) \in \Omega \times \Omega } \log \pi(x,y) \pi(x,y) -\log \pi(x) \pi(x,y) \\
&= \sum_{(x,y) \in \Omega \times \Omega } \log \pi(x,y) \pi(x,y)
-\sum_{x \in \Omega } \log \pi(x) \pi(x) \text{ marginalized in } y \\
&= H(X_{t-1}) - H(X_{t-1},X_t) \\
&= -H(X_t | X_{t-1} ).
\end{align}
$H$ is the entropy function(al) and $H(X|Y)$ is the conditional entropy. According to Wikipedia: conditional entropy (or equivocation) quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known.
So I think maybe you should consider the negative of the above quantity.
Regarding your last question - you can apply the same trick from above to $(X_{t-L} ,..., x_{t})$. | ergodic theory for markov processes
Say your state space is $\Omega$ and your process is $X_{t}$. Consider now a new state space - $\Omega \times \Omega$. Then $Y_{y} := (X_{t-1}, X_{t})$ is a Markov process on $\Omega \times \Omega$. |
49,927 | When is there a point for using regression with controls to analyze experimental data? | An alternative way to write the standard error (assuming homoscedasticity) of the regression coefficient estimate in your first model is
\begin{equation}
se(\hat \beta) = \sqrt{\frac{\sum_{i=1}^n e_i^2}{(n-k-1)\sum_{i=1}^n(T_{i}-\bar T)^2(1-R^2_T)}}
\end{equation}
where $e_i$ are the residuals from the regression and $R^2_T$ is the $R^2$ from a regression of $T$ on $Z$. Three things in the equation could change depending on whether you include $Z$ in the model or not.
First, the sum of squared residuals will always decrease (technically never increases) since we will never explain less of $y$ by adding regressors. This will make the standard error smaller.
Second $k$, the number of regressors, will change. This increases the standard error.
Third, $R^2_T$ will likely not be too different from 0 since random assignment ensures that $Z$ is uncorrelated with $T$. I will assume this is 0, but larger values will increase the standard error.
From the first two adding more regressors will decrease the standard error as long as the decrease in sum of squared residuals (unexplained part of $y$) is large relative to the increase in the number of coefficients. | When is there a point for using regression with controls to analyze experimental data? | An alternative way to write the standard error (assuming homoscedasticity) of the regression coefficient estimate in your first model is
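A small simulation makes this concrete (an illustrative R sketch with made-up data, where treat is randomized and z is prognostic):
set.seed(1)
n <- 500
treat <- rbinom(n, 1, 0.5)                # random assignment, so treat is uncorrelated with z
z <- rnorm(n)
y <- 1 + 0.3 * treat + 1.5 * z + rnorm(n)
summary(lm(y ~ treat))$coefficients["treat", "Std. Error"]       # larger
summary(lm(y ~ treat + z))$coefficients["treat", "Std. Error"]   # smaller: z absorbs residual variance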
\begin{equation}
se(\hat \beta) = \sqrt{\frac{\sum_{i=1}^n e^2 | When is there a point for using regression with controls to analyze experimental data?
An alternative way to write the standard error (assuming homoscedasticity) of the regression coefficient estimate in your first model is
\begin{equation}
se(\hat \beta) = \sqrt{\frac{\sum_{i=1}^n e_i^2}{(n-k-1)\sum_{i=1}^n(T_{i}-\bar T)^2(1-R^2_T)}}
\end{equation}
where $e_i$ are the residuals from the regression and $R^2_T$ is the $R^2$ from a regression of $T$ on $Z$. Three things in the equation could change depending on whether you include $Z$ in the model or not.
First, the sum of squared residuals will always decrease (technically never increases) since we will never explain less of $y$ by adding regressors. This will make the standard error smaller.
Second $k$, the number of regressors, will change. This increases the standard error.
Third, $R^2_T$ will likely not be too different from 0 since random assignment ensures that $Z$ is uncorrelated with $T$. I will assume this is 0, but larger values will increase the standard error.
From the first two adding more regressors will decrease the standard error as long as the decrease in sum of squared residuals (unexplained part of $y$) is large relative to the increase in the number of coefficients. | When is there a point for using regression with controls to analyze experimental data?
An alternative way to write the standard error (assuming homoscedasticity) of the regression coefficient estimate in your first model is
\begin{equation}
se(\hat \beta) = \sqrt{\frac{\sum_{i=1}^n e^2 |
49,928 | How to combine likert items into a single variable | You should clarify what you mean by "combined all the likert scale questions". If you asked 5 likert scale questions that together are a conventional way to measure e.g. self-confidence, then those should be combined into one variable (provided some conditions described below are met). If you asked 5 likert scale questions for self-confidence, 7 others for narcissism and 12 others for empathy, then don't combine all those into one variable obviously.
Phrasing is very important with those likert scale questions. Don't invent new questions unless you absolutely have to. You will almost always find well established sets of questions for your purpose. Copy paste them into your questionnaire and cite the researcher that built the scale.
Likert scales are in principle not continuous scales which means that you shouldn't do t-tests or ANOVAs on them. The problem is that respondents may not find it obvious that the distance between "moderately" and "much" is the same as between "much" and "very much" etc. So you need to make it obvious to them by visually spacing the answer options equidistant and using numbers while saying e.g. 1=most and 10=least, not by putting subjective descriptions on all the answer options in between.
The question whether you should have an even or an odd number of options is controversial. When you have an odd number, people might be lazy and answer in the middle too often. If you have an even number, you may force them to reveal an "opinion" where they truly don't have one.
If that is taken care of, you still need to assess the internal validity of the scale according to your actual response data.
Have attention checks to sort out respondents that answered randomly.
Compute Cronbach's $\alpha$ or a similar measure to see that all questions are sufficiently aligned.
Do a PCA to see that the scale really measures a unidimensional quantity. Most of the variability should be in just one principal component.
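A minimal R sketch of these checks and of the scoring options discussed next (the item matrix items, with one column per Likert question, is hypothetical):
library(psych)
alpha(items)                      # Cronbach's alpha for internal consistency
pc <- prcomp(items, scale. = TRUE)
summary(pc)                       # most variance on PC1 suggests a unidimensional scale
score_mean <- rowMeans(items)     # equal-weight scale score
score_pc1  <- pc$x[, 1]           # alternative: scores on the first principal component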
If that is done, most people will just do an arithmetic mean of the responses to have a continuous variable. This assumes that every likert question on the scale is equally important. Having just done the PCA, you might also take the respondents scores on the first principal component. This gives different weights to different questions according to their capability to differentiate respondents in your data-set. That last option is seldom done. | How to combine likert items into a single variable | You should clarify what you mean by "combined all the likert scale questions". If you asked 5 likert scale questions that together are a conventional way to measure e.g. self-confidence, then those sh | How to combine likert items into a single variable
You should clarify what you mean by "combined all the likert scale questions". If you asked 5 likert scale questions that together are a conventional way to measure e.g. self-confidence, then those should be combined into one variable (provided some conditions described below are met). If you asked 5 likert scale questions for self-confidence, 7 others for narcissism and 12 others for empathy, then don't combine all those into one variable obviously.
Phrasing is very important with those likert scale questions. Don't invent new questions unless you absolutely have to. You will almost always find well established sets of questions for your purpose. Copy paste them into your questionnaire and cite the researcher that built the scale.
Likert scales are in principle not continuous scales which means that you shouldn't do t-tests or ANOVAs on them. The problem is that respondents may not find it obvious that the distance between "moderately" and "much" is the same as between "much" and "very much" etc. So you need to make it obvious to them by visually spacing the answer options equidistant and using numbers while saying e.g. 1=most and 10=least, not by putting subjective descriptions on all the answer options in between.
The question whether you should have an even or an odd number of options is controversial. When you have an odd number, people might be lazy and answer in the middle too often. If you have an even number, you may force them to reveal an "opinion" where they truly don't have one.
If that is taken care of, you still need to assess the internal validity of the scale according to your actual response data.
Have attention checks to sort out respondents that answered randomly.
Compute Cronbach's $\alpha$ or a similar measure to see that all questions are sufficiently aligned.
Do a PCA to see that the scale really measures a unidimensional quantity. Most of the variability should be in just one principal component.
If that is done, most people will just do an arithmetic mean of the responses to have a continuous variable. This assumes that every likert question on the scale is equally important. Having just done the PCA, you might also take the respondents scores on the first principal component. This gives different weights to different questions according to their capability to differentiate respondents in your data-set. That last option is seldom done. | How to combine likert items into a single variable
You should clarify what you mean by "combined all the likert scale questions". If you asked 5 likert scale questions that together are a conventional way to measure e.g. self-confidence, then those sh |
49,929 | Residual network dimension changing blocks identity function | Instead of max pooling you can use 2D average pooling with stride (2,2) and kernel size (2,2), optionally concatenating it with itself to get 512 features instead of 256.
The benefit of this is that average pooling is a linear operation, which does not prevent gradient propagation and does not lose any information.
The problem with the 1x1 strided max pooling is, as you said, that 3/4 of the information gets discarded. The problem with your proposed solution is that the max pooling is still non-linear, which is contradictory to the aim of residual networks: finding residual function to a linear mapping. | Residual network dimension changing blocks identity function | Instead of max pooling you can use 2D average pooling with stride (2,2) and kernel size (2,2), optionally concatenating it with itself to get 512 features instead of 256.
The benefit of this is that a | Residual network dimension changing blocks identity function
Instead of max pooling you can use 2D average pooling with stride (2,2) and kernel size (2,2), optionally concatenating it with itself to get 512 features instead of 256.
The benefit of this is that average pooling is a linear operation, which does not prevent gradient propagation and does not lose any information.
The problem with the 1x1 strided max pooling is, as you said, that 3/4 of the information gets discarded. The problem with your proposed solution is that the max pooling is still non-linear, which is contradictory to the aim of residual networks: finding residual function to a linear mapping. | Residual network dimension changing blocks identity function
Instead of max pooling you can use 2D average pooling with stride (2,2) and kernel size (2,2), optionally concatenating it with itself to get 512 features instead of 256.
The benefit of this is that a |
49,930 | how can I calculate the mutual information between two normal densities using the parameters mu and sigma? | The parameters you have only tell you about the marginal distributions of $X_1$ and $X_2$ so no, you cannot compute a measure of dependence like mutual information. Consider for instance that given $\mu_1, \mu_2, \sigma^2_1$ and $\sigma^2_2$ the random variables $X_1$ and $X_2$ could either be perfectly correlated or entirely independent of one another. | how can I calculate the mutual information between two normal densities using the parameters mu and | The parameters you have only tell you about the marginal distributions of $X_1$ and $X_2$ so no, you cannot compute a measure of dependence like mutual information. Consider for instance that given $ | how can I calculate the mutual information between two normal densities using the parameters mu and sigma?
The parameters you have only tell you about the marginal distributions of $X_1$ and $X_2$ so no, you cannot compute a measure of dependence like mutual information. Consider for instance that given $\mu_1, \mu_2, \sigma^2_1$ and $\sigma^2_2$ the random variables $X_1$ and $X_2$ could either be perfectly correlated or entirely independent of one another. | how can I calculate the mutual information between two normal densities using the parameters mu and
The parameters you have only tell you about the marginal distributions of $X_1$ and $X_2$ so no, you cannot compute a measure of dependence like mutual information. Consider for instance that given $ |
49,931 | time varying coefficients in cox proportional hazard model | Since the seasonal dummy variables are static by nature, and their coefficients clearly vary with the time variable, how much does it matter?
The value that you get is a form of average over time. Unfortunately, as you naturally have more cases in time-to-event analyses early on, you can't simply say that the effect is balanced throughout time. From my experience, the initial period has a much heavier impact on the estimate than the later ones.
I get that statistically, it means something, but does the violation of PH assumption invalidate the (intuitively appealing) result that non-response is more likely to happen in the summer and winter?
Again, it probably doesn't but you can't be sure until you've checked. Something is definitely happening over time and you should at least have a look at the residual plot (plot(cox.zph(...))). It isn't entirely surprising that you have a problem with the PH since seasons are part of the time variable and there will be situations where early summer and late spring are similar.
If so, is there a way to handle this so that the PH assumption is not violated? I know about using the tt transform, but I can't seem to figure out the exact form for the function.
The tt transform is tricky to use with big data. It explodes the matrix and can get a little messy, e.g. if you modify the lung example in the survival package:
library(survival)
coxph(Surv(time, status) ~ ph.ecog + tt(age), data=lung,
tt=function(x,t,...) {
print(length(x))
pspline(x + t/365.25)
})
It prints 15809 while there are only 228 rows in the original dataset. The principle of tt() is that it feeds the variables into the transformation function, where you are free to use time any way you wish. Note that you can also have a different transformation function for each variable:
library(survival)
coxph(Surv(time, status) ~ tt(ph.ecog) + tt(age), data=lung,
tt=list(
function(x,t,...) {
cbind(x, x + t/365.25, (x + t/365.25)^2)
},
function(x,t,...) {
pspline(x + t/365.25)
}),
x=T,
y=T) -> fit
head(fit$x)
Gives:
tt(ph.ecog)x tt(ph.ecog) tt(ph.ecog) tt(age)1 tt(age)2 tt(age)3 tt(age)4
6 1 3.4 11.7 0 0 0 0.000
3 0 2.4 5.8 0 0 0 0.020
38 1 3.4 11.7 0 0 0 0.000
5 0 2.4 5.8 0 0 0 0.000
6.1 1 3.2 10.4 0 0 0 0.000
3.1 0 2.2 5.0 0 0 0 0.026
tt(age)5 tt(age)6 tt(age)7 tt(age)8 tt(age)9 tt(age)10 tt(age)11 tt(age)12
6 0.00 0.00000 0.000 0.0052 0.359 0.58 0.053 0
3 0.48 0.48232 0.021 0.0000 0.000 0.00 0.000 0
38 0.00 0.00087 0.266 0.6393 0.094 0.00 0.000 0
5 0.03 0.51933 0.437 0.0136 0.000 0.00 0.000 0
6.1 0.00 0.00000 0.000 0.0078 0.388 0.56 0.044 0
3.1 0.50 0.45457 0.016 0.0000 0.000 0.00 0.000 0
I therefore try to avoid this solution and use the time-split approach that I wrote about here and in my answer to my own question. | time varying coefficients in cox proportional hazard model | Since the seasonal dummy variables are static by nature, and their coefficients clearly vary with the time variable, how much does it matter?
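For reference, here is a hedged sketch of that time-split approach using survival::survSplit; the veteran data, the cut points and the covariates are purely illustrative:
library(survival)
vet2 <- survSplit(Surv(time, status) ~ ., data = veteran,
                  cut = c(90, 180), episode = "tgroup")
# karno gets a separate coefficient within each follow-up window,
# which relaxes the proportional hazards assumption for that covariate
coxph(Surv(tstart, time, status) ~ trt + karno:strata(tgroup), data = vet2)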
The value that you get is a form of average over time. Un | time varying coefficients in cox proportional hazard model
Since the seasonal dummy variables are static by nature, and their coefficients clearly vary with the time variable, how much does it matter?
The value that you get is a form of average over time. Unfortunately, as you naturally have more cases in time-to-event analyses early on, you can't simply say that the effect is balanced throughout time. From my experience, the initial period has a much heavier impact on the estimate than the later ones.
I get that statistically, it means something, but does the violation of PH assumption invalidate the (intuitively appealing) result that non-response is more likely to happen in the summer and winter?
Again, it probably doesn't but you can't be sure until you've checked. Something is definitely happening over time and you should at least have a look at the residual plot (plot(cox.zph(...))). It isn't entirely surprising that you have a problem with the PH since seasons are part of the time variable and there will be situations where early summer and late spring are similar.
If so, is there a way to handle this so that the PH assumption is not violated? I know about using the tt transform, but I can't seem to figure out the exact form for the function.
The tt transform is tricky to use with big data. It explodes the matrix and can get a little messy, e.g. if you modify the lung example in the survival package:
library(survival)
coxph(Surv(time, status) ~ ph.ecog + tt(age), data=lung,
tt=function(x,t,...) {
print(length(x))
pspline(x + t/365.25)
})
It prints 15809 while there are only 228 rows in the original dataset. The principle of tt() is that it feeds the variables into the transformation function, where you are free to use time any way you wish. Note that you can also have a different transformation function for each variable:
library(survival)
coxph(Surv(time, status) ~ tt(ph.ecog) + tt(age), data=lung,
tt=list(
function(x,t,...) {
cbind(x, x + t/365.25, (x + t/365.25)^2)
},
function(x,t,...) {
pspline(x + t/365.25)
}),
x=T,
y=T) -> fit
head(fit$x)
Gives:
tt(ph.ecog)x tt(ph.ecog) tt(ph.ecog) tt(age)1 tt(age)2 tt(age)3 tt(age)4
6 1 3.4 11.7 0 0 0 0.000
3 0 2.4 5.8 0 0 0 0.020
38 1 3.4 11.7 0 0 0 0.000
5 0 2.4 5.8 0 0 0 0.000
6.1 1 3.2 10.4 0 0 0 0.000
3.1 0 2.2 5.0 0 0 0 0.026
tt(age)5 tt(age)6 tt(age)7 tt(age)8 tt(age)9 tt(age)10 tt(age)11 tt(age)12
6 0.00 0.00000 0.000 0.0052 0.359 0.58 0.053 0
3 0.48 0.48232 0.021 0.0000 0.000 0.00 0.000 0
38 0.00 0.00087 0.266 0.6393 0.094 0.00 0.000 0
5 0.03 0.51933 0.437 0.0136 0.000 0.00 0.000 0
6.1 0.00 0.00000 0.000 0.0078 0.388 0.56 0.044 0
3.1 0.50 0.45457 0.016 0.0000 0.000 0.00 0.000 0
I therefore try to avoid this solution and use the time-split approach that I wrote about here and in my answer to my own question. | time varying coefficients in cox proportional hazard model
Since the seasonal dummy variables are static by nature, and their coefficients clearly vary with the time variable, how much does it matter?
The value that you get is a form of average over time. Un |
49,932 | Bootstrapping in Binary Response Data with Few Clusters and Within-Cluster Correlation | There is an extension of the wild bootstrap called the "score bootstrap" developed by Kline and Santos (2012) (working paper here). Whereas the wild works for OLS, the score method works additionally for ML models such as logit/probit and 2SLS and GMM models. The user-written Stata command boottest can calculate p-values using the score method after an initial estimation. | Bootstrapping in Binary Response Data with Few Clusters and Within-Cluster Correlation | There is an extension of the wild bootstrap called the "score bootstrap" developed by Kline and Santos (2012) (working paper here). Whereas the wild works for OLS, the score method works additionally | Bootstrapping in Binary Response Data with Few Clusters and Within-Cluster Correlation
There is an extension of the wild bootstrap called the "score bootstrap" developed by Kline and Santos (2012) (working paper here). Whereas the wild works for OLS, the score method works additionally for ML models such as logit/probit and 2SLS and GMM models. The user-written Stata command boottest can calculate p-values using the score method after an initial estimation. | Bootstrapping in Binary Response Data with Few Clusters and Within-Cluster Correlation
There is an extension of the wild bootstrap called the "score bootstrap" developed by Kline and Santos (2012) (working paper here). Whereas the wild works for OLS, the score method works additionally |
49,933 | Is it possible to use LASSO regression with multi-levlel data? | Evan! Hi! It's Sam B, former resident of one-office-over. Funny meeting you here. Did you ever find a solution to your problem? In googling, I discovered both your question and this R package, which seems to have been released since your post. | Is it possible to use LASSO regression with multi-levlel data? | Evan! Hi! It's Sam B, former resident of one-office-over. Funny meeting you here. Did you ever find a solution to your problem? In googling, I discovered both your question and this R package, which s | Is it possible to use LASSO regression with multi-levlel data?
Evan! Hi! It's Sam B, former resident of one-office-over. Funny meeting you here. Did you ever find a solution to your problem? In googling, I discovered both your question and this R package, which seems to have been released since your post. | Is it possible to use LASSO regression with multi-levlel data?
Evan! Hi! It's Sam B, former resident of one-office-over. Funny meeting you here. Did you ever find a solution to your problem? In googling, I discovered both your question and this R package, which s |
49,934 | Calculating Jeffreys prior - where's the mistake? | It is likely that differentiating/integrating with respect to discrete variables may be problematic. Nevertheless, considering the continuous analogue of the discrete model to derive the associated Jeffreys prior is a situation discussed by Berger in https://www2.stat.duke.edu/~berger/papers/discrete.pdf (among others). Moreover, in this paper, it is stated that the solution for your problem is $p(N)\propto 1/N$ (first paragraph of Section 1.2.1).
However, I found the same result as you did. Nevertheless (assuming that everything goes well with the Heaviside step function), the alternative definition of the Fisher information gives the expected result:
$$
\left(\frac{d \log f}{dN}\right)^2=\frac{1}{N^2}
$$
Then integrating:
$$
I(N)=\int_0^{\infty} \frac{1}{N^2} \frac{1}{N} 1(y<N) dy
$$
gives $I(N)=\frac{1}{N^2}$ and finally $p(N) \propto \frac{1}{N} 1_{R^+}(N)$.
But I do not know why (but as the switch to continuous may be a bit problematic, I would not be so surprised that some of the conditions related to the Fisher information formulation are not met) | Calculating Jeffreys prior - where's the mistake? | It is likely that derivating/integrating wrt to discrete variables may be problematic. Nevertheless, considering the continuous analogous to the discrete model for derivated the associated Jeffreys is | Calculating Jeffreys prior - where's the mistake?
It is likely that differentiating/integrating with respect to discrete variables may be problematic. Nevertheless, considering the continuous analogue of the discrete model to derive the associated Jeffreys prior is a situation discussed by Berger in https://www2.stat.duke.edu/~berger/papers/discrete.pdf (among others). Moreover, in this paper, it is stated that the solution for your problem is $p(N)\propto 1/N$ (first paragraph of Section 1.2.1).
However, I found the same result as you did. Nevertheless (assuming that everything goes well with the Heaviside step function), the alternative definition of the Fisher information gives the expected result:
$$
\left(\frac{d \log f}{dN}\right)^2=\frac{1}{N^2}
$$
Then integrating:
$$
I(N)=\int_0^{\infty} \frac{1}{N^2} \frac{1}{N} 1(y<N) dy
$$
gives $I(N)=\frac{1}{N^2}$ and finally $p(N) \propto \frac{1}{N} 1_{R^+}(N)$.
But I do not know why (but as the switch to continuous may be a bit problematic, I would not be so surprised that some of the conditions related to the Fisher information formulation are not met) | Calculating Jeffreys prior - where's the mistake?
It is likely that derivating/integrating wrt to discrete variables may be problematic. Nevertheless, considering the continuous analogous to the discrete model for derivated the associated Jeffreys is |
49,935 | Is the sample mean a better point estimate of the population median than the sample median? | It would depend on details of the distribution family. For normal distributions, what you said would be true. For some more heavy-tailed distribution, it might not. You could for instance check with some t-distribution with low degrees of freedom. | Is the sample mean a better point estimate of the population median than the sample median? | It would depend on details of the distribution family. For normal distributions, what you said would be true. For some more heavy-tailed distribution, it might not. You could for instance check with s | Is the sample mean a better point estimate of the population median than the sample median?
It would depend on details of the distribution family. For normal distributions, what you said would be true. For some more heavy-tailed distribution, it might not. You could for instance check with some t-distribution with low degrees of freedom. | Is the sample mean a better point estimate of the population median than the sample median?
It would depend on details of the distribution family. For normal distributions, what you said would be true. For some more heavy-tailed distribution, it might not. You could for instance check with s |
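A quick simulation of the suggested check (an illustrative R sketch; the sample size and the distributions are arbitrary):
set.seed(1)
sim <- function(rgen, n = 25, reps = 1e4) {
  x <- matrix(rgen(n * reps), nrow = reps)
  c(mse_mean   = mean(rowMeans(x)^2),             # the population median is 0 in both cases
    mse_median = mean(apply(x, 1, median)^2))
}
sim(rnorm)                        # normal: the sample mean has the smaller MSE
sim(function(k) rt(k, df = 2))    # t with 2 df: the sample median wins by a wide margin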
49,936 | Comparing relative risks of independent samples | In this case, you should directly quantify the size of the interaction (between the treatment group variable and the age group variable) within studies. You can do this by taking the difference between the two log RRs within studies. The variance of the difference is just the sum of the two squared standard errors, since the two subgroups within trials consist of different individuals (below/above 50 years of age) and are therefore independent. So:
df.diff <- with(df, data.frame(study = 1:2,
yi = c(logrr[1]-logrr[2], logrr[3]-logrr[4]),
vi = c(se[1]^2 + se[2]^2, se[3]^2 + se[4]^2)))
So we get:
study yi vi
1 1 0.5108256 0.1268421
2 2 1.0986123 0.2553729
Then you can meta-analyze these values:
library("metafor")   # rma() comes from the metafor package
res <- rma(yi, vi, data=df.diff)
res
This yields:
Random-Effects Model (k = 2; tau^2 estimator: REML)
tau^2 (estimated amount of total heterogeneity): 0 (SE = 0.2703)
tau (square root of estimated tau^2 value): 0
I^2 (total heterogeneity / total variability): 0.00%
H^2 (total variability / sampling variability): 1.00
Test for Heterogeneity:
Q(df = 1) = 0.9039, p-val = 0.3417
Model Results:
estimate se zval pval ci.lb ci.ub
0.7059 0.2911 2.4248 0.0153 0.1353 1.2765 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
So, the estimated size of the interaction effect is $0.7059$, and since we computed (log RR for below 50) - (log RR for above 50), this value indicates that the log RR is on average $0.7059$ points higher in groups that are below 50 years of age. Or:
predict(res, transf=exp)
yields:
pred ci.lb ci.ub cr.lb cr.ub
2.0256 1.1449 3.5839 1.1449 3.5839
which indicates that the RR is on average roughly twice as large in groups that are below 50 years of age.
I'll leave aside the question whether using random-effects models with $k=2$ is sensible or not, but since the estimated amount of heterogeneity is 0 anyway, we would obtain the same results if we had used a fixed-effects model (method="FE"). | Comparing relative risks of independent samples | In this case, you should directly quantify the size of the interaction (between the treatment group variable and the age group variable) within studies. You can do this by taking the difference betwee | Comparing relative risks of independent samples
In this case, you should directly quantify the size of the interaction (between the treatment group variable and the age group variable) within studies. You can do this by taking the difference between the two log RRs within studies. The variance of the difference is just the sum of the two squared standard errors, since the two subgroups within trials consist of different individuals (below/above 50 years of age) and are therefore independent. So:
df.diff <- with(df, data.frame(study = 1:2,
yi = c(logrr[1]-logrr[2], logrr[3]-logrr[4]),
vi = c(se[1]^2 + se[2]^2, se[3]^2 + se[4]^2)))
So we get:
study yi vi
1 1 0.5108256 0.1268421
2 2 1.0986123 0.2553729
Then you can meta-analyze these values:
library("metafor")   # rma() comes from the metafor package
res <- rma(yi, vi, data=df.diff)
res
This yields:
Random-Effects Model (k = 2; tau^2 estimator: REML)
tau^2 (estimated amount of total heterogeneity): 0 (SE = 0.2703)
tau (square root of estimated tau^2 value): 0
I^2 (total heterogeneity / total variability): 0.00%
H^2 (total variability / sampling variability): 1.00
Test for Heterogeneity:
Q(df = 1) = 0.9039, p-val = 0.3417
Model Results:
estimate se zval pval ci.lb ci.ub
0.7059 0.2911 2.4248 0.0153 0.1353 1.2765 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
So, the estimated size of the interaction effect is $0.7059$, and since we computed (log RR for below 50) - (log RR for above 50), this value indicates that the log RR is on average $0.7059$ points higher in groups that are below 50 years of age. Or:
predict(res, transf=exp)
yields:
pred ci.lb ci.ub cr.lb cr.ub
2.0256 1.1449 3.5839 1.1449 3.5839
which indicates that the RR is on average roughly twice as large in groups that are below 50 years of age.
I'll leave aside the question whether using random-effects models with $k=2$ is sensible or not, but since the estimated amount of heterogeneity is 0 anyway, we would obtain the same results if we had used a fixed-effects model (method="FE"). | Comparing relative risks of independent samples
In this case, you should directly quantify the size of the interaction (between the treatment group variable and the age group variable) within studies. You can do this by taking the difference betwee |
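The code above assumes a data frame df that already holds the subgroup log relative risks (logrr) and their standard errors (se), ordered as study 1 below/above 50 and study 2 below/above 50. Here is a minimal sketch of how such a frame could be built from reported RRs and 95% confidence limits; the numbers are made up for illustration and will not reproduce the output shown above.
# Hypothetical subgroup results: two studies x two age groups (<50, >=50).
rr <- c(1.8, 1.1, 2.5, 1.0)    # reported relative risks (made-up values)
lb <- c(1.1, 0.6, 1.3, 0.5)    # lower 95% confidence limits
ub <- c(2.9, 2.0, 4.8, 2.0)    # upper 95% confidence limits
df <- data.frame(study  = c(1, 1, 2, 2),
                 agegrp = c("<50", ">=50", "<50", ">=50"),
                 logrr  = log(rr),
                 se     = (log(ub) - log(lb)) / (2 * qnorm(0.975)))
df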
49,937 | Comparing relative risks of independent samples | You could consider this as an example of meta-regression. You would set up your data frame to have columns for RR, CILB, CIUB, study, agegrp. Then you would enter both study and agegrp as moderators | Comparing relative risks of independent samples
You could consider this as an example of meta-regression. You would set up your data frame to have columns for RR, CILB, CIUB, study, agegrp. Then you would enter both study and agegrp as moderators in the model. The effect for study is unimportant; it is just there to take out study differences in overall treatment effect. The coefficient for agegrp would tell you about differences between the age groups. I assume you would do this all on the log scale and if you use the default contrasts in R the coefficient for agegrp when exponentiated would give you the relative relative risk if you see what I mean. | Comparing relative risks of independent samples | You could consider this as an example of meta-regression. You would set up your data frame to have columns for RR, CILB, CIUB, study, agegrp. Then you would enter both study and agegrp as moderators
You could consider this as an example of meta-regression. You would set up your data frame to have columns for RR, CILB, CIUB, study, agegrp. Then you would enter both study and agegrp as moderators | Comparing relative risks of independent samples
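A minimal sketch of the meta-regression idea described in the entry above, again with metafor; it assumes a data frame df with one row per study-by-age-group and columns logrr, se, study and agegrp (for instance the hypothetical frame built a few entries earlier), so the column names are assumptions rather than anything fixed by the answer.
library("metafor")
# study only absorbs between-study differences in the overall effect;
# the agegrp coefficient is the age-group interaction on the log-RR scale.
res.mr <- rma(yi = logrr, sei = se, mods = ~ factor(study) + agegrp,
              data = df, method = "FE")
summary(res.mr)
exp(coef(res.mr))   # exponentiate to read the agegrp term as a ratio of relative risks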
49,938 | Dealing with 0 in cell count for Fisher's exact test | Fisher's exact test does deal with zero cells without any problem.
Adding 0.5 to all cells is commonly done to improve the small-sample properties of the Chi-square test (and it asymptotically removes the first-order bias from the estimate of the log-odds ratio). This is (for a simple 2x2 table) equivalent to maximum-a-posteriori estimation using the Jeffreys prior (= a Beta(0.5,0.5) prior for each proportion), as well as to Firth's penalized-likelihood logistic regression (for stratified tables these two approaches no longer correspond to adding 0.5 to each cell). Doing so penalizes the effect size (and test decision) towards no effect.
If you have prior information, a Bayesian approach with informative priors may be another option. | Dealing with 0 in cell count for Fisher's exact test | Fisher's exact test does deal with zero cells without any problem.
Adding 0.5 to all cells is commonly done to improve the small-sample properties of the Chi-square test (and it asymptotica | Dealing with 0 in cell count for Fisher's exact test
Fisher's exact test does deal with zero cells without any problem.
Adding 0.5 to all cells is commonly done to improve the small-sample properties of the Chi-square test (and it asymptotically removes the first-order bias from the estimate of the log-odds ratio). This is (for a simple 2x2 table) equivalent to maximum-a-posteriori estimation using the Jeffreys prior (= a Beta(0.5,0.5) prior for each proportion), as well as to Firth's penalized-likelihood logistic regression (for stratified tables these two approaches no longer correspond to adding 0.5 to each cell). Doing so penalizes the effect size (and test decision) towards no effect.
If you have prior information, a Bayesian approach with informative priors may be another option. | Dealing with 0 in cell count for Fisher's exact test
Fisher's exact test does deal with zero cells without any problem.
Adding 0.5 to all cells is commonly done to improve the small-sample properties of the Chi-square test (and it asymptotica |
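A small R illustration of both points from the entry above, on a made-up 2x2 table: fisher.test() copes with the zero cell directly, while the 0.5 correction (Haldane-Anscombe style) only enters if you compute the log-odds ratio and its standard error yourself.
# Hypothetical 2x2 table with a zero cell: rows = groups, columns = event yes/no.
tab <- matrix(c(0, 10,
                7,  8), nrow = 2, byrow = TRUE)
fisher.test(tab)    # runs without any problem despite the zero cell

# Add 0.5 to every cell before computing the log-odds ratio and its standard error.
tab2 <- tab + 0.5
log_or    <- log(tab2[1, 1] * tab2[2, 2] / (tab2[1, 2] * tab2[2, 1]))
se_log_or <- sqrt(sum(1 / tab2))
c(log_or = log_or, se = se_log_or)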
49,939 | Introductory Statistics and Basic Experimental Designs for Computer Scientists | There are two ways I can see making these things interesting to graduate students in computer science. You can share statistical problems with them and ask them to think critically about the computational problems. Or you can share computational problems with them, and ask them to think critically about how statistics can help them reason about those problems.
By this time they're very familiar with algorithms, and comfortable receiving new computational problems. You could take a look at these course notes for inspiration on giving them statistical problems to reason about with their computational skills. By implementing their own algorithms to compute the statistics, they'll get some intuition about what the results mean.
On the other side of the same coin, they can get an appreciation for why these statistics matter to them by learning about applications of statistics in computer science, which I think is the main thrust of your question. To that end, there's plenty to draw from. Things in this category include voice and image recognition, recommendation systems, cryptography, optimization and parallelization. | Introductory Statistics and Basic Experimental Designs for Computer Scientists | There are two ways I can see making these things interesting to graduate students in computer science. You can share statistical problems with them and ask them to think critically about the computati | Introductory Statistics and Basic Experimental Designs for Computer Scientists
There are two ways I can see making these things interesting to graduate students in computer science. You can share statistical problems with them and ask them to think critically about the computational problems. Or you can share computational problems with them, and ask them to think critically about how statistics can help them reason about those problems.
By this time they're very familiar with algorithms, and comfortable receiving new computational problems. You could take a look at these course notes for inspiration on giving them statistical problems to reason about with their computational skills. By implementing their own algorithms to compute the statistics, they'll get some intuition about what the results mean.
On the other side of the same coin, they can get an appreciation for why these statistics matter to them by learning about applications of statistics in computer science, which I think is the main thrust of your question. To that end, there's plenty to draw from. Things in this category include voice and image recognition, recommendation systems, cryptography, optimization and parallelization. | Introductory Statistics and Basic Experimental Designs for Computer Scientists
There are two ways I can see making these things interesting to graduate students in computer science. You can share statistical problems with them and ask them to think critically about the computati |
49,940 | Introductory Statistics and Basic Experimental Designs for Computer Scientists | There are already quite some questions like this on this site, with good answers. Some are What should a graduate course in experimental design cover?
Recommended Text for Essential Experimental Statistics or search this site!
But you say "for computer scientists". What will you use experimental design for? One specific thing of interest for computer science is design for computer experiments. They are different from the usual kind in that the outcomes are not really random, but deterministic (if we talk about "simulations" with deterministic models like complicated computer codes for differential equations). Then, the usual ideas behind factorial experiments do not really apply, and you need some kind of "space-filling designs".
A book going into such ideas (like latin hypercubes) is "Design and Modeling for Computer Experiments" by Kai-Tai Fang, Runze Li and Agus Sudjianto: http://www.amazon.com/Modeling-Computer-Experiments-Chapman-Analysis/dp/1584885467/ref=sr_1_2?s=books&ie=UTF8&qid=1455574142&sr=1-2&keywords=experimental+design+for+computer+experiments
ADDED AFTER THE EDITS TO THE QUESTION
You can find a lot of information by googling, for instance the search term "use of experimental design in computer science". Here http://sing.stanford.edu/cs303-sp11/ is a syllabus for a course similar to the one you have to teach (701). Start with possible uses for experimental design within computer science:
Comparison of algorithms
Comparison of user interface designs
Planning of marketing campaigns | Introductory Statistics and Basic Experimental Designs for Computer Scientists | There are already quite some questions like this on this site, with good answers. Some are What should a graduate course in experimental design cover?
Recommended Text for Essential Experimental Stat | Introductory Statistics and Basic Experimental Designs for Computer Scientists
There are already quite some questions like this on this site, with good answers. Some are What should a graduate course in experimental design cover?
Recommended Text for Essential Experimental Statistics or search this site!
But you say "for computer scientists". What will you use experimental design for? One specific thing of interest for computer science is design for computer experiments. They are different from the usual kind in that the outcomes are not really random, but deterministic (if we talk about "simulations" with deterministic models like complicated computer codes for differential equations). Then, the usual ideas behind factorial experiments do not really apply, and you need some kind of "space-filling designs".
A book going into such ideas (like latin hypercubes) is "Design and Modeling for Computer Experiments" by Kai-Tai Fang, Runze Li and Agus Sudjianto: http://www.amazon.com/Modeling-Computer-Experiments-Chapman-Analysis/dp/1584885467/ref=sr_1_2?s=books&ie=UTF8&qid=1455574142&sr=1-2&keywords=experimental+design+for+computer+experiments
ADDED AFTER THE EDITS TO THE QUESTION
You can find a lot of information by googling, for instance the search term "use of experimental design in computer science". Here http://sing.stanford.edu/cs303-sp11/ is a syllabus for a course similar to the one you have to teach (701). Start with possible uses for experimental design within computer science:
Comparison of algorithms
Comparison of user interface designs
Planning of marketing campaigns | Introductory Statistics and Basic Experimental Designs for Computer Scientists
There are already quite some questions like this on this site, with good answers. Some are What should a graduate course in experimental design cover?
Recommended Text for Essential Experimental Stat |
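Since the entry above points at space-filling designs such as Latin hypercubes, here is a minimal R sketch; it assumes the lhs package is installed, and the number of runs, the number of inputs and the tolerance range are illustrative choices only.
library("lhs")                        # provides randomLHS()
set.seed(42)
design <- randomLHS(n = 20, k = 3)    # 20 runs over 3 inputs, each scaled to [0, 1]
# Rescale one column to a range of interest, e.g. a solver tolerance on a log scale.
tol <- 10^(-8 + 6 * design[, 1])
pairs(design, main = "Latin hypercube design for a computer experiment")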
49,941 | Including both individual and state fixed effects | If you have individual fixed effects, your estimate of the state dummy will be based upon within individual variation (i.e. it will be based upon the people that move across state lines). If no one switches state, then the state dummy will not be identified. | Including both individual and state fixed effects | If you have individual fixed effects, your estimate of the state dummy will be based upon within individual variation (i.e. it will be based upon the people that move across state lines). If no one sw | Including both individual and state fixed effects
If you have individual fixed effects, your estimate of the state dummy will be based upon within individual variation (i.e. it will be based upon the people that move across state lines). If no one switches state, then the state dummy will not be identified. | Including both individual and state fixed effects
If you have individual fixed effects, your estimate of the state dummy will be based upon within individual variation (i.e. it will be based upon the people that move across state lines). If no one sw |
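A tiny simulated illustration of the point above (the variable names and numbers are made up): with individual dummies included and nobody changing state, the state dummy is perfectly collinear with the individual dummies and lm() reports NA for it; as soon as one person moves, it becomes identified.
set.seed(1)
# Panel of 4 people observed 3 times each; nobody changes state.
d <- data.frame(id    = rep(1:4, each = 3),
                state = rep(c("A", "A", "B", "B"), each = 3))
d$y <- rnorm(nrow(d)) + d$id
coef(lm(y ~ factor(id) + factor(state), data = d))   # state coefficient comes back NA

d$state[12] <- "A"                                    # person 4 moves for one period
coef(lm(y ~ factor(id) + factor(state), data = d))   # state effect is now estimated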
49,942 | Is there a word for believing events are independent when they are not? | "Pseudo-replication" is one such name for the result of the error, if not the act of believing the premise. In a repeated measurements design, for instance, ignoring the correlation structure arising from the groups would make it look like you had more information than you actually did. (I remember an example where biology students divided tissue samples into smaller pieces to get a higher sample size and achieve significance.) But the naive point estimates would still be okay. In fact, sometimes they are used intentionally with the variance estimates corrected by a robust formula.
Another name might be "working assumption," though you wouldn't be guilty of belief either. Back when I worked with gene expression data, I remember the "independence working assumption" being used for constructing likelihoods for the vectors of gene expression measurements. As there were thousands of measured genes and the inter-dependencies were not understood, this independence assumption was tolerated. | Is there a word for believing events are independent when they are not? | "Pseudo-replication" is one such name for the result of the error, if not the act of believing the premise. In a repeated measurements design, for instance, ignoring the correlation structure arising | Is there a word for believing events are independent when they are not?
"Pseudo-replication" is one such name for the result of the error, if not the act of believing the premise. In a repeated measurements design, for instance, ignoring the correlation structure arising from the groups would make it look like you had more information than you actually did. (I remember an example where biology students divided tissue samples into smaller pieces to get a higher sample size and achieve significance.) But the naive point estimates would still be okay. In fact, sometimes they are used intentionally with the variance estimates corrected by a robust formula.
Another name might be "working assumption," though you wouldn't be guilty of belief either. Back when I worked with gene expression data, I remember the "independence working assumption" being used for constructing likelihoods for the vectors of gene expression measurements. As there were thousands of measured genes and the inter-dependencies were not understood, this independence assumption was tolerated. | Is there a word for believing events are independent when they are not?
"Pseudo-replication" is one such name for the result of the error, if not the act of believing the premise. In a repeated measurements design, for instance, ignoring the correlation structure arising |
49,943 | Ways to make a straight in poker | I am not a keen poker player but I believe this refers to the possibility of selecting a card from each of the four suits. For each of the five cards in the straight, this can be done in $\binom{4}{1} = 4$ ways . For instance, for a straight of 1,2,3,4,5 there are four fives among which you can choose. And likewise for the rest of the cards.
Also, a word about the $\binom{10}{1}$ factor. This accounts for the number of ways one can select five adjacent cards out of the thirteen, there are nine ways to do so, plus the possibility of selecting the last four cards and having a high ace, which still counts as a straight I hear. In all, 10 ways. | Ways to make a straight in poker | I am not a keen poker player but I believe this refers to the possibility of selecting a card from each of the four suits. For each of the five cards in the straight, this can be done in $\binom{4}{1} | Ways to make a straight in poker
I am not a keen poker player but I believe this refers to the possibility of selecting a card from each of the four suits. For each of the five cards in the straight, this can be done in $\binom{4}{1} = 4$ ways . For instance, for a straight of 1,2,3,4,5 there are four fives among which you can choose. And likewise for the rest of the cards.
Also, a word about the $\binom{10}{1}$ factor. This accounts for the number of ways one can select five adjacent cards out of the thirteen, there are nine ways to do so, plus the possibility of selecting the last four cards and having a high ace, which still counts as a straight I hear. In all, 10 ways. | Ways to make a straight in poker
I am not a keen poker player but I believe this refers to the possibility of selecting a card from each of the four suits. For each of the five cards in the straight, this can be done in $\binom{4}{1} |
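A one-line arithmetic check of the count discussed above, in R: the $\binom{10}{1}\binom{4}{1}^5$ figure includes straight flushes, which the usual poker hand ranking counts separately.
choose(10, 1) * choose(4, 1)^5            # 10240 rank-and-suit sequences in total
choose(10, 1) * choose(4, 1)^5 - 10 * 4   # 10200 straights once the 40 straight flushes are removed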
49,944 | How to justify a "complicated" regression model over a simpler model to a non-technical audience? | I need a good, intuitive explanation for why not explicitly accounting for these potential sources of variation will lead to a worse model
In my experience it is fairly easy to justify the use of mixed models to a non-techincal audience, at a level that they will be able to understand. I generally try to do this with real-world examples. The main justification for mixed model that I use is that it controls for non-independence of the observations due to repeated measures within a subject or group. Independence is one of the main assumptions of linear regression and should be familiar to anyone who has encountered regression. A good example is an investigation into the effect of a medication used to lower blood pressure. Let's say that we take 2 measurements, one at baseline and one at followup, say 1 month after starting the medication. It is obvious that if we had only 1 subject, not only would the result not be generalisable to any reasonable population, but with only 2 observations the model would be overfitted, indeed it would have perfect fit, so no inference would be possible. So right here, we can establish that fitting models to subsets of the data is not a good idea. Also, anyone who has even a rudimentary experience of statistics knows that a large sample is better. This does not require graduate level education at all (if the audience was not convinced on this point I would simply use another basic example, continuing on the blood pressure example, asking how we would estimate the average blood pressure of all the employees in the client's company. The larger the sample, the better the estimate, etc.)
Having discarded the idea of subsets, based on overfitting, and statistical power, we then consider a regression model ignoring the repeated measures within groups. Here, continuing on the blood pressure theme, I point out that if we measure different people over a few points in time, the measures for each person will be more similar to each other than to measures of another person. Again, no advanced statistical knowledge is needed for this. It violates the assumption of independence in linear regression. A person with high blood pressure today will likely have high blood pressure tomorrow. Fitting models with random intercepts for subjects specifically accounts for this non-independence. Furthermore, we might expect that these correlations within individuals would diminish over time, or that the response to a drug may be different in different individuals. Again, no advanced statistical knowledge is needed. Mixed models allow for this to be explicitly modelled using random slopes.
and sterling references for that contention
The use of mixed models is well established in lots of different areas of applied research, so I would just point to some of the classic textbooks: (eg Bryk & Raudenbush (1992), Snijders & Bosker (2011) and Pinheiro & Bates (2000)). Between them, these 3 books have around 45,000 citations according to Google Scholar, which I believe qualifies them as "sterling" references.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Sage Publications, Inc.
Pinheiro, J., & Bates, D. (2000). Mixed-effects models in S and S-PLUS. Springer Science & Business Media.
Snijders, T. A., & Bosker, R. J. (2011). Multilevel analysis: An introduction to basic and advanced multilevel modeling. Sage. | How to justify a "complicated" regression model over a simpler model to a non-technical audience? | I need a good, intuitive explanation for why not explicitly accounting for these potential sources of variation will lead to a worse model
In my experience it is fairly easy to justify the use of mix | How to justify a "complicated" regression model over a simpler model to a non-technical audience?
I need a good, intuitive explanation for why not explicitly accounting for these potential sources of variation will lead to a worse model
In my experience it is fairly easy to justify the use of mixed models to a non-techincal audience, at a level that they will be able to understand. I generally try to do this with real-world examples. The main justification for mixed model that I use is that it controls for non-independence of the observations due to repeated measures within a subject or group. Independence is one of the main assumptions of linear regression and should be familiar to anyone who has encountered regression. A good example is an investigation into the effect of a medication used to lower blood pressure. Let's say that we take 2 measurements, one at baseline and one at followup, say 1 month after starting the medication. It is obvious that if we had only 1 subject, not only would the result not be generalisable to any reasonable population, but with only 2 observations the model would be overfitted, indeed it would have perfect fit, so no inference would be possible. So right here, we can establish that fitting models to subsets of the data is not a good idea. Also, anyone who has even a rudimentary experience of statistics knows that a large sample is better. This does not require graduate level education at all (if the audience was not convinced on this point I would simply use another basic example, continuing on the blood pressure example, asking how we would estimate the average blood pressure of all the employees in the client's company. The larger the sample, the better the estimate, etc.)
Having discarded the idea of subsets, based on overfitting, and statistical power, we then consider a regression model ignoring the repeated measures within groups. Here, continuing on the blood pressure theme, I point out that if we measure different people over a few points in time, the measures for each person will be more similar to each other than to measures of another person. Again, no advanced statistical knowledge is needed for this. It violates the assumption of independence in linear regression. A person with high blood pressure today will likely have high blood pressure tomorrow. Fitting models with random intercepts for subjects specifically accounts for this non-independence. Furthermore, we might expect that these correlations within individuals would diminish over time, or that the response to a drug may be different in different individuals. Again, no advanced statistical knowledge is needed. Mixed models allow for this to be explicitly modelled using random slopes.
and sterling references for that contention
The use of mixed models is well established in lots of different areas of applied research, so I would just point to some of the classic textbooks: (eg Bryk & Raudenbush (1992), Snijders & Bosker (2011) and Pinheiro & Bates (2000)). Between them, these 3 books have around 45,000 citations according to Google Scholar, which I believe qualifies them as "sterling" references.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Sage Publications, Inc.
Pinheiro, J., & Bates, D. (2000). Mixed-effects models in S and S-PLUS. Springer Science & Business Media.
Snijders, T. A., & Bosker, R. J. (2011). Multilevel analysis: An introduction to basic and advanced multilevel modeling. Sage. | How to justify a "complicated" regression model over a simpler model to a non-technical audience?
I need a good, intuitive explanation for why not explicitly accounting for these potential sources of variation will lead to a worse model
In my experience it is fairly easy to justify the use of mix |
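To make the blood-pressure example concrete, here is a minimal lme4 sketch on simulated data (all variable names and the simulated effect sizes are made up for illustration): the random intercept captures each subject's own blood-pressure level and the random slope lets the response over time differ between subjects.
library("lme4")
set.seed(3)
# Hypothetical long-format data: 30 subjects, blood pressure measured at months 0-3.
bp_long <- data.frame(subject = factor(rep(1:30, each = 4)),
                      month   = rep(0:3, times = 30))
bp_long$bp <- 140 + rnorm(30, sd = 10)[bp_long$subject] +             # subject-specific level
  (-3 + rnorm(30, sd = 2)[bp_long$subject]) * bp_long$month +         # subject-specific response
  rnorm(nrow(bp_long), sd = 5)
# Random intercept = each person's baseline; random slope = person-specific trend over time.
fit <- lmer(bp ~ month + (1 + month | subject), data = bp_long)
summary(fit)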
49,945 | Producing samples from exponential family conditional on minimal sufficient statistic | Given that the vector $\mathbf{T}(X)=(T_1(X),\ldots,T_K(X))$ is sufficient, this means that the distribution of $X$ conditional on $\mathbf{T}(X)$ does not depend on the parameters $\eta_k$.
If $$p(x)\propto h(x)\exp\left\{-\sum_k \eta_k T_k(x)\right\}$$ the distribution of $X$ given $\mathbf{T}(X)=t$ will have a density proportional to $h(x)$ on the manifold $\mathbf{T}(x)=t$.
I am not aware of a generic algorithm to handle this simulation. In the case of the energy function, the distribution of $X$ is then uniform over the set of $x$'s such that $E(x)=e$, the observed value of the sufficient statistic. | Producing samples from exponential family conditional on minimal sufficient statistic | Given that the vector $\mathbf{T}(X)=(T_1(X),\ldots,T_K(X))$ is sufficient, this means that the distribution of $X$ conditional on $\mathbf{T}(X)$ does not depend on the parameters $\eta_k$.
If $$p(x) | Producing samples from exponential family conditional on minimal sufficient statistic
Given that the vector $\mathbf{T}(X)=(T_1(X),\ldots,T_K(X))$ is sufficient, this means that the distribution of $X$ conditional on $\mathbf{T}(X)$ does not depend on the parameters $\eta_k$.
If $$p(x)\propto h(x)\exp\left\{-\sum_k \eta_k T_k(x)\right\}$$ the distribution of $X$ given $\mathbf{T}(X)=t$ will have a density proportional to $h(x)$ on the manifold $\mathbf{T}(x)=t$.
I am not aware of a generic algorithm to handle this simulation. In the case of the energy function, the distribution of $X$ is then uniform over the set of $x$'s such that $E(x)=e$, the observed value of the sufficient statistic. | Producing samples from exponential family conditional on minimal sufficient statistic
Given that the vector $\mathbf{T}(X)=(T_1(X),\ldots,T_K(X))$ is sufficient, this means that the distribution of $X$ conditional on $\mathbf{T}(X)$ does not depend on the parameters $\eta_k$.
If $$p(x) |
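A concrete special case of the statement above, as a small R sketch (the exponential model, n and t are purely illustrative choices): for i.i.d. exponential observations h(x) is constant and the sufficient statistic is the sum, so the conditional law of the sample given sum = t is uniform on the simplex, and uniform spacings give exact draws from it.
# Draw (X_1, ..., X_n) uniformly on the simplex {x_i > 0, sum(x) = t}:
# sorted Uniform(0, t) points cut (0, t) into n spacings that are uniform on that simplex.
rcond_exp_sum <- function(n, t) diff(c(0, sort(runif(n - 1, 0, t)), t))
set.seed(4)
x <- rcond_exp_sum(n = 5, t = 10)
x
sum(x)   # always equals t, whatever the (unobserved) rate parameter was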
49,946 | R - Approach to find outliers/artefacts in blood pressure curve | I'd suggest looking at the changepoint package, in particular cpt.var. At least based on your three examples, it looks like your artifacts involve breaks in variance (first two examples lower, last example higher).
On a more empirical note, you could also try the runmad (windowed MAD) from the caTools package. | R - Approach to find outliers/artefacts in blood pressure curve | I'd suggest looking at the changepoint package, in particular cpt.var. At least based on your three examples, it looks like your artifacts involve breaks in variance (first two examples lower, last ex | R - Approach to find outliers/artefacts in blood pressure curve
I'd suggest looking at the changepoint package, in particular cpt.var. At least based on your three examples, it looks like your artifacts involve breaks in variance (first two examples lower, last example higher).
On a more empirical note, you could also try the runmad (windowed MAD) from the caTools package. | R - Approach to find outliers/artefacts in blood pressure curve
I'd suggest looking at the changepoint package, in particular cpt.var. At least based on your three examples, it looks like your artifacts involve breaks in variance (first two examples lower, last ex |
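A minimal sketch of both suggestions on a simulated trace (the signal, window length and tuning choices below are made-up stand-ins for a real arterial waveform):
library("changepoint")
library("caTools")
set.seed(5)
# Simulated "trace": pulsatile segment, flat artifact, pulsatile segment again.
x <- c(sin(seq(0, 20 * pi, length.out = 2000)) + rnorm(2000, sd = 0.2),
       rnorm(500, sd = 0.02),
       sin(seq(0, 10 * pi, length.out = 1000)) + rnorm(1000, sd = 0.2))
fit <- cpt.var(x, method = "PELT")   # look for breaks in variance
cpts(fit)                            # estimated changepoint locations
rm  <- runmad(x, k = 101)            # windowed MAD; drops towards zero in the flat artifact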
49,947 | R - Approach to find outliers/artefacts in blood pressure curve | Blood pressure traces have the advantage of containing a well defined nearly periodic structure. As I recall from my long-ago training in physiology there is a substantial history of frequency analysis of blood pressure traces.
So you might consider an application of time series frequency analysis. The artifacts seem to have either much lower or much higher frequency components (flat resp. noisy traces) than the normal blood pressure traces do, so detecting breaks over time in the frequency components of the signal might work well. See the Frequency Analysis and the Decomposition and Filtering sections of the CRAN Task View for Time Series. The kza package in particular includes facilities for break detection, with an example in its manual.
Distinguishing true signals from artifacts in blood pressure, pulse-oximeter and electrocardiogram traces is of great practical importance for clinical device manufacturers, medical professionals, and patients. I've seen artifacts cut out or noted electronically on pulse-oximeter and electrocardiogram traces when visiting friends in the hospital, so there probably already are real-time solutions for this problem, although they might be covered by intellectual property restrictions rather than being open-source. | R - Approach to find outliers/artefacts in blood pressure curve | Blood pressure traces have the advantage of containing a well defined nearly periodic structure. As I recall from my long-ago training in physiology there is a substantial history of frequency analysi | R - Approach to find outliers/artefacts in blood pressure curve
Blood pressure traces have the advantage of containing a well defined nearly periodic structure. As I recall from my long-ago training in physiology there is a substantial history of frequency analysis of blood pressure traces.
So you might consider an application of time series frequency analysis. The artifacts seem to have either much lower or much higher frequency components (flat resp. noisy traces) than the normal blood pressure traces do, so detecting breaks over time in the frequency components of the signal might work well. See the Frequency Analysis and the Decomposition and Filtering sections of the CRAN Task View for Time Series. The kza package in particular includes facilities for break detection, with an example in its manual.
Distinguishing true signals from artifacts in blood pressure, pulse-oximeter and electrocardiogram traces is of great practical importance for clinical device manufacturers, medical professionals, and patients. I've seen artifacts cut out or noted electronically on pulse-oximeter and electrocardiogram traces when visiting friends in the hospital, so there probably already are real-time solutions for this problem, although they might be covered by intellectual property restrictions rather than being open-source. | R - Approach to find outliers/artefacts in blood pressure curve
Blood pressure traces have the advantage of containing a well defined nearly periodic structure. As I recall from my long-ago training in physiology there is a substantial history of frequency analysi |
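One simple way to operationalise the frequency idea with base R (a sketch of my own, not taken from the cited literature): compute the periodogram in non-overlapping windows and flag windows whose power is not concentrated in the expected pulse band; the cutoffs and window length below are arbitrary.
set.seed(6)
# Simulated trace: a pulsatile segment followed by a flat, noisy artifact.
x <- c(sin(seq(0, 40 * pi, length.out = 2000)) + rnorm(2000, sd = 0.2),
       rnorm(500, sd = 0.05))
pulse_share <- function(seg, cutoff = 0.05) {
  sp <- spec.pgram(seg, taper = 0, plot = FALSE)   # raw periodogram of the window
  sum(sp$spec[sp$freq < cutoff]) / sum(sp$spec)    # share of power in the low/pulse band
}
win    <- 250
starts <- seq(1, length(x) - win + 1, by = win)
shares <- sapply(starts, function(s) pulse_share(x[s:(s + win - 1)]))
starts[shares < 0.5]   # windows without a dominant pulse component: candidate artifacts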
49,948 | Unable to reproduce paper results (sample size) | Maybe the author's
implementation - despite the description - starts with a large n and
works back and stops at the first time we are above alpha, thus
returning the last n where the sum was below alpha? At least an
implementation of such an algo would be consistent with the results of the paper
(test code below).
There appears to be more than one n where the sum switches from being
above alpha at n to being below alpha at n+1... However, I do think
it's better to go from small n and up.
##Go the other way...not my 1st choice though...
ciss.midp2 <- function (p0, d, alpha, nMax = 1e+05) {
pi.L <- p0 - d
pi.U <- p0 + d
if (pi.L < 0)
stop("p0 - d is below zero!")
if (pi.U > 1)
stop("p0 + d is above one!")
n <- nMax
done <- FALSE
while (!done & (n > 0)) {
n <- n - 1
x <- round(p0 * n)
lhs2 <- 1/2 * dbinom(x, size = n, prob = pi.L) +
pbinom(x, size = n, prob = pi.L, lower.tail = FALSE) +
(1 - pbinom(x-1, size = n, prob = pi.U, lower.tail=FALSE)) +
1/2*dbinom(x, size = n, prob = pi.U)
if (!is.na(lhs2)) {
done <- (lhs2 > alpha)
}
}
return(n+1)
}
library("binomSamSize")
p.grid <- seq(0.5, 0.9, 0.05)
sapply(p.grid, function(i) ciss.midp(p0=i, d=0.1, alpha=0.1))
[1] 68 67 65 61 57 50 42 34 23
sapply(p.grid, function(i) ciss.midp2(p0=i, d=0.1, alpha=0.1))
[1] 68 67 65 61 57 52 45 39 29 | Unable to reproduce paper results (sample size) | Maybe the author's
implementation - despite the description - starts with a large n and
works back and stops at the first time we are above alpha, thus
returning the last n where the sum was below al | Unable to reproduce paper results (sample size)
Maybe the author's
implementation - despite the description - starts with a large n and
works back and stops at the first time we are above alpha, thus
returning the last n where the sum was below alpha? At least an
implementation of such an algo would be consistent with the results of the paper
(test code below).
There appears to be more than one n where the sum switches from being
above alpha at n to being below alpha at n+1... However, I do think
it's better to go from small n and up.
##Go the other way...not my 1st choice though...
ciss.midp2 <- function (p0, d, alpha, nMax = 1e+05) {
pi.L <- p0 - d
pi.U <- p0 + d
if (pi.L < 0)
stop("p0 - d is below zero!")
if (pi.U > 1)
stop("p0 + d is above one!")
n <- nMax
done <- FALSE
while (!done & (n > 0)) {
n <- n - 1
x <- round(p0 * n)
lhs2 <- 1/2 * dbinom(x, size = n, prob = pi.L) +
pbinom(x, size = n, prob = pi.L, lower.tail = FALSE) +
(1 - pbinom(x-1, size = n, prob = pi.U, lower.tail=FALSE)) +
1/2*dbinom(x, size = n, prob = pi.U)
if (!is.na(lhs2)) {
done <- (lhs2 > alpha)
}
}
return(n+1)
}
library("binomSamSize")
p.grid <- seq(0.5, 0.9, 0.05)
sapply(p.grid, function(i) ciss.midp(p0=i, d=0.1, alpha=0.1))
[1] 68 67 65 61 57 50 42 34 23
sapply(p.grid, function(i) ciss.midp2(p0=i, d=0.1, alpha=0.1))
[1] 68 67 65 61 57 52 45 39 29 | Unable to reproduce paper results (sample size)
Maybe the author's
implementation - despite the description - starts with a large n and
works back and stops at the first time we are above alpha, thus
returning the last n where the sum was below al |
49,949 | Is this a mistake in my exercise? A density for a transformed variable | It's a mistake in the exercise: you made all the progress one can.
After all, suppose there exists a continuous version of the marginal density $\tilde f$ with $\tilde{f}(0)\ne 0$ and $g(x,y)=\tilde{f}(x)\tilde{f}(y)$. (The standard bivariate Normal distribution has this property.)
Modify $\tilde f$ to a new density $f$ by setting $f(0)=0$ and $f=\tilde f$ elsewhere. Because this changes $\tilde f$ only on a set $\{0\}$ of measure zero, $f$ also is a marginal density for either $X$ or $Y$. Moreover, $f(x)f(y)$ differs from $\tilde{f}(x)\tilde{f}(y)$ at most on the set $\left(\{0\}\times\mathbb{R}\right) \cup \left(\mathbb{R}\times \{0\}\right)$, which has measure zero. If the conclusion of the exercise were correct, it would imply
$$g(x,y)=\tilde{f}(x)\tilde{f}(y) = f(0)f\left(\sqrt{x^2+y^2}\right) [\text{a.e.}] = 0\times f\left(\sqrt{x^2+y^2}\right) = 0.$$
However, that's not a density because it integrates to zero. | Is this a mistake in my exercise? A density for a transformed variable | It's a mistake in the exercise: you made all the progress one can.
After all, suppose there exists a continuous version of the marginal density $\tilde f$ with $\tilde{f}(0)\ne 0$ and $g(x,y)=\tilde{ | Is this a mistake in my exercise? A density for a transformed variable
It's a mistake in the exercise: you made all the progress one can.
After all, suppose there exists a continuous version of the marginal density $\tilde f$ with $\tilde{f}(0)\ne 0$ and $g(x,y)=\tilde{f}(x)\tilde{f}(y)$. (The standard bivariate Normal distribution has this property.)
Modify $\tilde f$ to a new density $f$ by setting $f(0)=0$ and $f=\tilde f$ elsewhere. Because this changes $\tilde f$ only on a set $\{0\}$ of measure zero, $f$ also is a marginal density for either $X$ or $Y$. Moreover, $f(x)f(y)$ differs from $\tilde{f}(x)\tilde{f}(y)$ at most on the set $\left(\{0\}\times\mathbb{R}\right) \cup \left(\mathbb{R}\times \{0\}\right)$, which has measure zero. If the conclusion of the exercise were correct, it would imply
$$g(x,y)=\tilde{f}(x)\tilde{f}(y) = f(0)f\left(\sqrt{x^2+y^2}\right) [\text{a.e.}] = 0\times f\left(\sqrt{x^2+y^2}\right) = 0.$$
However, that's not a density because it integrates to zero. | Is this a mistake in my exercise? A density for a transformed variable
It's a mistake in the exercise: you made all the progress one can.
After all, suppose there exists a continuous version of the marginal density $\tilde f$ with $\tilde{f}(0)\ne 0$ and $g(x,y)=\tilde{ |
49,950 | Random Forests for predictor importance (Matlab) | What you describe would be one approach. For classification, TreeBagger by default randomly selects sqrt(p) predictors for each decision split (setting recommended by Breiman). Depending on your data and tree depth, some of your 50 predictors could be considered fewer times than others for splits just because they get unlucky. This is why for estimation of predictor importance I usually set 'nvartosample' to 'all'. This gives a model with somewhat lower accuracy but ensures that every predictor is sensibly included.
If you run TreeBagger at the default settings, this is generally not a problem. For example, if you have two strongly correlated features and one of them is included in the 7 predictors selected at random for a split, the other is likely not included in these 7. If you want to adopt my scheme by inspecting all predictors for each split, use surrogate splits by setting 'surrogate' to 'all'. Training will take longer but you will have full information about predictor importance, irrespective of the other predictors and of associations among predictors.
The TreeBagger doc and help have this statement at the bottom:
"In addition to the optional arguments above, this method accepts all optional fitctree and fitrtree arguments with the exception of 'minparent'. Refer to the documentation for fitctree and fitrtree for more detail."
Look at the doc for fitctree and fitrtree.
fitensemble for the 'Bag' method implements Breiman's random forest with the same default settings as in TreeBagger. You can change the number of features to sample to whatever you like; just read the doc for templateTree. The object returned by fitensemble has a predictorImportance method which shows cumulative gains due to splits on each predictor. The TreeBagger's equivalent of that is the DeltaCriterionDecisionSplit property (or something like that). In addition, TreeBagger has 3 OOBPermuted properties that are alternative measures of predictor importance. | Random Forests for predictor importance (Matlab) | What you describe would be one approach. For classification, TreeBagger by default randomly selects sqrt(p) predictors for each decision split (setting recommended by Breiman). Depending on your data | Random Forests for predictor importance (Matlab)
What you describe would be one approach. For classification, TreeBagger by default randomly selects sqrt(p) predictors for each decision split (setting recommended by Breiman). Depending on your data and tree depth, some of your 50 predictors could be considered fewer times than others for splits just because they get unlucky. This is why for estimation of predictor importance I usually set 'nvartosample' to 'all'. This gives a model with somewhat lower accuracy but ensures that every predictor is sensibly included.
If you run TreeBagger at the default settings, this is generally not a problem. For example, if you have two strongly correlated features and one of them is included in the 7 predictors selected at random for a split, the other is likely not included in these 7. If you want to adopt my scheme by inspecting all predictors for each split, use surrogate splits by setting 'surrogate' to 'all'. Training will take longer but you will have full information about predictor importance, irrespective of the other predictors and of associations among predictors.
The TreeBagger doc and help have this statement at the bottom:
"In addition to the optional arguments above, this method accepts all optional fitctree and fitrtree arguments with the exception of 'minparent'. Refer to the documentation for fitctree and fitrtree for more detail."
Look at the doc for fitctree and fitrtree.
fitensemble for the 'Bag' method implements Breiman's random forest with the same default settings as in TreeBagger. You can change the number of features to sample to whatever you like; just read the doc for templateTree. The object returned by fitensemble has a predictorImportance method which shows cumulative gains due to splits on each predictor. The TreeBagger's equivalent of that is the DeltaCriterionDecisionSplit property (or something like that). In addition, TreeBagger has 3 OOBPermuted properties that are alternative measures of predictor importance. | Random Forests for predictor importance (Matlab)
What you describe would be one approach. For classification, TreeBagger by default randomly selects sqrt(p) predictors for each decision split (setting recommended by Breiman). Depending on your data |
49,951 | Random Forests for predictor importance (Matlab) | Yes, sampling all predictors would typically hurt the model accuracy. It is predictor importance values we are after, not accuracy. Either way, this is a heuristic procedure. Using random forest to estimate predictor importance for SVM can only give you a notion of what predictors could be important. One can construct datasets in which RF fails to identify predictors that are important for SVM (false negatives) and the other way around (false positives). If you want to have more trust in your predictor selection procedure, do sequential backward elimination using SVM as the underlying learner. | Random Forests for predictor importance (Matlab) | Yes, sampling all predictors would typically hurt the model accuracy. It is predictor importance values we are after, not accuracy. Either way, this is a heuristic procedure. Using random forest to es | Random Forests for predictor importance (Matlab)
Yes, sampling all predictors would typically hurt the model accuracy. It is predictor importance values we are after, not accuracy. Either way, this is a heuristic procedure. Using random forest to estimate predictor importance for SVM can only give you a notion of what predictors could be important. One can construct datasets in which RF fails to identify predictors that are important for SVM (false negatives) and the other way around (false positives). If you want to have more trust in your predictor selection procedure, do sequential backward elimination using SVM as the underlying learner. | Random Forests for predictor importance (Matlab)
Yes, sampling all predictors would typically hurt the model accuracy. It is predictor importance values we are after, not accuracy. Either way, this is a heuristic procedure. Using random forest to es |
49,952 | How exactly does one marginalize over parameters in an N-dimensional likelihood? | Say you have the state of Information $I$, some observations $\{y_i\}$ and some parameters $\theta_p$ where $p \in \{1,2, \ldots n\}$ then in the continuous case you get the marginal likelihoods from the joint likelihood $p(\theta_1,\theta_2, \ldots \theta_n |\{y_i\},I)$ by integration:
\begin{align*}
p(\theta_k|\{y_i\},I) &=\int_{-\infty}^{+\infty} \ldots \int_{-\infty}^{+\infty} p(\theta_1,\theta_2, \ldots \theta_n |\{y_i\},I)\,d\theta_{p_1} \cdots \,d\theta_{p_{n-1}}
\end{align*}
where $p_j \in \{1,2, \ldots, n\}\setminus \{k\}$.
In the discrete case you will have sums instead of integrals as you can see here. | How exactly does one marginalize over parameters in an N-dimensional likelihood? | Say you have the state of Information $I$, some observations $\{y_i\}$ and some parameters $\theta_p$ where $p \in \{1,2, \ldots n\}$ then in the continuous case you get the marginal likelihoods from | How exactly does one marginalize over parameters in an N-dimensional likelihood?
Say you have the state of Information $I$, some observations $\{y_i\}$ and some parameters $\theta_p$ where $p \in \{1,2, \ldots n\}$ then in the continuous case you get the marginal likelihoods from the joint likelihood $p(\theta_1,\theta_2, \ldots \theta_n |\{y_i\},I)$ by integration:
\begin{align*}
p(\theta_k|\{y_i\},I) &=\int_{-\infty}^{+\infty} \ldots \int_{-\infty}^{+\infty} p(\theta_1,\theta_2, \ldots \theta_n |\{y_i\},I)\,d\theta_{p_1} \cdots \,d\theta_{p_{n-1}}
\end{align*}
where $p_j \in \{1,2, \ldots n\}\setminus k$
In the discrete case you will have sums instead of integrals as you can see here. | How exactly does one marginalize over parameters in an N-dimensional likelihood?
Say you have the state of Information $I$, some observations $\{y_i\}$ and some parameters $\theta_p$ where $p \in \{1,2, \ldots n\}$ then in the continuous case you get the marginal likelihoods from |
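A small numerical sketch of the continuous case above, with a made-up two-parameter Gaussian likelihood (the data, grids and parameter names are purely illustrative): on a grid, the integral over the nuisance parameter becomes a sum times the grid spacing.
set.seed(7)
y     <- rnorm(20, mean = 1, sd = 2)                 # observed data
mu    <- seq(-2, 4, length.out = 200)                # parameter of interest
sigma <- seq(0.5, 5, length.out = 200)               # nuisance parameter
L     <- outer(mu, sigma, Vectorize(function(m, s) prod(dnorm(y, m, s))))
# Marginalise over sigma: integrate (here, sum times grid step) along the sigma axis.
L_mu  <- rowSums(L) * diff(sigma)[1]
plot(mu, L_mu / max(L_mu), type = "l", ylab = "marginal likelihood of mu (rescaled)")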
49,953 | How exactly does one marginalize over parameters in an N-dimensional likelihood? | You should always reference your quotes.
The only reference on the Internet I found with this quote
"For each model, we determine the best fit parameters from the peak of
the N-dimensional likelihood surface. For each parameter in the model
we also compute its one dimensional likelihood function by
marginalizing over all other parameters." (p.3)
is a cosmology arXiv paper on WMAP by Spergel et al. (2003). If you turn the page after this quote, you will find an equation defining the expectation under the "marginal likelihood" as
$$\langle \alpha_i \rangle = \int d^N \alpha\,\mathcal{L}(\alpha)\,\alpha_i\,.$$ This means that the "marginal likelihood" is
$$\int d^{N-1} \alpha_{-i}\,\mathcal{L}(\alpha)\,.$$
If I quote from the previous sentences of the paper as well
"For each model studied in the paper, we use a Monte Carlo Markov Chain to explore the likelihood surface. We assume flat priors in our basic parameters, impose positivity constraints on the matter and baryon
density (these limits lie at such low likelihood that they are
unimportant for the models. We assume a flat prior in τ , the optical
depth, but bound τ < 0.3. This prior has little effect on the fits but
keeps the Markov Chain out of unphysical regions of parameter space. (p.3)"
this means that the authors take a Bayesian stance with a flat prior in their parametrisation. | How exactly does one marginalize over parameters in an N-dimensional likelihood? | You should always reference your quotes.
The only reference on the Internet I found with this quote
"For each model, we determine the best fit parameters from the peak of
the N-dimensional likelih | How exactly does one marginalize over parameters in an N-dimensional likelihood?
You should always reference your quotes.
The only reference on the Internet I found with this quote
"For each model, we determine the best fit parameters from the peak of
the N-dimensional likelihood surface. For each parameter in the model
we also compute its one dimensional likelihood function by
marginalizing over all other parameters." (p.3)
is an cosmology arxiv paper on WMAP by Spergel et al. (2003). If you turn the page after this quote, you will find an equation defining the expectation under the "marginal likelihood" as
$$<α_i> = \int d^N α\mathcal{L}(α)α_i\,.$$ This means that the "marginal likelihood" is
$$\int d^{N-1} α_{-i}\mathcal{L}(α)\,.$$
If I quote from the previous sentences of the paper as well
"For each model studied in the paper, we use a Monte Carlo Markov Chain to explore the likelihood surface. We assume flat priors in our basic parameters, impose positivity constraints on the matter and baryon
density (these limits lie at such low likelihood that they are
unimportant for the models. We assume a flat prior in τ , the optical
depth, but bound τ < 0.3. This prior has little effect on the fits but
keeps the Markov Chain out of unphysical regions of parameter space. (p.3)"
this means that the authors take a Bayesian stance with a flat prior in their parametrisation. | How exactly does one marginalize over parameters in an N-dimensional likelihood?
You should always reference your quotes.
The only reference on the Internet I found with this quote
"For each model, we determine the best fit parameters from the peak of
the N-dimensional likelih |
49,954 | overtrain the CNN? | CNN, like any other neural network, overfits to the training data if it is trained for too long on the same training dataset. The purpose of the validation set is to stop training when performance on validation set starts decreasing, indicating that the model is overfitting the training data. Check this for more info. | overtrain the CNN? | CNN, like any other neural network, overfits to the training data if it is trained for too long on the same training dataset. The purpose of the validation set is to stop training when performance on | overtrain the CNN?
CNN, like any other neural network, overfits to the training data if it is trained for too long on the same training dataset. The purpose of the validation set is to stop training when performance on validation set starts decreasing, indicating that the model is overfitting the training data. Check this for more info. | overtrain the CNN?
CNN, like any other neural network, overfits to the training data if it is trained for too long on the same training dataset. The purpose of the validation set is to stop training when performance on |
49,955 | Stationarity Tests in R, checking mean, variance and covariance | I also asked myself a similar question.
There is stationarity() from the {fractal} package in R, which uses the PSR test based on spectral analysis, and hwtos2 from {locits}, which uses a wavelet spectrum test.
That answer can be found on the following link:
http://www.maths.bris.ac.uk/~guy/Research/LSTS/TOS.html | Stationarity Tests in R, checking mean, variance and covariance | I also asked myself a similar question.
There is stationarity() from the {fractal} package in R, which uses the PSR test based on spectral analysis, and hwtos2 from {locits}, which uses a wavelet spectrum te | Stationarity Tests in R, checking mean, variance and covariance
I also asked myself a similar question.
There is stationarity() from the {fractal} package in R, which uses the PSR test based on spectral analysis, and hwtos2 from {locits}, which uses a wavelet spectrum test.
That answer can be found on the following link:
http://www.maths.bris.ac.uk/~guy/Research/LSTS/TOS.html | Stationarity Tests in R, checking mean, variance and covariance
I also asked myself a similar question.
There is stationarity() from the {fractal} package in R, which uses the PSR test based on spectral analysis, and hwtos2 from {locits}, which uses a wavelet spectrum te |
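A minimal usage sketch based on the function names given above; I have not verified the exact argument lists, so the calls below are assumptions to be checked against each package's help page (hwtos2 in particular expects a series whose length is a power of two).
# install.packages(c("fractal", "locits"))   # assuming both packages are available
library("fractal")
library("locits")
set.seed(8)
x <- rnorm(512)      # length 512 = 2^9, as required by hwtos2
stationarity(x)      # Priestley-Subba Rao (PSR) test from {fractal}
hwtos2(x)            # Haar wavelet test of second-order stationarity from {locits}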
49,956 | Scaling vs Offsetting in Quasi-Poisson GLM | We can consider the GLM as having two components, model for the mean and a model for the variance. This is even more explicit with the quasi-GLM case.
The mean is assumed proportional to the exposure; with a log-link (which is what I presume you have), you could try to adjust for the effect of exposure on the mean either by dividing the data by exposure or by using an offset of log-exposure. Both have the same effect on the mean.
However, depending on the particular distribution that's operating*, they can have different effects on the variance.
*(as well as other drivers like dependence and unmodelled effects)
When you divide by exposure you divide the variance by exposure-squared (this is just a basic variance property - $\text{Var}(\frac{X}{e_i})=\frac{1}{e_i^2} \text{Var}(X)$). Equivalently, scaling by exposure reduces the standard deviation in proportion to the mean (leaving the coefficient of variation constant). This might suit claim amounts but doesn't fit with a quasi-Poisson model for claim counts.
[For example, a model for aggregate claim payments might consider a Gamma GLM (which has variance proportional to mean squared, or constant coefficient of variation) having an offset of log-exposure will reduce the fitted mean by a factor of exposure and so (because the model has variance proportional to mean-squared) will reduce variance by the square of exposure. So for a Gamma GLM with log-link the two approaches are identical; this is also true for other models where your model for the mean is proportional to a scale parameter and the variance is proportional to the square of the mean, including lognormal models, Weibull models and a number of others.]
For a quasi-Poisson GLM with log link, in the model, the variance is proportional to mean, not mean squared. As such, when you fit log-exposure as an offset it reduces fitted variance according to the model - proportional to the change in mean. As we saw above, when you divide by exposure you change it according to mean-squared.
If the quasi-Poisson model was actually the correct model for your counts, then you should certainly use an offset of log-exposure, since it would describe the impact on variance correctly as Ben indicated.
However, for claim counts, a quasi-Poisson model is at best a rough approximation.
If you have heterogeneity, a negative binomial would tend to model the variability better, and it doesn't have variance proportional to mean; however, it often doesn't really capture the variance effect either -- some important drivers of claim frequency may lead to an even stronger relationship to the mean.
Realistically, exposure won't exactly impact the variance in proportion to the mean. Many effects we're aware of will work to make that contribution to the variance increase somewhat faster than the mean does.
For counts, the variance assumption in the quasi-Poisson model will at least sometimes be close to correct; if your model is quasi Poisson, then you'll certainly get the variance wrong (according to your model) if you divide by exposure.
You could make an assessment of whether variance is well approximated as proportional to mean at model fitting time by considering the usual model diagnostics (if it isn't, you should not be using a model that says it is; if it is, then you should deal with exposure properly according to your model).
[Of course, exposure may not impact the variance in the model the same way as the rest of the drivers tend to, but that might be introducing more complexity than you have data to deal with.] | Scaling vs Offsetting in Quasi-Poisson GLM | We can consider the GLM as having two components, model for the mean and a model for the variance. This is even more explicit with the quasi-GLM case.
The mean is assumed proportional to the exposure; | Scaling vs Offsetting in Quasi-Poisson GLM
We can consider the GLM as having two components, model for the mean and a model for the variance. This is even more explicit with the quasi-GLM case.
The mean is assumed proportional to the exposure; with a log-link (which is what I presume you have), you could try to adjust for the effect of exposure on the mean either by dividing the data by exposure or by using an offset of log-exposure. Both have the same effect on the mean.
However, depending on the particular distribution that's operating*, they can have different effects on the variance.
*(as well as other drivers like dependence and unmodelled effects)
When you divide by exposure you divide the variance by exposure-squared (this is just a basic variance property - $\text{Var}(\frac{X}{e_i})=\frac{1}{e_i^2} \text{Var}(X)$). Equivalently, scaling by exposure reduces the standard deviation in proportion to the mean (leaving the coefficient of variation constant). This might suit claim amounts but doesn't fit with a quasi-Poisson model for claim counts.
[For example, a model for aggregate claim payments might use a Gamma GLM (which has variance proportional to mean squared, i.e. constant coefficient of variation); there, an offset of log-exposure will reduce the fitted mean by a factor of exposure and so (because the model has variance proportional to mean squared) will reduce the variance by the square of exposure. So for a Gamma GLM with log-link the two approaches are identical; this is also true for other models where the model for the mean is proportional to a scale parameter and the variance is proportional to the square of the mean, including lognormal models, Weibull models and a number of others.]
For a quasi-Poisson GLM with log link, in the model, the variance is proportional to mean, not mean squared. As such, when you fit log-exposure as an offset it reduces fitted variance according to the model - proportional to the change in mean. As we saw above, when you divide by exposure you change it according to mean-squared.
If the quasi-Poisson model was actually the correct model for your counts, then you should certainly use an offset of log-exposure, since it would describe the impact on variance correctly as Ben indicated.
However, for claim counts, a quasi-Poisson model is at best a rough approximation.
If you have heterogeneity, a negative binomial would tend to model the variability better, and it doesn't have variance proportional to mean; however, often it, too, doesn't really capture the variance effect -- some important drivers of claim frequency may lead to an even stronger relationship with the mean.
Realistically, exposure won't exactly impact the variance in proportion to the mean. Many effects we're aware of will work to make that contribution to the variance increase somewhat faster than the mean does.
For counts, the variance assumption in the quasi-Poisson model will at least sometimes be close to correct; if your model is quasi Poisson, then you'll certainly get the variance wrong (according to your model) if you divide by exposure.
You could make an assessment of whether variance is well approximated as proportional to mean at model fitting time by considering the usual model diagnostics (if it isn't, you should not be using a model that says it is; if it is, then you should deal with exposure properly according to your model).
[Of course, exposure may not impact the variance in the model the same way as the rest of the drivers tend to, but that might be introducing more complexity than you have data to deal with.] | Scaling vs Offsetting in Quasi-Poisson GLM
We can consider the GLM as having two components, model for the mean and a model for the variance. This is even more explicit with the quasi-GLM case.
The mean is assumed proportional to the exposure; |
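As a minimal R sketch of the contrast discussed above for a quasi-Poisson claim-count model (the data frame dat and the columns claims, exposure and rating_factor are hypothetical names):

# Offset of log-exposure: the fitted variance stays proportional to the fitted
# mean, which is what the quasi-Poisson variance function assumes for counts.
fit_offset <- glm(claims ~ rating_factor + offset(log(exposure)),
                  family = quasipoisson(link = "log"), data = dat)

# Dividing by exposure instead: this scales the variance by 1/exposure^2,
# which conflicts with the quasi-Poisson variance assumption for counts.
fit_scaled <- glm(I(claims / exposure) ~ rating_factor,
                  family = quasipoisson(link = "log"), data = dat)

summary(fit_offset)   # check the dispersion estimate and the usual diagnostics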
49,957 | igraph betweenness depends on order of edges | The following is based on the response of Tamás under the bug report I filed for this.
Betweenness is a property of the graph, so it does not depend on representation details such as edge ordering.
Why does igraph's result depend on ordering then?
As part of the betweenness calculation, we must do equality comparisons between the total weights of different paths. Your weights are fractional floating-point numbers, and equality comparisons between floating-point numbers are unreliable.
Your weights are 1, 1/2, 1/3, 1/4 and 1/5, i.e. fractions that tend to add up to integers. Some of these numbers are, however, not exactly representable in binary. When they are actually added up, we might get tiny deviations from exact integers. Furthermore, the result may depend on the order of the summation due to roundoff errors. Due to these tiny deviations, the equality test will in some cases fail (i.e. return false).
The solution
This problem can be avoided if you only use integer weights because integers are always exactly representable. Multiplying all weights by the same factor does not change the betweenness values. Since your weights are the inverses of 1,2,3,4 and 5, just multiply them by the least common multiple, i.e. 60, to get integers.
Change
E(social.graph)$weight <- 1 / E(social.graph)$weight
to
E(social.graph)$weight <- 60 / E(social.graph)$weight | igraph betweenness depends on order of edges | The following is based on the response of Tamás under the bug report I filed for this.
Betweenness is a property of the graph, so it does not depend on representation details such as edge ordering.
Wh | igraph betweenness depends on order of edges
The following is based on the response of Tamás under the bug report I filed for this.
Betweenness is a property of the graph, so it does not depend on representation details such as edge ordering.
Why does igraph's result depend on ordering then?
As part of the betweenness calculation, we must do equality comparisons between the total weights of different paths. Your weights are fractional floating-point numbers, and equality comparisons between floating-point numbers are unreliable.
Your weights are 1, 1/2, 1/3, 1/4 and 1/5, i.e. fractions that tend to add up to integers. Some of these numbers are, however, not exactly representable in binary. When they are actually added up, we might get tiny deviations from exact integers. Furthermore, the result may depend on the order of the summation due to roundoff errors. Due to these tiny deviations, the equality test will in some cases fail (i.e. return false).
The solution
This problem can be avoided if you only use integer weights because integers are always exactly representable. Multiplying all weights by the same factor does not change the betweenness values. Since your weights are the inverses of 1,2,3,4 and 5, just multiply them by the least common multiple, i.e. 60, to get integers.
Change
E(social.graph)$weight <- 1 / E(social.graph)$weight
to
E(social.graph)$weight <- 60 / E(social.graph)$weight | igraph betweenness depends on order of edges
The following is based on the response of Tamás under the bug report I filed for this.
Betweenness is a property of the graph, so it does not depend on representation details such as edge ordering.
Wh |
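For completeness, a short R sketch of the fix described in this answer; the graph social.graph and its multiplicity-based weights are assumed to exist as in the original question:

library(igraph)

# Betweenness is invariant to multiplying all weights by a common factor,
# so rescale the fractional weights 1, 1/2, ..., 1/5 to exact integers.
E(social.graph)$weight <- 60 / E(social.graph)$weight   # instead of 1 / ...

b <- betweenness(social.graph, weights = E(social.graph)$weight)
head(sort(b, decreasing = TRUE))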
49,958 | When is Markov chain a generator for iid sequences | Q1: The above statement means that the probability of a random variable X being equal to some value x at time n + 1, given all the x values that came before it in the sequence, is equal to the probability of X being equal to some value x at time n + 1 given just the value of x that came before it. In other words, X at time n + 1 is only dependent on x at time n, not any other value of x. So in a sequence, you can say that X at time n + 1 is independent of all other x except X at time n.
Q2: By the answer to Q1, the values of a Markov chain are not, in general, independent of each other, because $P(X_{n+1} \mid X_n) \ne P(X_{n+1})$ unless the transition distribution does not depend on the current state. After enough iterations, the chain (usually) converges in distribution, so the values would be (approximately) identically distributed.
That's all I know. | When is Markov chain a generator for iid sequences | Q1: The above statement means that the probability of a random variable X being equal to some value x at time n + 1, given all the x values that came before it in the sequence, is equal to the probabi | When is Markov chain a generator for iid sequences
Q1: The above statement means that the probability of a random variable X being equal to some value x at time n + 1, given all the x values that came before it in the sequence, is equal to the probability of X being equal to some value x at time n + 1 given just the value of x that came before it. In other words, X at time n + 1 is only dependent on x at time n, not any other value of x. So in a sequence, you can say that X at time n + 1 is independent of all other x except X at time n.
Q2: By the answer to Q1, the values of a Markov chain are not, in general, independent of each other, because $P(X_{n+1} \mid X_n) \ne P(X_{n+1})$ unless the transition distribution does not depend on the current state. After enough iterations, the chain (usually) converges in distribution, so the values would be (approximately) identically distributed.
That's all I know. | When is Markov chain a generator for iid sequences
Q1: The above statement means that the probability of a random variable X being equal to some value x at time n + 1, given all the x values that came before it in the sequence, is equal to the probabi |
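A small R illustration of the special case in which a Markov chain does generate an i.i.d. sequence: when every row of the transition matrix is the same distribution, $P(X_{n+1} \mid X_n) = P(X_{n+1})$. The states and probabilities below are made up for illustration:

set.seed(42)
states <- 1:3
P_iid <- matrix(c(0.2, 0.5, 0.3,
                  0.2, 0.5, 0.3,
                  0.2, 0.5, 0.3), nrow = 3, byrow = TRUE)  # identical rows

simulate_chain <- function(P, n, start = 1) {
  x <- numeric(n)
  x[1] <- start
  for (i in 2:n) x[i] <- sample(states, 1, prob = P[x[i - 1], ])
  x
}

x <- simulate_chain(P_iid, 10000)
# With identical rows the next state ignores the current one, so the draws
# behave like i.i.d. samples from (0.2, 0.5, 0.3):
table(x) / length(x)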
49,959 | Conventional wisdom for designing and training neural networks | In addition to the paper by LeCun mentioned in the comments, two more recent practical guides by pioneers in the area are:
Practical Recommendations for Gradient-Based Training of Deep
Architectures. Yoshua Bengio. 2012.
A Practical Guide to Training Restricted Boltzmann Machines. Geoffrey Hinton. 2010.
Both of these come from the deeplearning.net reading list, and they are also featured in a book "Neural Networks: Tricks of the Trade". Two other resources I've found helpful are:
Andrej Karpathy's course notes for Stanford 231n on Convolutional Neural networks esp. Part 1, Part 2, Part 3.
Stochastic Gradient Descent Tricks. Leon Bottou. 2012.
This is written as an update to LeCun et al.'s paper
More specific recommendations might depend on the problem domain, the network architecture and the type of data you're working with. | Conventional wisdom for designing and training neural networks | In addition, to the paper by Le Cun outlined in the comments, two more recent practical guides by pioneers in the area are:
Practical Recommendations for Gradient-Based Training of Deep
Architectures | Conventional wisdom for designing and training neural networks
In addition to the paper by LeCun mentioned in the comments, two more recent practical guides by pioneers in the area are:
Practical Recommendations for Gradient-Based Training of Deep
Architectures. Yoshua Bengio. 2012.
A Practical Guide to Training Restricted Boltzmann Machines. Geoffrey Hinton. 2010.
Both of these come from the deeplearning.net reading list, and they are also featured in a book "Neural Networks: Tricks of the Trade". Two other resources I've found helpful are:
Andrej Karpathy's course notes for Stanford 231n on Convolutional Neural networks esp. Part 1, Part 2, Part 3.
Stochastic Gradient Descent Tricks. Leon Bottou. 2012.
This is written as an update to LeCun et al.'s paper
More specific recommendations might depend on the problem domain, the network architecture and the type of data you're working with. | Conventional wisdom for designing and training neural networks
In addition, to the paper by Le Cun outlined in the comments, two more recent practical guides by pioneers in the area are:
Practical Recommendations for Gradient-Based Training of Deep
Architectures |
49,960 | Conventional wisdom for designing and training neural networks | The advice given verbatim from Aurélien Géron's "Hands-On Machine Learning with Scikit-Learn and TensorFlow" on DNN Architecture:
- Initialization: He initialization
- Activation function: ELU
- Normalization: Batch Normalization
- Regularization: Dropout
- Optimizer: Adam
- Learning rate schedule: None
You may tinker with these parameters depending on the size of your NN and on whether speed or accuracy (however you define it) is your objective.
As far as batch size is concerned, you might be interested in this excellent discussion supported by links to academic papers at the CV question Tradeoff batch size vs. number of iterations to train a neural network | Conventional wisdom for designing and training neural networks | The advice given verbatim from Aurélien Géron' "Hands-On Machine Learning with Scikit-Learn and TensorFlow" on DNN Architecture:
- Initialization: He initialization
- Activation function: ELU
- | Conventional wisdom for designing and training neural networks
The advice given verbatim from Aurélien Géron's "Hands-On Machine Learning with Scikit-Learn and TensorFlow" on DNN Architecture:
- Initialization: He initialization
- Activation function: ELU
- Normalization: Batch Normalization
- Regularization: Dropout
- Optimizer: Adam
- Learning rate schedule: None
You may tinker with these parameters depending on the size of your NN and on whether speed or accuracy (however you define it) is your objective.
As far as batch size is concerned, you might be interested in this excellent discussion supported by links to academic papers at the CV question Tradeoff batch size vs. number of iterations to train a neural network | Conventional wisdom for designing and training neural networks
The advice given verbatim from Aurélien Géron' "Hands-On Machine Learning with Scikit-Learn and TensorFlow" on DNN Architecture:
- Initialization: He initialization
- Activation function: ELU
- |
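Purely as an illustration, here is roughly what that default stack could look like with the keras package in R; the layer sizes, input shape, dropout rate and loss are placeholders rather than recommendations from the book:

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "elu",
              kernel_initializer = "he_normal", input_shape = c(20)) %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 64, activation = "elu",
              kernel_initializer = "he_normal") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(optimizer = optimizer_adam(),  # fixed learning rate, no schedule
                  loss = "binary_crossentropy",
                  metrics = "accuracy")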
49,961 | CART for unsupervised learning - clustering | I think this will only work on toy data sets like this, where there is plenty of dead space. It relies on a shuffled object $(x_1,y_2)$ being totally different from the real data.
Try the same on a toy data set with four clusters, at (5,5), (5,15), (15,5), (15,15), and it will no longer work. Clusters and dense areas exist, but your approach will fail on this data.
There exists a similar approach for measuring whether two attributes are associated, though, i.e. a form of correlation between X and Y. If X and Y are independent, you cannot distinguish (X,Y) from a shuffled version (X,Y').
Try the same on | CART for unsupervised learning - clustering
I think this will only work on toy data sets like this, where there is plenty of dead space. It relies on a shuffled object $(x_1,y_2)$ being totally different from the real data.
Try the same on a toy data set with four clusters, at (5,5), (5,15), (15,5), (15,15), and it will no longer work. Clusters and dense areas exist, but your approach will fail on this data.
There exists a similar approach for measuring whether two attributes are associated, though, i.e. a form of correlation between X and Y. If X and Y are independent, you cannot distinguish (X,Y) from a shuffled version (X,Y').
I think this will only work on toy data sets like this, and where there is plenty of dead space. It relies on a shuffled object $(x_1,y_2)$ being totally different than the real data.
Try the same on |
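A quick R sketch of that counterexample (cluster locations as in the answer, sample sizes arbitrary): with all four corners occupied, independently shuffling one coordinate produces points that still sit on the same four corners, so a classifier trained to separate real from shuffled data has nothing to latch onto.

set.seed(1)
centers <- expand.grid(x = c(5, 15), y = c(5, 15))   # the four corners
idx <- sample(1:4, 400, replace = TRUE)
real <- data.frame(x = centers$x[idx] + rnorm(400, sd = 0.3),
                   y = centers$y[idx] + rnorm(400, sd = 0.3))

# "Fake" data: shuffle the y-coordinates independently of x
fake <- data.frame(x = real$x, y = sample(real$y))

# Both data sets occupy (roughly) the same four corners, so the real-vs-shuffled
# classifier is near chance level even though clear clusters exist.
par(mfrow = c(1, 2)); plot(real, main = "real"); plot(fake, main = "shuffled")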
49,962 | Bootstrapping to Test for Homogeneity of Variance between Samples | A test proper could be devised by assuming that the two groups differ only in location $\mu$ & scale $\sigma$, so that their distribution functions are $$F_1(x_1) = F((x_1-\mu_1)/\sigma_1)$$ & $$F_2(x_2) = F((x_2-\mu_2)/\sigma_2)$$; then under the null hypothesis of a common scale $\sigma=\sigma_1=\sigma_2$, the centred variable $Y= X_i-\mu_i$ has the distribution $$F_\mathrm{c}(y) = F(y/\sigma)$$, which can be approximated by the empirical distribution function $$\hat F_\mathrm{c}(y) = \frac{\sum_{i=1}^{2}\sum_{j=1}^{n_i} I(x_{ij} - \bar x_i \leq y)}{n_1+n_2}$$ where $I$ is the indicator function.
So the bootstrap procedure would resample from the pooled differences between each observation & the mean of its group, & compare the distribution of the test statistic, say the usual F statistic, whose distribution shouldn't depend overmuch on the unknown characteristics of $F_c(\cdot)$, with the value in fact observed.
(This is a rather off-the-cuff answer—it illustrates a basic bootstrap test, but I daresay there are better methods to use in practice.)
Here's some code to test it:
# sample sizes
n1 <- 12
n2 <- 20
# location parameters
mu1 <- 3
mu2 <- 6
# scale parameters (alt. hyp sigma1 > sigma2)
sigma1 <- 1
sigma2 <- 1
# distribution function, e.g. Student's t with 3 degrees of freedom
rdist <- function(n) rt(n, df=3)
# no. simulations to perform
no.sims <- 1000
# no. bootstrap samples to take in each simulation
no.boots <- 1000
# initialize vector of p-values - for normal F test
p.normFtest <- numeric(no.sims)
# initialize vector of p-values - for bootstrap test
p.bsFtest <- numeric(no.sims)
# simulate!
for (j in 1:no.sims){
# simulate samples
rdist(n1)*sigma1 + mu1 -> x1
rdist(n2)*sigma2 + mu2 -> x2
# calculate observed test statistic
var(x1)/var(x2) -> F.obs
# calculate its p-value
1-pf(F.obs,n1-1,n2-1) -> p.normFtest[j]
# initialize vector of test statistics
F.boot <- numeric(no.boots)
# define bootstrap population
c(x1-mean(x1),x2-mean(x2)) -> boot.pop
# bootstrap!
for (i in 1:no.boots){
# 1st sample
x1.boot <- sample(boot.pop, n1, replace=T)
# 2nd sample
x2.boot <- sample(boot.pop, n2, replace=T)
# calculate bootstrap test statistic
var(x1.boot)/var(x2.boot) -> F.boot[i]
}
# estimate bootstrap p-value
sum(F.boot >= F.obs)/no.boots -> p.bsFtest[j]
}
# examine distributions of p-values (should be uniform under null)
plot(ecdf(p.normFtest), col="black", do.points=F, verticals=T, xlab="calculated p-value", ylab="simulated distribution function", main="")
plot(ecdf(p.bsFtest), col="red", add=T, do.points=F, verticals=T)
abline(a=0,b=1, col="grey", lty="dashed")
legend("bottomright", legend=c("normal", "bootstrapped"), lty=1, col=c("black","red"))
So in this particular case this bootstrap F test maintains size rather better than the F test based on an incorrect assumption of normality. It'd be interesting to compare it with more robust tests like Levene's†, & to examine its power vs the normal F test when the normality assumption is correct. And using a double bootstrap might help to keep the test's size closer to the nominal level.
† With the commonly used Brown–Forsythe modification. | Bootstrapping to Test for Homogeneity of Variance between Samples | A test proper could be devised by assuming that the two groups differ only in location $\mu$ & scale $\sigma$, so that their distribution functions are $$F_1(x_1) = F((x_1-\mu_1)/\sigma_1)$$ & $$F_2(x | Bootstrapping to Test for Homogeneity of Variance between Samples
A test proper could be devised by assuming that the two groups differ only in location $\mu$ & scale $\sigma$, so that their distribution functions are $$F_1(x_1) = F((x_1-\mu_1)/\sigma_1)$$ & $$F_2(x_2) = F((x_2-\mu_2)/\sigma_2)$$; then under the null hypothesis of a common scale $\sigma=\sigma_1=\sigma_2$, the centred variable $Y= X_i-\mu_i$ has the distribution $$F_\mathrm{c}(y) = F(y/\sigma)$$, which can be approximated by the empirical distribution function $$\hat F_\mathrm{c}(y) = \frac{\sum_{i=1}^{2}\sum_{j=1}^{n_i} I(x_{ij} - \bar x_i \leq y)}{n_1+n_2}$$ where $I$ is the indicator function.
So the bootstrap procedure would resample from the pooled differences between each observation & the mean of its group, & compare the distribution of the test statistic, say the usual F statistic, whose distribution shouldn't depend overmuch on the unknown characteristics of $F_c(\cdot)$, with the value in fact observed.
(This is a rather off-the-cuff answer—it illustrates a basic bootstrap test, but I daresay there are better methods to use in practice.)
Here's some code to test it:
# sample sizes
n1 <- 12
n2 <- 20
# location parameters
mu1 <- 3
mu2 <- 6
# scale parameters (alt. hyp sigma1 > sigma2)
sigma1 <- 1
sigma2 <- 1
# distribution function, e.g. Student's t with 3 degrees of freedom
rdist <- function(n) rt(n, df=3)
# no. simulations to perform
no.sims <- 1000
# no. bootstrap samples to take in each simulation
no.boots <- 1000
# initialize vector of p-values - for normal F test
p.normFtest <- numeric(no.sims)
# initialize vector of p-values - for bootstrap test
p.bsFtest <- numeric(no.sims)
# simulate!
for (j in 1:no.sims){
# simulate samples
rdist(n1)*sigma1 + mu1 -> x1
rdist(n2)*sigma2 + mu2 -> x2
# calculate observed test statistic
var(x1)/var(x2) -> F.obs
# calculate its p-value
1-pf(F.obs,n1-1,n2-1) -> p.normFtest[j]
# initialize vector of test statistics
F.boot <- numeric(no.boots)
# define bootstrap population
c(x1-mean(x1),x2-mean(x2)) -> boot.pop
# bootstrap!
for (i in 1:no.boots){
# 1st sample
x1.boot <- sample(boot.pop, n1, replace=T)
# 2nd sample
x2.boot <- sample(boot.pop, n2, replace=T)
# calculate bootstrap test statistic
var(x1.boot)/var(x2.boot) -> F.boot[i]
}
# estimate bootstrap p-value
sum(F.boot >= F.obs)/no.boots -> p.bsFtest[j]
}
# examine distributions of p-values (should be uniform under null)
plot(ecdf(p.normFtest), col="black", do.points=F, verticals=T, xlab="calculated p-value", ylab="simulated distribution function", main="")
plot(ecdf(p.bsFtest), col="red", add=T, do.points=F, verticals=T)
abline(a=0,b=1, col="grey", lty="dashed")
legend("bottomright", legend=c("normal", "bootstrapped"), lty=1, col=c("black","red"))
So in this particular case this bootstrap F test maintains size rather better than the F test based on an incorrect assumption of normality. It'd be interesting to compare it with more robust tests like Levene's†, & to examine its power vs the normal F test when the normality assumption is correct. And using a double bootstrap might help to keep the test's size closer to the nominal level.
† With the commonly used Brown–Forsythe modification. | Bootstrapping to Test for Homogeneity of Variance between Samples
A test proper could be devised by assuming that the two groups differ only in location $\mu$ & scale $\sigma$, so that their distribution functions are $$F_1(x_1) = F((x_1-\mu_1)/\sigma_1)$$ & $$F_2(x |
49,963 | Bayesian modeling and FDR correction | I am not sure this fully answers your question, but I think the following can help.
What you can try is to design a multilevel model including partial pooling of the $\lambda_i$, by considering for example:
$$ \begin{align} c_i &\sim P(\lambda_i) \\ \lambda_i &\sim \mathrm{Gamma}(a,b) \\ a &\sim \text{(choose a suitable prior)} \\ b &\sim \text{(choose a suitable prior)} \end{align}$$
Or another possibility:
$$\begin{align} c_i &\sim P(\lambda_i) \\ \log(\lambda_i) &= \alpha + \theta_i \\ \mbox{with } \theta_i &\sim N(0,\sigma^2) \\ \alpha &\sim u_{\mathbb{R}} \\ \sigma &\sim \mathrm{Gamma}(\epsilon,\epsilon) \end{align}$$
or any other suitable pooling, so that the marginal posteriors for the $\lambda_i$:
$$
p(\lambda_i|(c_j)_{j=1:N}) \propto \int_{R}[ p(c_i|\lambda_i) p(\lambda_i|\beta) d\beta ] \cdot \prod_{j\ne i} [\int_{R^+} \int_{R} p(c_j|\lambda_j) p(\lambda_j|\beta) d\beta d\lambda_j]
$$
(calling for generality $\beta$ the hyperparameter of $p(\lambda_j|\beta)$)
are no longer independent of each other (because the hyperparameter $\beta$ is common to all the $\lambda_i$), and this no longer results in independent inferences/comparisons (though this needs a dedicated discussion, e.g. Why don't Bayesian methods require multiple testing corrections?).
The important question is whether or not the partial pooling model is suitable for your design.
Here (http://www.stat.columbia.edu/~gelman/research/unpublished/multiple2.pdf) is a reference for such a consideration by Gelman (it does not use Bayes factor but to my knowledge, there is no limitation in using such a pooling model with Bayes factor) | Bayesian modeling and FDR correction | I am not sure to fully answer your question but think that the following can help.
What you can try is to a design multilevel model including a partial pooling of the $\lambda_i$ by considering for ex | Bayesian modeling and FDR correction
I am not sure this fully answers your question, but I think the following can help.
What you can try is to design a multilevel model including partial pooling of the $\lambda_i$, by considering for example:
$$ \begin{align} c_i &\sim P(\lambda_i) \\ \lambda_i &\sim \mathrm{Gamma}(a,b) \\ a &\sim \text{(choose a suitable prior)} \\ b &\sim \text{(choose a suitable prior)} \end{align}$$
Or another possibility:
$$\begin{align} c_i &\sim P(\lambda_i) \\ \log(\lambda_i) &= \alpha + \theta_i \\ \mbox{with } \theta_i &\sim N(0,\sigma^2) \\ \alpha &\sim u_{\mathbb{R}} \\ \sigma &\sim \mathrm{Gamma}(\epsilon,\epsilon) \end{align}$$
or any other suitable pooling, so that the marginal posteriors for the $\lambda_i$:
$$
p(\lambda_i|(c_j)_{j=1:N}) \propto \int_{R}[ p(c_i|\lambda_i) p(\lambda_i|\beta) d\beta ] \cdot \prod_{j\ne i} [\int_{R^+} \int_{R} p(c_j|\lambda_j) p(\lambda_j|\beta) d\beta d\lambda_j]
$$
(calling for generality $\beta$ the hyperparameter of $p(\lambda_j|\beta)$)
are no longer independent of each other (because the hyperparameter $\beta$ is common to all the $\lambda_i$), and this no longer results in independent inferences/comparisons (though this needs a dedicated discussion, e.g. Why don't Bayesian methods require multiple testing corrections?).
The important question is whether or not the partial pooling model is suitable for your design.
Here (http://www.stat.columbia.edu/~gelman/research/unpublished/multiple2.pdf) is a reference for such a consideration by Gelman (it does not use Bayes factor but to my knowledge, there is no limitation in using such a pooling model with Bayes factor) | Bayesian modeling and FDR correction
I am not sure to fully answer your question but think that the following can help.
What you can try is to a design multilevel model including a partial pooling of the $\lambda_i$ by considering for ex |
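As a rough sketch only, the second (log-normal random effect) specification above could be fitted with, e.g., the brms package in R; the data frame d with columns count and unit is hypothetical, and the package's default priors would need to be replaced by whatever is appropriate for the problem:

library(brms)

# Partial pooling of log-rates: log(lambda_i) = alpha + theta_i,
# theta_i ~ N(0, sigma^2), with alpha and sigma shared across all units.
fit <- brm(count ~ 1 + (1 | unit),
           family = poisson(),
           data   = d,
           chains = 4, iter = 2000)

summary(fit)   # posterior for alpha and sigma
ranef(fit)     # shrunken unit-level estimates (the theta_i)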
49,964 | Choosing a model for classification: decision tree or naive bayes | Don't make a choice based on beliefs; try both! Once you have developed a cross-validation framework it is not very hard to feed it with various models and pick the best one. Sometimes such a cross-validation framework already exists (caret in R, but there must be plenty of others!)
The paper : "Do we Need Hundreds of Classifiers to Solve Real World
Classification Problems?" http://jmlr.org/papers/volume15/delgado14a/delgado14a.pdf is a very good review on existing models, their implementations and their performances on various data sets. You can find information about the performances of models according to the number of features, per example.
But even if performances of a model depend a priori on the number of features, the number of observations (some models have been developed to handle specific situations), and the type of observations, it is not obvious that one model will perform better than another.
IMHO, the only thing that you should consider when selecting a model is the time to train it. Some training times become prohibitive with large data sets. For example, training a kernel SVM on 1M+ observations will never end. With 10k records and 5 features, you can train almost anything though.
As for the feature engineering, you should try every idea you have as well! For categorical variables a first step is to encode them into dummy variables, in order to end up with a numeric matrix. But you might also want to remove rare factor levels, consider interactions... And keep observing the influence on the predictive performance!
Don't make a choice based on beliefs; try both! Once you have developed a cross-validation framework it is not very hard to feed it with various models and pick the best one. Sometimes such a cross-validation framework already exists (caret in R, but there must be plenty of others!)
The paper : "Do we Need Hundreds of Classifiers to Solve Real World
Classification Problems?" http://jmlr.org/papers/volume15/delgado14a/delgado14a.pdf is a very good review on existing models, their implementations and their performances on various data sets. You can find information about the performances of models according to the number of features, per example.
But even if performances of a model depend a priori on the number of features, the number of observations (some models have been developed to handle specific situations), and the type of observations, it is not obvious that one model will perform better than another.
IMHO, the only thing that you should consider when selecting a model is the time to train it. Some training times become prohibitive with large data sets. For example, training a kernel SVM on 1M+ observations will never end. With 10k records and 5 features, you can train almost anything though.
As for the feature engineering, you should try every idea you have as well! For categorical variables a first step is to encode them into dummy variables, in order to end up with a numeric matrix. But you might also want to remove rare factor levels, consider interactions... And keep observing the influence on the predictive performance!
Don't make a choice based on believes try both! Once you developed a cross validation framework it is not very hard to feed it with various models and pick the best one. Sometimes, these cross validat |
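A minimal sketch of such a cross-validation comparison with caret; the data frame d with a factor outcome y is hypothetical, and the method strings are standard caret model codes:

library(caret)

ctrl <- trainControl(method = "cv", number = 5)

# Dummy-encode categorical predictors into a numeric matrix if needed
X <- predict(dummyVars(y ~ ., data = d), newdata = d)

fit_tree <- train(x = X, y = d$y, method = "rpart", trControl = ctrl)
fit_nb   <- train(x = X, y = d$y, method = "nb",    trControl = ctrl)

# Compare the resampled performance of the two candidates
summary(resamples(list(tree = fit_tree, naive_bayes = fit_nb)))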
49,965 | Assessing the relationship between continuous variables | There are examples of using Multiple Linear Regression for similar studies[1]
Here is a notebook example of doing this in R.
[1] Genomic ancestry and somatic alterations correlate with age at diagnosis in Hispanic children with B-cell ALL | Assessing the relationship between continuous variables | There are examples of using Multiple Linear Regression for similar studies[1]
Here is an notebook example of doing this in R.
[1] Genomic ancestry and somatic alterations correlate with age at diagnos | Assessing the relationship between continuous variables
There are examples of using Multiple Linear Regression for similar studies[1]
Here is a notebook example of doing this in R.
[1] Genomic ancestry and somatic alterations correlate with age at diagnosis in Hispanic children with B-cell ALL | Assessing the relationship between continuous variables
There are examples of using Multiple Linear Regression for similar studies[1]
Here is an notebook example of doing this in R.
[1] Genomic ancestry and somatic alterations correlate with age at diagnos |
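A minimal sketch of such a regression in R; the data frame and column names are hypothetical, and one ancestry proportion is dropped because the three proportions sum to 1:

# Age at diagnosis regressed on ancestry proportions (European as the baseline)
fit <- lm(age_at_diagnosis ~ prop_african + prop_native_american, data = d)
summary(fit)   # coefficients give the shift in mean age per unit of proportion
confint(fit)   # interval estimates for those shifts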
49,966 | Assessing the relationship between continuous variables | Here is a somewhat simplistic (Bayesian) model for your problem. I did not complete it (I am not 100% positive I can). Let me know if this seems reasonable, it looks like a nice problem!
Imagine a human being is made of three independent parts: a native american part (1), a european part (2) and an african part (3). Thus, a person is represented as $x \in \mathbb{R}_{+}^3$ (meaning $x_1,x_2, x_3 \geq 0$) with the constraint $x_1+x_2+x_3 = 1$.
In order to get cancer, a human must get "enough" cancer in their individual parts. So if a person is $x = (\frac{1}{3},\frac{1}{3},\frac{1}{3})^t$, then they can get cancer if each part gets cancer independently. Alternatively, they can get cancer by summing: say the european part gets cancer twice, the african part gets cancer once and the native american part doesn't get cancer at all. In this context it is assumed that getting cancer and being diagnosed are the same (I know it might not hold, but we can make things complicated later).
The number of cancers a part gets in time $T$ is modeled as a Poisson process (very reasonable: it counts how many hits one gets in a given time) with some corresponding parameter $\lambda_i$. Consequently, for an individual $x$, the occurrence of cancers until time $T$ is distributed like $ \sum_{i=1}^3 x_i N_i(T)$, where each $N_i(t)$ is a Poisson process with parameter $\lambda_i$. We are only interested in the first time the individual gets a total of 1 cancer or more. I believe this time should be an exponential random variable with a parameter that is a kind of average of the $\lambda_i$'s (I don't know that for sure; this is the missing part).
\begin{eqnarray}
\begin{split}
P( \text{Cancer before time } t \mid \lambda, x) &= P\left( \sum_i x_i N_i(t) \geq 1\right) \\
&= \sum_{k_1,k_2,k_3} \prod_{i=1}^3 \frac{e^{-\lambda_i t} (\lambda_i t)^{k_i}}{k_i!} \\
&= \sum_{k_1,k_2,k_3} \frac{e^{-(\lambda_1+\lambda_2+\lambda_3)t} \, \lambda_1^{k_1}\lambda_2^{k_2}\lambda_3^{k_3} \, t^{k_1+k_2+k_3}}{k_1!\,k_2!\,k_3!}
\end{split}
\end{eqnarray}
where the sum is over triplets $(k_1,k_2,k_3)$ such that $\sum_i k_i x_i \geq 1$. This seems to be the hard part. However, it looks like one can estimate it numerically, so that might just be enough.
If one has this distribution $P( \text{Cancer before time } t \mid \lambda, x)$, then calculating the data likelihood $p(\text{Data} \mid \lambda )$ should be trivial (a product of independent terms with the above distribution). Choose a reasonable prior for $\lambda$ and Bayes' rule will allow you to get the posterior $p(\lambda \mid \text{Data})$. For a new individual $x$, the distribution of the age at which they get cancer is going to be the predictive distribution
$$
p( \text{Cancer} | x, \text{Data} ) = \int p( \text{Cancer} | \lambda, x) p(\lambda | \text{Data} ) d\lambda,
$$
which can (probably) be estimated using Gibbs sampling. | Assessing the relationship between continuous variables | Here is a somewhat simplistic (Bayesian) model for your problem. I did not complete it (I am not 100% positive I can). Let me know if this seems reasonable, it looks like a nice problem!
Imagine a hum | Assessing the relationship between continuous variables
Here is a somewhat simplistic (Bayesian) model for your problem. I did not complete it (I am not 100% positive I can). Let me know if this seems reasonable, it looks like a nice problem!
Imagine a human being is made of three independent parts: a native american part (1), a european part (2) and an african part (3). Thus, a person is represented as $x \in \mathbb{R}_{+}^3$ (meaning $x_1,x_2, x_3 \geq 0$) with the constraint $x_1+x_2+x_3 = 1$.
In order to get cancer, a human must get "enough" cancer in their individual parts. So if a person is $x = (\frac{1}{3},\frac{1}{3},\frac{1}{3})^t$, then they can get cancer if each part gets cancer independently. Alternatively, they can get cancer by summing: say the european part gets cancer twice, the african part gets cancer once and the native american part doesn't get cancer at all. In this context it is assumed that getting cancer and being diagnosed are the same (I know it might not hold, but we can make things complicated later).
The number of cancers a part gets in time $T$ is modeled as a Poisson process (very reasonable: it counts how many hits one gets in a given time) with some corresponding parameter $\lambda_i$. Consequently, for an individual $x$, the occurrence of cancers until time $T$ is distributed like $ \sum_{i=1}^3 x_i N_i(T)$, where each $N_i(t)$ is a Poisson process with parameter $\lambda_i$. We are only interested in the first time the individual gets a total of 1 cancer or more. I believe this time should be an exponential random variable with a parameter that is a kind of average of the $\lambda_i$'s (I don't know that for sure; this is the missing part).
\begin{eqnarray}
\begin{split}
P( \text{Cancer before time } t \mid \lambda, x) &= P\left( \sum_i x_i N_i(t) \geq 1\right) \\
&= \sum_{k_1,k_2,k_3} \prod_{i=1}^3 \frac{e^{-\lambda_i t} (\lambda_i t)^{k_i}}{k_i!} \\
&= \sum_{k_1,k_2,k_3} \frac{e^{-(\lambda_1+\lambda_2+\lambda_3)t} \, \lambda_1^{k_1}\lambda_2^{k_2}\lambda_3^{k_3} \, t^{k_1+k_2+k_3}}{k_1!\,k_2!\,k_3!}
\end{split}
\end{eqnarray}
where the sum is over triplets $(k_1,k_2,k_3)$ such that $\sum_i k_i x_i \geq 1$. This seems to be the hard part. However, it looks like one can estimate it numerically, so that might just be enough.
If one has this distribution $P( \text{Cancer before time } t \mid \lambda, x)$, then calculating the data likelihood $p(\text{Data} \mid \lambda )$ should be trivial (a product of independent terms with the above distribution). Choose a reasonable prior for $\lambda$ and Bayes' rule will allow you to get the posterior $p(\lambda \mid \text{Data})$. For a new individual $x$, the distribution of the age at which they get cancer is going to be the predictive distribution
$$
p( \text{Cancer} | x, \text{Data} ) = \int p( \text{Cancer} | \lambda, x) p(\lambda | \text{Data} ) d\lambda,
$$
which can (probably) be estimated using Gibbs sampling. | Assessing the relationship between continuous variables
Here is a somewhat simplistic (Bayesian) model for your problem. I did not complete it (I am not 100% positive I can). Let me know if this seems reasonable, it looks like a nice problem!
Imagine a hum |
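The "estimate it numerically" step mentioned above is straightforward by simulation; here is a small R sketch, with all parameter values made up for illustration:

# Monte Carlo estimate of P( sum_i x_i * N_i(t) >= 1 ), with N_i(t) ~ Poisson(lambda_i * t)
p_cancer_by <- function(t, x, lambda, nsim = 1e5) {
  hits <- replicate(nsim, sum(x * rpois(length(lambda), lambda * t)))
  mean(hits >= 1)
}

x      <- c(1/3, 1/3, 1/3)      # ancestry proportions
lambda <- c(0.02, 0.03, 0.05)   # made-up per-part rates
p_cancer_by(t = 10, x = x, lambda = lambda)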
49,967 | Why do we need to use the Markov property in solving this PDE? | In applying the Feynman-Kac formula, there is no need to use the Markov property.
In proving the Feynman-Kac formula, the Markov property is needed.
Showing that $G(t,x)$ indeed satisfies the PDE requires showing that $G(t,X_t)$ is a martingale which relies on $X_t$ having the Markov property, which it has because it is a solution of an SDE.
Or something like that.
From Shreve's Stochastic Calculus for Finance: | Why do we need to use the Markov property in solving this PDE? | In applying the Feynman-Kac formula, there is no need to use the Markov property.
In proving the Feynman-Kac formula, the Markov property is needed.
Showing that $G(t,x)$ indeed satisfies the PDE requ | Why do we need to use the Markov property in solving this PDE?
In applying the Feynman-Kac formula, there is no need to use the Markov property.
In proving the Feynman-Kac formula, the Markov property is needed.
Showing that $G(t,x)$ indeed satisfies the PDE requires showing that $G(t,X_t)$ is a martingale which relies on $X_t$ having the Markov property, which it has because it is a solution of an SDE.
Or something like that.
From Shreve's Stochastic Calculus for Finance: | Why do we need to use the Markov property in solving this PDE?
In applying the Feynman-Kac formula, there is no need to use the Markov property.
In proving the Feynman-Kac formula, the Markov property is needed.
Showing that $G(t,x)$ indeed satisfies the PDE requ |
49,968 | Why do we need to use the Markov property in solving this PDE? | Perhaps what's meant here is that since $\mathcal{F}_t$ is a filtration, then for $s\leq t$, $\mathcal{F}_s\subseteq \mathcal{F}_t$. In other words the filtration contains all information up to time $t$ so that you really are invoking the Markov property since $X_t$ really just specifies $X_t$ at time $t$ only. | Why do we need to use the Markov property in solving this PDE? | Perhaps what's meant here is that since $\mathcal{F}_t$ is a filtration, then for $s\leq t$, $\mathcal{F}_s\subseteq \mathcal{F}_t$. In other words the filtration contains all information up to time $ | Why do we need to use the Markov property in solving this PDE?
Perhaps what's meant here is that since $\mathcal{F}_t$ is a filtration, then for $s\leq t$, $\mathcal{F}_s\subseteq \mathcal{F}_t$. In other words the filtration contains all information up to time $t$ so that you really are invoking the Markov property since $X_t$ really just specifies $X_t$ at time $t$ only. | Why do we need to use the Markov property in solving this PDE?
Perhaps what's meant here is that since $\mathcal{F}_t$ is a filtration, then for $s\leq t$, $\mathcal{F}_s\subseteq \mathcal{F}_t$. In other words the filtration contains all information up to time $ |
49,969 | How do I determine whether my data is spherically separable? | I think the best and easiest thing you can do when you have data is to just implement your model (k-means), train your model, and then validate your model on unseen data. The validation error tells you how good your model is. You can safely compare any number of models this way.
Visualization might work for small models, but it's really hard to project the 48-dimensional vectors you have to 2 dimensions and expect to see class separations. Essentially, your k-means is doing a projection already.
Other answers are pointing out that k-means makes assumptions. All models make assumptions. If they make the wrong assumptions, then that will be revealed when you validate. | How do I determine whether my data is spherically separable? | I think the best and easiest thing you can do when you have data is to just implement your model (k-means), train your model, and then validate your model on unseen data. The validation error tells y | How do I determine whether my data is spherically separable?
I think the best and easiest thing you can do when you have data is to just implement your model (k-means), train your model, and then validate your model on unseen data. The validation error tells you how good your model is. You can safely compare any number of models this way.
Visualization might work for small models, but it's really hard to project the 48-dimensional vectors you have to 2 dimensions and expect to see class separations. Essentially, your k-means is doing a projection already.
Other answers are pointing out that k-means makes assumptions. All models make assumptions. If they make the wrong assumptions, then that will be revealed when you validate. | How do I determine whether my data is spherically separable?
I think the best and easiest thing you can do when you have data is to just implement your model (k-means), train your model, and then validate your model on unseen data. The validation error tells y |
49,970 | How do I determine whether my data is spherically separable? | The two main approaches are:
Visualize (yes, there are methods)
try clustering and evaluate carefully on your data
Do not rely on any automatic method or statistic. | How do I determine whether my data is spherically separable? | The two main approaches are:
Visualize (yes, there are methods)
try clustering and evaluate carefully on your data
Do not rely on any automatic method or statistic. | How do I determine whether my data is spherically separable?
The two main approaches are:
Visualize (yes, there are methods)
try clustering and evaluate carefully on your data
Do not rely on any automatic method or statistic. | How do I determine whether my data is spherically separable?
The two main approaches are:
Visualize (yes, there are methods)
try clustering and evaluate carefully on your data
Do not rely on any automatic method or statistic. |
49,971 | How do I determine whether my data is spherically separable? | Using this blog post as a reference it appears that it's possible to do better than 'try clustering' and 'visualize':
1) all variables should have the same variance so I can use Bartlett's test on all variables.
2) the prior probabilities for all k clusters are the same (i.e. each cluster has a roughly equal number of observations), and this is something I can check as well.
3) k-means assumes the variance of the distribution of each variable is spherical
Now, I'm not sure how to test point 3 which is my question. But, at least these three conditions must hold. So I am not limited to checking whether the variance of the distribution of each variable is spherical. | How do I determine whether my data is spherically separable? | Using this blog post as a reference it appears that it's possible to do better than 'try clustering' and 'visualize':
1) all variables should have the same variance so I can use Bartlett's test on al | How do I determine whether my data is spherically separable?
Using this blog post as a reference it appears that it's possible to do better than 'try clustering' and 'visualize':
1) all variables should have the same variance so I can use Bartlett's test on all variables.
2) the prior probabilities for all k clusters are the same (i.e. each cluster has a roughly equal number of observations), and this is something I can check as well.
3) k-means assumes the variance of the distribution of each variable is spherical
Now, I'm not sure how to test point 3 which is my question. But, at least these three conditions must hold. So I am not limited to checking whether the variance of the distribution of each variable is spherical. | How do I determine whether my data is spherically separable?
Using this blog post as a reference it appears that it's possible to do better than 'try clustering' and 'visualize':
1) all variables should have the same variance so I can use Bartlett's test on al |
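Points 1 and 2 above can be checked directly in R; X is a hypothetical numeric data matrix and k a chosen number of clusters:

# 1) Equal variances across variables: Bartlett's test on the stacked columns
long <- stack(as.data.frame(X))          # columns "values" and "ind"
bartlett.test(values ~ ind, data = long)

# 2) Roughly equal cluster sizes after a trial k-means fit
km <- kmeans(scale(X), centers = k, nstart = 25)
table(km$cluster)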
49,972 | What is the difference between Support Vector Machines and Conditional Random Fields models in the context of Named Entity Recognition? | Assume that for our data, we have observations (each observation is a set of features) and labels (1 label for each observation). In the case of NER, we have a set of features about a word (observation) and the labels are commonly a BIO scheme, where we might have B-Loc for beginning of location, I-Loc inside location, B-Pers for beginning of person word, I-Pers for inside person word, and O for outside of named entity. The sentence: "The man was George W. Bush." has labeling "The (O) man (O) was (O) George (B-Pers) W. (I-Pers) Bush (I-Pers)."
An SVM is predicting a label given the set of features, and none of these features are related to the label of the preceding observations. However, the features used to train an SVM can be related to the features of the preceding and following observations. For instance, we couldn't have features for preceding-B-Loc preceding-I-Loc, etc, but we can have preceding-NOUN preceding-VERB following-NOUN following-VERB, etc., where each of these features is given a binary 0/1 value (in training). In practice, an SVM might want you to name your features from 1 to |num_features|, and then only list the features names that have value 1. Then, in test, we can generate labels by looking at the observation, such as the part-of-speech of the word, the preceding and following word, etc., and the SVM uses these observations to predict the label. One would create many features, and the observations would be sparse.
In short, the CRF uses the probability of a label given a set of features plus all other labels (training is slow). This is a simplification because the CRF model iteratively propagates the probability of labels forward and backwards (see forward-backward algorithm) until it converges on a local maximum. A CRF uses a graph ( vertices and edges) over the observations of the input: if you picture the tokens from left to right, each token is associated with a vertical column underneath it, where along this column are vertices for all possible labels (B-Loc, I-Loc, B-Pers, I-Pers, O, etc.) - so in our simple BIO scheme, we have 7 vertices per token. There are no edges in a column: each vertex is connected to each vertex in the adjacent columns (forward and back). After the model has run its course, the edge weights are used to predict labels for each token given the observation sequence.
The preference between these choices: it depends. Off-the-shelf NERs like the Stanford NLP Parser give you a pre-trained CRF. If you've got a specialized domain where you need to train, CRF is probably overkill and the SVM is sufficient. | What is the difference between Support Vector Machines and Conditional Random Fields models in the c | Assume that for our data, we have observations (each observation is a set of features) and labels (1 label for each observation). In the case of NER, we have a set of features about a word (observatio | What is the difference between Support Vector Machines and Conditional Random Fields models in the context of Named Entity Recognition?
Assume that for our data, we have observations (each observation is a set of features) and labels (1 label for each observation). In the case of NER, we have a set of features about a word (observation) and the labels are commonly a BIO scheme, where we might have B-Loc for beginning of location, I-Loc inside location, B-Pers for beginning of person word, I-Pers for inside person word, and O for outside of named entity. The sentence: "The man was George W. Bush." has labeling "The (O) man (O) was (O) George (B-Pers) W. (I-Pers) Bush (I-Pers)."
An SVM is predicting a label given the set of features, and none of these features are related to the label of the preceding observations. However, the features used to train an SVM can be related to the features of the preceding and following observations. For instance, we couldn't have features for preceding-B-Loc preceding-I-Loc, etc, but we can have preceding-NOUN preceding-VERB following-NOUN following-VERB, etc., where each of these features is given a binary 0/1 value (in training). In practice, an SVM might want you to name your features from 1 to |num_features|, and then only list the features names that have value 1. Then, in test, we can generate labels by looking at the observation, such as the part-of-speech of the word, the preceding and following word, etc., and the SVM uses these observations to predict the label. One would create many features, and the observations would be sparse.
In short, the CRF uses the probability of a label given a set of features plus all other labels (training is slow). This is a simplification because the CRF model iteratively propagates the probability of labels forward and backwards (see forward-backward algorithm) until it converges on a local maximum. A CRF uses a graph ( vertices and edges) over the observations of the input: if you picture the tokens from left to right, each token is associated with a vertical column underneath it, where along this column are vertices for all possible labels (B-Loc, I-Loc, B-Pers, I-Pers, O, etc.) - so in our simple BIO scheme, we have 7 vertices per token. There are no edges in a column: each vertex is connected to each vertex in the adjacent columns (forward and back). After the model has run its course, the edge weights are used to predict labels for each token given the observation sequence.
The preference between these choices: it depends. Off-the-shelf NERs like the Stanford NLP Parser give you a pre-trained CRF. If you've got a specialized domain where you need to train, CRF is probably overkill and the SVM is sufficient. | What is the difference between Support Vector Machines and Conditional Random Fields models in the c
Assume that for our data, we have observations (each observation is a set of features) and labels (1 label for each observation). In the case of NER, we have a set of features about a word (observatio |
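To make the "features of the preceding and following observations" idea concrete, here is a tiny base-R sketch that builds such a feature table for an SVM-style token classifier, using the toy sentence from the answer (the POS tags are assumed for illustration):

tokens <- c("The", "man", "was", "George", "W.", "Bush")
pos    <- c("DET", "NOUN", "VERB", "PROPN", "PROPN", "PROPN")
labels <- c("O", "O", "O", "B-Pers", "I-Pers", "I-Pers")

# One row per token: its own POS plus the POS of its neighbours.
# Note the features refer to neighbouring observations, never to their labels.
features <- data.frame(
  token          = tokens,
  pos            = pos,
  prev_pos       = c("<BOS>", head(pos, -1)),
  next_pos       = c(tail(pos, -1), "<EOS>"),
  is_capitalised = grepl("^[A-Z]", tokens),
  label          = labels
)
features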
49,973 | Adaboost - update of weights | For 1), yes on both counts. You can view training a new classifier as selecting the best classifier from the "pool" defined as the range (i.e. the collection of all possible resultant classifiers) of the classification algorithm.
For 2), this re-weighting scheme is simply part of the definition of the adaboost algorithm. A reasonable question is, of course, why this choice? Reweighting in this way allows one to bound the training error with an exponentially decreasing function. Here is Theorem 3.1 from Boosting by Schapire and Freund:
Given the notation of algorithm 1.1 (adaboost) let $\lambda_t = \frac{1}{2} - e_t$, and let $D_1$ be any initial distribution over the training set. Then the weighted training error of the combined classifier $H$ with respect to $D_1$ is bounded as
$$ Pr( H(x_i) \neq y_i) \leq \exp \left( -2 \sum_t \lambda_t^2 \right) $$
You can use this to show that, if your base (weak) classifiers have a fixed edge over being random (i.e. a small bias to being correct, no matter how small), then adaboost drives down the training error exponentially fast. The proof of this inequality uses the relation (3) in a fundamental way.
I should note, there is nothing obvious about the algorithm. I'm sure it took years and years of meditation and pots and pots of coffee to come into its final form - so there is nothing wrong with an initial ??? response to the setup. | Adaboost - update of weights | For 1), yes on both counts. You can view training a new classifier as selecting the best classifier from the "pool" defined as the range (i.e. the collection of all possible resultant classifiers) of | Adaboost - update of weights
For 1), yes on both counts. You can view training a new classifier as selecting the best classifier from the "pool" defined as the range (i.e. the collection of all possible resultant classifiers) of the classification algorithm.
For 2), this re-weighting scheme is simply part of the definition of the adaboost algorithm. A reasonable question is, of course, why this choice? Reweighting in this way allows one to bound the training error with an exponentially decreasing function. Here is Theorem 3.1 from Boosting by Schapire and Freund:
Given the notation of algorithm 1.1 (adaboost) let $\lambda_t = \frac{1}{2} - e_t$, and let $D_1$ be any initial distribution over the training set. Then the weighted training error of the combined classifier $H$ with respect to $D_1$ is bounded as
$$ Pr( H(x_i) \neq y_i) \leq \exp \left( -2 \sum_t \lambda_t^2 \right) $$
You can use this to show that, if your base (weak) classifiers have a fixed edge over being random (i.e. a small bias to being correct, no matter how small), then adaboost drives down the training error exponentially fast. The proof of this inequality uses the relation (3) in a fundamental way.
I should note, there is nothing obvious about the algorithm. I'm sure it took years and years of meditation and pots and pots of coffee to come into its final form - so there is nothing wrong with an initial ??? response to the setup. | Adaboost - update of weights
For 1), yes on both counts. You can view training a new classifier as selecting the best classifier from the "pool" defined as the range (i.e. the collection of all possible resultant classifiers) of |
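For concreteness, a minimal R sketch of the reweighting step being discussed, with labels in {-1, +1}; this is the standard AdaBoost update rather than anything specific to the book's derivation:

# One AdaBoost reweighting step: w = current example weights (a distribution),
# y = true labels in {-1, +1}, h = weak learner's predictions in {-1, +1}.
adaboost_reweight <- function(w, y, h) {
  e     <- sum(w * (h != y))        # weighted training error
  alpha <- 0.5 * log((1 - e) / e)   # weight given to this weak classifier
  w_new <- w * exp(-alpha * y * h)  # misclassified points get up-weighted
  w_new / sum(w_new)                # renormalise to a distribution
}

w <- rep(1 / 6, 6)
y <- c(-1, -1, 1, 1, 1, -1)
h <- c(-1,  1, 1, 1, -1, -1)        # two mistakes
adaboost_reweight(w, y, h)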
49,974 | Justification for invoking Maximum Entropy | First of all, I tried to comment on the question but I couldn't because I didn't have (still don't) 50 reputation, so I'm posting my opinion as an answer despite knowing it is not a complete answer to what's asked.
In a Bayesian probabilistic framework, probabilities are considered as "degrees of belief", that is, the "most rational" measure of the plausibility of assertions in situations of incomplete information (1).
In that context, one can show (2) that the Principle of Maximum Entropy is the reasonable choice for updating a "state of knowledge" of a rational agent to another state when confronted with new information that constrains its knowledge (for example, more data). So that's Maximum Entropy as I see it.
Your question, however, concerns priors, which are a much more delicate matter. That's more or less the terrain of philosophical, epistemological questions: some will argue that priors are nonsense, because science is remarkably certain and can't allow room for subjectivity; on the other side, some people will argue that it's truly subjective, and there is nothing wrong with that, because we will never have access to the "true nature" of our object of study.
I prefer to justify Bayesian subjectivity and priors by saying that we normally won't have access to the objective, mechanistic description of our event - or maybe we do, but we don't want it! (because of the many degrees of freedom, low computational power et cetera). In these scenarios, we take advantage of every piece of key information that we have on the problem (not the data!): be it symmetries, moments, or whatever, so that we can codify our knowledge as probabilities and update it on the basis of more data.
On behalf of Bayesian and entropic inference, I shall try to explain the reasoning behind these often misunderstood topics by quoting Bertrand Russell:
I wish to propose for the reader’s favourable consideration a doctrine
which may, I fear, appear wildly paradoxical and subversive. The
doctrine in question is this: that it is undesirable to believe in a
proposition when there is no ground whatever for supposing it true.
Bertrand Russell, in Sceptical Essays
The main point is that we can't use information that we don't have or ignore information that we do have when constructing our priors. All knowledge must be considered.
I must apologize for my bad english, I'm not a native speaker and sometimes (many times) I write some things thinking they're right but they are not. I'd really appreciate if someone pointed my mistakes ^^ | Justification for invoking Maximum Entropy | First of all, I tried to comment on the question but I couldn't because I didn't have (still don't) 50 reputation, so I'm posting my opinion as an answer despite knowing it is not a complete answer to | Justification for invoking Maximum Entropy
First of all, I tried to comment on the question but I couldn't because I didn't have (still don't) 50 reputation, so I'm posting my opinion as an answer despite knowing it is not a complete answer to what's asked..
In a Bayesian probabilistic framework, probabilities are considered as "degrees of belief", that is, the "most rational" measure of the plausibility of assertions in situations of incomplete information (1).
In that context, one can show (2) that the Principle of Maximum Entropy is the reasonable choice for updating a "state of knowledge" of a rational agent to another state when it is confronted with new information that constrains its knowledge (for example, more data). So that's Maximum Entropy as I see it.
Your question, however, concerns priors, which are a much more delicate matter. That's more or less the terrain of philosophical, epistemological questions: some will argue that priors are nonsense, because science is remarkably certain and can't allow room for subjectivity; on the other side, some other people will argue that it's truly subjective, and there is nothing wrong with that, because we will never have access to the "true nature" of our object of study.
I prefer to justify Bayesian subjectivity and priors by saying that we normally won't have access to the objective, mechanistic description of our event - or maybe we do, but we don't want it! (because of the many degrees of freedom, low computational power, et cetera). In these scenarios, we take advantage of every piece of key information that we have on the problem (not the data!): be it symmetries, moments, or whatever else, so that we can codify our knowledge as probabilities and update it on the basis of more data.
On behalf of Bayesian and entropic inference, I shall try to explain the reasoning behind these often misunderstood topics by quoting Bertrand Russell:
I wish to propose for the reader’s favourable consideration a doctrine
which may, I fear, appear wildly paradoxical and subversive. The
doctrine in question is this: that it is undesirable to believe in a
proposition when there is no ground whatever for supposing it true.
Bertrand Russell, in Sceptical Essays
The main point is that we can't use information that we don't have or ignore information that we do have when constructing our priors. All knowledge must be considered.
I must apologize for my bad english, I'm not a native speaker and sometimes (many times) I write some things thinking they're right but they are not. I'd really appreciate if someone pointed my mistakes ^^ | Justification for invoking Maximum Entropy
First of all, I tried to comment on the question but I couldn't because I didn't have (still don't) 50 reputation, so I'm posting my opinion as an answer despite knowing it is not a complete answer to |
49,975 | Dunn's test p-values in R are exactly half those in SPSS and GraphPad for the same data | The dunn.test package in R uses a one-sided test, whereas SPSS and GraphPad use two-sided tests. There is no facility in the dunn.test package or its function dunn.test() to change to a two-sided test, but the p-values can be multiplied by 2 if a two-sided test is required.
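As a concrete sketch of the doubling (assuming, per the package documentation, that dunn.test() returns the pairwise comparisons and their one-sided p-values in the components comparisons and P; the toy data are mine):
library(dunn.test)
set.seed(42)
x <- c(rnorm(10, 0), rnorm(10, 1), rnorm(10, 2))   # toy data, 3 groups
g <- rep(c("a", "b", "c"), each = 10)
res <- dunn.test(x, g)                              # one-sided p-values
data.frame(comparison  = res$comparisons,
           p_two_sided = pmin(2 * res$P, 1))        # match SPSS / GraphPad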
A two-sided Dunn's test is available from the dunnTest() function in package FSA (Fisheries Stock Assessment). This package is not available from CRAN, but can be downloaded by running the code source("http://www.rforge.net/FSA/InstallFSA.R"). It requires an R version more recent than 3.0.2, and I had trouble installing it until I updated the Rcpp package from CRAN. More information on FSA can be found on https://fishr.wordpress.com/fsa/, and documentation on the dunnTest() function can be found on http://www.rforge.net/doc/packages/FSA/dunnTest.html.
Thanks to Stephan Kolassa for his help in resolving this problem. | Dunn's test p-values in R are exactly half those in SPSS and GraphPad for the same data | The dunn.test package in R uses a one-sided test, whereas SPSS and GraphPad use two-sided tests. There is no facility in the dunn.test package or its function dunn.test() to change to a two-sided test | Dunn's test p-values in R are exactly half those in SPSS and GraphPad for the same data
The dunn.test package in R uses a one-sided test, whereas SPSS and GraphPad use two-sided tests. There is no facility in the dunn.test package or its function dunn.test() to change to a two-sided test, but the p-values can be multiplied by 2 if a two-sided test is required.
A two-sided Dunn's test is available from the dunnTest() function in package FSA (Fisheries Stock Assessment). This package is not available from CRAN, but can be downloaded by running the code source("http://www.rforge.net/FSA/InstallFSA.R"). It requires an R version more recent than 3.0.2, and I had trouble installing it until I updated the Rcpp package from CRAN. More information on FSA can be found on https://fishr.wordpress.com/fsa/, and documentation on the dunnTest() function can be found on http://www.rforge.net/doc/packages/FSA/dunnTest.html.
Thanks to Stephan Kolassa for his help in resolving this problem. | Dunn's test p-values in R are exactly half those in SPSS and GraphPad for the same data
The dunn.test package in R uses a one-sided test, whereas SPSS and GraphPad use two-sided tests. There is no facility in the dunn.test package or its function dunn.test() to change to a two-sided test |
49,976 | Serially Correlated Regressors | I see it as a special case of Structural Vector AutoRegression (SVAR). A standard SVAR model is:
$$
\textbf{A}(L)X_t = \mu +\textbf{e}_t
$$
where $X_t$ is a vector of $k$ endogenous variables, $\mu$ is a vector of $k$ constant parameters, $\textbf{e}_t \sim N(0,I)$ is a random vector, and $\textbf{A}(L)$ is a matrix polynomial of order $n$, where $\textbf{A}(L)X_t = A_0X_t+A_1X_{t-1}+\cdots +A_nX_{t-n}.$ In your particular case:
$$
X_t = \begin{bmatrix}
x_t \\
y_t \end{bmatrix}
$$
$$
A_0= \begin{bmatrix}
1 & 0 \\
0 & 1\end{bmatrix}
$$
$$
A_t=\begin{bmatrix}
0 & 0 \\
-\gamma/n & 0 \end{bmatrix}
$$
for $t = 1,\ldots,n.$
SVAR models were introduced in the 80's and they have been widely studied mainly within the econometric realm. In this paper, Lutz Kilian reviews the econometric literature regarding the identification of causal effects in the data in the context of SVARs, but in the beginning he provides a nice introduction to the topic. | Serially Correlated Regressors | I see it as a special case of Structural Vector AutoRegression (SVAR). A standard SVAR model is:
$$
\textbf{A}(L)X_t = \mu +\textbf{e}_t
$$
where $X_t$ is a vector of k endogenous variables, $\mu$ is | Serially Correlated Regressors
I see it as a special case of Structural Vector AutoRegression (SVAR). A standard SVAR model is:
$$
\textbf{A}(L)X_t = \mu +\textbf{e}_t
$$
where $X_t$ is a vector of $k$ endogenous variables, $\mu$ is a vector of $k$ constant parameters, $\textbf{e}_t \sim N(0,I)$ is a random vector, and $\textbf{A}(L)$ is a matrix polynomial of order $n$, where $\textbf{A}(L)X_t = A_0X_t+A_1X_{t-1}+\cdots +A_nX_{t-n}.$ In your particular case:
$$
X_t = \begin{bmatrix}
x_t \\
y_t \end{bmatrix}
$$
$$
A_0= \begin{bmatrix}
1 & 0 \\
0 & 1\end{bmatrix}
$$
$$
A_t=\begin{bmatrix}
0 & 0 \\
-\gamma/n & 0 \end{bmatrix}
$$
for $t = 1,\ldots,n.$
SVAR models were introduced in the 80's and they have been widely studied mainly within the econometric realm. In this paper, Lutz Kilian reviews the econometric literature regarding the identification of causal effects in the data in the context of SVARs, but in the beginning he provides a nice introduction to the topic. | Serially Correlated Regressors
I see it as a special case of Structural Vector AutoRegression (SVAR). A standard SVAR model is:
$$
\textbf{A}(L)X_t = \mu +\textbf{e}_t
$$
where $X_t$ is a vector of k endogenous variables, $\mu$ is |
49,977 | Serially Correlated Regressors | After the moving average, the regressor in your model becomes more persistent, i.e. having stronger serial correlations. The OLS estimate would still be asymptotically consistent, but the finite-sample bias likely becomes larger. You could run a simple simulation study to see the finite-sample bias.
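A minimal sketch of such a simulation (my own toy setup, not from the original post): a small-sample regression on a lagged $n$-period moving average of a persistent AR(1), with the $y$-innovation correlated with the $x$-innovation, which is the situation where persistence is known to produce (Stambaugh-type) finite-sample bias:
set.seed(1)
sim_once <- function(T = 60, n = 12, beta = 0, phi = 0.95, rho = -0.9) {
  eps <- rnorm(T + n)                                  # innovations of x
  u   <- rho * eps + sqrt(1 - rho^2) * rnorm(T + n)    # correlated y-errors
  x   <- as.numeric(stats::filter(eps, phi, method = "recursive"))  # AR(1)
  ma  <- as.numeric(stats::filter(x, rep(1 / n, n), sides = 1))     # trailing MA
  idx <- (n + 1):(T + n)
  z   <- ma[idx - 1]                                   # lagged moving average
  y   <- beta * z + u[idx]
  unname(coef(lm(y ~ z))[2])
}
est <- replicate(5000, sim_once())
mean(est)    # compare with the true beta = 0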
In finance, there is a literature on forecasting stock return with financial/macroeconomic variables. These variables are often highly persistent, similar to the problem you are facing. It might be useful for you to have a look at the finance studies that investigate the impact of highly persistent regressors, e.g. "Spurious Regressions in Financial Economics?" by Ferson etc, Journal of Finance, August 2003. Hope it helps. | Serially Correlated Regressors | After the moving average, the regressor in your model becomes more persistent, i.e. having stronger serial correlations. The OLS estimate would still be asymptotically consistent, but the finite-sampl | Serially Correlated Regressors
After the moving average, the regressor in your model becomes more persistent, i.e. having stronger serial correlations. The OLS estimate would still be asymptotically consistent, but the finite-sample bias likely becomes larger. You could run a simple simulation study to see the finite-sample bias.
In finance, there is a literature on forecasting stock return with financial/macroeconomic variables. These variables are often highly persistent, similar to the problem you are facing. It might be useful for you to have a look at the finance studies that investigate the impact of highly persistent regressors, e.g. "Spurious Regressions in Financial Economics?" by Ferson etc, Journal of Finance, August 2003. Hope it helps. | Serially Correlated Regressors
After the moving average, the regressor in your model becomes more persistent, i.e. having stronger serial correlations. The OLS estimate would still be asymptotically consistent, but the finite-sampl |
49,978 | Serially Correlated Regressors | I would look into multivariate ARMA modeling for generic models of this form. (It's a sort of hacky way to approach regression, but it'll work.) | Serially Correlated Regressors | I would look into multivariate ARMA modeling for generic models of this form. (It's a sort of hacky way to approach regression, but it'll work.) | Serially Correlated Regressors
I would look into multivariate ARMA modeling for generic models of this form. (It's a sort of hacky way to approach regression, but it'll work.) | Serially Correlated Regressors
I would look into multivariate ARMA modeling for generic models of this form. (It's a sort of hacky way to approach regression, but it'll work.) |
49,979 | A closed form formula for the normalizing constant in standard normal auto-regressive series? | It's not strange you didn't calculate the AR(3) case. It's rather complicated! And no, there is no closed form for the AR(n)-case. For the AR(3) we start with the Yule-Walker equations (see the AR model article on Wikipedia), where $\gamma_j=\gamma_{-j}$:
$\gamma_1=c_1\gamma_0+c_2\gamma_{-1}+c_3\gamma_{-2}=c_1\gamma_0+c_2\gamma_{1}+c_3\gamma_{2}\\
\gamma_2=c_1\gamma_1+c_2\gamma_0+c_3\gamma_{-1}=c_1\gamma_1+c_2\gamma_0+c_3\gamma_{1}\\
\gamma_3=c_1\gamma_2+c_2\gamma_1+c_3\gamma_{0}$
Then we multiply the AR(3) expression by $Z_t$ and take the expectation:
$ E[Z_t^2]=c_1E[Z_tZ_{t-1}]+c_2E[Z_tZ_{t-2}]+c_3E[Z_tZ_{t-3}]+c^2\Rightarrow \\
\gamma_0=c_1\gamma_1+c_2\gamma_2+c_3\gamma_3+c^2$
From Yule-Walker do we get:
$ \gamma_1=\frac{c_1+c_2c_3}{1-c_2-c_1c_3-c_3^2}\gamma_0\\
\gamma_2=\left(\frac{c_1(c_1+c_2c_3)+c_3(c_1+c_2c_3)}{1-c_2-c_1c_3-c_3^2}+c_2\right)\gamma_0\\
\gamma_3=\left(\frac{(c_1^2+c_3c_1+c_2)(c_1+c_2c_3)}{1-c_2-c_1c_3-c_3^2}+c_2+c_3\right)\gamma_0$
Plugging these into the equation above (the second point) gives what you want by setting $Var[Z_s]=\gamma_0=1$.
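In practice the constant can always be obtained numerically for a stationary AR(p). A small R sketch (my own addition, with arbitrary example coefficients) using ARMAacf() together with the relation $\gamma_0=c_1\gamma_1+c_2\gamma_2+c_3\gamma_3+c^2$ above:
ar_coefs <- c(0.5, -0.3, 0.1)                                   # example c_1, c_2, c_3
rho <- ARMAacf(ar = ar_coefs, lag.max = length(ar_coefs))[-1]   # rho_1, ..., rho_p
c2  <- 1 - sum(ar_coefs * rho)          # innovation variance giving gamma_0 = 1
sqrt(c2)                                # the normalizing constant c
var(arima.sim(list(ar = ar_coefs), n = 1e5, sd = sqrt(c2)))     # approx. 1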
In Hamilton's book on time series (p. 59) he writes that the solution for $\gamma_j$ takes the form:
$\gamma_j=g_1\lambda_1^j+g_2\lambda_2^j+\cdots+g_p\lambda_p^j$
where the eigenvalues are the solutions of
$\lambda^p-c_1\lambda^{p-1}-c_2\lambda^{p-2}-\cdots-c_p=0$
(This is exactly the method outlined in the paper user Hunaphu has added).
So my guess is that these equations will be very complicated for $p=5$ or above since no formula exists for the solution of an equation of degree five or higher. It has to be solved by e.g. elliptic functions or theta functions. So to find a general solution for the AR(n)-case you must find a general solution for the n'th-degree algebraic equation which does not exist. | A closed form formula for the normalizing constant in standard normal auto-regressive series? | It's not strange you didn't calculate the AR(3) case. It's rather complicated! And no, there is no closed form for the AR(n)-case. For the AR(3) we start with the Yule-Walker equationsAR-model wikiped | A closed form formula for the normalizing constant in standard normal auto-regressive series?
It's not strange you didn't calculate the AR(3) case. It's rather complicated! And no, there is no closed form for the AR(n)-case. For the AR(3) we start with the Yule-Walker equations (see the AR model article on Wikipedia), where $\gamma_j=\gamma_{-j}$:
$\gamma_1=c_1\gamma_0+c_2\gamma_{-1}+c_3\gamma_{-2}=c_1\gamma_0+c_2\gamma_{1}+c_3\gamma_{2}\\
\gamma_2=c_1\gamma_1+c_2\gamma_0+c_3\gamma_{-1}=c_1\gamma_1+c_2\gamma_0+c_3\gamma_{1}\\
\gamma_3=c_1\gamma_2+c_2\gamma_1+c_3\gamma_{0}$
Then we multiply the AR(3) expression by $Z_t$ and take the expectation:
$ E[Z_t^2]=c_1E[Z_tZ_{t-1}]+c_2E[Z_tZ_{t-2}]+c_3E[Z_tZ_{t-3}]+c^2\Rightarrow \\
\gamma_0=c_1\gamma_1+c_2\gamma_2+c_3\gamma_3+c^2$
From Yule-Walker do we get:
$ \gamma_1=\frac{c_1+c_2c_3}{1-c_2-c_1c_3-c_3^2}\gamma_0\\
\gamma_2=\left(\frac{c_1(c_1+c_2c_3)+c_3(c_1+c_2c_3)}{1-c_2-c_1c_3-c_3^2}+c_2\right)\gamma_0\\
\gamma_3=\left(\frac{(c_1^2+c_3c_1+c_2)(c_1+c_2c_3)}{1-c_2-c_1c_3-c_3^2}+c_2+c_3\right)\gamma_0$
Plugging these into the equation above (the second point) gives what you want by setting $Var[Z_s]=\gamma_0=1$.
In Hamilton's book on time series (p. 59) he writes that the solution for $\gamma_j$ takes the form:
$\gamma_j=g_1\lambda_1^j+g_2\lambda_2^j+\cdots+g_p\lambda_p^j$
where the eigenvalues are the solutions of
$\lambda^p-c_1\lambda^{p-1}-c_2\lambda^{p-2}-\cdots-c_p=0$
(This is exactly the method outlined in the paper user Hunaphu has added).
So my guess is that these equations will be very complicated for $p=5$ or above since no formula exists for the solution of an equation of degree five or higher. It has to be solved by e.g. elliptic functions or theta functions. So to find a general solution for the AR(n)-case you must find a general solution for the n'th-degree algebraic equation which does not exist. | A closed form formula for the normalizing constant in standard normal auto-regressive series?
It's not strange you didn't calculate the AR(3) case. It's rather complicated! And no, there is no closed form for the AR(n)-case. For the AR(3) we start with the Yule-Walker equationsAR-model wikiped |
49,980 | Bar plots with variable bases (intensive and extensive variables at once) | Apparently they are called cascade charts, see:
Variable Width Column Charts (Cascade Charts) - Excel
The Cascade Chart Creator add-in for Microsoft Excel
Cascade chart (graph with variable width bars) - Statalist
As a bonus, ggplot2: Variable Width Column Chart.
However, sometimes cascade chart is used as a synonym for waterfall chart (which is a different thing from the one discussed above), see e.g.:
Creating a Waterfall (Cascade) Chart - FusionCharts
In any case, judging by how long it took me to find this answer, the name may not be that popular even among people creating similar bar plots (and perhaps a descriptive phrase may be better).
As was pointed out by @NickCox, if bars are sorted by their height, it is a discrete variant of the Lorenz curve. | Bar plots with variable bases (intensive and extensive variables at once) | Apparently they are called cascade charts, see:
Variable Width Column Charts (Cascade Charts) - Excel
The Cascade Chart Creator add-in for Microsoft Excel
Cascade chart (graph with variable width bar | Bar plots with variable bases (intensive and extensive variables at once)
Apparently they are called cascade charts, see:
Variable Width Column Charts (Cascade Charts) - Excel
The Cascade Chart Creator add-in for Microsoft Excel
Cascade chart (graph with variable width bars) - Statalist
As a bonus, ggplot2: Variable Width Column Chart.
However, sometimes cascade chart is used as a synonym for waterfall chart (which is a different thing from the one discussed above), see e.g.:
Creating a Waterfall (Cascade) Chart - FusionCharts
In any case, judging by how long it took me to find this answer, the name may not be that popular even among people creating similar bar plots (and perhaps a descriptive phrase may be better).
As was pointed out by @NickCox, if bars are sorted by their height, it is a discrete variant of the Lorenz curve. | Bar plots with variable bases (intensive and extensive variables at once)
Apparently they are called cascade charts, see:
Variable Width Column Charts (Cascade Charts) - Excel
The Cascade Chart Creator add-in for Microsoft Excel
Cascade chart (graph with variable width bar |
49,981 | Whether to use a hierarchical linear model | While I agree that multi-level modeling is an option for data with this structure, it's not the only option, especially given the lone time series dimension. Typically, the nestings within a hierarchical model are by category, e.g., students within classes or teachers, classes within schools, and so on, not ordinal dimensions like time.
Gelman and Hill's book is great, I agree. Perhaps even better is Singer and Willett's book Applied Longitudinal Data Analysis which, to one poster's point, goes into much greater depth than G&H on some topics, e.g., growth models, issues related to constructing an interpretable intercept, curvilinearity and survival analysis, but S&W lack a Bayesian focus.
If you had an additional factor called "industry", then I would lean more strongly towards a hierarchical model. Given that you don't (i.e., you haven't posited "industry" as a factor. Do these companies belong to a single industry? How about by 6 or 8 digit SIC or NAIC codes?), another consideration would be pooled time series or event history analysis as it's called in sociology. Here, the advantages are that the models can be estimated via OLS, a more tractable functional form than multi-level models, and that the industrial, organizational and economic literature has a long-standing history of published papers in this domain, beginning at least with F.M. Scherer but continuing up to the near-present with books like Wooldridge's Econometric Analysis of Cross Section and Panel Data.
My personal favorite in the field of PTS is Lee Cooper's ebook, Market Share Analysis, available on his UCLA website. Ignore the "share" part and even the "marketing" part. It's simply a great introduction to this class of models and it's quite accessible without sacrificing scientific rigor (he's an emeritus professor of mktg science). Not to mention that he develops different and carefully specified functional forms in terms of the data structure, elasticities, cross-elasticities and very practical advice on how to build these into your model. Depending on what your X factors are, this could be quite useful information. | Whether to use a hierarchical linear model | While I agree that multi-level modeling is an option for data with this structure, it's not the only option, especially given the lone time series dimension. Typically, the nestings within a heterarch | Whether to use a hierarchical linear model
While I agree that multi-level modeling is an option for data with this structure, it's not the only option, especially given the lone time series dimension. Typically, the nestings within a hierarchical model are by category, e.g., students within classes or teachers, classes within schools, and so on, not ordinal dimensions like time.
Gelman and Hill's book is great, I agree. Perhaps even better is Singer and Willett's book Applied Longitudinal Data Analysis which, to one poster's point, goes into much greater depth than G&H on some topics, e.g., growth models, issues related to constructing an interpretable intercept, curvilinearity and survival analysis, but S&W lack a Bayesian focus.
If you had an additional factor called "industry", then I would lean more strongly towards a hierarchical model. Given that you don't (i.e., you haven't posited "industry" as a factor. Do these companies belong to a single industry? How about by 6 or 8 digit SIC or NAIC codes?), another consideration would be pooled time series or event history analysis as it's called in sociology. Here, the advantages are that the models can be estimated via OLS, a more tractable functional form than multi-level models, and that the industrial, organizational and economic literature has a long-standing history of published papers in this domain, beginning at least with F.M. Scherer but continuing up to the near-present with books like Wooldridge's Econometric Analysis of Cross Section and Panel Data.
My personal favorite in the field of PTS is Lee Cooper's ebook, Market Share Analysis, available on his UCLA website. Ignore the "share" part and even the "marketing" part. It's simply a great introduction to this class of models and it's quite accessible without sacrificing scientific rigor (he's an emeritus professor of mktg science). Not to mention that he develops different and carefully specified functional forms in terms of the data structure, elasticities, cross-elasticities and very practical advice on how to build these into your model. Depending on what your X factors are, this could be quite useful information. | Whether to use a hierarchical linear model
While I agree that multi-level modeling is an option for data with this structure, it's not the only option, especially given the lone time series dimension. Typically, the nestings within a heterarch |
49,982 | Whether to use a hierarchical linear model | I would say that using a hierarchical model is suitable in your case.
Following this guide, BRAND would be your Level-2-term and Year could be your Level-1-term, used as random slope.
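A minimal lme4 sketch of that structure (the variable and data names are placeholders of mine, not from the original question):
library(lme4)
m  <- lmer(x ~ year + (1 + year | brand), data = dat)   # random intercept and slope by brand
summary(m)
m0 <- lmer(x ~ 1 + (1 | brand), data = dat)             # intercept-only model for the ICC
vc <- as.data.frame(VarCorr(m0))
vc$vcov[1] / sum(vc$vcov)                                # ICC: between-brand share of variance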
You should also check the ICC afterwards to see whether hierarchical models give you any benefits over normal linear regression. | Whether to use a hierarchical linear model | I would say that using a hierarchical model is suitable in your case.
Following this guide, BRAND would be your Level-2-term and Year could be your Level-1-term, used as random slope.
You should also | Whether to use a hierarchical linear model
I would say that using a hierarchical model is suitable in your case.
Following this guide, BRAND would be your Level-2-term and Year could be your Level-1-term, used as random slope.
You should also check the ICC afterwards to see whether hierarchical models give you any benefits over normal linear regression. | Whether to use a hierarchical linear model
I would say that using a hierarchical model is suitable in your case.
Following this guide, BRAND would be your Level-2-term and Year could be your Level-1-term, used as random slope.
You should also |
49,983 | Whether to use a hierarchical linear model | Yes, you should probably use multilevel modeling, possibly with company at level 3 and year at level 2. This would be a multilevel growth curve approach. You could then analyze the different kinds of changes in x different types of companies had over time. | Whether to use a hierarchical linear model | Yes, you should probably use multilevel modeling, possibly with company at level 3 and year at level 2. This would be a multilevel growth curve approach. You could then analyze the different kinds of | Whether to use a hierarchical linear model
Yes, you should probably use multilevel modeling, possibly with company at level 3 and year at level 2. This would be a multilevel growth curve approach. You could then analyze the different kinds of changes in x different types of companies had over time. | Whether to use a hierarchical linear model
Yes, you should probably use multilevel modeling, possibly with company at level 3 and year at level 2. This would be a multilevel growth curve approach. You could then analyze the different kinds of |
49,984 | regression with circular response variable | The pattern in the residuals is not necessarily a problem. One way to check this is to simulate a set of responses from the model that you just fitted (that is, under the assumption that the model is correct), fit a new model to the results, and plot its residuals. This gives you a measure of how weird you would expect the plot to look even if nothing were wrong.
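In generic form the check looks like this (a sketch for an ordinary lm fit with placeholder names; the same recipe applies to whatever circular-regression model was actually fitted, repeated a few times to get several plots):
fit <- lm(y ~ x, data = dat)                 # stand-in for the fitted model
dat$ysim <- simulate(fit, nsim = 1)[[1]]     # responses drawn from the fitted model
refit <- lm(ysim ~ x, data = dat)            # re-fit to the simulated responses
plot(fitted(refit), resid(refit))            # residual plot under "nothing is wrong"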
If you can reliably pick out the original model from several plots produced in this way, then you should start worrying about the residual pattern. | regression with circular response variable | The pattern in the residuals is not necessarily a problem. One way to check this is to simulate a set of responses from the model that you just fitted (that is, under the assumption that the model is | regression with circular response variable
The pattern in the residuals is not necessarily a problem. One way to check this is to simulate a set of responses from the model that you just fitted (that is, under the assumption that the model is correct), fit a new model to the results, and plot its residuals. This gives you a measure of how weird you would expect the plot to look even if nothing were wrong.
If you can reliably pick out the original model from several plots produced in this way, then you should start worrying about the residual pattern. | regression with circular response variable
The pattern in the residuals is not necessarily a problem. One way to check this is to simulate a set of responses from the model that you just fitted (that is, under the assumption that the model is |
49,985 | How was public opinion survey sampling done in early 20th century? | In his book, Théorie des Sondages (Dunod, 2001, written in French), Yves Tillé tells the history of survey sampling. According to him, A.N. Kiaer introduced in 1895 at an IIS (International Institute for Statistics) congress the idea of conducting a survey to measure certain variables on the population, instead of doing a census. His talk was very controversial, many well-known statisticians deemed it nonsense.
It's not until 1925 that the IIS officially recognized the idea of survey sampling as worthy of interest. And, in 1936, Gallup applied it for the very first time to predict the results of an election.
Before the 1930s, I believe there were a lot of "surveys" conducted among newspaper readers to predict the result of elections, but as I understand, people were not aware that their samples were biased.
EDIT : I think they did not construct samples per se : they would just put an ad somewhere in the paper saying that if you wanted to participate in their prediction, you just had to send the name of the candidate you'd vote for to their postal address. I don't think they took it too seriously though (this paper says there's no specific scientific literature on how to predict elections until 1948).
For example, in Paris there's an American bar that's been "predicting" presidential elections results since 1924, but there's nothing scientific in their methodology, it's just for fun ! | How was public opinion survey sampling done in early 20th century? | In his book, Théorie des Sondages (Dunod, 2001, written in French), Yves Tillé tells the history of survey sampling. According to him, A.N. Kiaer introduced in 1895 at an IIS (International Institute | How was public opinion survey sampling done in early 20th century?
In his book, Théorie des Sondages (Dunod, 2001, written in French), Yves Tillé tells the history of survey sampling. According to him, A.N. Kiaer introduced in 1895 at an IIS (International Institute for Statistics) congress the idea of conducting a survey to measure certain variables on the population, instead of doing a census. His talk was very controversial, many well-known statisticians deemed it nonsense.
It's not until 1925 that the IIS officially recognized the idea of survey sampling as worthy of interest. And, in 1936, Gallup applied it for the very first time to predict the results of an election.
Before the 1930s, I believe there were a lot of "surveys" conducted among newspaper readers to predict the result of elections, but as I understand, people were not aware that their samples were biased.
EDIT : I think they did not construct samples per se : they would just put an ad somewhere in the paper saying that if you wanted to participate in their prediction, you just had to send the name of the candidate you'd vote for to their postal address. I don't think they took it too seriously though (this paper says there's no specific scientific literature on how to predict elections until 1948).
For example, in Paris there's an American bar that's been "predicting" presidential elections results since 1924, but there's nothing scientific in their methodology, it's just for fun ! | How was public opinion survey sampling done in early 20th century?
In his book, Théorie des Sondages (Dunod, 2001, written in French), Yves Tillé tells the history of survey sampling. According to him, A.N. Kiaer introduced in 1895 at an IIS (International Institute |
49,986 | How was public opinion survey sampling done in early 20th century? | To the best of my knowledge, in 1936, George Gallup used a version of quota sampling. It worked in some cases, like in 1936, and failed in others, like just 12 years later in 1948, when Gallup, like some others, said that "Dewey defeats Truman". | How was public opinion survey sampling done in early 20th century? | To the best of my knowledge, in 1936, George Gallup used a version of quota sampling. It worked in some cases, like in 1936, and failed in others, like just 12 years later in 1948, when Gallup, like s | How was public opinion survey sampling done in early 20th century?
To the best of my knowledge, in 1936, George Gallup used a version of quota sampling. It worked in some cases, like in 1936, and failed in others, like just 12 years later in 1948, when Gallup, like some others, said that "Dewey defeats Truman". | How was public opinion survey sampling done in early 20th century?
To the best of my knowledge, in 1936, George Gallup used a version of quota sampling. It worked in some cases, like in 1936, and failed in others, like just 12 years later in 1948, when Gallup, like s |
49,987 | Estimating bias in surveys | Here's a solution using survey sampling theory ("frequentist" solution).
Let's assume that the estimate of males and females in the entire country is a good one (its variance is sufficiently low). Then the estimate in each city can be improved using calibration techniques (such as post-stratification or Deville and Särndal's). The estimate of number of males and females in the country is then called a calibration margin.
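A toy post-stratification sketch (my own illustration of the idea, with made-up numbers): re-weight a biased city sample so that its male/female shares match the country margin:
sample_sex <- c(rep("M", 70), rep("F", 30))          # biased city sample
country_margin <- c(M = 0.49, F = 0.51)              # known population shares
base_w <- rep(1, length(sample_sex))                 # design weights
post_w <- base_w * country_margin[sample_sex] /
          prop.table(table(sample_sex))[sample_sex]  # calibrated weights
tapply(post_w, sample_sex, sum) / sum(post_w)        # weighted shares now match the margin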
Calibration will give you new weights, which define the calibrated estimator, which is :
also unbiased, as long as you have a sufficient number of units in your survey
more precise (precision increases with correlation between calibration margin and survey variables, which is likely high in your case)
If the quality of the estimation in the entire country is very poor (variance is much higher than for the estimation in the two cities), then it can't be used as a calibration margin. | Estimating bias in surveys | Here's a solution using survey sampling theory ("frequentist" solution).
Let's assume that the estimate of males and females in the entire country is a good one (its variance is sufficiently low). The | Estimating bias in surveys
Here's a solution using survey sampling theory ("frequentist" solution).
Let's assume that the estimate of males and females in the entire country is a good one (its variance is sufficiently low). Then the estimate in each city can be improved using calibration techniques (such as post-stratification or Deville and Särndal's). The estimate of number of males and females in the country is then called a calibration margin.
Calibration will give you new weights, which define the calibrated estimator, which is :
also unbiased, as long as you have a sufficient number of units in your survey
more precise (precision increases with correlation between calibration margin and survey variables, which is likely high in your case)
If the quality of the estimation in the entire country is very poor (variance is much higher than for the estimation in the two cities), then it can't be used as a calibration margin. | Estimating bias in surveys
Here's a solution using survey sampling theory ("frequentist" solution).
Let's assume that the estimate of males and females in the entire country is a good one (its variance is sufficiently low). The |
49,988 | Binomial data: Null Hypothesis $p = 0$ when all Sample Values are 0 - testing and power analysis | Before assessing the power, we have to make it clear what the test is.
This null hypothesis $H_0:p=0$ posits that successes have no chance of occurring. Observing even a single success would be convincing evidence against the null. But what if no successes (in $n$ independent trials) are observed?
For a test at level $\alpha$ you need, by definition, to have less than an $\alpha$ chance of rejecting the null when it is true. When the null is true, there are zero successes. Thus, a test may still reject the null when no successes are observed. It's just not allowed to do that more than $100\alpha$ percent of the time in the long run.
These considerations show that the test must be one of the following:
When one or more successes are observed, reject the null.
When no successes are observed, randomly reject the null with a chance $\gamma$ no greater than $\alpha$ (the "False Positive rate").
These tests are determined by the number of trials $n$ and your choice of $\gamma.$ (See the discussion at the end of this post concerning the implications of that.)
Now we can compute the power from its definition: it's the chance of rejecting the null under the alternative hypothesis. The alternative hypothesis corresponds to all nonzero values of $p$ (the success probability). In such a case, elementary probability calculations show
The chance of observing one or more successes in $n$ independent trials is $1 - (1-p)^n.$
The chance of observing zero successes and then randomly rejecting the null is $\gamma (1-p)^n.$
Thus, the chance of rejecting the null is
$$\operatorname{Power}(p;\gamma,n) = 1 - (1-p)^n + \gamma(1-p)^n = 1 - (1-\gamma)(1-p)^n.$$
For a given $n$ and $\gamma,$ these are functions of $p$ in the interval $(0,1].$ I graphed a bunch of them so you can see how they behave:
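(A small base-R sketch, added here, reproducing this kind of plot for $n=10$ and $\gamma = 0.20, 0.05, 0$:)
power_fn <- function(p, gamma, n) 1 - (1 - gamma) * (1 - p)^n
curve(power_fn(x, 0.20, 10), 0, 1, col = "gold", lwd = 2,
      ylim = c(0, 1), xlab = "p", ylab = "power")
curve(power_fn(x, 0.05, 10), col = "darkgreen", lwd = 2, add = TRUE)
curve(power_fn(x, 0.00, 10), col = "blue", lwd = 2, add = TRUE)
abline(h = c(0.20, 0.05), lty = 3, col = c("gold", "darkgreen"))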
In each plot the yellow dotted lines are horizontal at a height of $\gamma=0.20$ and the yellow solid lines are the corresponding power curves. Similarly, the green colors correspond to $\gamma=0.05$ and the blue colors to $\gamma=0.$
It follows from the formula and is visually clear in the plots that larger values of $\gamma$ lead to consistently higher power, across the board. Thus, in the usual balancing one does in selecting a test, you would want to make $\gamma$ as large as possible consistent with your need to limit the false positive rate. Evidently, then, you would choose $\gamma=\alpha.$
Thus, given your choice of test size $\alpha$ and the number of observations $n,$ the power can be made as great as $1 - (1-\alpha)(1-p)^n$ by using the test with $\gamma=\alpha.$
Note that $\gamma=0$ means your test never rejects the null when no successes are observed. All other values of $\gamma$ mean your decision is randomized: it depends not only on the observations, but also on the outcome of an independent random variable (that has nothing to do with the observations). Some people are uncomfortable using randomized tests. That's ok, but they will be forced to use the versions of this test with the lowest possible power (shown with the blue curves). That's worth pondering. (It was Jack Kiefer, I recall, who pointed out that many of the same people who refuse to use randomized tests nevertheless have no problem selecting observations randomly ;-).) | Binomial data: Null Hypothesis $p = 0$ when all Sample Values are 0 - testing and power analysis | Before assessing the power, we have to make it clear what the test is.
This null hypothesis $H_0:p=0$ posits that successes have no chance of occurring. Observing even a single success would be convi | Binomial data: Null Hypothesis $p = 0$ when all Sample Values are 0 - testing and power analysis
Before assessing the power, we have to make it clear what the test is.
This null hypothesis $H_0:p=0$ posits that successes have no chance of occurring. Observing even a single success would be convincing evidence against the null. But what if no successes (in $n$ independent trials) are observed?
For a test at level $\alpha$ you need, by definition, to have less than an $\alpha$ chance of rejecting the null when it is true. When the null is true, there are zero successes. Thus, a test may still reject the null when no successes are observed. It's just not allowed to do that more than $100\alpha$ percent of the time in the long run.
These considerations show that the test must be one of the following:
When one or more successes are observed, reject the null.
When no successes are observed, randomly reject the null with a chance $\gamma$ no greater than $\alpha$ (the "False Positive rate").
These tests are determined by the number of trials $n$ and your choice of $\gamma.$ (See the discussion at the end of this post concerning the implications of that.)
Now we can compute the power from its definition: it's the chance of rejecting the null under the alternative hypothesis. The alternative hypothesis corresponds to all nonzero values of $p$ (the success probability). In such a case, elementary probability calculations show
The chance of observing one or more successes in $n$ independent trials is $1 - (1-p)^n.$
The chance of observing zero successes and then randomly rejecting the null is $\gamma (1-p)^n.$
Thus, the chance of rejecting the null is
$$\operatorname{Power}(p;\gamma,n) = 1 - (1-p)^n + \gamma(1-p)^n = 1 - (1-\gamma)(1-p)^n.$$
For a given $n$ and $\gamma,$ these are functions of $p$ in the interval $(0,1].$ I graphed a bunch of them so you can see how they behave:
In each plot the yellow dotted lines are horizontal at a height of $\gamma=0.20$ and the yellow solid lines are the corresponding power curves. Similarly, the green colors correspond to $\gamma=0.05$ and the blue colors to $\gamma=0.$
It follows from the formula and is visually clear in the plots that larger values of $\gamma$ lead to consistently higher power, across the board. Thus, in the usual balancing one does in selecting a test, you would want to make $\gamma$ as large as possible consistent with your need to limit the false positive rate. Evidently, then, you would choose $\gamma=\alpha.$
Thus, given your choice of test size $\alpha$ and the number of observations $n,$ the power can be made as great as $1 - (1-\alpha)(1-p)^n$ by using the test with $\gamma=\alpha.$
Note that $\gamma=0$ means your test never rejects the null when no successes are observed. All other values of $\gamma$ mean your decision is randomized: it depends not only on the observations, but also on the outcome of an independent random variable (that has nothing to do with the observations). Some people are uncomfortable using randomized tests. That's ok, but they will be forced to use the versions of this test with the lowest possible power (shown with the blue curves). That's worth pondering. (It was Jack Kiefer, I recall, who pointed out that many of the same people who refuse to use randomized tests nevertheless have no problem selecting observations randomly ;-).) | Binomial data: Null Hypothesis $p = 0$ when all Sample Values are 0 - testing and power analysis
Before assessing the power, we have to make it clear what the test is.
This null hypothesis $H_0:p=0$ posits that successes have no chance of occurring. Observing even a single success would be convi |
49,989 | Two equivalent forms of logistic regression | I think you lost (or added) a minus sign in one of the formulations (maybe going from log-lik to neg-log-lik?).
Using $z=\beta x$, if we write the losses as:
$l_{01} = -y_{01} z + \log (1+e^z)$
$l_{\pm1} = \log (1+e^{-y_{\pm1}z})$
Then when $y_{01} = y_{\pm1} = 1$, we have:
$$-z + \log (1 + e^z) \quad \text{and} \quad \log (1 + e^{-z})$$
which we can show are equal with a bit of algebra.
When $y_{01} = 0$ and $y_{\pm1} = -1$, we have:
$$\log (1 + e^z) \quad \text{and} \quad \log (1 + e^z)$$
which is what we need.
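A quick numerical check of the two loss forms (my own sketch, not part of the original answer):
set.seed(1)
z <- rnorm(5)
l01  <- function(y, z) -y * z + log(1 + exp(z))   # y coded {0, 1}
lpm1 <- function(y, z) log(1 + exp(-y * z))       # y coded {-1, +1}
all.equal(l01(1, z), lpm1(1, z))    # y = 1 in both codings: TRUE
all.equal(l01(0, z), lpm1(-1, z))   # y = 0 corresponds to y = -1: TRUE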
While this doesn't give you the substitution you were looking for, it does show the equivalence. See also a related answer: https://stats.stackexchange.com/a/279698/1704. | Two equivalent forms of logistic regression | I think you lost (or added) a minus sign in one of the formulations (maybe going from log-lik to neg-log-lik?).
Using $z=\beta x$, if we write the losses as:
$l_{01} = -y_{01} z + \log (1+e^z)$
$l_{\ | Two equivalent forms of logistic regression
I think you lost (or added) a minus sign in one of the formulations (maybe going from log-lik to neg-log-lik?).
Using $z=\beta x$, if we write the losses as:
$l_{01} = -y_{01} z + \log (1+e^z)$
$l_{\pm1} = \log (1+e^{-y_{\pm1}z})$
Then when $y_{01} = y_{\pm1} = 1$, we have:
$$-z + \log (1 + e^z) \quad \text{and} \quad \log (1 + e^{-z})$$
which we can show are equal with a bit of algebra.
When $y_{01} = 0$ and $y_{\pm1} = -1$, we have:
$$\log (1 + e^z) \quad \text{and} \quad \log (1 + e^z)$$
which is what we need.
While this doesn't give you the substitution you were looking for, it does show the equivalence. See also a related answer: https://stats.stackexchange.com/a/279698/1704. | Two equivalent forms of logistic regression
I think you lost (or added) a minus sign in one of the formulations (maybe going from log-lik to neg-log-lik?).
Using $z=\beta x$, if we write the losses as:
$l_{01} = -y_{01} z + \log (1+e^z)$
$l_{\ |
49,990 | MANOVA multiple comparisons with equivalence testing | For an R package, you might take a look at lsmeans. For mlm models, it sets up the multivariate response as if it were a factor whose levels are the dimensions of the response. Then you can do estimates or contrasts of those, with or without other factors being involved. See the example for the MOats dataset that accompanies the package.
It also supports equivalence tests via providing a delta argument in summary or test. A section of the vignette (see vignette("using-lsmeans")) covers equivalence testing. | MANOVA multiple comparisons with equivalence testing | For an R package, you might take a look at lsmeans. For mlm models, it sets up the multivariate response as if it were a factor whose levels are the dimenstions of the response. Then you can do estima | MANOVA multiple comparisons with equivalence testing
For an R package, you might take a look at lsmeans. For mlm models, it sets up the multivariate response as if it were a factor whose levels are the dimensions of the response. Then you can do estimates or contrasts of those, with or without other factors being involved. See the example for the MOats dataset that accompanies the package.
It also supports equivalence tests via providing a delta argument in summary or test. A section of the vignette (see vignette("using-lsmeans")) covers equivalence testing. | MANOVA multiple comparisons with equivalence testing
For an R package, you might take a look at lsmeans. For mlm models, it sets up the multivariate response as if it were a factor whose levels are the dimenstions of the response. Then you can do estima |
49,991 | conditional probability, change of variable and Jacobian | Ok. I just figured out that
$$
f_{X|Z}(x|z) = f_{X|Y}(x|A^Tz+\mu).
$$
To see this, first, the change of variable technique shows that:
$$
f_{X,Z}(x,z) = f_{X,Y}(x,A^Tz+\mu) |A|.
$$
The change of variable technique also shows that:
$$
f_{Z}(z) = f_{Y}(A^Tz+\mu) |A|.
$$
(You can get this result by simply integrating out the $f(X,Z)$ above with respect to $X$).
Therefore,
$$
f_{X|Z}(x|z) = \frac{f_{X,Z}(x,z)}{f_Z(z)} = \frac{f_{X,Y}(x,A^Tz+\mu) |A|}{f_{Y}(A^Tz+\mu) |A|}= \frac{f_{X|Y}(x|A^Tz+\mu) f_Y(A^Tz+\mu)|A|}{f_{Y}(A^Tz+\mu) |A|}=f_{X|Y}(x|A^Tz+\mu).
$$ | conditional probability, change of variable and Jacobian | Ok. I just figured out that
$$
f_{X|Z}(x|z) = f_{X|Y}(x|A^Tz+\mu).
$$
To see this, first, the change of variable technique shows that:
$$
f_{X,Z}(x,z) = f_{X,Y}(x,A^Tz+\mu) |A|.
$$
The change of va | conditional probability, change of variable and Jacobian
Ok. I just figured out that
$$
f_{X|Z}(x|z) = f_{X|Y}(x|A^Tz+\mu).
$$
To see this, first, the change of variable technique shows that:
$$
f_{X,Z}(x,z) = f_{X,Y}(x,A^Tz+\mu) |A|.
$$
The change of variable technique also shows that:
$$
f_{Z}(z) = f_{Y}(A^Tz+\mu) |A|.
$$
(You can get this result by simply integrating out the $f(X,Z)$ above with respect to $X$).
Therefore,
$$
f_{X|Z}(x|z) = \frac{f_{X,Z}(x,z)}{f_Z(z)} = \frac{f_{X,Y}(x,A^Tz+\mu) |A|}{f_{Y}(A^Tz+\mu) |A|}= \frac{f_{X|Y}(x|A^Tz+\mu) f_Y(A^Tz+\mu)|A|}{f_{Y}(A^Tz+\mu) |A|}=f_{X|Y}(x|A^Tz+\mu).
$$ | conditional probability, change of variable and Jacobian
Ok. I just figured out that
$$
f_{X|Z}(x|z) = f_{X|Y}(x|A^Tz+\mu).
$$
To see this, first, the change of variable technique shows that:
$$
f_{X,Z}(x,z) = f_{X,Y}(x,A^Tz+\mu) |A|.
$$
The change of va |
49,992 | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups? | What would be the most convincing argument for a layperson to hear?
Different types of treatments carry different risks.
Here is an example:
A sample model:
For sick patients:
No treatment has a success rate (spontaneous recovery) of 0.01
Treatment A has a success rate of 0.80
Treatment B has a success rate of 0.95
For healthy patients:
Treatment A can kill a healthy patient with probability 0.01
Treatment B can kill a healthy patient with probability 0.03
Also assume:
We can't give both treatments to the same patient
The risk score is calibrated, hence risk score = p(sick).
Our goal:
Find a strategy that maximizes the expected percentage of lives saved, by assigning a treatment to each patient, given his risk score.
Optimization:
The probability of a patient with risk score p ending up healthy with no treatment is
p*0.01 + (1-p)
The probability of a patient with risk score p ending up healthy with treatment A is
p*0.8 + (1-p)*0.99
The probability of a patient with risk score p ending up healthy with treatment B is
p*0.95 + (1-p)*0.97
Now, let's plot these three probability functions as a function of p:
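(A small R sketch of mine, using the numbers above, that draws the three lines and locates the two crossings:)
p    <- seq(0, 1, by = 0.001)
none <- p * 0.01 + (1 - p)
trtA <- p * 0.80 + (1 - p) * 0.99
trtB <- p * 0.95 + (1 - p) * 0.97
plot(p, none, type = "l", ylim = c(0, 1), xlab = "risk score p",
     ylab = "P(patient ends up healthy)")
lines(p, trtA, col = "blue"); lines(p, trtB, col = "red")
c(cut1 = p[which.min(abs(none - trtA))],   # switch from no treatment to A (~0.0125)
  cut2 = p[which.min(abs(trtA - trtB))])   # switch from A to B (~0.118)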
It is easy to see the rationale for using 2 cutoff values.
The best treatment is not the same for every patient. | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups? | What would be the most convincing argument for a layperson to hear?
Different types of treatments carry different risks.
Here is an example:
A sample model:
For sick patients:
No treatment has a suc | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups?
What would be the most convincing argument for a layperson to hear?
Different types of treatments carry different risks.
Here is an example:
A sample model:
For sick patients:
No treatment has a success rate (spontaneous recovery) of 0.01
Treatment A has a success rate of 0.80
Treatment B has a success rate of 0.95
For healthy patients:
Treatment A can kill a healthy patient with probability 0.01
Treatment B can kill a healthy patient with probability 0.03
Also assume:
We can't give both treatments to the same patient
The risk score is calibrated, hence risk score = p(sick).
Our goal:
Find a strategy that maximizes the expected percentage of lives saved, by assigning a treatment to each patient, given his risk score.
Optimization:
The probability of a patient with risk score p ending up healthy with no treatment is
p*0.01 + (1-p)
The probability of a patient with risk score p ending up healthy with treatment A is
p*0.8 + (1-p)*0.99
The probability of a patient with risk score p ending up healthy with treatment B is
p*0.95 + (1-p)*0.97
Now, let's plot these three probability functions as a function of p:
It is easy to see the rationale for using 2 cutoff values.
The best treatment is not the same for every patient. | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups?
What would be the most convincing argument for a layperson to hear?
Different types of treatments carry different risks.
Here is an example:
A sample model:
For sick patients:
No treatment has a suc |
49,993 | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups? | Consider you have two treatments available:
1. costs 1000 but has a 99% chance of helping
2. costs 10 but has a 90% chance of working
Would you rather treat 1 with the first, or 100 with the second?
Assume your risk distribution is 1, 0, ..., 0 then you should treat only the first.
If your risk distribution is 0.60,0.599,0.598,0.597,... then you could save over 45 by using the second drug.
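(Quick arithmetic behind that figure, assuming the risk pattern continues for 100 patients and that the budget of 1000 buys either one dose of drug 1 or 100 doses of drug 2:)
risks <- 0.600 - 0.001 * (0:99)     # 0.600, 0.599, ..., 0.501
sum(risks) * 0.90                    # expected lives saved with drug 2: about 49.5
0.99 * risks[1]                      # expected lives saved with drug 1: about 0.59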
There are two points in this model that you may have overlooked:
Treatments are not guaranteed to work
Prediction will barely ever be 100% correct
Assume the real risk is 1, 0, ..., 0 as before. But your method made a mistake, and produced the risk scores 0.9, 1.0, 0, ... 0. If you bet everything on one treatment, you would be treating the wrong person. If you treat the top 100, you have a 90% success chance of curing the one that was really sick in this toy example. | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups? | Consider you have two treatments available:
1. costs 1000 but has a 99% chance of helping
2. costs 10 but has a 90% chance of working
Would you rather treat 1 with the first, or 100 with the second?
A | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups?
Consider you have two treatments available:
1. costs 1000 but has a 99% chance of helping
2. costs 10 but has a 90% chance of working
Would you rather treat 1 with the first, or 100 with the second?
Assume your risk distribution is 1, 0, ..., 0 then you should treat only the first.
If your risk distribution is 0.60,0.599,0.598,0.597,... then you could save over 45 by using the second drug.
There are two points in this model that you may have overlooked:
Treatments are not guaranteed to work
Prediction will barely ever be 100% correct
Assume the real risk is 1, 0, ..., 0 as before. But your method made a mistake, and produced the risk scores 0.9, 1.0, 0, ... 0. If you bet everything on one treatment, you would be treating the wrong person. If you treat the top 100, you have a 90% success chance of curing the one that was really sick in this toy example. | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups?
Consider you have two treatments available:
1. costs 1000 but has a 99% chance of helping
2. costs 10 but has a 90% chance of working
Would you rather treat 1 with the first, or 100 with the second?
A |
49,994 | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups? | Suppose there were three kinds of patient:
Patients with risk 0 will never catch the virus
Patients with risk 1 will catch the virus only if untreated
Patients with risk 2 will always catch the virus
Then the optimal strategy would be to use multiple cutpoints and only treat patients of risk 1.
I came up with that pathological example by thinking about decision trees. The model I had in mind was
A patient of risk r arrives
If they're treated, the probability of infection is $t(r)$.
If they're not treated, the probability of infection is $u(r)$.
I originally had costs attached to each outcome, but it turns out those aren't important. What's important is that even if you insist that the risk $r$ is 'honest' in that $t(r), u(r)$ are both increasing functions - so higher risk makes for a higher probability of infection - that doesn't mean that the 'benefit' of the treatment $t(r) - u(r)$ has to be increasing with respect to risk. | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups? | Suppose there were three kinds of patient:
Patients with risk 0 will never catch the virus
Patients with risk 1 will catch the virus only if untreated
Patients with risk 2 will always catch the virus | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups?
Suppose there were three kinds of patient:
Patients with risk 0 will never catch the virus
Patients with risk 1 will catch the virus only if untreated
Patients with risk 2 will always catch the virus
Then the optimal strategy would be to use multiple cutpoints and only treat patients of risk 1.
I came up with that pathological example by thinking about decision trees. The model I had in mind was
A patient of risk r arrives
If they're treated, the probability of infection is $t(r)$.
If they're not treated, the probability of infection is $u(r)$.
I originally had costs attached to each outcome, but it turns out those aren't important. What's important is that even if you insist that the risk $r$ is 'honest' in that $t(r), u(r)$ are both increasing functions - so higher risk makes for a higher probability of infection - that doesn't mean that the 'benefit' of the treatment $t(r) - u(r)$ has to be increasing with respect to risk. | Can a 1-D risk score (binary outcome) be sensibly used to create multiple treatment groups?
Suppose there were three kinds of patient:
Patients with risk 0 will never catch the virus
Patients with risk 1 will catch the virus only if untreated
Patients with risk 2 will always catch the virus |
49,995 | Distribution of test statistic under null and alternative | First, review some basic properties of expectation and variance:
Expectation (in particular, linearity, so $E(\Sigma_i X_i)=\sum_i E(X_i)$ and $E(aX)=aE(X)$)
Variance (in particular, that $\text{Var}(aX)=a^2\text{Var}(X)$), and
Variance of a sum of uncorrelated variables so $\text{Var}(\Sigma_i X_i)=\sum_i \text{Var}(X_i)$
(keeping in mind that independent implies uncorrelated)
So that I don't keep taking differences, let $D_i = Y_i-X_i$.
Under $H_0$:
Once you're clear what region(s) of values of the test statistic are consistent with the alternative (identifying what are the 'extreme' values under the null), it's only the distribution under the null that matters for finding the critical value.
If $D_i\sim N(0,1)$, then $\sum_i D_i\sim N(0,n)$ (variance of a sum of independent r.v.s), and so $\bar{D}\sim N(0,\frac{1}{n})$ ($\text{Var}(aX)=a^2\text{Var}(X)$).
Hence $\sqrt{n}\bar{D}\sim N(0,1)$ (again, $\text{Var}(aX)=a^2\text{Var}(X)$).
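(A quick Monte Carlo sketch, added here, confirming the null-distribution result:)
set.seed(1)
n <- 25
stat <- replicate(1e5, sqrt(n) * mean(rnorm(n)))   # sqrt(n) * Dbar under H0
c(mean(stat), sd(stat))                            # approximately 0 and 1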
Under $H_1$:
$D_i\sim N(a,1)$, so
$\sum_i D_i\sim N(na,n)$ (as above plus linearity of expectation*), and so $\bar{D}\sim N(a,\frac{1}{n})$ (ditto).
* though strictly I used it for the $H_0$ case as well, but I imagine that it wasn't presenting a problem for you there.
Hence $\sqrt{n}\bar{D}\sim N(\sqrt{n}a,1)$ (ditto). | Distribution of test statistic under null and alternative | First, review some basic properties of expectation and variance:
Expectation (in particular, linearity, so $E(\Sigma_i X_i)=\sum_i E(X_i)$ and $E(aX)=aE(X)$)
Variance (in particular, that $\text{Var} | Distribution of test statistic under null and alternative
First, review some basic properties of expectation and variance:
Expectation (in particular, linearity, so $E(\Sigma_i X_i)=\sum_i E(X_i)$ and $E(aX)=aE(X)$)
Variance (in particular, that $\text{Var}(aX)=a^2\text{Var}(X)$), and
Variance of a sum of uncorrelated variables so $\text{Var}(\Sigma_i X_i)=\sum_i \text{Var}(X_i)$
(keeping in mind that independent implies uncorrelated)
So that I don't keep taking differences, let $D_i = Y_i-X_i$.
Under $H_0$:
Once you're clear which region(s) of values of the test statistic are consistent with the alternative (i.e., which values count as 'extreme' under the null), only the distribution under the null matters for finding the critical value.
If $D_i\sim N(0,1)$, then $\sum_i D_i\sim N(0,n)$ (variance of a sum of independent r.v.s), and so $\bar{D}\sim N(0,\frac{1}{n})$ ($\text{Var}(aX)=a^2\text{Var}(X)$).
Hence $\sqrt{n}\bar{D}\sim N(0,1)$ (again, $\text{Var}(aX)=a^2\text{Var}(X)$).
Under $H_1$:
$D_i\sim N(a,1)$, so
$\sum_i D_i\sim N(na,n)$ (as above plus linearity of expectation*), and so $\bar{D}\sim N(a,\frac{1}{n})$ (ditto).
* though strictly I used it for the $H_0$ case as well, but I imagine that it wasn't presenting a problem for you there.
Hence $\sqrt{n}\bar{D}\sim N(\sqrt{n}a,1)$ (ditto). | Distribution of test statistic under null and alternative
First, review some basic properties of expectation and variance:
Expectation (in particular, linearity, so $E(\Sigma_i X_i)=\sum_i E(X_i)$ and $E(aX)=aE(X)$)
Variance (in particular, that $\text{Var} |
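A quick simulation (not from the original answer; $n$, $a$ and the replication count are arbitrary choices) confirms the two distributions derived above: $\sqrt{n}\bar{D}\sim N(0,1)$ under $H_0$ and $\sqrt{n}\bar{D}\sim N(\sqrt{n}a,1)$ under $H_1$.
```python
import numpy as np

rng = np.random.default_rng(0)
n, a, reps = 25, 0.5, 100_000          # illustrative choices, not from the answer

def statistic(mean):
    # D_i ~ N(mean, 1); the test statistic is sqrt(n) * Dbar
    d = rng.normal(mean, 1.0, size=(reps, n))
    return np.sqrt(n) * d.mean(axis=1)

t0, t1 = statistic(0.0), statistic(a)
print(t0.mean(), t0.std())             # approx 0 and 1: N(0, 1) under H0
print(t1.mean(), t1.std())             # approx sqrt(n)*a = 2.5 and 1: N(sqrt(n)a, 1) under H1
```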
49,996 | Variance Inflation Factor less than 1 in ridge regression? | I would like to suggest that you calculate the diagonal elements of the matrix directly.
It is assumed that the design matrix is centered and scaled.
We can adopt the eigenvalue decomposition $R_{XX}=X'X=T\Lambda T'$.
$$\begin{aligned} (R_{XX}+cI)^{-1}R_{XX}(R_{XX}+cI)^{-1}&=(R_{XX}+cI)^{-1}(R_{XX}+cI)(R_{XX}+cI)^{-1}-c(R_{XX}+cI)^{-1}(R_{XX}+cI)^{-1}\\
&=(R_{XX}+cI)^{-1}-c(R_{XX}+cI)^{-1}(R_{XX}+cI)^{-1} \\
&=(T\Lambda T'+cTT')^{-1}-c(T\Lambda T'+cTT')^{-1}(T\Lambda T'+cTT')^{-1}\\
&=T\left( (\Lambda+cI)^{-1}-c (\Lambda+cI)^{-1} (\Lambda+cI)^{-1} \right)T'
\end{aligned}$$
The matrix $(\Lambda+cI)^{-1}$ is a diagonal matrix whose $i$th element is $\frac{1}{\lambda_i+c}$.
So the matrix $(\Lambda+cI)^{-1}-c (\Lambda+cI)^{-1} (\Lambda+cI)^{-1}$ is also diagonal, and its $i$th element is $\frac{1}{\lambda_i+c}-\frac{c}{(\lambda_i+c)^2}=\frac{\lambda_i}{(\lambda_i+c)^2}$.
In OLS, it is known that the VIF values are the diagonal elements of the matrix $T\Lambda^{-1}T'$.
Comparing this $\Lambda^{-1}$ with its ridge counterpart $(\Lambda+cI)^{-1}-c (\Lambda+cI)^{-1} (\Lambda+cI)^{-1}$, every diagonal element in the ridge case is deflated by the factor $\frac{\lambda_i^2}{(\lambda_i+c)^2}$.
So we can conclude that the bigger the ridge constant, the more the VIFs are deflated.
I am not a native English speaker. Please don't mind my awkward expressions and it would be nice of you if you correct my grammar errors. Thank you. | Variance Inflation Factor less than 1 in ridge regression? | I would like to suggest that you calculate the diagonal elements of matrix directly.
It is assumed that the design matrix is centered and scaled.
We can adopt the eigen value decomposition $R_{XX}=X | Variance Inflation Factor less than 1 in ridge regression?
I would like to suggest that you calculate the diagonal elements of the matrix directly.
It is assumed that the design matrix is centered and scaled.
We can adopt the eigenvalue decomposition $R_{XX}=X'X=T\Lambda T'$.
$$\begin{aligned} (R_{XX}+cI)^{-1}R_{XX}(R_{XX}+cI)^{-1}&=(R_{XX}+cI)^{-1}(R_{XX}+cI)(R_{XX}+cI)^{-1}-c(R_{XX}+cI)^{-1}(R_{XX}+cI)^{-1}\\
&=(R_{XX}+cI)^{-1}-c(R_{XX}+cI)^{-1}(R_{XX}+cI)^{-1} \\
&=(T\Lambda T'+cTT')^{-1}-c(T\Lambda T'+cTT')^{-1}(T\Lambda T'+cTT')^{-1}\\
&=T\left( (\Lambda+cI)^{-1}-c (\Lambda+cI)^{-1} (\Lambda+cI)^{-1} \right)T'
\end{aligned}$$
The matrix $(\Lambda+cI)^{-1}$ is a diagonal matrix whose $i$th element is $\frac{1}{\lambda_i+c}$.
So the matrix $(\Lambda+cI)^{-1}-c (\Lambda+cI)^{-1} (\Lambda+cI)^{-1}$ is also diagonal, and its $i$th element is $\frac{1}{\lambda_i+c}-\frac{c}{(\lambda_i+c)^2}=\frac{\lambda_i}{(\lambda_i+c)^2}$.
In OLS, it is known that the VIF values are the diagonal elements of the matrix $T\Lambda^{-1}T'$.
Comparing this $\Lambda^{-1}$ with its ridge counterpart $(\Lambda+cI)^{-1}-c (\Lambda+cI)^{-1} (\Lambda+cI)^{-1}$, every diagonal element in the ridge case is deflated by the factor $\frac{\lambda_i^2}{(\lambda_i+c)^2}$.
So we can conclude that the bigger the ridge constant, the more the VIFs are deflated.
I am not a native English speaker. Please don't mind my awkward expressions and it would be nice of you if you correct my grammar errors. Thank you. | Variance Inflation Factor less than 1 in ridge regression?
I would like to suggest that you calculate the diagonal elements of matrix directly.
It is assumed that the design matrix is centered and scaled.
We can adopt the eigen value decomposition $R_{XX}=X |
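A small numerical check of the derivation above (not part of the original answer; the correlation matrix and ridge constant are illustrative assumptions): the matrix $(R_{XX}+cI)^{-1}R_{XX}(R_{XX}+cI)^{-1}$ agrees with its eigendecomposition form $T\,\mathrm{diag}\!\left(\frac{\lambda_i}{(\lambda_i+c)^2}\right)T'$, and its diagonal elements come out no larger than the OLS VIFs on $\mathrm{diag}(R_{XX}^{-1})$.
```python
import numpy as np

# Illustrative correlation matrix and ridge constant (assumed values).
R = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
c = 0.5

lam, T = np.linalg.eigh(R)                          # R = T diag(lam) T'
Rc_inv = np.linalg.inv(R + c * np.eye(3))
direct = Rc_inv @ R @ Rc_inv                        # (R+cI)^{-1} R (R+cI)^{-1}
via_eig = T @ np.diag(lam / (lam + c) ** 2) @ T.T   # simplified eigen form
print(np.allclose(direct, via_eig))                 # True: the derivation checks out

print(np.diag(direct) <= np.diag(np.linalg.inv(R))) # ridge diagonals <= OLS VIFs
```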
49,997 | Variance Inflation Factor less than 1 in ridge regression? | I was wrestling with this issue myself, and then discovered this article by Garcia et al., where they show how the traditional definition of the VIF does in fact lead to values less than unity in the case of ridge regression. They subsequently propose an alternative definition, involving the ridge parameter $k$, which leads to VIFs that are bounded from below by one. In the case of two estimators with correlation $\rho$, if the number of observations is $n$, the expression for the VIF is
$$
VIF(k, n)=\frac{((n + 2)(1 + k) - k)^2}{(n + 2)^2((1 + k)^2 - \rho^2) - 2(n + 2)k(1 + k-\rho)}\ .
$$ | Variance Inflation Factor less than 1 in ridge regression? | I was wrestling with this issue myself, and then discovered this article by Garcia et al., where they show how traditional definition of the VIF does in fact lead to values less than unity in the case | Variance Inflation Factor less than 1 in ridge regression?
I was wrestling with this issue myself, and then discovered this article by Garcia et al., where they show how the traditional definition of the VIF does in fact lead to values less than unity in the case of ridge regression. They subsequently propose an alternative definition, involving the ridge parameter $k$, which leads to VIFs that are bounded from below by one. In the case of two estimators with correlation $\rho$, if the number of observations is $n$, the expression for the VIF is
$$
VIF(k, n)=\frac{((n + 2)(1 + k) - k)^2}{(n + 2)^2((1 + k)^2 - \rho^2) - 2(n + 2)k(1 + k-\rho)}\ .
$$ | Variance Inflation Factor less than 1 in ridge regression?
I was wrestling with this issue myself, and then discovered this article by Garcia et al., where they show how traditional definition of the VIF does in fact lead to values less than unity in the case |
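Taking the quoted expression at face value (this check is not from the original answer; the values of $n$, $\rho$ and the grid of $k$ values are arbitrary), one can verify that $k=0$ recovers the usual $1/(1-\rho^2)$ and that the values stay at or above one over a grid of ridge parameters.
```python
import numpy as np

def vif(k, n, rho):
    # The expression quoted above (attributed to Garcia et al.), taken as given.
    num = ((n + 2) * (1 + k) - k) ** 2
    den = (n + 2) ** 2 * ((1 + k) ** 2 - rho ** 2) - 2 * (n + 2) * k * (1 + k - rho)
    return num / den

n, rho = 30, 0.9
print(vif(0.0, n, rho), 1 / (1 - rho ** 2))   # k = 0 recovers the usual 1/(1 - rho^2)

ks = np.linspace(0.0, 10.0, 101)
print(np.all(vif(ks, n, rho) >= 1.0))         # bounded below by one on this grid
```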
49,998 | Variance Inflation Factor less than 1 in ridge regression? | Recall that you are minimizing
$$
\|Ax -y\|^2 + c\|x-0\|^2
$$
So as $c$ increases, $x\to 0$; i.e., the bias is such that your estimator approaches $0$. So when your VIFs are approaching one, stop increasing $c$; that is probably your optimal $c$.
Reference: p. 434 in Applied Statistical Models. | Variance Inflation Factor less than 1 in ridge regression? | Recall that you are minimizing
$$
\|Ax -y\|^2 + c\|x-0\|^2
$$
So as $c$ increases $x\to 0$ i.e. the bias is such that your estimator approches $0$ as $c$ increases. So when your VIFs are approaching | Variance Inflation Factor less than 1 in ridge regression?
Recall that you are minimizing
$$
\|Ax -y\|^2 + c\|x-0\|^2
$$
So as $c$ increases, $x\to 0$; i.e., the bias is such that your estimator approaches $0$. So when your VIFs are approaching one, stop increasing $c$; that is probably your optimal $c$.
Reference: p. 434 in Applied Statistical Models. | Variance Inflation Factor less than 1 in ridge regression?
Recall that you are minimizing
$$
\|Ax -y\|^2 + c\|x-0\|^2
$$
So as $c$ increases $x\to 0$ i.e. the bias is such that your estimator approches $0$ as $c$ increases. So when your VIFs are approaching |
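A minimal sketch of that shrinkage behaviour (not from the original answer; the data are simulated purely for illustration): the minimizer of $\|Ax-y\|^2+c\|x\|^2$ is $\hat{x}=(A'A+cI)^{-1}A'y$, and its norm shrinks toward $0$ as $c$ grows.
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))                       # simulated design (illustrative)
y = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

for c in (0.0, 1.0, 10.0, 100.0, 1000.0):
    # Closed-form ridge solution (A'A + cI)^{-1} A'y
    x_ridge = np.linalg.solve(A.T @ A + c * np.eye(3), A.T @ y)
    print(f"c = {c:7.1f}   ||x|| = {np.linalg.norm(x_ridge):.4f}")
```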
49,999 | Short term for Probability of Type I error | The pithiest word I've seen is size | Short term for Probability of Type I error | The pithiest word I've seen is size | Short term for Probability of Type I error
The pithiest word I've seen is size | Short term for Probability of Type I error
The pithiest word I've seen is size |
50,000 | Short term for Probability of Type I error | Significance level $\alpha$, see here
Power $1-\beta$, sensitivity or recall rate | Short term for Probability of Type I error | Significance level $\alpha$, see here
Power $1-\beta$, sensitivity or recall rate | Short term for Probability of Type I error
Significance level $\alpha$, see here
Power $1-\beta$, sensitivity or recall rate | Short term for Probability of Type I error
Significance level $\alpha$, see here
Power $1-\beta$, sensitivity or recall rate |