55,301 | To what extent can statistics improve patient's treatment?
Never, NEVER, apply statistics to the health matters of a single person. Not because of ethical reasons, but because the processes that drive our biology are still very poorly understood, especially in their interactions. They are so complex that the level of sophistication of our current statistical knowledge is laughable by comparison. Of course medicine depends a lot on statistics, but not to analyze a single person's health data in isolation; rather, to assess and evaluate that person's health data against accumulated knowledge (including statistical results) from many, many patients.
I understand that this is more of a viewpoint than an answer, but I have tried to offer some arguments for it. I love statistics, and I don't want to see them implicated in murder.
55,302 | To what extent can statistics improve patient's treatment?
I would say it is well worth the time of a person with a personal stake to investigate data at the individual level. What can be learned by comparing averages is severely limited, as was noted by Poisson et al. when "numerical" methods were first being applied to medicine nearly 200 years ago:
In the field of statistics, that is to say in the various attempts at numerical assessment of facts, the first task is to lose sight of the individual seen in isolation, to consider him only as a fraction of the species. He must be stripped of his individuality so as to eliminate anything accidental that this individuality might introduce into the issue in hand.
In applied medicine, on the contrary, the problem is always individual, facts to which a solution must be found only present themselves one by one; it is always the patient's individual personality that is in question, and in the end it is always a single man with all his idiosyncrasies that the physician must treat. For us, the masses are quite irrelevant to the issue.
Calculations of probability, in general, show that, all other things being equal, the truth or the laws that are to be determined are all the better approached if the observations used embrace a large number of facts or individuals at once. These laws, then, by the very manner in which they are derived, no longer have any individual character; therefore it is impossible to apply them to the individual chances of a single man, without exposing oneself to numerous errors.
Statistical research on conditions caused by calculi by Doctor Civiale. 1835. Int J Epidemiol. 2001 Dec;30(6):1246-9. Reprint of the classical report by Poisson, Dulong, Larrey, Double.
The thing to do with individual data is to look for patterns and theorize as to what mechanisms could be responsible for the patterns. Group averages often hide the individual patterns and can lead to bad inference regarding the underlying mechanism under many circumstances.
Another issue with analyzing group-level data is that individual differences are treated as noise, and if there are repeated measures then the intra-individual differences are also treated as noise. The idea that biological data consist of a deterministic signal + random noise is a rather unjustified assumption. It simplifies the analysis, but it has also led many researchers to neglect studying the variability, which is not necessarily "random".
One field that studies variability is cardiovascular physiology:
Heart Rate Variability
Standards of Measurement, Physiological Interpretation, and Clinical Use
Circulation. 1996; 93: 1043-1065 doi: 10.1161/01.CIR.93.5.1043
Another is the study of motor systems:
Variability and Determinism in Motor Behavior
Michael A. Riley and M. T. Turvey
Journal of Motor Behavior, 2002, Vol. 34, No. 2, 99-125
So what you could do is plot the data for each parameter over time and investigate the structure of the variability for patterns. Even just eyeballing it may work; you may also be able to find suitable methods in the fields mentioned above.
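To make that suggestion concrete, here is a minimal Python sketch (my own illustration, not part of the original answer) that plots each parameter of a single patient over time together with a rolling median; the simulated data are hypothetical stand-ins for whatever measurements you actually have.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical individual-level measurements: one row per day, one column per parameter.
rng = np.random.default_rng(0)
dates = pd.date_range("2024-01-01", periods=120, freq="D")
df = pd.DataFrame({
    "weight":   70 + np.cumsum(rng.normal(0, 0.2, 120)),                      # slow drift
    "marker_a": 5 + np.sin(np.arange(120) / 7) + rng.normal(0, 0.5, 120),     # structured variability
}, index=dates)

fig, axes = plt.subplots(len(df.columns), 1, sharex=True, figsize=(8, 5))
for ax, col in zip(axes, df.columns):
    ax.plot(df.index, df[col], marker=".", linestyle="-", alpha=0.6)
    # A centred rolling median helps separate slow trends from short-term variability.
    ax.plot(df.index, df[col].rolling(7, center=True).median(), linewidth=2)
    ax.set_ylabel(col)
axes[-1].set_xlabel("date")
plt.tight_layout()
plt.show()
```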
One of your goals is probably to compare treatments. The best "n=1" experiments measure a baseline, give the treatment, withdraw it, then give it again: if the treatment is doing what you expect, the effect should appear, disappear, then appear again. This is not possible in your case, but perhaps there are similarities and differences in the putative mechanisms behind the treatments that could play a similar role.
55,303 | Decision tree and missing values
You will need to modify the algorithm slightly.
The modification:
Building algo
suppose that we have some splitting test criterion $T$ and dataset $S$
information gain for splitting $S$ using $T$ is
$\Delta I (S, T) = I(S) - \sum_k \alpha_{T, k} \cdot I(S_k)$
let $S_0 \subseteq S$ be the set of cases for which we can't evaluate $T$ (because some values are missing)
if $S_0 \neq \varnothing$,
calculate the information gain as
$\frac{|S - S_0|}{| S |} \Delta I (S - S_0, T)$
suppose such a $T$ is chosen; what do we do with the cases from $S_0$?
add them to all the subsets with weight proportional to the size of these subsets
$w_k = \frac{|S_k|}{|S - S_0|}$
and information gain is computed using sums of weights instead of counts
Classification
let $P(C | E,T)$ be the probability of classifying case $E$ to class $C$ using tree $T$
define it recursively:
if $t = \text{root}(T)$ is a leaf (i.e. it's a singleton tree)
then $P(C \ |\ E,T)$ is the relative frequency of training cases in class $C$ that reach $T$
if $t = \text{root}(T)$ is not a leaf and $t$ is partitioned using attribute $X$
if $E.X = x_k$
then $P(C \ |\ E,T) = P(C \ |\ E,T_k)$ where $T_k$ is a subtree of $T$ where $X = x_k$
if $E.X$ is unknown,
then $P(C \ | \ E,T) = \sum_{k=1}^{K} \frac{|S_k|}{|S-S_0|} \cdot P(C \ | \ E,T_k)$
so we sum up probabilities of belonging to class $C$ from each child of $t$
predict that a record belongs to class $C$ by selecting the highest probability $P(C \ | \ E,T)$
Example
Building
Suppose we have a dataset of 14 cases with a categorical attribute $X \in \{a, b, c\}$ (plus other attributes) and a binary class label ($+$ or $-$); the full data table from the original source is not reproduced here.
There is one missing value for $X$: $(?, 90, \text{Yes}, +)$
let $I$ be the misclassification error
$I(S - S_0) = 5/13$ (5 in "-", 8 in "+")
$I(S - S_0 \ | \ X = a) = 2/5$
$I(S - S_0 \ |\ X = b) = 0$
$I(S - S_0 \ |\ X = c) = 2/5$
calculate IG $\frac{|S - S_0|}{| S |} \Delta I (S - S_0, T)$
$\frac{|S - S_0|}{| S |} \Delta I (S - S_0, T) = \frac{13}{14} \cdot (\frac{5}{13} - \frac{5}{13} \cdot \frac{2}{5} - \frac{3}{13} \cdot 0 - \frac{5}{13} \cdot \frac{2}{5}) = \frac{1}{14}$
So we obtain the split on $X$ (the resulting tree is shown as a figure in the original source)
Classification
assume that $X$ is unknown for a new case - how do we classify it? (In the tree used for this example, the split node has 50 training cases with known $X$, divided into children of sizes 20 and 30; the first child contains 15 "$+$" and 5 "$-$", the second 5 "$+$" and 25 "$-$".)
$P(+ \ | \ E,T) = \sum_{k=1}^{K} \frac{|S_k|}{|S-S_0|} \cdot P(+ \ | \ E,T_k) = \frac{20}{50} \cdot \frac{15}{20} + \frac{30}{50} \cdot \frac{5}{30} = \frac{20}{50}$
$P(- \ | \ E,T) = \sum_{k=1}^{K} \frac{|S_k|}{|S-S_0|} \cdot P(- \ | \ E,T_k) = \frac{20}{50} \cdot \frac{5}{20} + \frac{30}{50} \cdot \frac{25}{30} = \frac{30}{50}$
$P(- \ | \ E,T) > P(+ \ | \ E,T) \Rightarrow$ predict "$-$"
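To make the weighting concrete, here is a small self-contained Python sketch (my own addition, not from the original answer or the linked wiki) that reproduces the classification step above: fractional weights proportional to child sizes, summed class probabilities, and the same $2/5$ vs. $3/5$ result. The function and variable names are made up for illustration.

```python
from fractions import Fraction

def classify_with_missing(children):
    """children: list of (n_cases, class_counts) for each child of the split node.
    Returns P(class) for a case whose split attribute is unknown, using weights
    proportional to the number of known-value training cases in each child."""
    total = sum(n for n, _ in children)
    probs = {}
    for n, counts in children:
        weight = Fraction(n, total)                     # |S_k| / |S - S_0|
        for cls, cnt in counts.items():
            probs[cls] = probs.get(cls, 0) + weight * Fraction(cnt, n)
    return probs

# Child 1: 20 cases (15 "+", 5 "-"); child 2: 30 cases (5 "+", 25 "-"), as in the example.
children = [(20, {"+": 15, "-": 5}), (30, {"+": 5, "-": 25})]
probs = classify_with_missing(children)
print(probs)                        # {'+': Fraction(2, 5), '-': Fraction(3, 5)}
print(max(probs, key=probs.get))    # '-' -- the predicted class
```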
Source
http://mlwiki.org/index.php/Decision_Tree_(Data_Mining)#Handling_Missing_Values (there's more on decision trees)
55,304 | Decision tree and missing values
In the case of a decision tree, such missing-data imputation makes sense (especially since here this huge number of days clearly makes sense, as it stands for "infinity"). You may also find this response:
Assigning values to missing data for use in binary logistic regression in SAS
useful, as it concerns a similar issue.
55,305 | Results of bootstrap reliable?
As explained by Nick Cox and an anonymous user, what you think of as instability is just what mixture models do: they don't care about labels unless you make it very clear that you know, roughly, what your modes look like.
In terms of fixing the labels where you need them to be, you would want to feed the full-sample estimates of everything (both $\mu$s, both $\sigma$s, not just the $\lambda$ that you are feeding in now) as starting values. One can argue that this violates the spirit of maximum likelihood, but it may be the best you can do. If that does not really work, you may have to force even more information in, such as insisting that $\mu_1 < \mu_2 - \delta$, $\sigma_1 < \sigma_2 - \Delta$ and $\lambda > \frac12$. If normalmixEM() does not support that kind of cruelty to the parameter space, you would need to write your own likelihood with your own parameterization that accounts for such relations.
55,306 | Results of bootstrap reliable?
My hunch is that your approach might not be reliable due to label switching; that is, each time you fit the mixture model, it is possible that the roles of the two normal distributions have been reversed.
In other words, across different runs of the EM algorithm, (mu1, sigma1) and (mu2, sigma2) might switch roles.
It looks like the boot.se function provided by mixtools tries to account for this issue.
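One common way to deal with label switching in a bootstrap is to relabel the components after every refit, for example by ordering them by their means. A hedged Python sketch of that idea (my own, using scikit-learn's GaussianMixture rather than R's mixtools, purely to keep the example runnable; the data are simulated):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical data from a two-component normal mixture.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])

def fit_sorted(sample):
    """Fit a 2-component mixture and return (means, sds, weights) sorted by mean,
    so that 'component 1' always refers to the component with the smaller mean."""
    gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(sample.reshape(-1, 1))
    means = gm.means_.ravel()
    sds = np.sqrt(gm.covariances_).ravel()
    order = np.argsort(means)
    return means[order], sds[order], gm.weights_[order]

boot = np.array([np.concatenate(fit_sorted(rng.choice(x, size=x.size, replace=True)))
                 for _ in range(200)])
print(boot.std(axis=0))  # bootstrap SEs for (mu1, mu2, sigma1, sigma2, lambda1, lambda2)
```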
55,307 | Which statistical topics to teach in European Studies study programme?
The scientific literature in (most areas of) experimental psychology is still chock-full of hypothesis tests, with authors and reviewers alike expecting a p-value next to every number and showing very little interest in (or understanding of) effect sizes, modeling, or anything else.
Consequently, a typical psychology graduate will have heard a lot about ANOVA and will often think that statistical analysis is a search for a “significant” result, complicated by abstract rules (“Am I allowed to do X?”), and mostly consisting of identifying the “right” test based on the level of measurement (ordinal, interval, etc.), culminating in obtaining a p-value.
I am certainly not advocating this as the right way to teach statistics to anybody, but your students will need to be able to understand hypothesis testing and its limitations if they need to read the psychological literature or might go on to do a Ph.D. in psychology. This is in fact quite unfortunate, as this material is far from easy to digest, probably not all that useful for many other careers/applications, and would eat up a lot of time that could in principle profitably be devoted to other things.
I don't know as much about political science or quantitative sociology but I would expect more emphasis on modeling, regression, the GLM, etc.
55,308 | Which statistical topics to teach in European Studies study programme?
Thom Baguley, an outgoing editor of the British Journal of Mathematical and Statistical Psychology, wrote a good and effective book for the upper undergraduate level. It is very modern in many respects, including its use of R and its discussion of advanced models such as multilevel models.
Another good book is "Mostly Harmless Econometrics" about tweaking linear regression to work well when the data are contaminated with intertwined social effects. It is written without matrix algebra at all, but at a level of methodological rigor that is very appropriate at the doctorate level.
If you can somehow combine the two books, this would be a fabulous course. With a degree in pure math stat, though, either one will blow your mind with stuff that you have NEVER seen in your scholastic statistics classes.
55,309 | How to understand the label-bias problem in HMM?
Label bias is not a problem for an HMM, because the input sequence is generated by the model. Through global normalization, the CRF model avoids this problem.
55,310 | How to understand the label-bias problem in HMM?
Based on Section 2 of "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data" by Lafferty et al.,
I think it refers to the fact that "states with a single outgoing transition effectively ignores their observation. More generally, states with low-entropy next state distributions will take little notice of observations".
Honestly, I'm not quite sure whether this is the label-bias problem, because I don't see why it is a problem. Aren't the next-state distributions inferred from the training data? If low-entropy next-state distributions occur because that is what the data contain, then the problem isn't the model's; it's the data's...
HTH.
I think it is "states with a single outgoing transition effect | How to understand the label-bias problem in HMM?
Based on Section 2 of "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data" by Lafferty, J. et al,
I think it is "states with a single outgoing transition effectively ignores their observation. More generally, states with low-entropy next state distributions will take little notice of observations".
Honestly, I'm not quite sure if this is the Label Bias Problem. Because, I don't know why this is a problem. Aren't the next state distributions inferred from the training data? So low entropy next state distributions happen because that is what the data has..then the problem isn't the model.. it's the data's...
HTH. | How to understand the label-bias problem in HMM?
Based on Section 2 of "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data" by Lafferty, J. et al,
I think it is "states with a single outgoing transition effect |
55,311 | How to understand the label-bias problem in HMM?
Suppose a simple finite state machine was developed for named entity recognition.
In those kinds of machines, the states with a single outgoing transition effectively ignore their observation. In other words, the states with a single transition simply have to move to the next state without considering their current observation. More generally, states with low-entropy next state distributions will take little notice of observations.
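As a toy illustration of this point (my own, not from the answer or the paper), per-state (local) normalization over a single outgoing transition yields probability 1 no matter what the observation contributes:

```python
import numpy as np

def local_softmax(scores):
    """Per-state (local) normalization over the outgoing transitions of one state."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

# State A has a single successor; state B has three successors.
# Hypothetical transition scores under two different observations.
obs_strongly_against = np.array([-8.0])
obs_strongly_for = np.array([8.0])
print(local_softmax(obs_strongly_against))  # [1.] -- observation is ignored
print(local_softmax(obs_strongly_for))      # [1.] -- same distribution either way

print(local_softmax(np.array([2.0, 0.5, -1.0])))  # state B: the observation actually matters
```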
Reference:
J. D. Lafferty, A. McCallum, and F. C. N. Pereira, “Conditional random fields: Probabilistic models for segmenting and labeling sequence data”, in Proceedings of the eighteenth international conference on machine learning, ser. ICML ’01, San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2001, pp. 282–289, ISBN: 1-55860-778-1. [Online]. Available: http://dl.acm.org/citation.cfm?id=645530.655813.
55,312 | How to understand the label-bias problem in HMM?
CRF is a solution for MEMM, NOT for HMM.
In a Markov model, label bias is not a problem, because the input sequence is generated by the model (as Farhana Liza noted). In an MEMM, when calculating the transition probabilities, the probabilities out of every position (i.e. state) sum up to 1.
So what's the problem?
Let's say we have a state that is very UNlikely to occur, but when it does occur, it is very likely (with probability even 1) to occur again. Now, if we have a long chain of states, there is a high probability of staying in that state forever, even though the state itself is unlikely to occur!
In the CRF model we use GLOBAL NORMALIZATION, which takes care of this by normalizing over entire label sequences rather than over each state's transitions separately.
Good luck!
In markov model Label-bias is not a problem ,because input sequence is generated by the model (Farhana Liza). in MEMM, while calculating the transition prob | How to understand the label-bias problem in HMM?
CRF is a solution for MEMM and NOT for HMM.
In markov model Label-bias is not a problem ,because input sequence is generated by the model (Farhana Liza). in MEMM, while calculating the transition probabilities, from every position (AKA state), the probabilities sums up to 1.
So whats the problem?
Lets say we have a state that is very UNlikely to happen, but when it does, with a very high probability (even 1) it would happen again. Now, if we have a long chain of states, there is a higher probability to stay in that position forever even though it is a state that is unlikely to happen!
In the CRF model we are using GLOBAL NORMALIZATION, which takes care of it and sums up all of the transition probabilities to 1.
Good luck! | How to understand the label-bias problem in HMM?
CRF is a solution for MEMM and NOT for HMM.
In markov model Label-bias is not a problem ,because input sequence is generated by the model (Farhana Liza). in MEMM, while calculating the transition prob |
55,313 | Regression with neural network
Unless you restrict the range of your inputs, the sigmoid may be giving you a problem. You won't even be able to learn the function $y=x$ if you have a sigmoid in the middle.
If you have a restricted range, then the input-hidden weights could scale the input values so that they hit the sigmoid at the part of the graph which looks like a line, i.e. the part around 0 (the original answer includes a plot of the sigmoid here).
Once you have the linear behavior, the hidden-output weights could re-scale the values back. The training process will take care of all this for you - but to test this hypothesis you could just train (and validate) with input values in $[-1,1]$.
Do you know what the network's function looks like? If you plot $(x_1, x_2) \rightarrow y$, does it look anything like a plane? At least in parts?
You could also try training with more data and seeing if the graph gets closer to $x_1+2x_2$.
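A hedged sketch of this suggestion (my own, not from the answer), using scikit-learn's MLPRegressor with a logistic (sigmoid) hidden layer and inputs restricted to $[-1,1]$; the target is the same function $y = x_1 + 2x_2$ discussed in the question:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(2000, 2))      # inputs restricted to [-1, 1]
y = X[:, 0] + 2 * X[:, 1]                   # the target function y = x1 + 2*x2

# One sigmoid hidden layer; the linear output layer can rescale the values back.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X[:1500], y[:1500])
print("validation R^2:", net.score(X[1500:], y[1500:]))
```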
If you have a restricted | Regression with neural network
Unless you restrict the range of your inputs, the sigmoid may be giving you a problem. You won't even be able to learn the function $y=x$ if you have a sigmoid in the middle.
If you have a restricted range, then the input-hidden weights could scale the input values so that they hit the sigmoid at the part of the graph which looks like a line i.e. the part around 0:
Once you have the linear behavior, the hidden-output weights could re-scale the values back. The training process will take care of all this for you - but to test this hypothesis you could just train (and validate) with input values in $[-1,1]$.
Do you know what the network's function looks like? If you plot $(x_1, x_2) \rightarrow y$, does it look anything like a plane? At least in parts?
You could also try training with more data and seeing if the graph gets closer to $x_1+2x_2$. | Regression with neural network
Unless you restrict the range of your inputs, the sigmoid may be giving you a problem. You won't even be able to learn the function $y=x$ if you have a sigmoid in the middle.
If you have a restricted |
55,314 | Regression with neural network
Assuming you wrote the implementation yourself, you may simply have a bug in your backpropagation algorithm. Some bugs can be quite subtle and leave the algorithm partially working but with poor performance. You might try adding some gradient-checking code to your implementation to verify the calculated gradients. Here's an excellent video by Andrew Ng on how to do this:
http://www.youtube.com/watch?v=12a9fsLyFes
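As an illustration of the idea (my own sketch, not taken from the video or the answer), a numerical gradient check compares the analytic gradient against central finite differences; the loss and model here are hypothetical stand-ins for your network's cost function and backprop gradients:

```python
import numpy as np

def numerical_gradient(f, w, eps=1e-5):
    """Central finite-difference gradient of scalar function f at parameter vector w."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        grad[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return grad

# Hypothetical example: squared-error loss of a linear model on fixed data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
loss = lambda w: 0.5 * np.sum((X @ w - y) ** 2)
analytic_grad = lambda w: X.T @ (X @ w - y)     # what the hand-derived gradient should be here

w0 = rng.normal(size=3)
num, ana = numerical_gradient(loss, w0), analytic_grad(w0)
rel_err = np.linalg.norm(num - ana) / (np.linalg.norm(num) + np.linalg.norm(ana))
print("relative error:", rel_err)   # should be tiny (~1e-8) if the gradient code is right
```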
Note that I'm assuming you're getting poor performance on the training set, rather than the test set. You should be able to get near perfect results on the training set, at which point your implementation is likely correct and you can start dealing with overfitting by adding regularization, etc. I would disable regularization until you get to that point.
As a side note, since your dataset has a linear relationship between the inputs and the outputs, you'll likely get a better result with NO hidden neurons (i.e. a single-layer perceptron), but then an MLP should work too, and you can then test it on a non-linear dataset.
55,315 | Normalizing Term Frequency for document clustering
A common misunderstanding is the term "frequency". To some, it seems to mean the count of objects, but usually a frequency is a relative value.
TF-IDF is usually a two-fold normalization.
First, each document is normalized to length 1, so there is no bias for longer or shorter documents. This amounts to taking the relative frequencies instead of the absolute term counts. This is "TF".
Second, IDF is a cross-document normalization that puts less weight on common terms and more weight on rare terms, by weighting each word with its inverse in-corpus frequency. Here it does not matter whether you use the absolute or relative frequency, as this amounts to just a constant factor across all vectors, so you will get different distances, but only by a constant factor (the corpus size).
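A minimal Python sketch of this two-fold normalization (my own illustration; the exact TF and IDF variants differ between sources, and this is just one common choice), on a tiny made-up corpus:

```python
import math
from collections import Counter

docs = [["apple", "banana", "apple"],
        ["banana", "cherry"],
        ["apple", "cherry", "cherry", "date"]]

n_docs = len(docs)
vocab = sorted({w for d in docs for w in d})

# Document frequency: in how many documents does each term occur?
df = {w: sum(w in d for d in docs) for w in vocab}
idf = {w: math.log(n_docs / df[w]) for w in vocab}   # one common IDF variant

def tfidf(doc):
    counts = Counter(doc)
    n = len(doc)
    tf = {w: counts[w] / n for w in vocab}           # relative frequency ("TF")
    return [tf[w] * idf[w] for w in vocab]           # rare terms weighted up, common terms down

for d in docs:
    print(tfidf(d))
```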
To get the formulas right, try to understand why they are supposed to be one way or another. It's worthless to just copy some formula from a source that may even have it wrong. Instead, understand the mathematics and intentions behind it.
55,316 | How can reliability and validity of content analysis be quantified when there is only one person coding the data?
Usual reliability indices (Cronbach's $\alpha$, Cohen's $\kappa$, etc.) only ever quantify the influence of a single source of error. For $\alpha$ the relevant source of error is item-specific variance; for $\kappa$ and other measures of inter-rater agreement, it is rater-specific error. In any measurement situation, there are other sources of error that might or might not matter for your purposes.
Generalizability theory provides a systematic framework to approach these multiple sources of error. In generalizability theory, a study has a number of “facets” like items, time or raters. Their influence is first quantified in a “G study”, and these estimates can be used to adjust the measurement. This has a number of practical applications, but the important thing with respect to your question is that you can only quantify error related to the facets that vary in your study, so you must design it to include all the facets you care or worry about.
For example, if you only have one point of measurement, you don't know how stable the measure is over time and if you have only one rater, you can't possibly know how much raters would differ from each other. Stated that way, it might sound obvious but talk of “estimating the reliability” often obscures this basic point.
In practice, it means that in your case you have no way to check if your results would generalize to other raters or if different people would categorize the content in the same way (one rater might in fact be enough, but you can't check that with your data). If you have concerns about inter-rater agreement (as is often the case), you really need to have another rater code at least part of the content. You can however quantify other sources of error (say temporal stability, different coding schemes or software, etc.) with a single rater and call that “reliability assessment” but that still would not tell you about inter-rater agreement or eliminate potential issues with rater-specific error.
55,317 | Log likelihood improves with addition of a nonsignificant variable
Apparently the problem was that there was missing data in the predictor you added to your model, putting the log-likelihoods on different scales and therefore making them not comparable. This is a very insidious problem, because most software, by default, removes those cases silently and leaves you to figure out what happened (smh..), and it is one most analysts will run into eventually. I know that in R there is an argument called na.action that you can pass to GLMs to control exactly what is done with missing (NA) values, but I'm not sure how to control this in SAS.
Oftentimes this issue is only detected after observing odd behavior in the log-likelihood, such as wild discrepancies between the Wald-based and the LRT-based $p$-values, as you saw here. Related to this, while there is likely to be some difference between Wald-based and LRT-based inference, there shouldn't be a large one, especially for larger sample sizes, since the two are asymptotically equivalent.
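As a hedged illustration of the usual safeguard (my own sketch; the thread concerns SAS and R, but the point is language-agnostic), restrict both models to the same complete cases before comparing log-likelihoods. The data file and column names here are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("study_data.csv")                      # hypothetical dataset with binary outcome y
# Keep only rows that are complete for EVERY variable used by the larger model,
# so both fits are based on exactly the same observations.
cc = df.dropna(subset=["y", "x1", "x2"])

reduced = smf.logit("y ~ x1", data=cc).fit(disp=0)
full = smf.logit("y ~ x1 + x2", data=cc).fit(disp=0)

lr_stat = 2 * (full.llf - reduced.llf)                  # log-likelihoods now on the same scale
p_value = stats.chi2.sf(lr_stat, df=full.df_model - reduced.df_model)
print(lr_stat, p_value)
```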
55,318 | What is "ANOVA"?
One-way and two-way ANOVA are just two simple versions, but I doubt two experts on the topic would agree exactly on what is central to ANOVA when it is treated in moderate or extreme generality.
For evidence, see Speed, T.P. 1987. What is an analysis of variance? Annals of Statistics 15: 885-910. Eleven discussions follow, with a rejoinder by the author rounding it off.
For evidence, see | What is "ANOVA"?
One-way and two-way ANOVA are just two simple versions, but I doubt two experts on the topic would agree exactly what is central to ANOVA, treated in moderate or extreme generality.
For evidence, see Speed, T.P. 1987. What is an analysis of variance?
Annals of Statistics 15: 885-910. Eleven discussions follow with a rejoinder by the author rounding it off. | What is "ANOVA"?
One-way and two-way ANOVA are just two simple versions, but I doubt two experts on the topic would agree exactly what is central to ANOVA, treated in moderate or extreme generality.
For evidence, see |
55,319 | What is "ANOVA"?
ANOVA is a technique, not a model
Some sources refer to ANOVA as a "model" or a "collection of models", but in my view that is incorrect. The acronym ANOVA refers to the "analysis of variance", which is a statistical technique that can be applied to a variety of statistical models, rather than being a model itself. The essence of ANOVA lies in the use of the law of iterated variance (or other variance decompositions) applied to regression models. Consider the general form of a homoskedastic regression model:
$$Y_i = u(\mathbf{X}_i, \boldsymbol{\beta}) + \varepsilon_i
\quad \quad \quad \mathbb{E}(\varepsilon_i) = 0
\quad \quad \quad \mathbb{V}(\varepsilon_i) = \sigma^2.$$
Letting $v(\boldsymbol{\beta}) \equiv \mathbb{V}[u(\mathbf{X}_i, \boldsymbol{\beta})]$ and applying the law of iterated variance to this general model gives:
$$\begin{align}
\mathbb{V}(Y_i)
&= \mathbb{V}[\mathbb{E}(Y_i|\mathbf{X}_i)] + \mathbb{E} [\mathbb{V}(Y_i|\mathbf{X}_i) ] \\[6pt]
&= \mathbb{V}[u(\mathbf{X}_i, \boldsymbol{\beta})] + \mathbb{E} [\sigma^2] \\[6pt]
&= v(\boldsymbol{\beta}) + \sigma^2. \\[6pt]
\end{align}$$
Now, the data generally allows you to estimate the variance of the response variable and error term, which gives the estimate:
$$\hat{v}(\boldsymbol{\beta}) = \hat{\sigma}_Y^2 - \hat{\sigma}^2.$$
Any hypothesised value for $\boldsymbol{\beta}$ gives a known value for the variance term $v(\boldsymbol{\beta})$, which means you can test the plausibility of a hypothesised value by looking at whether the implied variance is near the estimated variance. This is how you use ANOVA tests in regression models to test hypotheses on $\boldsymbol{\beta}$.
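A small numerical check of this decomposition (my own sketch, not part of the original answer), using a simulated linear model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 200_000, 1.5
beta = np.array([2.0, -1.0])

X = rng.normal(size=(n, 2))
mu = X @ beta                                  # u(X_i, beta) for a linear model
Y = mu + rng.normal(scale=sigma, size=n)

print(np.var(Y))                               # V(Y_i)
print(np.var(mu) + sigma**2)                   # v(beta) + sigma^2 -- approximately equal
print(np.var(Y) - np.var(Y - mu))              # the estimate sigma_Y^2 - sigma^2 of v(beta)
```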
Some sources refer to ANOVA as a "model" or a "collection of models" but in my view that is incorrect. The acronym ANOVA refers to the "analysis of variance", which | What is "ANOVA"?
ANOVA is a technique, not a model
Some sources refer to ANOVA as a "model" or a "collection of models" but in my view that is incorrect. The acronym ANOVA refers to the "analysis of variance", which is a statistical technique that can be applied to a variety of statistical models, rather than being a model itself. The essence of ANOVA lies in use of the law of iterated variance (or other variance decompositions) applied to regression models. Consider the general form of a homoskedastic regression model:
$$Y_i = u(\mathbf{X}_i, \boldsymbol{\beta}) + \varepsilon_i
\quad \quad \quad \mathbb{E}(\varepsilon_i) = 0
\quad \quad \quad \mathbb{V}(\varepsilon_i) = \sigma^2.$$
Letting $v(\boldsymbol{\beta}) \equiv \mathbb{V}[u(\mathbf{X}_i, \boldsymbol{\beta})]$ and applying the law of iterated variance to this general model gives:
$$\begin{align}
\mathbb{V}(Y_i)
&= \mathbb{V}[\mathbb{E}(Y_i|\mathbf{X}_i)] + \mathbb{E} [\mathbb{V}(Y_i|\mathbf{X}_i) ] \\[6pt]
&= \mathbb{V}[u(\mathbf{X}_i, \boldsymbol{\beta})] + \mathbb{E} [\sigma^2] \\[6pt]
&= v(\boldsymbol{\beta}) + \sigma^2. \\[6pt]
\end{align}$$
Now, the data generally allows you to estimate the variance of the response variable and error term, which gives the estimate:
$$\hat{v}(\boldsymbol{\beta}) = \hat{\sigma}_Y^2 - \hat{\sigma}^2.$$
Any hypothesised value for $\boldsymbol{\beta}$ gives a known value for the variance term $v(\boldsymbol{\beta})$, which means you can test the plausibility of a hypothesised values by looking at whether the implied variance is near the estimated variance. This is how you use ANOVA tests in regression models to test hypotheses on $\boldsymbol{\beta}$. | What is "ANOVA"?
ANOVA is a technique, not a model
Some sources refer to ANOVA as a "model" or a "collection of models" but in my view that is incorrect. The acronym ANOVA refers to the "analysis of variance", which |
55,320 | How to derive the first order autocorrelation coefficient of an AR(1) process?
Find the mean and the variance ($\gamma_0$) of $y_t$. How does the condition $|\Theta|<1$ help you to do that? Remember the assumptions about how the residuals are distributed, and so on.
The autocorrelation coefficient is $\rho_1=\dfrac{\gamma_1}{\gamma_0}$, i.e. you are now missing only $\gamma_1$. Once you write down its expression, the hints from the previous step are sufficient here too.
Autocorrelation coeffici | How to derive the first order autocorrelation coefficient of an AR(1) process?
Find a mean and variance ($\gamma_0$) of $y_t$. How does the condition $|\Theta|<1$ help you to do that? Remember the assumptions of how residuals are distributed and similar.
Autocorrelation coefficient $\rho_1=\dfrac{\gamma_1}{\gamma_0}$, i.e. now you are missing only $\gamma_1$. Once you write down its expression the hints from the previous step are sufficient here too. | How to derive the first order autocorrelation coefficient of an AR(1) process?
Find a mean and variance ($\gamma_0$) of $y_t$. How does the condition $|\Theta|<1$ help you to do that? Remember the assumptions of how residuals are distributed and similar.
Autocorrelation coeffici |
55,321 | How to derive the first order autocorrelation coefficient of an AR(1) process?
Multiply both sides by $y_{t-1}$ and take the expectation. Exploit the fact that $u_t$ and $y_{t-1}$ are not correlated.
To calculate the variance, simply square both sides and then take the expectation.
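Spelling these hints out (my own addition, assuming the zero-mean AR(1) form $y_t = \Theta y_{t-1} + u_t$ with $|\Theta|<1$ and white-noise $u_t$ with variance $\sigma_u^2$; with an intercept, work with deviations from the mean):
$$\begin{align}
\gamma_0 &= \mathbb{E}(y_t^2) = \mathbb{E}[(\Theta y_{t-1} + u_t)^2] = \Theta^2 \gamma_0 + \sigma_u^2
\quad \Rightarrow \quad \gamma_0 = \frac{\sigma_u^2}{1 - \Theta^2}, \\[6pt]
\gamma_1 &= \mathbb{E}(y_t y_{t-1}) = \mathbb{E}[(\Theta y_{t-1} + u_t)\, y_{t-1}] = \Theta \gamma_0 + 0, \\[6pt]
\rho_1 &= \frac{\gamma_1}{\gamma_0} = \Theta.
\end{align}$$
So the first-order autocorrelation of an AR(1) process equals its autoregressive coefficient.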
55,322 | Correlation between discrete and continuous variables
The discreteness is not the issue so much as the ordinal (ordered, graded) scale used for your assessment from normal to severe. That indeed implies something different from standard linear regression, namely some ordinal regression method such as ordered logit or ordered probit.
Note incidentally that multivariate regression is not the same as multiple regression.
The discreteness is not an issue, so much as the ordinal (ordered, graded) scale used for your assessment from normal to severe. That indeed implies something different from standard linear regression, namely some ordinal regression method such as ordered logit or ordered probit.
Note incidentally that multivariate regression is not the same as multiple regression. | Correlation between discrete and continuous variables
The discreteness is not an issue, so much as the ordinal (ordered, graded) scale used for your assessment from normal to severe. That indeed implies something different from standard linear regression |
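For reference, a minimal ordered-logit sketch in Python, assuming a recent statsmodels; the predictor, the four severity labels and the cut points are hypothetical placeholders, not taken from the question.
```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
x = rng.normal(size=300)                               # continuous predictor
latent = 1.2 * x + rng.logistic(size=300)              # latent severity scale
severity = pd.Series(pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf],
                            labels=["normal", "mild", "moderate", "severe"],
                            ordered=True))

model = OrderedModel(severity, x[:, None], distr="logit")   # ordered logit
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```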
55,323 | Does it make sense to run LDA on several principal components and not on all variables? | First of all, do you have an actual indication (external knowledge) that your data consists of a few variates that carry discriminatory information among noise-only variates? There is data that can be assumed to follow such a model (e.g. gene microarray data), while other types of data have the discriminatory information "spread out" over many variates (e.g. spectroscopic data). The choice of dimension reduction technique will depend on this.
I think you may want to take a look at chapter 3.4 (Shrinkage methods) of The Elements of Statistical Learning.
Principal Component Analysis and Partial Least Squares (a supervised regression analogue to PCA) are best fit for the latter type of data.
It is certainly possible to model in the new space spanned by the selected principal components. You just take the scores of those PCs as input for the LDA. This type of model is often referred to as PCA-LDA.
I wrote a bit of a comparison between PCA-LDA and PLS-LDA (doing LDA in the PLS scores space) in my answer to "Should PCA be performed before I do classification?". Briefly, I usually prefer PLS as "preprocessing" for the LDA as it is very well adapted to situations with large numbers of (correlated) variates and (unlike PCA) it already emphasizes directions that help to discriminate the groups. PLS-DA (without L) means "abusing" PLS-Regression by using dummy levels (e.g. 0 and 1, or -1 and +1) for the classes and then putting a threshold on the regression result. In my experience this is often inferior to PLS-LDA: PLS is a regression technique and as such at some point will desperately try to reduce the point clouds around the dummy levels to points (i.e. project all samples of one class to exactly 1 and all of the other to exactly 0), which leads to overfitting. LDA as a proper classification technique helps to avoid this - but it profits from the reduction of variates by the PLS.
As @January pointed out, you need to be careful with the validation of your model. However, this is easy if you keep 2 points in mind:
Data-driven variable reduction (or selection) such as PCA, PLS, or any picking of variables with the help of measures derived from the data is part of the model. If you do a resampling validation (iterated $k$-fold cross-validation, out-of-bootstrap) - which you should do given your restricted sample size - you need to redo this variable reduction for each of the surrogate models.
The same applies to data-driven (hyper)parameter selection such as determining the number of PCs or latent variables for PLS: redo this for each of the surrogate models (e.g. in an inner resampling validation loop) or fix the hyperparameters in advance. The latter is possible with a bit of experience about the particular type of data and particularly for the PCA-LDA and PLS-LDA models as they are not too sensitive for the exact number of variates. The advantage of fixing lies also in the fact that data-driven optimization is rather difficult for classification models, you should use a so-called proper scoring rule for that and you need rather large numbers of test cases.
(I cannot recommend any solution in Stata, but I could give you an R package where I implemented these combined models).
update to answer @doctorate's comment:
Yes, in principle you can treat the PCA or PLS projection as dimensionality reduction pre-processing and do this before any other kind of classification.
IMHO One should spend a few thoughts about whether this approach is appropriate for the data at hand.
There's quite some literature about the combination PLS with generalized linear models such as logistic regression, see e.g.
Bastien, P.; Vinzi, V. E. & Tenenhaus, M.: PLS generalised linear regression, Computational Statistics & Data Analysis, 48, 17-46 (2005). DOI: 10.1016/j.csda.2004.02.005
Fort, G. & Lambert-Lacroix, S.: Classification using partial least squares with penalized logistic regression, Bioinformatics, 21, 1104-1111 (2005). DOI: 10.1093/bioinformatics/bti114
Boulesteix, A.-L. & Strimmer, K.: Partial least squares: a versatile tool for the analysis of high-dimensional genomic data, Brief Bioinform, 8, 32-44 (2007). DOI: 10.1093/bib/bbl016
R packages plsRglm and plsgenomics have generalized linear models with PLS and PLS with logistic regression.
On the other hand, if you find yourself reducing the data by linear projection to a few latent variables and then applying a highly nonlinear model such as randomForest, you should know an answer why this is the way to go as opposed to do a linear or maybe "slightly non-linear" model on the original data. | Does it make sense to run LDA on several principal components and not on all variables? | First of all, do you have an actual indication (external knowledge) that your data consists of a few variates that carry discriminatory information among noise-only variates? There is data that can be | Does it make sense to run LDA on several principal components and not on all variables?
First of all, do you have an actual indication (external knowledge) that your data consists of a few variates that carry discriminatory information among noise-only variates? There is data that can be assumed to follow such a model (e.g. gene microarray data), while other types of data have the discriminatory information "spread out" over many variates (e.g. spectroscopic data). The choice of dimension reduction technique will depend on this.
I think you may want to take a look at chapter 3.4 (Shrinkage methods) of The Elements of Statistical Learning.
Principal Component Analysis and Partial Least Squares (a supervised regression analogue to PCA) are best fit for the latter type of data.
It is certainly possible to model in the new space spanned by the selected principal components. You just take the scores of those PCs as input for the LDA. This type of model is often referred to as PCA-LDA.
I wrote a bit of a comparison between PCA-LDA and PLS-LDA (doing LDA in the PLS scores space) in my answer to "Should PCA be performed before I do classification?". Briefly, I usually prefer PLS as "preprocessing" for the LDA as it is very well adapted to situations with large numbers of (correlated) variates and (unlike PCA) it already emphasizes directions that help to discriminate the groups. PLS-DA (without L) means "abusing" PLS-Regression by using dummy levels (e.g. 0 and 1, or -1 and +1) for the classes and then putting a threshold on the regression result. In my experience this is often inferior to PLS-LDA: PLS is a regression technique and as such at some point will desperately try to reduce the point clouds around the dummy levels to points (i.e. project all samples of one class to exactly 1 and all of the other to exactly 0), which leads to overfitting. LDA as a proper classification technique helps to avoid this - but it profits from the reduction of variates by the PLS.
As @January pointed out, you need to be careful with the validation of your model. However, this is easy if you keep 2 points in mind:
Data-driven variable reduction (or selection) such as PCA, PLS, or any picking of variables with the help of measures derived from the data is part of the model. If you do a resampling validation (iterated $k$-fold cross-validation, out-of-bootstrap) - which you should do given your restricted sample size - you need to redo this variable reduction for each of the surrogate models.
The same applies to data-driven (hyper)parameter selection such as determining the number of PCs or latent variables for PLS: redo this for each of the surrogate models (e.g. in an inner resampling validation loop) or fix the hyperparameters in advance. The latter is possible with a bit of experience about the particular type of data and particularly for the PCA-LDA and PLS-LDA models as they are not too sensitive for the exact number of variates. The advantage of fixing lies also in the fact that data-driven optimization is rather difficult for classification models, you should use a so-called proper scoring rule for that and you need rather large numbers of test cases.
(I cannot recommend any solution in Stata, but I could give you an R package where I implemented these combined models).
update to answer @doctorate's comment:
Yes, in principle you can treat the PCA or PLS projection as dimensionality reduction pre-processing and do this before any other kind of classification.
IMHO One should spend a few thoughts about whether this approach is appropriate for the data at hand.
There's quite some literature about the combination PLS with generalized linear models such as logistic regression, see e.g.
Bastien, P.; Vinzi, V. E. & Tenenhaus, M.: PLS generalised linear regression, Computational Statistics & Data Analysis, 48, 17-46 (2005). DOI: 10.1016/j.csda.2004.02.005
Fort, G. & Lambert-Lacroix, S.: Classification using partial least squares with penalized logistic regression, Bioinformatics, 21, 1104-1111 (2005). DOI: 10.1093/bioinformatics/bti114
Boulesteix, A.-L. & Strimmer, K.: Partial least squares: a versatile tool for the analysis of high-dimensional genomic data, Brief Bioinform, 8, 32-44 (2007). DOI: 10.1093/bib/bbl016
R packages plsRglm and plsgenomics have generalized linear models with PLS and PLS with logistic regression.
On the other hand, if you find yourself reducing the data by linear projection to a few latent variables and then applying a highly nonlinear model such as randomForest, you should know an answer why this is the way to go as opposed to do a linear or maybe "slightly non-linear" model on the original data. | Does it make sense to run LDA on several principal components and not on all variables?
First of all, do you have an actual indication (external knowledge) that your data consists of a few variates that carry discriminatory information among noise-only variates? There is data that can be |
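A minimal PCA-LDA sketch in Python/scikit-learn (not the R implementation mentioned above), mainly to illustrate the validation point: because the PCA step sits inside the pipeline, the reduction is re-fitted within every resampling fold. The simulated data and the choice of 10 components are placeholders.
```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 300))                  # few samples, many variates
y = np.repeat([0, 1], 40)
X[y == 1, :5] += 1.0                            # class signal spread over 5 variates

pca_lda = make_pipeline(PCA(n_components=10),   # data-driven reduction, refit per fold
                        LinearDiscriminantAnalysis())
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(pca_lda, X, y, cv=cv)
print(scores.mean(), scores.std())
```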
55,324 | Does it make sense to run LDA on several principal components and not on all variables? | One way of reducing the dimensionality of your samples might be the so-called "sparse PCA" (SPCA), but I don't know whether it is available for Stata. SPCA limits the number of variables with non-zero weight per component and thus allows you to select the variables much more tightly.
Alternatively, use the top N variables with the largest absolute loading and test how well your model performs then. But be warned: never use the same samples for test and selection procedure; otherwise your results will be worthless.
Another approach that I personally find very useful in such a setting is to use PLS-DA, which is both a dimension reduction technique and a supervised machine learning algorithm. However you have to mind the way you are validating your results (see paper by Westerhuis et al. and van Dorsten, Metabolomics 2008).
Other machine learning algorithms also are suitable for variable selection -- another one that I have experience with is random forests, where variables are given weights and can be selected for a refined model with a limited number of variables. | Does it make sense to run LDA on several principal components and not on all variables? | One way of reducing the dimensionality of your samples might be the so-called "sparse PCA" (SPCA), but I don't know whether it is available for Stata. SPCA limits the number of variables with non-zero | Does it make sense to run LDA on several principal components and not on all variables?
One way of reducing the dimensionality of your samples might be the so-called "sparse PCA" (SPCA), but I don't know whether it is available for Stata. SPCA limits the number of variables with non-zero weight per component and thus allows you to select the variables much more tightly.
Alternatively, use the top N variables with the largest absolute loading and test how well your model performs then. But be warned: never use the same samples for test and selection procedure; otherwise your results will be worthless.
Another approach that I personally find very useful in such a setting is to use PLS-DA, which is both a dimension reduction technique and a supervised machine learning algorithm. However you have to mind the way you are validating your results (see paper by Westerhuis et al. and van Dorsten, Metabolomics 2008).
Other machine learning algorithms also are suitable for variable selection -- another one that I have experience with is random forests, where variables are given weights and can be selected for a refined model with a limited number of variables. | Does it make sense to run LDA on several principal components and not on all variables?
One way of reducing the dimensionality of your samples might be the so-called "sparse PCA" (SPCA), but I don't know whether it is available for Stata. SPCA limits the number of variables with non-zero |
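A small illustration of the sparse-PCA idea in Python/scikit-learn (the question asks about Stata, which is not covered here); the penalty strength and the simulated data are arbitrary choices of mine.
```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
X[:, :3] += np.outer(rng.normal(size=60), [2.0, 1.5, 1.0])   # a few informative variables

spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)
# Most loadings are exactly zero, so each component points to a small subset of variables.
for k, comp in enumerate(spca.components_):
    print(f"component {k}: nonzero variables {np.flatnonzero(comp)}")
```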
55,325 | is there any difference between taking more samples and a sample with more observations? | Suppose you take 10 samples of 50 and your friend takes one sample of 500. There is no difference in the amount of information you can extract versus your friend. In theory you are both under the same conditions because you have the same amount of data. Problems could arise if samples are not independent, but under independent random sampling you and your friend are dealing with equivalent situations.
Let's look at the variance. Suppose you average your 10 sample means. So you have $$\bar x_{samples}=(1/10)(\bar x_1+\bar x_2 + \cdots + \bar x_{10}) $$
The variance of this random variable is $(1/10^2)*(10)*(\sigma^2/50)= \sigma^2/500,$ where $\sigma^2$ is the population variance.
But this is the same as the variance of the random variable $$\bar x_{500}=(1/500)*(x_1 + x_2 + \cdots + x_{500}),$$ which your friend uses.
To answer your question about bias, both estimators are unbiased for the population mean. That is, the expected values of both are equal to the population mean. | is there any difference between taking more samples and a sample with more observations? | Suppose you take 10 samples of 50 and your friend takes one sample of 500. There is no difference in the amount of information you can extract versus your friend. In theory you are both under the same | is there any difference between taking more samples and a sample with more observations?
Suppose you take 10 samples of 50 and your friend takes one sample of 500. There is no difference in the amount of information you can extract versus your friend. In theory you are both under the same conditions because you have the same amount of data. Problems could arise if samples are not independent, but under independent random sampling you and your friend are dealing with equivalent situations.
Let's look at the variance. Suppose you average your 10 sample means. So you have $$\bar x_{samples}=(1/10)(\bar x_1+\bar x_2 + \cdots + \bar x_{10}) $$
The variance of this random variable is $(1/10^2)*(10)*(\sigma^2/50)= \sigma^2/500,$ where $\sigma^2$ is the population variance.
But this is the same as the variance of the random variable $$\bar x_{500}=(1/500)*(x_1 + x_2 + \cdots + x_{500}),$$ which your friend uses.
To answer your question about bias, both estimators are unbiased for the population mean. That is, the expected values of both are equal to the population mean. | is there any difference between taking more samples and a sample with more observations?
Suppose you take 10 samples of 50 and your friend takes one sample of 500. There is no difference in the amount of information you can extract versus your friend. In theory you are both under the same |
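A quick simulation of the claim above: the average of 10 independent sample means (n = 50 each) and a single sample mean of n = 500 have the same sampling variance, $\sigma^2/500$. The population values and the number of repetitions are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(0)
reps, pop_mean, pop_sd = 10_000, 10.0, 3.0

# Strategy 1: ten samples of 50, average the ten sample means.
means_10x50 = rng.normal(pop_mean, pop_sd, size=(reps, 10, 50)).mean(axis=2).mean(axis=1)
# Strategy 2: one sample of 500.
means_1x500 = rng.normal(pop_mean, pop_sd, size=(reps, 500)).mean(axis=1)

print(means_10x50.var(), means_1x500.var(), pop_sd**2 / 500)   # all approximately equal
```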
55,326 | is there any difference between taking more samples and a sample with more observations? | In some areas (e.g. analytical chemistry) the term sample means a piece (or quantity) of material that is to be analyzed (specimen). From a statistical point of view, you then have a nested/clustered/hierarchical structure of your sampling and the assumption of "independent random sampling" in @soakley's answer is not met:
multiple observations of the same specimen are often more similar than multiple observations from different specimen (aka samples).
That is, $\sigma^2_\text{within specimen} < \sigma^2_\text{between specimen}$.
E.g. for chemical analyses of an ore, a sampling error $\sigma^2_\text{between specimen}$ that is $\leq 3 \times $ the analysis error $\sigma^2_\text{within specimen}$ would be considered typical (properly done sampling).
If your sampling is done properly (for both physical and statistical meaning of "sample"), then taking 50 or 500 specimen/samples both yield unbiased estimates of the target property. If it is not done properly, then both can be biased. Whether the estimate is biased or not does not depend on the number of specimen / (statistical) sample size, but on the sampling procedure.
But if $\sigma^2_\text{within samples/specimen} < \sigma^2_\text{between samples/specimen}$, the uncertainty (standard error) after 50 samples/ specimen $\times$ 10 observations each is larger than the uncertainty after 500 samples/ specimen $\times$ 1 observation each.
If only 1 specimen is analyzed with 500 observations, then the estimate is still unbiased, but unfortunately you have no idea of the sampling error $\sigma^2_\text{between samples/specimen}$ other than that you can assume that it is a multiple (e.g. an order of magnitude higher) than the variance $\sigma^2_\text{within samples/specimen}$ you observe between your 500 observations. | is there any difference between taking more samples and a sample with more observations? | In some areas (e.g. analytical chemistry) the term sample means a piece (or quantity) of material that is to be analyzed (specimen). From a statistical point of view, you then have a nested/clustered | is there any difference between taking more samples and a sample with more observations?
In some areas (e.g. analytical chemistry) the term sample means a piece (or quantity) of material that is to be analyzed (specimen). From a statistical point of view, you then have a nested/clustered/hierarchical structure of your sampling and the assumption of "independent random sampling" in @soakley's answer is not met:
multiple observations of the same specimen are often more similar than multiple observations from different specimen (aka samples).
That is, $\sigma^2_\text{within specimen} < \sigma^2_\text{between specimen}$.
E.g. for chemical analyses of an ore, a sampling error $\sigma^2_\text{between specimen}$ that is $\leq 3 \times $ the analysis error $\sigma^2_\text{within specimen}$ would be considered typical (properly done sampling).
If your sampling is done properly (for both physical and statistical meaning of "sample"), then taking 50 or 500 specimen/samples both yield unbiased estimates of the target property. If it is not done properly, then both can be biased. Whether the estimate is biased or not does not depend on the number of specimen / (statistical) sample size, but on the sampling procedure.
But if $\sigma^2_\text{within samples/specimen} < \sigma^2_\text{between samples/specimen}$, the uncertainty (standard error) after 50 samples/ specimen $\times$ 10 observations each is larger than the uncertainty after 500 samples/ specimen $\times$ 1 observation each.
If only 1 specimen is analyzed with 500 observations, then the estimate is still unbiased, but unfortunately you have no idea of the sampling error $\sigma^2_\text{between samples/specimen}$ other than that you can assume that it is a multiple (e.g. an order of magnitude higher) than the variance $\sigma^2_\text{within samples/specimen}$ you observe between your 500 observations. | is there any difference between taking more samples and a sample with more observations?
In some areas (e.g. analytical chemistry) the term sample means a piece (or quantity) of material that is to be analyzed (specimen). From a statistical point of view, you then have a nested/clustered |
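A small simulation of the clustered situation described above, with the between-specimen variance larger than the within-specimen (measurement) variance; the variance values are arbitrary. The grand mean from 50 specimens x 10 observations each is noticeably less precise than from 500 specimens x 1 observation.
```python
import numpy as np

rng = np.random.default_rng(0)
sd_between, sd_within, reps = 3.0, 1.0, 5_000

def grand_mean(n_specimens, n_obs_each):
    true = rng.normal(0.0, sd_between, size=(reps, n_specimens))        # specimen values
    obs = true[..., None] + rng.normal(0.0, sd_within,
                                       size=(reps, n_specimens, n_obs_each))
    return obs.mean(axis=(1, 2))

print(grand_mean(50, 10).std())    # larger standard error of the grand mean
print(grand_mean(500, 1).std())    # smaller standard error of the grand mean
```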
55,327 | Not all Features Selected by GLMNET Considered Significant by GLM (Logistic Regression) | First of all, it seems like glmnet is a reasonable tool for your problem - good choice!
If all you want is a predictive model, you don't need to worry about p-values. A simple way to assess the predictive accuracy of your model is to use cross validation. Glmnet will cross validate automatically for you (try cv.glmnet), so implementation should not be a problem. The model cv.glmnet produces can then be used as is.
A thing to keep in mind is that glmnet (or lasso) simultaneously shrinks and selects features. The fact that glmnet shrinks features allows it to use more features without overfitting. The upshot is that if you just take the features selected by glmnet and plug them into glm, you'll probably start overfitting (since glm won't do any of the shrinking).
Anyways, once you start using glmnet, you need to stay within the world of penalized regression. You should just take the output of glmnet as your model, and not try to use glm to refit it. | Not all Features Selected by GLMNET Considered Significant by GLM (Logistic Regression) | First of all, it seems like glmnet is a reasonable tool for your problem - good choice!
If all you want is a predictive model, you don't need to worry about p-values. A simple way to assess the predic | Not all Features Selected by GLMNET Considered Significant by GLM (Logistic Regression)
First of all, it seems like glmnet is a reasonable tool for your problem - good choice!
If all you want is a predictive model, you don't need to worry about p-values. A simple way to assess the predictive accuracy of your model is to use cross validation. Glmnet will cross validate automatically for you (try cv.glmnet), so implementation should not be a problem. The model cv.glmnet produces can then be used as is.
A thing to keep in mind is that glmnet (or lasso) simultaneously shrinks and selects features. The fact that glmnet shrinks features allows it to use more features without overfitting. The upshot is that if you just take the features selected by glmnet and plug them into glm, you'll probably start overfitting (since glm won't do any of the shrinking).
Anyways, once you start using glmnet, you need to stay within the world of penalized regression. You should just take the output of glmnet as your model, and not try to use glm to refit it. | Not all Features Selected by GLMNET Considered Significant by GLM (Logistic Regression)
First of all, it seems like glmnet is a reasonable tool for your problem - good choice!
If all you want is a predictive model, you don't need to worry about p-values. A simple way to assess the predic |
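The workflow above is R (glmnet / cv.glmnet). As a rough analogue, here is a hedged Python/scikit-learn sketch of cross-validated L1-penalized logistic regression that keeps the shrinkage in the final model instead of refitting an unpenalized GLM on the selected features; the simulated data are placeholders.
```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
logits = X[:, :5] @ np.array([1.5, -1.0, 1.0, 0.8, -0.6])
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="saga", Cs=20, cv=10, max_iter=5000),
)
model.fit(X, y)
coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
print("features kept (nonzero, still shrunken):", np.flatnonzero(coefs))
```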
55,328 | Regression with non-normally distributed residuals | I wouldn't call that multinomial. Residuals are measured on a continuous scale, but they have multimodal distribution, rather than multinomial.
By the way, a kernel density plot makes the modality more difficult to judge. Some kind of strip plot or strip chart would be helpful.
Commenting on your model would be easier with some scientific context, but sample size is less convincing as a control than design complexity. If sample size is an issue at all, would you expect an interaction? Would you expect a linear relationship?
Design complexity has a strong effect. If you start with the much simpler model in which design complexity is the only factor, then what is crucial is the distribution of residuals at each level of complexity.
My bottom line is that normality of residuals is less of a big deal than you seem to think. You seem to have approximate symmetry of residuals, and perhaps your P-values will be a little untrustworthy, but they are usually dubious anyway. | Regression with non-normally distributed residuals | I wouldn't call that multinomial. Residuals are measured on a continuous scale, but they have multimodal distribution, rather than multinomial.
By the way, a kernel density plot makes the modality mo | Regression with non-normally distributed residuals
I wouldn't call that multinomial. Residuals are measured on a continuous scale, but they have multimodal distribution, rather than multinomial.
By the way, a kernel density plot makes the modality more difficult to judge. Some kind of strip plot or strip chart would be helpful.
Commenting on your model would be easier with some scientific context, but sample size is less convincing as a control than design complexity. If sample size is an issue at all, would you expect an interaction? Would you expect a linear relationship?
Design complexity has a strong effect. If you start with the much simpler model in which design complexity is the only factor, then what is crucial is the distribution of residuals at each level of complexity.
My bottom line is that normality of residuals is less of a big deal than you seem to think. You seem to have approximate symmetry of residuals, and perhaps your P-values will be a little untrustworthy, but they are usually dubious anyway. | Regression with non-normally distributed residuals
I wouldn't call that multinomial. Residuals are measured on a continuous scale, but they have multimodal distribution, rather than multinomial.
By the way, a kernel density plot makes the modality mo |
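A minimal strip-chart sketch in Python/matplotlib for looking at the residuals at each level of design complexity, as suggested above; the residuals here are simulated placeholders with a deliberately bimodal shape.
```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
complexity = np.repeat([1, 2, 3], 40)                      # three design-complexity levels
residuals = rng.normal(0, 0.5, size=120) + np.where(rng.random(120) < 0.5, -1.5, 1.5)

jitter = rng.uniform(-0.08, 0.08, size=complexity.size)    # spread points horizontally
plt.plot(complexity + jitter, residuals, "o", alpha=0.5)
plt.xticks([1, 2, 3])
plt.xlabel("design complexity level")
plt.ylabel("residual")
plt.show()
```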
55,329 | Difference between cumulants and moments | Question asks: "is the $n$th cumulant equivalent to the $n$th central moment (i.e. about the mean)?"
Answer is: only for $n = 1, 2$ or $3$.
Here, for example, are the first 9 cumulants of the population in terms of central moments $\mu_i$ of the population:
using mathStatica's CumulantToCentral function.
More generally
In a multivariate world, the product cumulant will only be identical to the product central moments if 1 < (sum of the indexes) $\le$ 3. For example, $\kappa_{i,j,k}$ will be equal to $\mu_{i,j,k}$ provided $1 < i+j+k \le 3$. Here are some bivariate product cumulants expressed in terms of product central moments of the population: | Difference between cumulants and moments | Question asks: "is the $n$th cumulant equivalent to the $n$th central moment (i.e. about the mean)?"
Answer is: only for $n = 1, 2$ or $3$.
Here, for example, are the first 9 cumulants of the popula | Difference between cumulants and moments
Question asks: "is the $n$th cumulant equivalent to the $n$th central moment (i.e. about the mean)?"
Answer is: only for $n = 1, 2$ or $3$.
Here, for example, are the first 9 cumulants of the population in terms of central moments $\mu_i$ of the population:
using mathStatica's CumulantToCentral function.
More generally
In a multivariate world, the product cumulant will only be identical to the product central moments if 1 < (sum of the indexes) $\le$ 3. For example, $\kappa_{i,j,k}$ will be equal to $\mu_{i,j,k}$ provided $1 < i+j+k \le 3$. Here are some bivariate product cumulants expressed in terms of product central moments of the population: | Difference between cumulants and moments
Question asks: "is the $n$th cumulant equivalent to the $n$th central moment (i.e. about the mean)?"
Answer is: only for $n = 1, 2$ or $3$.
Here, for example, are the first 9 cumulants of the popula |
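The displayed output of the CumulantToCentral step is not reproduced in this row, so for reference here are the first few of these standard relations (with $\mu_i$ the central moments of the population; $\kappa_1$ is the mean, i.e. a raw rather than central moment):
$$
\kappa_2=\mu_2, \qquad \kappa_3=\mu_3, \qquad \kappa_4=\mu_4-3\mu_2^2,
$$
$$
\kappa_5=\mu_5-10\,\mu_3\mu_2, \qquad
\kappa_6=\mu_6-15\,\mu_4\mu_2-10\,\mu_3^2+30\,\mu_2^3 .
$$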
55,330 | Autocorrelation and partial autocorrelation interpretation | Neither the ACF nor the PACF are giving any reason to suppose an ARMA process, trend or seasonality: none of the correlations approach significance at conventional levels. Note that sixteen observations is very few to fit a time series model, so the only effects you might see would be very large ones.
The residuals of the process are the differences between the observations & the fitted values from your model. If your model's good they should be white noise—uncorrelated with zero mean. You don't say what model you fit; but the residuals look a little less like white noise than your original series, so it's probably not a good one. | Autocorrelation and partial autocorrelation interpretation | Neither the ACF nor the PACF are giving any reason to suppose an ARMA process, trend or seasonality: none of the correlations approach significance at conventional levels. Note that sixteen observati | Autocorrelation and partial autocorrelation interpretation
Neither the ACF nor the PACF are giving any reason to suppose an ARMA process, trend or seasonality: none of the correlations approach significance at conventional levels. Note that sixteen observations is very few to fit a time series model, so the only effects you might see would be very large ones.
The residuals of the process are the differences between the observations & the fitted values from your model. If your model's good they should be white noise—uncorrelated with zero mean. You don't say what model you fit; but the residuals look a little less like white noise than your original series, so it's probably not a good one. | Autocorrelation and partial autocorrelation interpretation
Neither the ACF nor the PACF are giving any reason to suppose an ARMA process, trend or seasonality: none of the correlations approach significance at conventional levels. Note that sixteen observati |
55,331 | Goodness of fit in a GLM with scaled deviance | I think when you allow for unknown dispersion, the GLM is no longer a maximum likelihood technique, but maximizes a "quasi" likelihood. Because of that, the deviance is fixed by using the sample dispersion as the model dispersion (as a consequence of maximizing quasi likelihood). By treating the dispersion as a parameter in a quasilikelihood, the family of e.g. quasibinomial likelihoods are equivalent to the binomial likelihood up to a proportional constant. Maximizing quasilikelihood treats this constant like a nuisance parameter.
Think of it like this: when the probability model for the underlying GLM is correct then the sample deviance will have an expected value of 1 (as confirmed by your formulation and limiting distribution statement above). But random variation in that data will show that the probability model does not always perfectly fit such data.
When the sample deviance is egregiously different from such, this is indication that the working probability model is not a good probability framework for the observed data. This doesn't mean that the inference on parameters is incorrect. In fact, by using scaled deviance, you can account for this over or under dispersion in the working GLM and get correct inference on the parameters. This is the GLM obtained from maximum quasilikelihood.
I recommend looking at Alan Agresti's example of horseshoe crabs and quasipoisson and quasibinomial models in Categorical Data Analysis 2nd ed for further clarification. | Goodness of fit in a GLM with scaled deviance | I think when you allow for unknown dispersion, the GLM is no longer a maximum likelihood technique, but maximizes a "quasi" likelihood. Because of that, the deviance is fixed by using the sample dispe | Goodness of fit in a GLM with scaled deviance
I think when you allow for unknown dispersion, the GLM is no longer a maximum likelihood technique, but maximizes a "quasi" likelihood. Because of that, the deviance is fixed by using the sample dispersion as the model dispersion (as a consequence of maximizing quasi likelihood). By treating the dispersion as a parameter in a quasilikelihood, the family of e.g. quasibinomial likelihoods are equivalent to the binomial likelihood up to a proportional constant. Maximizing quasilikelihood treats this constant like a nuisance parameter.
Think of it like this: when the probability model for the underlying GLM is correct then the sample deviance will have an expected value of 1 (as confirmed by your formulation and limiting distribution statement above). But random variation in that data will show that the probability model does not always perfectly fit such data.
When the sample deviance is egregiously different from such, this is indication that the working probability model is not a good probability framework for the observed data. This doesn't mean that the inference on parameters is incorrect. In fact, by using scaled deviance, you can account for this over or under dispersion in the working GLM and get correct inference on the parameters. This is the GLM obtained from maximum quasilikelihood.
I recommend looking at Alan Agresti's example of horseshoe crabs and quasipoisson and quasibinomial models in Categorical Data Analysis 2nd ed for further clarification. | Goodness of fit in a GLM with scaled deviance
I think when you allow for unknown dispersion, the GLM is no longer a maximum likelihood technique, but maximizes a "quasi" likelihood. Because of that, the deviance is fixed by using the sample dispe |
55,332 | Re-check boxplot after outlier removal | If you have that many outliers, they aren't outliers; you have a non-normal distribution.
How are you going to be using the age variable? One possibility is that it is to be used as an independent variable in a regression. In this case, this distribution is not necessarily a problem - regression makes assumptions about the error (as measured by the residuals) not about the distribution of the independent variables.
(Also, @Doug 's answer is good, and you should tell us that, too). | Re-check boxplot after outlier removal | If you have that many outliers, they aren't outliers; you have a non-normal distribution.
How are you going to be using the age variable? One possibility is that it is to be used as an independent va | Re-check boxplot after outlier removal
If you have that many outliers, they aren't outliers; you have a non-normal distribution.
How are you going to be using the age variable? One possibility is that it is to be used as an independent variable in a regression. In this case, this distribution is not necessarily a problem - regression makes assumptions about the error (as measured by the residuals) not about the distribution of the independent variables.
(Also, @Doug 's answer is good, and you should tell us that, too). | Re-check boxplot after outlier removal
If you have that many outliers, they aren't outliers; you have a non-normal distribution.
How are you going to be using the age variable? One possibility is that it is to be used as an independent va |
55,333 | Re-check boxplot after outlier removal | Answers 1: maybe, 2: depends. We need a little more information on why you want to remove these outliers. If you could provide a histogram, it might be possible to transform the data and eliminate some of the outliers, but it all depends on the research questions. Please tell us more about 1) your research questions, 2) your participants, and 3) how you are defining outliers (or are you allowing the boxplots to define them for you). | Re-check boxplot after outlier removal | Answers 1: maybe, 2: depends. We need a little more information on why you want to remove these outliers. If you could provide a histogram, it might be possible to transform the data and eliminate s | Re-check boxplot after outlier removal
Answers 1: maybe, 2: depends. We need a little more information on why you want to remove these outliers. If you could provide a histogram, it might be possible to transform the data and eliminate some of the outliers, but it all depends on the research questions. Please tell us more about 1) your research questions, 2) your participants, and 3) how you are defining outliers (or are you allowing the boxplots to define them for you). | Re-check boxplot after outlier removal
Answers 1: maybe, 2: depends. We need a little more information on why you want to remove these outliers. If you could provide a histogram, it might be possible to transform the data and eliminate s |
55,334 | Re-check boxplot after outlier removal | When you remove outliers, the number of data points changes, so the quantiles change: the lower and upper ranges change, and the boxplot again shows outliers.
If you observe both box plots carefully, the upper range for the first is nearly 38;
after removing the outliers it becomes nearly 32. | Re-check boxplot after outlier removal | When you remove outliers, the number of data points changes, so the quantiles change: the lower and upper ranges change, and the boxplot again shows outliers.
If you observe both box plots carefully, the upper ra | Re-check boxplot after outlier removal
When you remove outliers, the number of data points changes, so the quantiles change: the lower and upper ranges change, and the boxplot again shows outliers.
If you observe both box plots carefully, the upper range for the first is nearly 38;
after removing the outliers it becomes nearly 32. | Re-check boxplot after outlier removal
When you remove outliers, the number of data points changes, so the quantiles change: the lower and upper ranges change, and the boxplot again shows outliers.
If you observe both box plots carefully, the upper ra
55,335 | Re-check boxplot after outlier removal | Ok here is what I learned, It is enough to pick out the outliers once from your dataset. If you continue to do so IQR changes respectively which will keep giving you new outliers. If you do not want to see the outliers once you picked them out just add the code, "outline=F", to avoid seeing the new outliers. Hope this helps. | Re-check boxplot after outlier removal | Ok here is what I learned, It is enough to pick out the outliers once from your dataset. If you continue to do so IQR changes respectively which will keep giving you new outliers. If you do not want t | Re-check boxplot after outlier removal
Ok here is what I learned, It is enough to pick out the outliers once from your dataset. If you continue to do so IQR changes respectively which will keep giving you new outliers. If you do not want to see the outliers once you picked them out just add the code, "outline=F", to avoid seeing the new outliers. Hope this helps. | Re-check boxplot after outlier removal
Ok here is what I learned, It is enough to pick out the outliers once from your dataset. If you continue to do so IQR changes respectively which will keep giving you new outliers. If you do not want t |
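A small Python illustration of the effect discussed in the last few rows: after trimming the points outside the 1.5 IQR fences, the quartiles shift and a fresh boxplot rule flags new 'outliers'. The skewed sample used here is an arbitrary choice.
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=500)     # skewed data, many boxplot "outliers"

for step in range(4):
    q1, q3 = np.percentile(x, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    flagged = (x < lo) | (x > hi)
    print(f"pass {step}: n = {x.size}, newly flagged = {flagged.sum()}")
    x = x[~flagged]                                  # removing them just moves the fences
```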
55,336 | How can one set up a linear support vector machine in Excel? | Honestly, I am not sure why you want to do this in Excel. Nonetheless, ...
A linear SVM requires solving a quadratic program with several linear constraints. You can check this answer [1] to find out how the quadratic program is setup. Once you setup the quadratic program and find a solver that can help you solve it in Excel, then you are good to go.
On the other hand, the corresponding quadratic program has a dual that gives rise to the notion of kernels. The objective function for the dual can be found here [2]. If you can find a quadratic program solver in Excel, you might as well solve the dual, which will allow you to solve problems beyond linear kernels.
If you don't have a QP solver at hand, then you can write the SMO algorithm [3] which solves the SVM dual. The provided link gives you a pseudocode. SMO is one of the simplest algorithms to solve the SVM dual, but also the slowest. For a small number of training data, it should be pretty fast, however.
[1] Given a set of points in two dimensional space, how can one design decision function for SVM?
[2] Non-linear SVM classification with RBF kernel
[3] http://cs229.stanford.edu/materials/smo.pdf | How can one set up a linear support vector machine in Excel? | Honestly, I am not sure why you want to do this in Excel. Nonetheless, ...
A linear SVM requires solving a quadratic program with several linear constraints. You can check this answer [1] to find out | How can one set up a linear support vector machine in Excel?
Honestly, I am not sure why you want to do this in Excel. Nonetheless, ...
A linear SVM requires solving a quadratic program with several linear constraints. You can check this answer [1] to find out how the quadratic program is setup. Once you setup the quadratic program and find a solver that can help you solve it in Excel, then you are good to go.
On the other hand, the corresponding quadratic program has a dual that gives rise to the notion of kernels. The objective function for the dual can be found here [2]. If you can find a quadratic program solver in Excel, you might as well solve the dual, which will allow you to solve problems beyond linear kernels.
If you don't have a QP solver at hand, then you can write the SMO algorithm [3] which solves the SVM dual. The provided link gives you a pseudocode. SMO is one of the simplest algorithms to solve the SVM dual, but also the slowest. For a small number of training data, it should be pretty fast, however.
[1] Given a set of points in two dimensional space, how can one design decision function for SVM?
[2] Non-linear SVM classification with RBF kernel
[3] http://cs229.stanford.edu/materials/smo.pdf | How can one set up a linear support vector machine in Excel?
Honestly, I am not sure why you want to do this in Excel. Nonetheless, ...
A linear SVM requires solving a quadratic program with several linear constraints. You can check this answer [1] to find out |
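If no QP solver is at hand, the soft-margin primal can also be attacked with plain sub-gradient descent on the hinge loss, which is simple enough that each update could in principle be laid out in spreadsheet cells. A hedged NumPy sketch of that idea (this is the primal formulation, not the SMO dual referenced above; C, the learning rate and the toy data are my own choices):
```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]                # labels in {-1, +1}

w, b, C, lr = np.zeros(2), 0.0, 1.0, 0.01
for epoch in range(500):
    margins = y * (X @ w + b)
    viol = margins < 1                              # points violating the margin
    grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)   # gradient of 0.5||w||^2 + C*hinge
    grad_b = -C * y[viol].sum()
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
print("w =", w, "b =", b, "training accuracy =", (pred == y).mean())
```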
55,337 | How can one set up a linear support vector machine in Excel? | This looks like a good tutorial, and has a downloadable Excel example:
http://people.revoledu.com/kardi/tutorial/Regression/KernelRegression/KernelRegression.htm | How can one set up a linear support vector machine in Excel? | This looks like a good tutorial, and has a downloadable Excel example:
http://people.revoledu.com/kardi/tutorial/Regression/KernelRegression/KernelRegression.htm | How can one set up a linear support vector machine in Excel?
This looks like a good tutorial, and has a downloadable Excel example:
http://people.revoledu.com/kardi/tutorial/Regression/KernelRegression/KernelRegression.htm | How can one set up a linear support vector machine in Excel?
This looks like a good tutorial, and has a downloadable Excel example:
http://people.revoledu.com/kardi/tutorial/Regression/KernelRegression/KernelRegression.htm |
55,338 | How can one set up a linear support vector machine in Excel? | You might try using Excel2SVM if you want to organize your data in an excel format. http://www.bioinformatics.org/Excel2SVM/ could be helpful | How can one set up a linear support vector machine in Excel? | You might try using Excel2SVM if you want to organize your data in an excel format. http://www.bioinformatics.org/Excel2SVM/ could be helpful | How can one set up a linear support vector machine in Excel?
You might try using Excel2SVM if you want to organize your data in an excel format. http://www.bioinformatics.org/Excel2SVM/ could be helpful | How can one set up a linear support vector machine in Excel?
You might try using Excel2SVM if you want to organize your data in an excel format. http://www.bioinformatics.org/Excel2SVM/ could be helpful |
55,339 | How can one set up a linear support vector machine in Excel? | You can find a tutorial here, it uses Excel (no macros) and explains everything in an intuitive way (beware: most parts are behind a paywall, but the price is reasonable):
http://people.revoledu.com/kardi/tutorial/SVM/index.html | How can one set up a linear support vector machine in Excel? | You can find a tutorial here, it uses Excel (no macros) and explains everything in an intuitive way (beware: most parts are behind a paywall, but the price is reasonable):
http://people.revoledu.com/k | How can one set up a linear support vector machine in Excel?
You can find a tutorial here, it uses Excel (no macros) and explains everything in an intuitive way (beware: most parts are behind a paywall, but the price is reasonable):
http://people.revoledu.com/kardi/tutorial/SVM/index.html | How can one set up a linear support vector machine in Excel?
You can find a tutorial here, it uses Excel (no macros) and explains everything in an intuitive way (beware: most parts are behind a paywall, but the price is reasonable):
http://people.revoledu.com/k |
55,340 | Can I repeat cross validation with a small dataset, and/or how can I improve my cross validation confidence? | It seems as if you are using an improper scoring rule, proportion correctly classified. Optimizing this measure will choose a bogus model.
You will need to repeat 10-fold cross-validation 100 times to get sufficient precision for validation estimates, and be sure to use a proper scoring rule (e.g., Brier score (quadratic error score) or logarithmic scoring rule (log likelihood)). | Can I repeat cross validation with a small dataset, and/or how can I improve my cross validation con | It seems as if you are using an improper scoring rule, proportion correctly classified. Optimizing this measure will choose a bogus model.
You will need to repeat 10-fold cross-validation 100 times t | Can I repeat cross validation with a small dataset, and/or how can I improve my cross validation confidence?
It seems as if you are using an improper scoring rule, proportion correctly classified. Optimizing this measure will choose a bogus model.
You will need to repeat 10-fold cross-validation 100 times to get sufficient precision for validation estimates, and be sure to use a proper scoring rule (e.g., Brier score (quadratic error score) or logarithmic scoring rule (log likelihood)). | Can I repeat cross validation with a small dataset, and/or how can I improve my cross validation con
It seems as if you are using an improper scoring rule, proportion correctly classified. Optimizing this measure will choose a bogus model.
You will need to repeat 10-fold cross-validation 100 times t |
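A sketch of the recipe above in Python/scikit-learn: stratified 10-fold cross-validation repeated 100 times, scored with the Brier score (a proper scoring rule) rather than proportion classified correctly. The data and the logistic-regression model are placeholders.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=100, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="neg_brier_score")
print("mean Brier score:", -scores.mean(), "spread across repeats:", scores.std())
```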
55,341 | How can I find $\text{Cov}(X_k,X_5)$ | The general situation you have here is described by a multinomial distribution. The Wikipedia article about the multinomial distribution explains:
For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability [...]
In your case, two balls are put in the boxes in each trial. You have already recognized that this leads to a success probability of 1/5 for each of the boxes. So your answer for (a) is correct.
In order to answer question (b), recall the formula for the covariance for $i\neq j$:
$$
Cov[X_{i}, X_{j}] = E[X_{i}\cdot X_{j}] - E[X_{i}]\cdot E[X_{j}]
$$
If only one ball was put in a box in each trials, we could set $E[X_{i}\cdot X_{j}]=0$ because $X_{i}$ and $X_{j}$ could not be 1 (i.e. receive a ball) in the same trial. It is easily seen that this assumption is true whenever $j\neq i \pm 1 \text{ mod } 10$. For these cases, the covariance is $-np_{i}p_{j} = -100\cdot (1/5)\cdot (1/5) = -4$.
For the case that $j=i \pm 1 \text{ mod } 10$, the term $E[X_{i}\cdot X_{j}]\neq 0$ because both boxes receive a ball at the same trial. From the Wikipedia article about the binomial distribution we get
$$
Cov[X_{i}, X_{j}] = n\cdot (p_{b}-p_{i}p_{j})
$$
where $p_{b}$ denotes the probability that both boxes receive a ball. In this case, $p_{b}=1/10$. So finally, whenever $j=i \pm 1 \text{ mod } 10$ the covariance is $100\cdot (\frac{1}{10} - \frac{1}{5}\cdot \frac{1}{5})=6$. | How can I find $\text{Cov}(X_k,X_5)$ | The general situation you have here is described by a multinomial distribution. The Wikipedia article about the multinomial distribution explains:
For n independent trials each of which leads to a s | How can I find $\text{Cov}(X_k,X_5)$
The general situation you have here is described by a multinomial distribution. The Wikipedia article about the multinomial distribution explains:
For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability [...]
In your case, two balls are put in the boxes in each trial. You have already recognized that this leads to a success probability of 1/5 for each of the boxes. So your answer for (a) is correct.
In order to answer question (b), recall the formula for the covariance for $i\neq j$:
$$
Cov[X_{i}, X_{j}] = E[X_{i}\cdot X_{j}] - E[X_{i}]\cdot E[X_{j}]
$$
If only one ball was put in a box in each trials, we could set $E[X_{i}\cdot X_{j}]=0$ because $X_{i}$ and $X_{j}$ could not be 1 (i.e. receive a ball) in the same trial. It is easily seen that this assumption is true whenever $j\neq i \pm 1 \text{ mod } 10$. For these cases, the covariance is $-np_{i}p_{j} = -100\cdot (1/5)\cdot (1/5) = -4$.
For the case that $j=i \pm 1 \text{ mod } 10$, the term $E[X_{i}\cdot X_{j}]\neq 0$ because both boxes receive a ball at the same trial. From the Wikipedia article about the binomial distribution we get
$$
Cov[X_{i}, X_{j}] = n\cdot (p_{b}-p_{i}p_{j})
$$
where $p_{b}$ denotes the probability that both boxes receive a ball. In this case, $p_{b}=1/10$. So finally, whenever $j=i \pm 1 \text{ mod } 10$ the covariance is $100\cdot (\frac{1}{10} - \frac{1}{5}\cdot \frac{1}{5})=6$. | How can I find $\text{Cov}(X_k,X_5)$
The general situation you have here is described by a multinomial distribution. The Wikipedia article about the multinomial distribution explains:
For n independent trials each of which leads to a s |
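A Monte Carlo check of the covariances derived in the two answers above, assuming the setup they imply: in each of the 100 trials one of the 10 boxes is picked uniformly at random and a ball is dropped into each of its two neighbours (indices mod 10). The number of replications is arbitrary.
```python
import numpy as np

rng = np.random.default_rng(0)
reps, trials, boxes = 100_000, 100, 10

chosen = rng.integers(boxes, size=(reps, trials))            # box picked in each trial
counts = np.zeros((reps, boxes))
for j in range(boxes):
    hits = (chosen == (j + 1) % boxes) | (chosen == (j - 1) % boxes)
    counts[:, j] = hits.sum(axis=1)                          # balls landing in box j

def mc_cov(a, b):
    return np.mean((a - a.mean()) * (b - b.mean()))

print(mc_cov(counts[:, 5], counts[:, 5]))   # ~ 16  (variance)
print(mc_cov(counts[:, 5], counts[:, 3]))   # ~  6  (k = 3 or 7)
print(mc_cov(counts[:, 5], counts[:, 1]))   # ~ -4  (all other k)
```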
55,342 | How can I find $\text{Cov}(X_k,X_5)$ | This is my first post on StackExchange, and I hope it is helpful. COOLSerdash gave a very complete answer already which I mostly agree with, but I will try to add a different perspective on the problem.
As you stated, $X_{k} \sim Binomial(n=100,p=\frac{1}{5}) \equiv 100 \times Bernoulli(\frac{1}{5})$
$\text{Cov}(X_5,X_k) = \mathbb{E}[(X_5-\mathbb{E}[X_5])(X_k-\mathbb{E}[X_k])]$
This leads to the expression $\mathbb{E}[X_5X_k]-\mathbb{E}[X_5]\mathbb{E}[X_k] = \mathbb{E}[X_5X_k]-100\times(\mathbb{E}(Bernoulli(\frac{1}{5}))$
$= \mathbb{E}[X_5X_k]-100\times(\frac{1}{25})=\mathbb{E}[X_5X_k]-4$.
Therefore, we have three conditions:
$k = 5 ; Cov(X_5,X_5) = Var(X_5) = Var(X_k) = 16$
$k \neq 3,7; Cov(X_5,X_k) = \mathbb{E}[X_5X_k]-4 = 0$
Consider the events $A := ${box $5$ gets a ball} , $B$ {box $k$ gets a ball} $\neq 3,7$, then $ \mathbb{P}(A,B) = \mathbb{P}(A)\mathbb{P}(B)$ due to independence.
$\therefore \mathbb{E}[X_5X_k]=100\times \mathbb{E}[A]\mathbb{E}[B]=100\times(\frac{1}{5}\times \frac{1}{5}) = 4$
$k = 3,7; Cov(X_5,X_k) = \mathbb{E}[X_5X_k]-4 = 6$ (as COOLSerdash stated)
With the defined events $A$ and $B$ for $k = 3,7$, $ \mathbb{P}(A,B) = \frac{1}{10}$ since the probability $A \cap B$ is the probability that box 4 (if $k=3$) or box 6 (if $k=7$) gets a ball.
Hope this helps! | How can I find $\text{Cov}(X_k,X_5)$ | This is my first post on StackExchange, and I hope it is helpful. COOLSerdash gave a very complete answer already which I mostly agree with, but I will try to add a different perspective on the proble | How can I find $\text{Cov}(X_k,X_5)$
This is my first post on StackExchange, and I hope it is helpful. COOLSerdash gave a very complete answer already which I mostly agree with, but I will try to add a different perspective on the problem.
As you stated, $X_{k} \sim Binomial(n=100,p=\frac{1}{5}) \equiv 100 \times Bernoulli(\frac{1}{5})$
$\text{Cov}(X_5,X_k) = \mathbb{E}[(X_5-\mathbb{E}[X_5])(X_k-\mathbb{E}[X_k])]$
This leads to the expression $\mathbb{E}[X_5X_k]-\mathbb{E}[X_5]\mathbb{E}[X_k] = \mathbb{E}[X_5X_k]-100\times(\mathbb{E}(Bernoulli(\frac{1}{5}))$
$= \mathbb{E}[X_5X_k]-100\times(\frac{1}{25})=\mathbb{E}[X_5X_k]-4$.
Therefore, we have three conditions:
$k = 5 ; Cov(X_5,X_5) = Var(X_5) = Var(X_k) = 16$
$k \neq 3,7; Cov(X_5,X_k) = \mathbb{E}[X_5X_k]-4 = 0$
Consider the events $A := ${box $5$ gets a ball} , $B$ {box $k$ gets a ball} $\neq 3,7$, then $ \mathbb{P}(A,B) = \mathbb{P}(A)\mathbb{P}(B)$ due to independence.
$\therefore \mathbb{E}[X_5X_k]=100\times \mathbb{E}[A]\mathbb{E}[B]=100\times(\frac{1}{5}\times \frac{1}{5}) = 4$
$k = 3,7; Cov(X_5,X_k) = \mathbb{E}[X_5X_k]-4 = 6$ (as COOLSerdash stated)
With the defined events $A$ and $B$ for $k = 3,7$, $ \mathbb{P}(A,B) = \frac{1}{10}$ since the probability $A \cap B$ is the probability that box 4 (if $k=3$) or box 6 (if $k=7$) gets a ball.
Hope this helps! | How can I find $\text{Cov}(X_k,X_5)$
This is my first post on StackExchange, and I hope it is helpful. COOLSerdash gave a very complete answer already which I mostly agree with, but I will try to add a different perspective on the proble |
55,343 | Cosine of a uniform random variable | Here's a mostly intuitive explanation of the general appearance of the result. Consider just the right half of the original $y$ range (the other half is symmetric about zero to what happens here).
Where do the values end up? Which values go close to 1? To 0?
Clearly, from inspection of the $\cos$ function, small $y$ values will become values close to $1$, while values of $y$ near $\pi$ are mapped to near $-1$ and values near $\pi/2$ are mapped to near $0$.
Near $y=\pi/2$ the $\cos$ function is almost linear, and so uniformly distributed $Y$ values will remain nearly uniform after transformation (just rescaled almost-linearly - in this case, the linear transform that it's close to in this vicinity flips values around and shifts them along).
Near $y=0$ the $\cos$ function is almost quadratic, $\text{cos}\, y \approx 1-y^2/2$. So what happens to $y$ values between 0 and $\varepsilon$ for small $\varepsilon$?
They get mapped to values between $1-\varepsilon^2/2$ and $1$. So they're bunched into a space that is about $\varepsilon/2$ as large as where they came from. So it has to get (on average) $2/\varepsilon$ times as dense in there - a big number.
e.g. values between 0 and 0.01 roughly go to between 1-0.00005 and 1. So they're squeezed into one two-hundredth the space and so need to average 200 times the density in there, and it gets bigger the closer in you go.
There's a similar effect on $y$ values close to $\pi$ but they map to near $-1$.
So the overall appearance is intuitively clear - the density should look flat near zero and increase dramatically - indeed, without bound, near the endpoints.
(Note that $\text{sin}(\text{cos}^{−1}y) = \sqrt{1-y^2}$ here. This doesn't affect the above intuitive explanation for why it must look like this, but is useful if you're trying to use the usual algebraic methods for calculation of the density of the transformed variable.) | Cosine of a uniform random variable | Here's a mostly intuitive explanation of the general appearance of the result. Consider just the right half of the original $y$ range (the other half is symmetric about zero to what happens here).
Wh | Cosine of a uniform random variable
Here's a mostly intuitive explanation of the general appearance of the result. Consider just the right half of the original $y$ range (the other half is symmetric about zero to what happens here).
Where do the values end up? Which values go close to 1? To 0?
Clearly, from inspection of the $\cos$ function, small $y$ values will become values close to $1$, while values of $y$ near $\pi$ are mapped to near $-1$ and values near $\pi/2$ are mapped to near $0$.
Near $y=\pi/2$ the $\cos$ function is almost linear, and so uniformly distributed $Y$ values will remain nearly uniform after transformation (just rescaled almost-linearly - in this case, the linear transform that it's close to in this vicinity flips values around and shifts them along).
Near $y=0$ the $\cos$ function is almost quadratic, $\text{cos}\, y \approx 1-y^2/2$. So what happens to $y$ values between 0 and $\varepsilon$ for small $\varepsilon$?
They get mapped to values between $1-\varepsilon^2/2$ and $1$. So they're bunched into a space that is about $\varepsilon/2$ as large as where they came from. So it has to get (on average) $2/\varepsilon$ times as dense in there - a big number.
e.g. values between 0 and 0.01 roughly go to between 1-0.00005 and 1. So they're squeezed into one two-hundredth the space and so need to average 200 times the density in there, and it gets bigger the closer in you go.
There's a similar effect on $y$ values close to $\pi$ but they map to near $-1$.
So the overall appearance is intuitively clear - the density should look flat near zero and increase dramatically - indeed, without bound, near the endpoints.
(Note that $\text{sin}(\text{cos}^{−1}y) = \sqrt{1-y^2}$ here. This doesn't affect the above intuitive explanation for why it must look like this, but is useful if you're trying to use the usual algebraic methods for calculation of the density of the transformed variable.) | Cosine of a uniform random variable
Here's a mostly intuitive explanation of the general appearance of the result. Consider just the right half of the original $y$ range (the other half is symmetric about zero to what happens here).
Wh |
55,344 | Why is the semi-partial correlation sometimes called the "part correlation"? | I'm afraid that my attempt at an answer is hardly more satisfying than gung's. Snedecor and Cochran's book discuss this briefly. That was an old statistics text, based out of a lot of agricultural work (and so much of the early work was) and in any case, takes us back, I think, to early work by Mordecai Ezekiel and Bradford Smith around the 1920s or 1930s. At that point, the linear model was already established, and that a correlation, assuming bivariate normality, could be extended to the multivariate normal case and regression. However, considerable effort was placed on formulae and shortcuts that made by hand calculations easier.
My belief is that the part correlation, was an early derivation that actually pre-dated what we now call partial correlations, and in the absence of partial correlations, referring to it as part correlation makes sense. I think the reference you might want is Correlation Theory and Method Applied to Agricultural Research, but alas I do not have easy access to it to see if all is explained. | Why is the semi-partial correlation sometimes called the "part correlation"? | I'm afraid that my attempt at an answer is hardly more satisfying than gung's. Snedecor and Cochran's book discuss this briefly. That was an old statistics text, based out of a lot of agricultural w | Why is the semi-partial correlation sometimes called the "part correlation"?
I'm afraid that my attempt at an answer is hardly more satisfying than gung's. Snedecor and Cochran's book discuss this briefly. That was an old statistics text, based out of a lot of agricultural work (and so much of the early work was) and in any case, takes us back, I think, to early work by Mordecai Ezekiel and Bradford Smith around the 1920s or 1930s. At that point, the linear model was already established, and that a correlation, assuming bivariate normality, could be extended to the multivariate normal case and regression. However, considerable effort was placed on formulae and shortcuts that made by hand calculations easier.
My belief is that the part correlation, was an early derivation that actually pre-dated what we now call partial correlations, and in the absence of partial correlations, referring to it as part correlation makes sense. I think the reference you might want is Correlation Theory and Method Applied to Agricultural Research, but alas I do not have easy access to it to see if all is explained. | Why is the semi-partial correlation sometimes called the "part correlation"?
I'm afraid that my attempt at an answer is hardly more satisfying than gung's. Snedecor and Cochran's book discuss this briefly. That was an old statistics text, based out of a lot of agricultural w |
55,345 | Why is the semi-partial correlation sometimes called the "part correlation"? | I have no idea (and if you object, I can delete this answer), but I can tell you how I've tried to explain it to people so that they can get it.
Namely, the word "part" goes only half way through the word "partial". In the same sense, the part correlation partials the variable out of only one of X or Y, so it only goes half way though a partial correlation. I acknowledge that this is an arbitrary mnemonic, but it does help people remember it. | Why is the semi-partial correlation sometimes called the "part correlation"? | I have no idea (and if you object, I can delete this answer), but I can tell you how I've tried to explain it to people so that they can get it.
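To see the distinction in code rather than words, here is a small R sketch (the variables x, y and z are just simulated for illustration):
set.seed(1)
z <- rnorm(100); x <- 0.5*z + rnorm(100); y <- x + z + rnorm(100)
x.res <- resid(lm(x ~ z))    # z partialled out of x only
y.res <- resid(lm(y ~ z))    # z partialled out of y as well
cor(y,     x.res)            # part (semi-partial) correlation
cor(y.res, x.res)            # partial correlation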
Namely, the word "part" goes only half way through th | Why is the semi-partial correlation sometimes called the "part correlation"?
I have no idea (and if you object, I can delete this answer), but I can tell you how I've tried to explain it to people so that they can get it.
Namely, the word "part" goes only half way through the word "partial". In the same sense, the part correlation partials the variable out of only one of X or Y, so it only goes half way though a partial correlation. I acknowledge that this is an arbitrary mnemonic, but it does help people remember it. | Why is the semi-partial correlation sometimes called the "part correlation"?
I have no idea (and if you object, I can delete this answer), but I can tell you how I've tried to explain it to people so that they can get it.
Namely, the word "part" goes only half way through th |
55,346 | Why is the semi-partial correlation sometimes called the "part correlation"? | According to Howell (2012, in the partial and semi-partial correlation), "part correlation" is due to McNemar (1969) Psychological statistics, Wiley. If someone can get a copy, maybe we could get further. | Why is the semi-partial correlation sometimes called the "part correlation"? | According to Howell (2012, in the partial and semi-partial correlation), "part correlation" is due to McNemar (1969) Psychological statistics, Wiley. If someone can get a copy, maybe we could get furt | Why is the semi-partial correlation sometimes called the "part correlation"?
According to Howell (2012, in the partial and semi-partial correlation), "part correlation" is due to McNemar (1969) Psychological statistics, Wiley. If someone can get a copy, maybe we could get further. | Why is the semi-partial correlation sometimes called the "part correlation"?
According to Howell (2012, in the partial and semi-partial correlation), "part correlation" is due to McNemar (1969) Psychological statistics, Wiley. If someone can get a copy, maybe we could get furt |
55,347 | Why is the CI for an odds ratio not always centered on the sample value? | Odds ratios are not distributed symmetrically - they can't be, because they can't go below zero, but they can go as high as infinity.
What is distributed symmetrically is the log of the odds ratio. Most stats packages give a choice of the regular regression coefficient (B), and the exponentiated regression coefficient (exp(B)), which is the odds ratio.
Here's an extreme example: B is 3, CIs are 1, 5. Exponentiate those values, and you get a point estimate for the odds ratio of 20.1, and confidence intervals of 2.72 and 148.4.
However, notice that they are not symmetrical additively, but they are symmetrical multiplicatively. That is to say: 20.1/2.72 = 7.4 and 148.4/20.1 = 7.4 as well.
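A quick R sketch of that multiplicative symmetry, using the same numbers:
b <- 3; b.ci <- c(1, 5)                    # log odds ratio with a symmetric CI on the log scale
exp(b); exp(b.ci)                          # 20.09 with CI (2.72, 148.4): not symmetric additively
exp(b)/exp(b.ci[1]); exp(b.ci[2])/exp(b)   # both ratios equal exp(2) = 7.39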
The farther your point estimate of the odds ratio is from 1, the more distorted the effect will (appear to) be. To take an extreme example, a point estimate for B of 12, with CIs 10, 14 gives an odds ratio of 162,754 with CIs 22,026 and 1,202,604. The CIs cover a range over one million. But also notice that the same ratio (7.4) still holds. | Why is the CI for an odds ratio not always centered on the sample value? | Odds ratios are not distributed symmetrically - they can't be, because they can't go below zero, but they can go as high as infinity.
What is distributed symmetrically is the log of the odds ratio. | Why is the CI for an odds ratio not always centered on the sample value?
Odds ratios are not distributed symmetrically - they can't be, because they can't go below zero, but they can go as high as infinity.
What is distributed symmetrically is the log of the odds ratio. Most stats packages give a choice of the regular regression coefficient (B), and the exponentiated regression coefficient (exp(B)), which is the odds ratio.
Here's an extreme example: B is 3, CIs are 1, 5. Exponentiate those values, and you get a point estimate for the odds ratio of 20.1, and confidence intervals of 2.72 and 148.4.
However, notice that they are not symmetrical additively, but they are symmetrical multiplicatively. That is to say: 20.1/2.72 = 7.4 and 148.4/20.1 = 7.4 as well.
The farther your point estimate of the odds ratio is from 1, the more distorted the effect will (appear to) be. To take an extreme example, a point estimate for B of 12, with CIs 10, 14 gives an odds ratio of 162,754 with CIs 22,026 and 1,202,604. The CIs cover a range over one million. But also notice that the same ratio (7.4) still holds. | Why is the CI for an odds ratio not always centered on the sample value?
Odds ratios are not distributed symmetrically - they can't be, because they can't go below zero, but they can go as high as infinity.
What is distributed symmetrically is the log of the odds ratio. |
55,348 | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$ | Yes.
And here is how you go about finding one.
For our purposes, convex means $F''(x)\ge 0$ and concave means $F''(x)\le 0$.
Ok, so let $F$ be such a function. If we also assume monotonicity, we have $F'(x)\ge 0$, and $F$ is a cumulative distribution function. Therefore, the convex and concave conditions are $f'(x)\ge 0$ for $x\le c$ and $f'(x) \le 0$ for $x\ge c$ (where $f=F'$ is the pdf of $F$).
In other words, we are now looking for a density function on $[0,1]$ that is increasing on $[0,c]$ and decreasing on $[c,1]$. We go to a table of probability distributions on $[0,1]$ (e.g., Wikipedia's list) and see that the cumulative distribution function of the Beta distribution fits the bill.
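For example, a minimal R sketch: with unequal shape parameters the Beta CDF is an asymmetric S-shaped map of $[0,1]$ onto $[0,1]$, convex below the mode of the density and concave above it.
curve(pbeta(x, shape1 = 2, shape2 = 5), from = 0, to = 1,
      xlab = "x", ylab = "F(x)")            # asymmetric S-curve
abline(v = (2 - 1)/(2 + 5 - 2), lty = 2)    # inflection point at the mode, (a-1)/(a+b-2) = 0.2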
Another approach would be to explicitly construct such a function (I believe a quartic would do the job). | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$ | Yes.
And here is how you go about finding one.
For our purposes, convex means $F''(x)\ge 0$ and concave means $F''(x)\le 0$.
Ok, so let $F$ be such a function. If we also assume monotonicity, we have | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$
Yes.
And here is how you go about finding one.
For our purposes, convex means $F''(x)\ge 0$ and concave means $F''(x)\le 0$.
Ok, so let $F$ be such a function. If we also assume monotonicity, we have $F'(x)\ge 0$, and $F$ is a cumulative distribution function. Therefore, the convex and concave conditions are $f'(x)\ge 0$ for $x\le c$ and $f'(x) \le 0$ for $x\ge c$ (where $f=F'$ is the pdf of $F$).
In other words, we are now looking for a density function on $[0,1]$ that is increasing on $[0,c]$ and decreasing on $[c,1]$. We go to a table of probability distributions on $[0,1]$ (e.g., Wikipedia's list) and see that the cumulative distribution function of the Beta distribution fits the bill.
Another approach would be to explicitly construct such a function (I believe a quartic would do the job). | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$
Yes.
And here is how you go about finding one.
For our purposes, convex means $F''(x)\ge 0$ and concave means $F''(x)\le 0$.
Ok, so let $F$ be such a function. If we also assume monotonicity, we have |
55,349 | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$ | I would comment instead, but I don't have the score yet, so here's my two cents:
The only thing I could think of is an asymmetric tangent function, yet I couldn't find anything about them except for this part of the Proceedings of the Estonian Academy of Sciences, Engineering. See if it helps...? | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$ | I would comment instead, but I don't have the score yet, so here's my two cents:
The only thing I could think of is an asymmetric tangent function, yet I couldn't find anything about them except for t | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$
I would comment instead, but I don't have the score yet, so here's my two cents:
The only thing I could think of is an asymmetric tangent function, yet I couldn't find anything about them except for this part of the Proceedings of the Estonian Academy of Sciences, Engineering. See if it helps...? | Asymmetric S-shaped function mapping interval $[0, 1]$ to interval $[0, 1]$
I would comment instead, but I don't have the score yet, so here's my two cents:
The only thing I could think of is an asymmetric tangent function, yet I couldn't find anything about them except for t |
55,350 | is the z-test for difference of proportions valid for massive samples with tiny proportions? | Whenever I have doubts about the performance of a particular method, I try to run a simulation study to examine how well the method works under similar conditions. Below is a simple example using R for the case you are describing. Note that I set the true proportions equal for the two groups and to a value that is somewhere in between what you actually observed in the two samples. Therefore, the simulation provides the empirical Type I error rate of the test. It should hopefully be close to .05. Setting the number of iterations large enough will ensure that the simulation error is small. Also, note that I run the test once without and once with Yates' continuity correction to see whether this is relevant here.
iters <- 100000
n <- 23000
p <- 0.0027
x1i <- rbinom(iters, n, p)
x2i <- rbinom(iters, n, p)
pval1 <- rep(NA, iters)
pval2 <- rep(NA, iters)
for (i in 1:iters) {
pval1[i] <- chisq.test(matrix(c(x1i[i], n-x1i[i], x2i[i], n-x2i[i]), nrow=2, byrow=TRUE), correct=FALSE)$p.value
pval2[i] <- chisq.test(matrix(c(x1i[i], n-x1i[i], x2i[i], n-x2i[i]), nrow=2, byrow=TRUE), correct=TRUE)$p.value
}
round(mean(pval1 <= .05), 3)
round(mean(pval2 <= .05), 3)
Here are the results from one run:
> round(mean(pval1 <= .05), 3)
[1] 0.05
> round(mean(pval2 <= .05), 3)
[1] 0.04
So, the test performs nominally when not using Yates' continuity correction. With the correction, the test is slightly conservative.
If you want to find out about the power of the test, you can set the true proportions to two different values and then rerun the simulation. | is the z-test for difference of proportions valid for massive samples with tiny proportions? | Whenever I have doubts about the performance of a particular method, I try to run a simulation study to examine how well the method works under similar conditions. Below is a simple example using R fo | is the z-test for difference of proportions valid for massive samples with tiny proportions?
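For example, a minimal sketch of that power variant (the second proportion is just an illustrative value; iterations reduced for speed):
iters <- 10000
n  <- 23000
p1 <- 0.0027
p2 <- 0.0040                       # hypothetical alternative
x1i <- rbinom(iters, n, p1)
x2i <- rbinom(iters, n, p2)
pval <- rep(NA, iters)
for (i in 1:iters) {
   pval[i] <- chisq.test(matrix(c(x1i[i], n-x1i[i], x2i[i], n-x2i[i]),
                                nrow=2, byrow=TRUE), correct=FALSE)$p.value
}
round(mean(pval <= .05), 3)        # empirical power at alpha = .05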
Whenever I have doubts about the performance of a particular method, I try to run a simulation study to examine how well the method works under similar conditions. Below is a simple example using R for the case you are describing. Note that I set the true proportions equal for the two groups and to a value that is somewhere in between what you actually observed in the two samples. Therefore, the simulation provides the empirical Type I error rate of the test. It should hopefully be close to .05. Setting the number of iterations large enough will ensure that the simulation error is small. Also, note that I run the test once without and once with Yates' continuity correction to see whether this is relevant here.
iters <- 100000
n <- 23000
p <- 0.0027
x1i <- rbinom(iters, n, p)
x2i <- rbinom(iters, n, p)
pval1 <- rep(NA, iters)
pval2 <- rep(NA, iters)
for (i in 1:iters) {
pval1[i] <- chisq.test(matrix(c(x1i[i], n-x1i[i], x2i[i], n-x2i[i]), nrow=2, byrow=TRUE), correct=FALSE)$p.value
pval2[i] <- chisq.test(matrix(c(x1i[i], n-x1i[i], x2i[i], n-x2i[i]), nrow=2, byrow=TRUE), correct=TRUE)$p.value
}
round(mean(pval1 <= .05), 3)
round(mean(pval2 <= .05), 3)
Here are the results from one run:
> round(mean(pval1 <= .05), 3)
[1] 0.05
> round(mean(pval2 <= .05), 3)
[1] 0.04
So, the test performs nominally when not using Yates' continuity correction. With the correction, the test is slightly conservative.
If you want to find out about the power of the test, you can set the true proportions to two different values and then rerun the simulation. | is the z-test for difference of proportions valid for massive samples with tiny proportions?
Whenever I have doubts about the performance of a particular method, I try to run a simulation study to examine how well the method works under similar conditions. Below is a simple example using R fo |
55,351 | How can I do a correlation between Likert scale and an ordinal categorical measure? | What about one of the Kendall's $\tau$s? They are a kind of rank correlation coefficient for ordinal data.
Here's an example with Stata and $\tau_{b}$. A value of $−1$ implies perfect negative association, and $+1$ indicates perfect agreement. Zero indicates the absence of association. Here we see a modest, though significant, negative association between speed limits and accidents.
. webuse hiway, clear
(Minnesota Highway Data, 1973)
. tab spdlimit rate, taub
| Accident rate per million
Speed | vehicle miles
Limit | Below 4 4-7 Above 7 | Total
-----------+---------------------------------+----------
40 | 1 0 0 | 1
45 | 1 1 1 | 3
50 | 1 4 2 | 7
55 | 10 4 1 | 15
60 | 9 2 0 | 11
65 | 1 0 0 | 1
70 | 1 0 0 | 1
-----------+---------------------------------+----------
Total | 24 11 4 | 39
Kendall's tau-b = -0.4026 ASE = 0.116
You can also try an asymmetric modification of $\tau_{b}$ that only corrects for ties of the independent variable. This is called Somers' D:
. somersd rate spdlimit
Somers' D with variable: rate
Transformation: Untransformed
Valid observations: 39
Symmetric 95% CI
------------------------------------------------------------------------------
| Jackknife
rate | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
spdlimit | -.4727723 .1395719 -3.39 0.001 -.7463282 -.1992163
------------------------------------------------------------------------------
All these measures of association are related in that they classify all pairs of observations (highways in our example) as concordant or discordant. A pair is concordant if the observation with the larger value of variable $X$ (speed limit) also has the larger value of variable $Y$ (accident rate). There are more of them than you can shake a stick at (one more is Goodman and Kruskal's $\gamma$, which ignores ties altogether like $\tau_{a}$). They will generally yield similar conclusions, even if they are not directly comparable.
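If you want to see the pair-counting logic directly, here is a small R sketch with two made-up ordinal vectors (base R's cor() is shown only as a cross-check):
x <- c(1, 1, 2, 2, 3, 3, 4)             # hypothetical ordinal codes
y <- c(1, 2, 1, 2, 2, 3, 3)
idx <- combn(length(x), 2)              # all pairs of observations
s <- sign(x[idx[1, ]] - x[idx[2, ]]) * sign(y[idx[1, ]] - y[idx[2, ]])
C <- sum(s > 0); D <- sum(s < 0)        # concordant and discordant pairs (tied pairs give 0)
(C - D) / (C + D)                       # Goodman and Kruskal's gamma: ignores tied pairs
cor(x, y, method = "kendall")           # Kendall's tau from base R, for comparison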
The results above are qualitatively in line with Spearman's rank correlation coefficient mentioned by Greg (which tends to be larger in absolute value than $\tau$):
.ci2 rate spdlimit, spearman
Confidence interval for Spearman's rank correlation
of rate and spdlimit, based on Fisher's transformation.
Correlation = -0.451 on 39 observations (95% CI: -0.671 to -0.158)
This measure does not consider pairs, but compares the similarity of the ordering that you would get if you used each variable separately to rank observations (Stata breaks ties by assigning the average rank, and it's just Pearson correlation on the ranks). This makes it somewhat faster to compute since you don't have to consider all $\frac{n \cdot (n-1)}{2}$ pairs. On the other hand, the central limit theorem works much faster for $\tau$, so if you plan to do inference that measure might be better. $\tau_b$ is the most common variant. | How can I do a correlation between Likert scale and an ordinal categorical measure? | What about one of the Kendall's $\tau$s? They are a kind of rank correlation coefficient for ordinal data.
Here's an example with Stata and $\tau_{b}$. A value of $−1$ implies perfect negative associ | How can I do a correlation between Likert scale and an ordinal categorical measure?
What about one of the Kendall's $\tau$s? They are a kind of rank correlation coefficient for ordinal data.
Here's an example with Stata and $\tau_{b}$. A value of $−1$ implies perfect negative association, and $+1$ indicates perfect agreement. Zero indicates the absence of association. Here we see a modest, though significant, negative association between speed limits and accidents.
. webuse hiway, clear
(Minnesota Highway Data, 1973)
. tab spdlimit rate, taub
| Accident rate per million
Speed | vehicle miles
Limit | Below 4 4-7 Above 7 | Total
-----------+---------------------------------+----------
40 | 1 0 0 | 1
45 | 1 1 1 | 3
50 | 1 4 2 | 7
55 | 10 4 1 | 15
60 | 9 2 0 | 11
65 | 1 0 0 | 1
70 | 1 0 0 | 1
-----------+---------------------------------+----------
Total | 24 11 4 | 39
Kendall's tau-b = -0.4026 ASE = 0.116
You can also try an asymmetric modification of $\tau_{b}$ that only corrects for ties of the independent variable. This is called Somers' D:
. somersd rate spdlimit
Somers' D with variable: rate
Transformation: Untransformed
Valid observations: 39
Symmetric 95% CI
------------------------------------------------------------------------------
| Jackknife
rate | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
spdlimit | -.4727723 .1395719 -3.39 0.001 -.7463282 -.1992163
------------------------------------------------------------------------------
All these measure of association are related in that they classify all pairs of observations (highways in our example) as concordant or discordant. A pair is concordant if the observation with the larger value of variable $X$ (speed limit) also has the larger value of variable $Y$ (accident rate). There are more of them than you can shake a stick at (one more is Goodman and Kruskal's $\gamma$, which ignores ties altogether like $\tau_{a}$). They will generally yield similar conclusions, even if they are not directly comparable.
The results above are qualitatively in line with Spearman's rank correlation coefficient mentioned by Greg (which tends to be larger in absolute value than $\tau$):
.ci2 rate spdlimit, spearman
Confidence interval for Spearman's rank correlation
of rate and spdlimit, based on Fisher's transformation.
Correlation = -0.451 on 39 observations (95% CI: -0.671 to -0.158)
This measure does not consider pairs, but compares the similarity of the ordering that you would get if you used each variable separately to rank observations (Stata breaks ties by assigning the average rank, and it's just Pearson correlation on the ranks). This makes it somewhat faster to compute since you don't have to consider all $\frac{n \cdot (n-1)}{2}$ pairs. On the other hand, the central limit theorem works much faster for $\tau$, so if you plan to do inference that measure might be better. $\tau_b$ is the most common variant. | How can I do a correlation between Likert scale and an ordinal categorical measure?
What about one of the Kendall's $\tau$s? They are a kind of rank correlation coefficient for ordinal data.
Here's an example with Stata and $\tau_{b}$. A value of $−1$ implies perfect negative associ |
55,352 | How can I do a correlation between Likert scale and an ordinal categorical measure? | Spearman rank correlation produces an interpretable measure of correlation when both measrues are ordinal. | How can I do a correlation between Likert scale and an ordinal categorical measure? | Spearman rank correlation produces an interpretable measure of correlation when both measrues are ordinal. | How can I do a correlation between Likert scale and an ordinal categorical measure?
Spearman rank correlation produces an interpretable measure of correlation when both measures are ordinal. | How can I do a correlation between Likert scale and an ordinal categorical measure?
Spearman rank correlation produces an interpretable measure of correlation when both measures are ordinal.
55,353 | What is the name of this perceptron-like classifier? | The name ADALINE (ADaptive LInear NEuron) refers both to the physical implementation of an early classifier and to a specific design.
See: http://en.wikipedia.org/wiki/ADALINE
Apparently McCulloch-Pitts perceptrons came first. ADALINE was a variation on this that used a linear response function, as opposed to a Heaviside step. ADALINE is fitted using gradient descent because its output is the dot product of the weights and the inputs, effectively with a linear transfer function. The original McCulloch-Pitts neuron had a Heaviside step transfer function, and so couldn't use gradient descent (I had this the wrong way round in my original answer). More general artificial neurons apply any transfer function and so, if differentiable, can be fitted with gradient descent.
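For concreteness, here is a minimal R sketch of fitting a single tanh unit with squared loss by plain gradient descent (the data, learning rate and number of passes are made up for illustration; this is not meant as anyone's reference implementation):
set.seed(1)
X <- cbind(1, matrix(rnorm(200), ncol = 2))                # intercept + two inputs
y <- ifelse(drop(X %*% c(0.5, 2, -1)) + rnorm(100, sd = .3) > 0, 1, -1)  # targets in {-1, 1}
w <- rep(0, 3); eta <- 0.1
for (epoch in 1:500) {
  out  <- tanh(drop(X %*% w))                              # neuron output
  grad <- -t(X) %*% ((y - out) * (1 - out^2)) / nrow(X)    # gradient of mean squared loss (up to a factor of 2)
  w    <- w - eta * drop(grad)
}
mean(sign(tanh(drop(X %*% w))) == y)                       # training accuracy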
As to being "equally confused about the use of perceptron" then really a perceptron is just a linear classifier. The main difference between a perceptron and other classifiers like logistic regression etc. is that they are "trained" using online algorithms - that is you can give them one datapoint at a time. Each input/response pair that you give it updates the weights and should make it a better classifier. In the early days the connection between perceptrons and logistic regression (and other classifiers) was not clear, but these days it is understood that they do the same thing, and you can "train" logistic regression one point at a time if you wish. Typically perceptrons are now discussed as the elements of larger neural networks or multilayer perceptrons. Some sources suggest that a perceptron must have a binary output, but then other sources on multilayer perceptrons don't enforce this.
For safety I would suggest you refer to your model as an artificial neuron. It isn't ADALINE, and it's not an original McCulloch-Pitts; it might be a perceptron, but it is probably best to refer to it as an artificial neuron.
Incidentally, Information Theory, Inference and Learning Algorithms by D. MacKay has almost exactly your case as an example (chapter 39) and refers to it simply as a "Single Neuron". | What is the name of this perceptron-like classifier? | The name ADALINE (ADaptive LInear NEuron) come from both the physical implementation of an early classifier, but it is also the name specific design.
See: http://en.wikipedia.org/wiki/ADALINE
Apparent | What is the name of this perceptron-like classifier?
The name ADALINE (ADaptive LInear NEuron) refers both to the physical implementation of an early classifier and to a specific design.
See: http://en.wikipedia.org/wiki/ADALINE
Apparently McCulloch-Pitts perceptrons came first. ADALINE was a variation on this that used a linear response function, as opposed to heaviside step. ADALINE is fitted using gradient decent because its output is the dot product of the weights and the inputs, effectively with a linear transfer function. The original McCulloch-Pitts neuron had a heaviside step transfer function, and so couldn't use gradient descent (I had this the wrong way round in my original answer). More general artificial neurons apply any transfer function and so, if differentiable, can be fitted with gradient descent.
As to being "equally confused about the use of perceptron" then really a perceptron is just a linear classifier. The main difference between a perceptron and other classifiers like logistic regression etc. is that they are "trained" using online algorithms - that is you can give them one datapoint at a time. Each input/response pair that you give it updates the weights and should make it a better classifier. In the early days the connection between perceptrons and logistic regression (and other classifiers) was not clear, but these days it is understood that they do the same thing, and you can "train" logistic regression one point at a time if you wish. Typically perceptrons are now discussed as the elements of larger neural networks or multilayer perceptrons. Some sources suggest that a perceptron must have a binary output, but then other sources on multilayer perceptrons don't enforce this.
For safety I would suggest you refer to your model as an artificial neuron. It isn't ADALINE, and it's not an original McCulloch-Pitts; it might be a perceptron, but it is probably best to refer to it as an artificial neuron.
Incidentally, Information Theory, Inference and Learning Algorithms by D. MacKay has almost exactly your case as an example (chapter 39) and refers to it simply as a "Single Neuron". | What is the name of this perceptron-like classifier?
The name ADALINE (ADaptive LInear NEuron) refers both to the physical implementation of an early classifier and to a specific design.
See: http://en.wikipedia.org/wiki/ADALINE
Apparent |
55,354 | What is the name of this perceptron-like classifier? | What you describe is essentially just logistic regression with a scaled output using squared loss rather than the usual log loss. Notice that $\tanh(x) = 2\sigma(2x) - 1$ where
$$
\sigma(x) = \frac{1}{1 + e^{-x}}
$$
is the logistic function. The decision boundary will still be linear. | What is the name of this perceptron-like classifier? | What you describe is essentially just logistic regression with a scaled output using squared loss rather than the usual log loss. Notice that $\tanh(x) = 2\sigma(x) - 1$ where
$$
\sigma(x) = \frac{1}{ | What is the name of this perceptron-like classifier?
What you describe is essentially just logistic regression with a scaled output using squared loss rather than the usual log loss. Notice that $\tanh(x) = 2\sigma(2x) - 1$ where
$$
\sigma(x) = \frac{1}{1 + e^{-x}}
$$
is the logistic function. The decision boundary will still be linear. | What is the name of this perceptron-like classifier?
What you describe is essentially just logistic regression with a scaled output using squared loss rather than the usual log loss. Notice that $\tanh(x) = 2\sigma(2x) - 1$ where
$$
\sigma(x) = \frac{1}{ |
55,355 | Using Anselin Local Moran's I Values in Regression | There seems to be some confusion around what exactly the local Moran's I values are, so lets review what they are and then evaluate if they can be given any reasonable interpretation in a regression equation.
In ESRI's notation, I believe you are talking about putting the $z_{I_i}$ in the regression equation, or perhaps a dummy variable to signify if that observation is identified to be an outlying High-High, Low-Low value etc. Placing a $z_{I_i}$ value on the right hand side of a regression equation amounts to essentially the same interpretation as does any standardized variable (which is certainly not meaningless), although one would preferably examine both the standardized and unstandardized versions. Dummy values for high-high, low-low values I would hesitate to use, although I believe some work by Sergio Rey considers them as the outcome variable when analyzing transitions between states in a temporal system (so it isn't out of the realm of possibilities, but they are so heavily processed that interpreting them would be a challenge).
To put a face on this example, lets consider some example data on a 4 by 4 grid. Here I index the values by letters on the column and row.
A B C D
A 5 17 1 6
B 3 10 3 7
C 6 1 11 12
D 2 0 3 4
Now what exactly is a Local Moran's I value? Well, we first need to define what local means, and the typical way to do that is to specify a spatial weights matrix that intrinsically relates any particular value to its neighbors via a weight. Here we unfold each unique spatial observation to have its own row in a data matrix, and then define each observation's relationship to every other observation in an $N$ by $N$ square matrix. Here the first value refers to the column and the second value refers to the row (so AC means column A and row C). The unfolded values are as below, and let's refer to this column vector of values as $x$.
x
AA 5
AB 3
AC 6
AD 2
BA 17
BB 10
BC 1
BD 0
CA 1
CB 2
CC 11
CD 3
DA 6
DB 7
DC 12
DD 4
The example below shows only one type of spatial weights matrix, a row standardized contiguity matrix. Here I define contiguity based on how a Rook moves, and so only cells that share a side of the original observation are neighbors. I also weight the association by dividing 1 by the total number of neighbors (I will go into further detail on why this type of spatial weight matrix, in which the row values sum to 1, has a nice interpretation). Let's refer to this matrix as $W$
AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD
AA 0 1/2 0 0 1/2 0 0 0 0 0 0 0 0 0 0 0
AB 1/3 0 1/3 0 0 1/3 0 0 0 0 0 0 0 0 0 0
AC 0 1/3 0 1/3 0 0 1/3 0 0 0 0 0 0 0 0 0
AD 0 0 1/2 0 0 0 0 1/2 0 0 0 0 0 0 0 0
BA 1/3 0 0 0 0 1/3 0 0 1/3 0 0 0 0 0 0 0
BB 0 1/4 0 0 1/4 0 1/4 0 0 1/4 0 0 0 0 0 0
BC 0 0 1/4 0 0 1/4 0 1/4 0 0 1/4 0 0 0 0 0
BD 0 0 0 1/3 0 0 1/3 0 0 0 0 1/3 0 0 0 0
CA 0 0 0 0 1/3 0 0 0 0 1/3 0 0 1/3 0 0 0
CB 0 0 0 0 0 1/4 0 0 1/4 0 0 1/4 0 1/4 0 0
CC 0 0 0 0 0 0 1/4 0 0 1/4 0 1/4 0 0 1/4 0
CD 0 0 0 0 0 0 0 1/3 0 0 1/3 0 0 0 0 1/3
DA 0 0 0 0 0 0 0 0 1/2 0 0 0 0 1/2 0 0
DB 0 0 0 0 0 0 0 0 0 1/3 0 0 1/3 0 1/3 0
DC 0 0 0 0 0 0 0 0 0 0 1/3 0 0 1/3 0 1/3
DD 0 0 0 0 0 0 0 0 0 0 0 1/2 0 0 1/2 0
To define local I ESRI uses the notation in terms of individual units, but for some simplicity lets just consider some matrix algebra. If we pre-multiply our column vector $x$ by $W$, we end up with a new column vector of the same length that is equal to a local weighted average of neighboring values. To see what is going on in simpler steps, lets just consider the dot product of our $x$ column vector and the first row of our weights matrix, which amounts to;
$$
\begin{bmatrix}
0 & 0.5 & 0 & 0 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\cdot
\begin{bmatrix}
5 \\
3 \\
6 \\
2 \\
17 \\
10 \\
1 \\
0 \\
1 \\
2 \\
11 \\
3 \\
6 \\
7 \\
12 \\
4 \\
\end{bmatrix}
= 10$$
If you go through the individual operations on this you will see that this dot product with the row-standardized weights matrix amounts to the average of the neighboring values for each individual observation. The operation of multiplying $W \cdot x$ just amounts to estimating the dot product of every spatial weight row and the column vector $x$ combination just like this.
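If you prefer not to construct $W$ by hand, the same weighted average can be obtained with the spdep package (a sketch; note that the ordering of the x vector has to match the cell ordering that cell2nb uses):
library(spdep)
x  <- c(5, 3, 6, 2, 17, 10, 1, 0, 1, 2, 11, 3, 6, 7, 12, 4)  # the unfolded values
nb <- cell2nb(4, 4, type = "rook")      # rook contiguity on a 4 x 4 grid
lw <- nb2listw(nb, style = "W")         # row-standardised weights, rows sum to 1
lag.listw(lw, x)                        # W %*% x: the average of each cell's neighbours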
How this relates to the Local I values, and why your $I_i$ values are sometimes negative, is that we typically consider Local I values as a decomposition of the global Moran's I test, in which case we don't evaluate the actual local weighted average, but rather deviations from the overall average. We then further standardize this value by dividing the Local deviations by the standard deviation of that average, which then essentially gives Z-scores. Admittedly standardized scores aren't always straightforward to interpret in regression analysis (they are sometimes useful to compare to other coefficients on intrinsically different scales), but that critique doesn't apply to simply the weighted average of the neighbors.
Consider the case where the x values above are quadrat cells (just an arbitrary square grid) over Raccoon city, and the counts are the estimated number of known offenders living in those particular quadrats. From criminological theory it is certainly reasonable to expect the number of crimes in a quadrat is not only a function of the number of offenders in the local quadrat, but the number of offenders in nearby quadrats as well. In that situation having both effects in the equation is both logical and provides a useful interpretation.
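In model form, that idea is just an ordinary regression with the spatially lagged predictor added (a hedged sketch with simulated counts standing in for the offender and crime data; the variable names are made up):
library(spdep)
set.seed(1)
nb <- cell2nb(4, 4, type = "rook")
lw <- nb2listw(nb, style = "W")
offenders <- rpois(16, 5)                        # hypothetical offender counts per quadrat
crimes    <- rpois(16, 2 + offenders)            # hypothetical crime counts
offenders.lag <- lag.listw(lw, offenders)        # neighbouring offender counts (weighted average)
summary(lm(crimes ~ offenders + offenders.lag))  # local effect plus neighbourhood effect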
Now, things to consider in addition to this are the fact that more general spatial models, as Corey suggests, will likely be needed. It is often the case in such spatial models that there still exists spatial auto-correlation in the residuals. Corey's suggested reference is essential a spatial error model, which does not easily generalize to incorporating spatial effects of the independent variables. A spatial-Durbin model does though. I would highly suggest to read the first 3 chapters of Lesage and Pace's Introduction to Spatial Econometrics. | Using Anselin Local Moran's I Values in Regression | There seems to be some confusion around what exactly the local Moran's I values are, so lets review what they are and then evaluate if they can be given any reasonable interpretation in a regression e | Using Anselin Local Moran's I Values in Regression
There seems to be some confusion around what exactly the local Moran's I values are, so lets review what they are and then evaluate if they can be given any reasonable interpretation in a regression equation.
In ESRI's notation, I believe you are talking about putting the $z_{I_i}$ in the regression equation, or perhaps a dummy variable to signify if that observation is identified to be an outlying High-High, Low-Low value etc. Placing a $z_{I_i}$ value on the right hand side of a regression equation amounts to essentially the same interpretation as does any standardized variable (which is certainly not meaningless), although one would preferably examine both the standardized and unstandardized versions. Dummy values for high-high, low-low values I would hestitate to use, although I believe some work by Sergio Rey considers them as the outcome variable as analyses transitions between the states in a temporal system (so it isn't out of the realm of possibilites, but they are so processed interpreting them would be a challenge).
To put a face on this example, lets consider some example data on a 4 by 4 grid. Here I index the values by letters on the column and row.
A B C D
A 5 17 1 6
B 3 10 3 7
C 6 1 11 12
D 2 0 3 4
Now what exactly is a Local Moran's I value? Well we first need to define what local means, and the typical way to do that is to specify a spatial weights matrix that intrinsically relates any particular value to its neighbors via a weight. Here we unfold each unique spatial observation to have its own row in a data matrix, and then define each observations relationship to every other observation in a $N$ by $N$ square matrix. Here the first value refers to the colum and the second value refers to the row (so AC means column A and row C). The unfolded values are as below, and lets refer to this column vector of values as $x$.
x
AA 5
AB 3
AC 6
AD 2
BA 17
BB 10
BC 1
BD 0
CA 1
CB 2
CC 11
CD 3
DA 6
DB 7
DC 12
DD 4
The example below shows only one type of spatial weights matrix, a row standardized contiguity matrix. Here I define contiguity based how a Rook moves, and so only cells that share a side of the original observation are neighbors. I also weight the association by dividing 1 by the total number of neighbors (I will go onto further detail to say why this is type of spatial weight matrix in which the row values sum to 1 have a nice interpretaion). Let's refer to this matrix as $W$
AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD
AA 0 1/2 0 0 1/2 0 0 0 0 0 0 0 0 0 0 0
AB 1/3 0 1/3 0 0 1/3 0 0 0 0 0 0 0 0 0 0
AC 0 1/3 0 1/3 0 0 1/3 0 0 0 0 0 0 0 0 0
AD 0 0 1/2 0 0 0 0 1/2 0 0 0 0 0 0 0 0
BA 1/3 0 0 0 0 1/3 0 0 1/3 0 0 0 0 0 0 0
BB 0 1/4 0 0 1/4 0 1/4 0 0 1/4 0 0 0 0 0 0
BC 0 0 1/4 0 0 1/4 0 1/4 0 0 1/4 0 0 0 0 0
BD 0 0 0 1/3 0 0 1/3 0 0 0 0 1/3 0 0 0 0
CA 0 0 0 0 1/3 0 0 0 0 1/3 0 0 1/3 0 0 0
CB 0 0 0 0 0 1/4 0 0 1/4 0 0 1/4 0 1/4 0 0
CC 0 0 0 0 0 0 1/4 0 0 1/4 0 1/4 0 0 1/4 0
CD 0 0 0 0 0 0 0 1/3 0 0 1/3 0 0 0 0 1/3
DA 0 0 0 0 0 0 0 0 1/2 0 0 0 0 1/2 0 0
DB 0 0 0 0 0 0 0 0 0 1/3 0 0 1/3 0 1/3 0
DC 0 0 0 0 0 0 0 0 0 0 1/3 0 0 1/3 0 1/3
DD 0 0 0 0 0 0 0 0 0 0 0 1/2 0 0 1/2 0
To define local I ESRI uses the notation in terms of individual units, but for some simplicity lets just consider some matrix algebra. If we pre-multiply our column vector $x$ by $W$, we end up with a new column vector of the same length that is equal to a local weighted average of neighboring values. To see what is going on in simpler steps, lets just consider the dot product of our $x$ column vector and the first row of our weights matrix, which amounts to;
$$
\begin{bmatrix}
0 & 0.5 & 0 & 0 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\cdot
\begin{bmatrix}
5 \\
3 \\
6 \\
2 \\
17 \\
10 \\
1 \\
0 \\
1 \\
2 \\
11 \\
3 \\
6 \\
7 \\
12 \\
4 \\
\end{bmatrix}
= 10$$
If you go through the individual operations on this you will see that this dot product with the row-standardized weights matrix amounts to the average of the neighboring values for each individual observation. The operation of multiplying $W \cdot x$ just amounts to estimating the dot product of every spatial weight row and the column vector $x$ combination just like this.
How this relates to the Local I values, and why your $I_i$ values sometimes negative, is we typically consider Local I values as a decomposition of the global Moran's I test, in which case we don't evaluate the actual located weighted average, but as deviations from the average. We then further standardize this value by dividing the Local deviations by the standard deviation of that average, which then essentially gives Z-scores. Admittedly standardized scores aren't always straight forward to interpret in regression analysis (they are sometimes useful to compare to other coefficients on intrinsically different scales), but that critique doesn't apply to simply the weighted average of the neighbors.
Consider the case where the x values above are quadrat cells (just an arbitrary square grid) over Raccoon city, and the counts are the estimated number of known offenders living in those particular quadrats. From criminological theory it is certainly reasonable to expect the number of crimes in a quadrat is not only a function of the number of offenders in the local quadrat, but the number of offenders in nearby quadrats as well. In that situation having both effects in the equation is both logical and provides a useful interpretation.
Now, things to consider in addition to this are the fact that more general spatial models, as Corey suggests, will likely be needed. It is often the case in such spatial models that there still exists spatial auto-correlation in the residuals. Corey's suggested reference is essential a spatial error model, which does not easily generalize to incorporating spatial effects of the independent variables. A spatial-Durbin model does though. I would highly suggest to read the first 3 chapters of Lesage and Pace's Introduction to Spatial Econometrics. | Using Anselin Local Moran's I Values in Regression
There seems to be some confusion around what exactly the local Moran's I values are, so lets review what they are and then evaluate if they can be given any reasonable interpretation in a regression e |
55,356 | Using Anselin Local Moran's I Values in Regression | Why not just use a spatial regression model? That way you account for the dependency measured by Local Moran's I directly in the model. As an aside, I would not advise including the local I value in a model, nor would a reviewer, I trust. There is the topic of Moran Eigenvector filtering (http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/spdep/html/ME.html) that works well if you don't want to use a fully spatially specified regression model | Using Anselin Local Moran's I Values in Regression | Why not just use a spatial regression model? That way you account for the dependency measured by Local Moran's I directly in the model. As an aside, I would not advise including the local I value in | Using Anselin Local Moran's I Values in Regression
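To make that concrete, a minimal sketch of a fully specified spatial model (this assumes the spatialreg package and uses a toy grid with simulated data; replace these with your own neighbour structure and variables):
library(spdep); library(spatialreg)
nb <- cell2nb(4, 4, type = "rook")          # toy neighbour structure
lw <- nb2listw(nb, style = "W")
set.seed(1)
dat <- data.frame(x = rnorm(16))
dat$y <- 1 + 2*dat$x + rnorm(16)            # toy response
summary(errorsarlm(y ~ x, data = dat, listw = lw))   # spatial error model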
Why not just use a spatial regression model? That way you account for the dependency measured by Local Moran's I directly in the model. As an aside, I would not advise including the local I value in a model, nor would a reviewer, I trust. There is the topic of Moran Eigenvector filtering (http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/spdep/html/ME.html) that works well if you don't want to use a fully spatially specified regression model | Using Anselin Local Moran's I Values in Regression
Why not just use a spatial regression model? That way you account for the dependency measured by Local Moran's I directly in the model. As an aside, I would not advise including the local I value in |
55,357 | Intepretation of crossvalidation result - cv.glm() | Similar to what mambo said, the delta values are useful to compare this model with alternative models. You might, for example, plot the delta values of this vs. comparable models to see which produce the lowest MSE (delta). The first value of delta is the standard k-fold estimate and the second is bias corrected. | Intepretation of crossvalidation result - cv.glm() | Similar to what mambo said, the delta values are useful to compare this model with alternative models. You might, for example, plot the delta values of this vs. comparable models to see which produce | Intepretation of crossvalidation result - cv.glm()
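For example (a small sketch with simulated data and two illustrative candidate models):
library(boot)
set.seed(1)
d <- data.frame(x = rnorm(200))
d$y <- 1 + 2*d$x + rnorm(200)
m1 <- glm(y ~ x, data = d)
m2 <- glm(y ~ poly(x, 3), data = d)
cv.glm(d, m1, K = 10)$delta    # c(raw K-fold MSE, bias-corrected MSE)
cv.glm(d, m2, K = 10)$delta    # compare deltas across candidate models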
Similar to what mambo said, the delta values are useful to compare this model with alternative models. You might, for example, plot the delta values of this vs. comparable models to see which produce the lowest MSE (delta). The first value of delta is the standard k-fold estimate and the second is bias corrected. | Intepretation of crossvalidation result - cv.glm()
Similar to what mambo said, the delta values are useful to compare this model with alternative models. You might, for example, plot the delta values of this vs. comparable models to see which produce |
55,358 | Intepretation of crossvalidation result - cv.glm() | I started digging through the code for the boot package and found the function cv.glm() at https://github.com/cran/boot/blob/5b1e0fea4d1ab1716f2226d673e981d669495b75/R/bootfuns.q#L825, as well as going through Introduction to Statistical Learning by James et al. I haven't gotten to the $K$-fold CV section yet, but here's my understanding...
The first component of delta is the average mean-squared error that you obtain from doing $K$-fold CV.
The second component of delta is the average mean-squared error that you obtain from doing $K$-fold CV, but with a bias correction. How this is achieved is, initially, the cost (by default the mean squared error) is computed based on the GLM predicted values and the actual response values for the entire data set. As you're going through the $K$ folds, you generate a training model, and then you compute the cost between the entire data set of $y$-values (not just the held-out fold) and the predicted values from the training model. Each of these costs, weighted by the fraction of observations in the fold, is then subtracted from the initial cost. After you're done going through your $K$ folds, you will have subtracted $K$ such terms, and the second component of delta is the ordinary $K$-fold estimate plus this adjusted quantity.
I'm hoping this is right, as this is how I'm interpreting the code.
Here is the code snippet, for your reference. Thankfully, it appears that this code is mostly self-contained.
sample0 <- function(x, ...) x[sample.int(length(x), ...)]
cv.glm <- function(data, glmfit, cost=function(y,yhat) mean((y-yhat)^2),
K=n)
{
# cross-validation estimate of error for glm prediction with K groups.
# cost is a function of two arguments: the observed values and the
# the predicted values.
call <- match.call()
if (!exists(".Random.seed", envir=.GlobalEnv, inherits = FALSE)) runif(1)
seed <- get(".Random.seed", envir=.GlobalEnv, inherits = FALSE)
n <- nrow(data)
if ((K > n) || (K <= 1))
stop("'K' outside allowable range")
K.o <- K
K <- round(K)
kvals <- unique(round(n/(1L:floor(n/2))))
temp <- abs(kvals-K)
if (!any(temp == 0))
K <- kvals[temp == min(temp)][1L]
if (K!=K.o) warning(gettextf("'K' has been set to %f", K), domain = NA)
f <- ceiling(n/K)
s <- sample0(rep(1L:K, f), n)
n.s <- table(s)
# glm.f <- formula(glmfit)
glm.y <- glmfit$y
cost.0 <- cost(glm.y, fitted(glmfit))
ms <- max(s)
CV <- 0
Call <- glmfit$call
for(i in seq_len(ms)) {
j.out <- seq_len(n)[(s == i)]
j.in <- seq_len(n)[(s != i)]
## we want data from here but formula from the parent.
Call$data <- data[j.in, , drop=FALSE]
d.glm <- eval.parent(Call)
p.alpha <- n.s[i]/n
cost.i <- cost(glm.y[j.out],
predict(d.glm, data[j.out, , drop=FALSE],
type = "response"))
CV <- CV + p.alpha * cost.i
cost.0 <- cost.0 - p.alpha *
cost(glm.y, predict(d.glm, data, type = "response"))
}
list(call = call, K = K,
delta = as.numeric(c(CV, CV + cost.0)), # drop any names
seed = seed)
} | Intepretation of crossvalidation result - cv.glm() | I started digging through the code for the boot package and found the function cv.glm() at https://github.com/cran/boot/blob/5b1e0fea4d1ab1716f2226d673e981d669495b75/R/bootfuns.q#L825, as well as goin | Intepretation of crossvalidation result - cv.glm()
I started digging through the code for the boot package and found the function cv.glm() at https://github.com/cran/boot/blob/5b1e0fea4d1ab1716f2226d673e981d669495b75/R/bootfuns.q#L825, as well as going through Introduction to Statistical Learning by James et al. I haven't gotten to the $K$-fold CV section yet, but here's my understanding...
The first component of delta is the average mean-squared error that you obtain from doing $K$-fold CV.
The second component of delta is the average mean-squared error that you obtain from doing $K$-fold CV, but with a bias correction. How this is achieved is, initially, the residual sum of squares (RSS) is computed based on the GLM predicted values and the actual response values for the entire data set. As you're going through the $K$ folds, you generate a training model, and then you compute the RSS between the entire data set of $y$-values (not just the training set) and the predicted values from the training model. These resulting RSS values are then subtracted from the initial RSS. After you're done going through your $K$ folds, you will have subtracted $K$ values from the initial RSS. This is the second component of delta.
I'm hoping this is right, as this is how I'm interpreting the code.
Here is the code snippet, for your reference. Thankfully, it appears that this code is mostly self-contained.
sample0 <- function(x, ...) x[sample.int(length(x), ...)]
cv.glm <- function(data, glmfit, cost=function(y,yhat) mean((y-yhat)^2),
K=n)
{
# cross-validation estimate of error for glm prediction with K groups.
# cost is a function of two arguments: the observed values and the
# the predicted values.
call <- match.call()
if (!exists(".Random.seed", envir=.GlobalEnv, inherits = FALSE)) runif(1)
seed <- get(".Random.seed", envir=.GlobalEnv, inherits = FALSE)
n <- nrow(data)
if ((K > n) || (K <= 1))
stop("'K' outside allowable range")
K.o <- K
K <- round(K)
kvals <- unique(round(n/(1L:floor(n/2))))
temp <- abs(kvals-K)
if (!any(temp == 0))
K <- kvals[temp == min(temp)][1L]
if (K!=K.o) warning(gettextf("'K' has been set to %f", K), domain = NA)
f <- ceiling(n/K)
s <- sample0(rep(1L:K, f), n)
n.s <- table(s)
# glm.f <- formula(glmfit)
glm.y <- glmfit$y
cost.0 <- cost(glm.y, fitted(glmfit))
ms <- max(s)
CV <- 0
Call <- glmfit$call
for(i in seq_len(ms)) {
j.out <- seq_len(n)[(s == i)]
j.in <- seq_len(n)[(s != i)]
## we want data from here but formula from the parent.
Call$data <- data[j.in, , drop=FALSE]
d.glm <- eval.parent(Call)
p.alpha <- n.s[i]/n
cost.i <- cost(glm.y[j.out],
predict(d.glm, data[j.out, , drop=FALSE],
type = "response"))
CV <- CV + p.alpha * cost.i
cost.0 <- cost.0 - p.alpha *
cost(glm.y, predict(d.glm, data, type = "response"))
}
list(call = call, K = K,
delta = as.numeric(c(CV, CV + cost.0)), # drop any names
seed = seed)
} | Intepretation of crossvalidation result - cv.glm()
I started digging through the code for the boot package and found the function cv.glm() at https://github.com/cran/boot/blob/5b1e0fea4d1ab1716f2226d673e981d669495b75/R/bootfuns.q#L825, as well as goin |
55,359 | Intepretation of crossvalidation result - cv.glm() | This might be useful for the understanding of prediction errors (delta):
R crossvalidation cv.glm: prediction error and confidence interval
This answer from AdamO was particularly helpful:
"Prediction errors are different from standard errors in two critical ways.
Prediction errors provide intervals for predicted values, i.e. values which could be observed in the outcome controlling for some or all of the variation (through conditioning) in the predictors. Standard errors provide intervals for estimated statistics, e.g. parameters which are never truly observed. Continuously valued parameters such as log odds ratios in a logistic regression model can create "prediction intervals" for binary outcomes in the form of a confusion matrix (this is natural for Bayesians).
Prediction errors do not vanish in large n whereas confidence intervals do. This is because no amount of sampling will reduce the variability inherent in a single observation drawn from the data generating mechanism. Prediction errors do decrease in large n however, since the precision of the estimated predictive model improves. Confidence intervals do vanish in large n as a result of the central limit theorem (usu.). This is because sampling the universe repeatedly would yield the exact same thing with 0 variation." | Intepretation of crossvalidation result - cv.glm() | This might be useful for the understanding of prediction errors (delta):
R crossvalidation cv.glm: prediction error and confidence interval
This answer from AdamO was particularly helpful:
"Prediction | Intepretation of crossvalidation result - cv.glm()
This might be useful for the understanding of prediction errors (delta):
R crossvalidation cv.glm: prediction error and confidence interval
This answer from AdamO was particularly helpful:
"Prediction errors are different from standard errors in two critical ways.
Prediction errors provide intervals for predicted values, i.e. values which could be observed in the outcome controlling for some or all of the variation (through conditioning) in the predictors. Standard errors provide intervals for estimated statistics, e.g. parameters which are never truly observed. Continuously valued parameters such as log odds ratios in a logistic regression model can create "prediction intervals" for binary outcomes in the form of a confusion matrix (this is natural for Bayesians).
Prediction errors do not vanish in large n whereas confidence intervals do. This is because no amount of sampling will reduce the variability inherent in a single observation drawn from the data generating mechanism. Prediction errors do decrease in large however, since the precision of the estimated predictive model improves. Confidence intervals do vanish in large as a result of the central limit theorem (usu.). This is because sampling the universe repeatedly would yield the exact same thing with 0 variation." | Intepretation of crossvalidation result - cv.glm()
This might be useful for the understanding of prediction errors (delta):
R crossvalidation cv.glm: prediction error and confidence interval
This answer from AdamO was particularly helpful:
"Prediction |
55,360 | Intepretation of crossvalidation result - cv.glm() | I found this online, which helps explain what delta is:
http://home.strw.leidenuniv.nl/~jarle/IAC/Tasks/IAC-lecture4-homework.pdf
It seems to me that the comparative values of delta between models are of importance rather than the absolute values. | Intepretation of crossvalidation result - cv.glm() | I found this online, which helps explain what delta is:
http://home.strw.leidenuniv.nl/~jarle/IAC/Tasks/IAC-lecture4-homework.pdf
It seems to me that that comparative values of delta between models ar | Intepretation of crossvalidation result - cv.glm()
I found this online, which helps explain what delta is:
http://home.strw.leidenuniv.nl/~jarle/IAC/Tasks/IAC-lecture4-homework.pdf
It seems to me that that comparative values of delta between models are of importance rather than the absolute values. | Intepretation of crossvalidation result - cv.glm()
I found this online, which helps explain what delta is:
http://home.strw.leidenuniv.nl/~jarle/IAC/Tasks/IAC-lecture4-homework.pdf
It seems to me that that comparative values of delta between models ar |
55,361 | Testing the race model inequality in R | UPDATE: All the custom functions rewritten in R and used in this answer are added at the bottom for better clarity.
I guess you probably knew where it went wrong. As you said
it seems right, but I'm not 100% sure
Yes. This is the start of the differences between your data and Ulrich et al's.
cx <- c(244, 249, 257, 260, 264, 268, 271, 274, 277, 291)
cy <- c(245, 246, 248, 250, 251, 252, 253, 254, 255, 259, 263, 265, 279, 282, 284, 319)
cz <- c(234, 238, 240, 240, 243, 243, 245, 251, 254, 256, 259, 270, 280)
Here is the correct psq:
psq <- probSpace(10); psq
[1] 0.05 0.15 0.25 0.35 0.45 0.55 0.65 0.75 0.85 0.95
Here is your psq:
psq <- seq(0,1,0.1); psq
[1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
Also (or should I say therefore), your gx, gy and gz are different from Ulrich et al's. Here I used Ulrich et al's ties function (of course, after rewriting it in R) to generate a data frame that stores (1) the unique response time (RT) data, (2) the number of occurrences of each RT datum, and (3) the cumulative frequency. This is in line with Ulrich et al's MATLAB function (note that they stored them as a structure, I suppose). These three pieces of information then serve as inputs to Ulrich et al's CDF function, which differs from R's quantile function. I rewrote their MATLAB CDF function to work with my ties data frame output. They in fact wrote their CDF function following the equation in Appendix A, not the equation (2) in the main article. This is probably the step you might not have noticed, and so could not work out the problem. Basically, they wanted to handle the problem of ties, so they wrote their own variation of the CDF function.
dfx <- ties(cx)
dfy <- ties(cy)
dfz <- ties(cz)
tmax <- max(cx,cy,cz)
gx <- cdf.ulrich(data=dfx, maximum=tmax)
gy <- cdf.ulrich(data=dfy, maximum=tmax)
gz <- cdf.ulrich(data=dfz, maximum=tmax)
I then added gx and gy together and stored the result in b. Because R does element-by-element calculation with the + operator, I did not bother to use a for loop as Ulrich et al. did. I then used Ulrich et al's GetPercentile function (rewritten in R) to calculate the percentiles for each individual CDF, including b. Also note that you need tmax to confine the domain of RT. This was possibly another place you had not noticed.
b <- gx + gy
xp <- GetPercentile(psq, gx, tmax);
yp <- GetPercentile(psq, gy, tmax);
zp <- GetPercentile(psq, gz, tmax);
bp <- GetPercentile(psq, b, tmax);
Finally, construct a data frame to draw the figure. Here is panel F of their Figure 1.
gdf <- data.frame(RT =c(xp,yp,zp,bp), Probability =rep(psq, 4),
Condition =rep(c("gx(t)", "gy(t)","gz(t)","gx(t)+gy(t)"), each=length(xp)))
panelf <- ggplot(gdf, aes(x = RT, y = Probability, group=Condition,
colour=Condition, shape=Condition)) +
geom_point() + geom_line()
panelf + coord_cartesian(xlim = c(230, 330), ylim=c(-.01,1.01)) +
theme(legend.position= c(.85, .20),
legend.title = element_text(size=12),
legend.text = element_text(size=12))
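If you also want the inequality check itself made explicit, here is a small addition (not part of the original code) using the objects computed above: the race model inequality is violated at the probability levels where the redundant-condition percentiles (zp) are smaller than the percentiles of the bound (bp). The paper's formal test aggregates such differences across participants, so treat this as a quick numerical check for a single data set.
violation <- zp < bp
data.frame(Probability = psq, Redundant = zp, Bound = bp, Violation = violation)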
I hope this is helpful.
Also, I should mention that you may need to load the required libraries (here I have used ggplot2 and grid) and source those home-made functions, if you did not put them together in one RaceModel function as Ulrich et al. did. I suggest writing an individual file for each function (which is what I did) and sourcing them. Some of the functions you can translate directly from the equations in Ulrich et al's article, and others from their MATLAB or Pascal code. Note that MATLAB uses parentheses, (), to index vectors, but R uses square brackets, [].
FUNCTIONS
Here is the probSpace function. I wrote it according to Ulrich et al's equation (3) (p. 293) by myself, so you probably won't be able to find it in any R packages.
probSpace <- function(len){
# PROBSPACE Determine the interval of percentiles
# The function used equation (3) in Ulrich, Miller and Schroter (2007)
P <- numeric(len);
for(i in 1:len){
P[i] <- (i - .5) / len;
}
return(P)
}
Here is Ulrich et al.'s (2007) CDF function (p. 296 and 299). I wrote it according to their MATLAB code and the equation in Appendix A.
cdf.ulrich <- function(data=NULL, maximum=3000){
# Create a container, whose length is the longest data vector
# data is the output from `ties` function
U <- data[,1]
R <- data[,2]
C <- data[,3]
G <- numeric(maximum);
# The length of the processed data vector, after trimming off ties, if there are any.
k <- length(U); # U contains data in milliseconds, e.g., 320 ms etc.
# The last element of the cumulative frequency is supposedly the
# length of the data vector.
n <- C[k]
for(i in 1:k) { U[i] <- round(U[i]) }
# from 1 ms to the first value of the data set, their probability should be 0.
for(t in 1:U[1]) { G[t] <- 0 }
for(t in U[1]:U[2]){
G[t] <- ( R[1]/2 + (R[1]+R[2]) / 2*(t-U[1]) / (U[2] - U[1]) ) / n;
}
for(i in 2:(k-1)){
for(t in U[i]:U[i+1]){
G[t] <- (C[i-1] + R[i] / 2+(R[i] +R[i+1]) / 2*(t-U[i]) / (U[i+1] - U[i])) / n;
}
}
for(t in U[k]:maximum){
G[t] <- 1;
}
return(G)
}
Below are the ties() and GetPercentile() functions written by Ulrich et al. (2007). I merely rewrote them into R, based on their description in the article, the equations and MATLAB codes they provided.
Firstly, the ties function:
ties <- function(W){
# Count number k of unique values
# and store these values in U.
U <- NULL; W <- sort(W); n = length(W); k = 1; U[1] <- W[1]
for (i in 2:n) {
if (W[i] != W[i-1]) {
k <- k+1;
U <- cbind(U, W[i])
}
}
U <- U[1,]
# Determine number of replications R
# k is the length of the vector, after trimming off the ties
R <- numeric(k)
for (i in 1:k){
for (j in 1:n){
if (U[i] == W[j]) R[i] <- R[i] + 1;
}
}
# Determine the cumulative frequency
C <- numeric(k)
C[1] <- R[1]
for(i in 2:k){
C[i] <- C[i-1] + R[i];
}
res <- list(U, R, C)
names(res) <- c("U", "R", "C")
return(as.data.frame(res))
}
Then, the GetPercentile function:
GetPercentile <- function( P, G, tmax ){
# Determine minimum of |G(Tp[i]) - P[i]|
np <- length(P);
Tp <- numeric(np)
for( i in 1:np) {
cc <- 100;
for(t in 1:tmax) {
if ( abs(G[t] - P[i]) < cc ) {
c <- t;
cc <- abs(G[t] - P[i]);
}
}
if( P[i] > G[c] ){
Tp[i] <- c + (P[i] - G[c]) / (G[c+1] - G[c]);
} else {
Tp[i] <- c + (P[i] - G[c]) / (G[c] - G[c-1]);
}
}
return( Tp )
} | Testing the race model inequality in R | UPDATE: All the custom versions of functions rewritten in R and used in this answer are added in the bottom for better clarity.
I guess. You probably knew where's gone wrong. As you said
it seems ri | Testing the race model inequality in R
UPDATE: All the custom versions of functions rewritten in R and used in this answer are added in the bottom for better clarity.
I guess you probably knew where it went wrong. As you said
it seems right, but I'm not 100% sure
Yes. This is the start of the differences between your data and Ulrich et al's.
cx <- c(244, 249, 257, 260, 264, 268, 271, 274, 277, 291)
cy <- c(245, 246, 248, 250, 251, 252, 253, 254, 255, 259, 263, 265, 279, 282, 284, 319)
cz <- c(234, 238, 240, 240, 243, 243, 245, 251, 254, 256, 259, 270, 280)
Here is the correct psq:
psq <- probSpace(10); psq
[1] 0.05 0.15 0.25 0.35 0.45 0.55 0.65 0.75 0.85 0.95
Here is your psq:
psq <- seq(0,1,0.1); psq
[1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
Also (or should I say therefore), your gx, gy and gz are different from Ulrich et al's. Here I used Ulrich et al's ties function (of course, after rewriting it in R) to generate a data frame that stores (1) the unique response time (RT) values, (2) the number of replications of each RT value, and (3) the cumulative frequency. This is in line with Ulrich et al's MATLAB function (note that they stored them as a structure, I suppose). These three pieces of information then serve as inputs to Ulrich et al's CDF function, which differs from R's quantile function. I rewrote their MATLAB CDF function to work with the output of my ties function. They in fact wrote their CDF function following the equation in Appendix A, not equation (2) in the main article. This is probably the step you did not notice, and why your code did not work out. Basically, they wanted to handle the problem of ties, so they wrote their own variation of the CDF function.
dfx <- ties(cx)
dfy <- ties(cy)
dfz <- ties(cz)
tmax <- max(cx,cy,cz)
gx <- cdf.ulrich(data=dfx, maximum=tmax)
gy <- cdf.ulrich(data=dfy, maximum=tmax)
gz <- cdf.ulrich(data=dfz, maximum=tmax)
I then added gx and gy together and stored the result in b. Because R does element-by-element calculation with the + operator, I did not bother to use a for loop as Ulrich et al. did. I then used Ulrich et al's GetPercentile function (rewritten in R) to calculate the percentiles for each individual CDF, including b. Also note that you need tmax to confine the domain of RT. This was possibly another place you had not noticed.
b <- gx + gy
xp <- GetPercentile(psq, gx, tmax);
yp <- GetPercentile(psq, gy, tmax);
zp <- GetPercentile(psq, gz, tmax);
bp <- GetPercentile(psq, b, tmax);
Finally, construct a data frame to draw the figure. Here is panel F of their Figure 1.
gdf <- data.frame(RT =c(xp,yp,zp,bp), Probability =rep(psq, 4),
Condition =rep(c("gx(t)", "gy(t)","gz(t)","gx(t)+gy(t)"), each=length(xp)))
panelf <- ggplot(gdf, aes(x = RT, y = Probability, group=Condition,
colour=Condition, shape=Condition)) +
geom_point() + geom_line()
panelf + coord_cartesian(xlim = c(230, 330), ylim=c(-.01,1.01)) +
theme(legend.position= c(.85, .20),
legend.title = element_text(size=12),
legend.text = element_text(size=12))
I hope this is helpful.
Also, I should mention that you may need to load the required libraries (here I have used ggplot2 and grid) and source those home-made functions, if you did not put them together in one RaceModel function as Ulrich et al. did. I suggest writing an individual file for each function (which is what I did) and sourcing them. Some of the functions you can translate directly from the equations in Ulrich et al's article, and others from their MATLAB or Pascal code. Note that MATLAB uses parentheses, (), to index vectors, but R uses square brackets, [].
FUNCTIONS
Here is the probSpace function. I wrote it according to Ulrich et al's equation (3) (p. 293) by myself, so you probably won't be able to find it in any R packages.
probSpace <- function(len){
# PROBSPACE Determine the interval of percentiles
# The function used equation (3) in Ulrich, Miller and Schroter (2007)
P <- numeric(len);
for(i in 1:len){
P[i] <- (i - .5) / len;
}
return(P)
}
Here is Ulrich et al.'s (2007) CDF function (p. 296 and 299). I wrote it according to their MATLAB code and the equation in Appendix A.
cdf.ulrich <- function(data=NULL, maximum=3000){
# Create a container, whose length is the longest data vector
# data is the output from `ties` function
U <- data[,1]
R <- data[,2]
C <- data[,3]
G <- numeric(maximum);
# The length of the processed data vector, after trimming off ties, if there are any.
k <- length(U); # U contains data in milliseconds, e.g., 320 ms etc.
# The last element of the cumulative frequency is supposedly the
# length of the data vector.
n <- C[k]
for(i in 1:k) { U[i] <- round(U[i]) }
# from 1 ms to the first value of the data set, their probability should be 0.
for(t in 1:U[1]) { G[t] <- 0 }
for(t in U[1]:U[2]){
G[t] <- ( R[1]/2 + (R[1]+R[2]) / 2*(t-U[1]) / (U[2] - U[1]) ) / n;
}
for(i in 2:(k-1)){
for(t in U[i]:U[i+1]){
G[t] <- (C[i-1] + R[i] / 2+(R[i] +R[i+1]) / 2*(t-U[i]) / (U[i+1] - U[i])) / n;
}
}
for(t in U[k]:maximum){
G[t] <- 1;
}
return(G)
}
Below are the ties() and GetPercentile() functions written by Ulrich et al. (2007). I merely rewrote them into R, based on their description in the article, the equations and MATLAB codes they provided.
Firstly, the ties function:
ties <- function(W){
# Count number k of unique values
# and store these values in U.
U <- NULL; W <- sort(W); n = length(W); k = 1; U[1] <- W[1]
for (i in 2:n) {
if (W[i] != W[i-1]) {
k <- k+1;
U <- cbind(U, W[i])
}
}
U <- U[1,]
# Determine number of replications R
# k is the length of the vector, after trimming off the ties
R <- numeric(k)
for (i in 1:k){
for (j in 1:n){
if (U[i] == W[j]) R[i] <- R[i] + 1;
}
}
# Determine the cumulative frequency
C <- numeric(k)
C[1] <- R[1]
for(i in 2:k){
C[i] <- C[i-1] + R[i];
}
res <- list(U, R, C)
names(res) <- c("U", "R", "C")
return(as.data.frame(res))
}
Then, the GetPercentile function:
GetPercentile <- function( P, G, tmax ){
# Determine minimum of |G(Tp[i]) - P[i]|
np <- length(P);
Tp <- numeric(np)
for( i in 1:np) {
cc <- 100;
for(t in 1:tmax) {
if ( abs(G[t] - P[i]) < cc ) {
c <- t;
cc <- abs(G[t] - P[i]);
}
}
if( P[i] > G[c] ){
Tp[i] <- c + (P[i] - G[c]) / (G[c+1] - G[c]);
} else {
Tp[i] <- c + (P[i] - G[c]) / (G[c] - G[c-1]);
}
}
return( Tp )
} | Testing the race model inequality in R
UPDATE: All the custom versions of functions rewritten in R and used in this answer are added in the bottom for better clarity.
I guess. You probably knew where's gone wrong. As you said
it seems ri |
55,362 | Testing the race model inequality in R | In short, I think that at the stage of b you find yourself combining RTs when what you should be combining are proportions. This is speculative, but I took a stab below to get things rolling. I do love me some race models and have been meaning to implement one form or another in R for a while, but never got around to it. If you do complete this project, I hope you might share.
With those disclaimers in place, I'll add... I don't know MATLAB code... but I love speculating.
The MATLAB code to calculate the bounding sum is:
for t=1:tmax;
B(t)=Gx(t)+Gy(t);
end
This looks like a loop for all possible values of RT ranging from 1 to the maximum of cx, cy, and cz. The bounding sum therefore is not the sum of the RTs at various quantiles as suggested by your code above. In fact, in the MATLAB code it looks like Gx is the result of the CDF function across the full time-span. For example, from the mathworks website I can see that cdf('Normal',-2:2,0,1) is expected to yield 0.0228 0.1587 0.5000 0.8413 0.9772. But what the code actually calls is CDF, which is defined in Appendix B. I'd try to turn it into an R function, but without understanding MATLAB data structures it is kind of a nightmare. But, the description of what is happening is in the referred to text... "Following the standard procedures for percentile estimation (see, e.g., Gilchrist, 2000), each step function is next used to generate a corresponding cumulative frequency polygon... [etc]" ... "these values are stored, millisecond by millisecond, in the vectors Gx, Gy, and Gz in the MATLAB program shown in Appendix B". Hey, cool, Gx, Gy, and Gz are clearly results from CDF, so... we know how we need to specify CDF (in abstract). Now it is just a coding problem. I bet you can do that, if you don't, I love co-authorship (half - j/k).
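For what it is worth, assuming Gx and Gy are R vectors holding the millisecond-by-millisecond CDF values from 1 to tmax (the gx and gy of the other answer on this question), the MATLAB loop above reduces to a single vectorised line in R:
B <- Gx + Gy  # bounding sum, evaluated at every millisecond from 1 to tmax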
At any rate, I think this means that at this stage in your code b should be giving you something like the sum of proportions correct for various levels of RT not the sum of RT values (as you have coded). That seems a little odd to me and I imagine I'm missing a step because the proportion that will have completed the race when .5 are due to have completed race 1 and .5 are due to have completed race 2 is certainly not 1 for the joint race... but I might be wrong because the authors claim that "Because the sum is only compared against the values of Gz, the computation of the sum can be terminated at the first value of t for which the sum exceeds a value of 1.0"... which seems a little scary, but is certainly beyond the scope of the question and would mean presuming I know something the authors don't which seems unlikely. | Testing the race model inequality in R | In short, I think that at the stage of b you find yourself combining RTs when what you should be combining are proportions. This is speculative, but I took the stab below to get things rolling. I do | Testing the race model inequality in R
In short, I think that at the stage of b you find yourself combining RTs when what you should be combining are proportions. This is speculative, but I took a stab below to get things rolling. I do love me some race models and have been meaning to implement one form or another in R for a while, but never got around to it. If you do complete this project, I hope you might share.
With those disclaimers in place, I'll add... I don't know MATLAB code... but I love speculating.
The MATLAB code to calculate the bounding sum is:
for t=1:tmax;
B(t)=Gx(t)+Gy(t);
end
This looks like a loop for all possible values of RT ranging from 1 to the maximum of cx, cy, and cz. The bounding sum therefore is not the sum of the RTs at various quantiles as suggested by your code above. In fact, in the MATLAB code it looks like Gx is the result of the CDF function across the full time-span. For example, from the mathworks website I can see that cdf('Normal',-2:2,0,1) is expected to yield 0.0228 0.1587 0.5000 0.8413 0.9772. But what the code actually calls is CDF, which is defined in Appendix B. I'd try to turn it into an R function, but without understanding MATLAB data structures it is kind of a nightmare. But, the description of what is happening is in the referred to text... "Following the standard procedures for percentile estimation (see, e.g., Gilchrist, 2000), each step function is next used to generate a corresponding cumulative frequency polygon... [etc]" ... "these values are stored, millisecond by millisecond, in the vectors Gx, Gy, and Gz in the MATLAB program shown in Appendix B". Hey, cool, Gx, Gy, and Gz are clearly results from CDF, so... we know how we need to specify CDF (in abstract). Now it is just a coding problem. I bet you can do that, if you don't, I love co-authorship (half - j/k).
At any rate, I think this means that at this stage in your code b should be giving you something like the sum of proportions correct for various levels of RT not the sum of RT values (as you have coded). That seems a little odd to me and I imagine I'm missing a step because the proportion that will have completed the race when .5 are due to have completed race 1 and .5 are due to have completed race 2 is certainly not 1 for the joint race... but I might be wrong because the authors claim that "Because the sum is only compared against the values of Gz, the computation of the sum can be terminated at the first value of t for which the sum exceeds a value of 1.0"... which seems a little scary, but is certainly beyond the scope of the question and would mean presuming I know something the authors don't which seems unlikely. | Testing the race model inequality in R
In short, I think that at the stage of b you find yourself combining RTs when what you should be combining are proportions. This is speculative, but I took the stab below to get things rolling. I do |
55,363 | Sign of correlation of logged variables | No, without any additional assumption, knowing the (Pearson) correlation between $X$ and $Y$ does not give any clue about the (Pearson) correlation between $\log X$ and $\log Y$. See the following example in R:
x1 = c(10^-100, 1, 10^5)
x2 = c(1, 10^-100, 10^5)
cor(x1, x2) # = 1
cor(log(x1), log(x2)) # -0.4251781
(Here, $X_1$ and $X_2$ can take $3$ values with equal probabilities $\frac{1}{3}$.) | Sign of correlation of logged variables | No, without any additional assumption, knowing the (Pearson) correlation on $X$ and $Y$ does not give any clue on the (Pearson) correlation between $\log X$ and $\log Y$. See the following example in | Sign of correlation of logged variables
No, without any additional assumption, knowing the (Pearson) correlation between $X$ and $Y$ does not give any clue about the (Pearson) correlation between $\log X$ and $\log Y$. See the following example in R:
x1 = c(10^-100, 1, 10^5)
x2 = c(1, 10^-100, 10^5)
cor(x1, x2) # = 1
cor(log(x1), log(x2)) # -0.4251781
(Here, $X_1$ and $X_2$ can take $3$ values with equal probabilities $\frac{1}{3}$.) | Sign of correlation of logged variables
No, without any additional assumption, knowing the (Pearson) correlation on $X$ and $Y$ does not give any clue on the (Pearson) correlation between $\log X$ and $\log Y$. See the following example in |
55,364 | Sign of correlation of logged variables | We don't.
You can calculate an approximation via a Taylor series, which should work fairly well when X and Y have a small coefficient of variation or are close to normal. | Sign of correlation of logged variables | We don't.
You can calculate an approximation via a Taylor series, which should work fairly well when X and Y have a small coefficient of variation or are close to normal. | Sign of correlation of logged variables
We don't.
You can calculate an approximation via a Taylor series, which should work fairly well when X and Y have a small coefficient of variation or are close to normal. | Sign of correlation of logged variables
We don't.
You can calculate an approximation via a Taylor series, which should work fairly well when X and Y have a small coefficient of variation or are close to normal.
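To sketch the Taylor (delta-method) approximation being referred to, assume $X$ and $Y$ have positive means $\mu_X, \mu_Y$ and small coefficients of variation, so a first-order expansion of the logarithm around the means is reasonable:
$$\log X \approx \log \mu_X + \frac{X - \mu_X}{\mu_X}, \qquad \log Y \approx \log \mu_Y + \frac{Y - \mu_Y}{\mu_Y},$$
so that
$$\textrm{Cov}(\log X, \log Y) \approx \frac{\textrm{Cov}(X, Y)}{\mu_X \mu_Y}, \qquad \textrm{Var}(\log X) \approx \frac{\sigma_X^2}{\mu_X^2}, \qquad \textrm{Var}(\log Y) \approx \frac{\sigma_Y^2}{\mu_Y^2},$$
and hence
$$\textrm{Corr}(\log X, \log Y) \approx \textrm{Corr}(X, Y).$$
Under those assumptions the correlation, and in particular its sign, approximately carries over to the logged variables; the example in the previous answer shows how badly this can fail when the coefficients of variation are enormous.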
55,365 | Minimizing variance of an estimator under sampling cost penalty | You are asking a question about the expected value of sample information.
First, I suggest that you not define your loss function in terms of variance, at least not without further examination. Instead, identify why you care about error in your estimation procedure. What are the different ways of being wrong? What are the consequences of being wrong in each of those ways? Can you quantify those consequences in terms of units of value that make sense to you?
Then define your loss function as cost-of-testing + E[cost-of-error]. To describe losses in this way, you will need to specify a common unit of measure in which both kinds of loss are denominated. Explicitly or implicitly, your loss function will need to incorporate a specification of the "price" at which you would willingly trade off costs of testing for costs of error. This quantification may seem artificial: it may be that there is no obvious common unit in which both kinds of losses might be measured. Too bad: any loss function will define such terms of trade, either explicitly or implicitly. (In terms of intermediate microeconomics: a level set of the loss function defines an indifference curve in the space of arguments; the price is the slope of this indifference curve.) You are better off making this choice consciously, rather than as the logical consequence of an un-examined ad hoc assumption.
In my opinion (and I am right about this), a problem in statistical decision theory (which this is) is handled most naturally in a Bayesian framework, in which we treat unobserved parameters as random variables. Formally, let $\theta$ denote the true value of the variable whose value you're trying to estimate, e.g., the argument at which your search algorithm would find its true minimum. For any other value $\hat{\theta}$ in your parameter space, let $C(\hat{\theta}, \theta)$ denote the cost of error when $\hat{\theta}$ is used as an estimated value and $\theta$ is the true value. It could be that the cost of error depends only on the size of the difference (somehow defined) between the estimated and true values, e.g. $C(\hat{\theta}, \theta) = (\hat{\theta}- \theta)^2$, where the minus sign designates some measure of distance in parameter space.
Let $f(\theta)$ describe your beliefs about the likelihood that $\theta$ might take on various different values, expressed as a probability distribution over the parameter space. Let $f(\theta \mid x_{1:n})$ describe the posterior probability over $\theta$ that would obtain if your sample consisted of the values $x_{1:n}$. After you've done your sampling, you can compute your expected cost of error as a function of the sample you drew: $E[C \mid x_{1:n}] = \int \int C(\hat{\theta}, \theta) f(\hat{\theta} \mid x_{1:n}) \, d\hat{\theta} \, f(\theta \mid x_{1:n}) \, d\theta$, where both integrals are taken over the parameter space. Your total expected loss is then $E[C \mid x_{1:n}] + cn$.
If you are allowed to choose $n$ using a stopping rule, rather than specifying a choice in advance, you can employ dynamic programming to choose $n$ optimally. This answer to a previous question should offer useful guidance.
If you have to specify $n$ in advance, then you would employ preposterior analysis. Intuitively, you choose $n$ so that in expectation, the marginal benefit of the last test just offsets the cost of the last test. Formally, Let $f(\theta)$ describe your prior beliefs about the distribution of $\theta$, before any sampling has been done. Let $l(x_{1:n} \mid \theta)$ denote the likelihood of drawing a specific sample $x_{1:n}$, if $\theta$ were the true value of the parameter. Then before you've taken your first sample, you can already formulate an expression for your expected cost of error:
$\int \int E[C \mid \mathbf{x}] \, l(\mathbf{x} \mid \theta) \, f(\theta) \, d\mathbf{x} \, d\theta.$
Here, the inner integral is taken over the sample space, the outer integral is taken over the parameter space. You would choose $n$ to minimize the above expression plus $cn$, the cost of testing.
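As a toy illustration of this preposterior calculation (my own simplifying assumptions, not part of the original question): with normally distributed observations of known standard deviation sigma, a conjugate normal prior with standard deviation tau, squared-error cost of estimation error, and a cost c per observation, the expected cost of error after n observations is simply the posterior variance, so the total expected loss can be minimised over n directly.
sigma <- 2; tau <- 5; c_per_obs <- 0.01            # assumed values, for illustration only
n <- 0:200
expected_error_cost <- 1 / (1 / tau^2 + n / sigma^2)  # posterior variance after n observations
total_loss <- expected_error_cost + c_per_obs * n
n[which.min(total_loss)]                           # sample size minimising expected total loss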
Choosing $n$ dynamically is better, if possible, because you can equate the marginal benefit of the last test to its cost in every realization, rather than merely in expectation. | Minimizing variance of an estimator under sampling cost penalty | You are asking a question about the expected value of sample information.
First, I suggest that you not define your loss function in terms of variance, at least not without further examination. Instea | Minimizing variance of an estimator under sampling cost penalty
You are asking a question about the expected value of sample information.
First, I suggest that you not define your loss function in terms of variance, at least not without further examination. Instead, identify why you care about error in your estimation procedure. What are the different ways of being wrong? What are the consequences of being wrong in each of those ways? Can you quantify those consequences in terms of units of value that make sense to you?
Then define your loss function as cost-of-testing + E[cost-of-error]. To describe losses in this way, you will need to specify a common unit of measure in which both kinds of loss are denominated. Explicitly or implicitly, your loss function will need to incorporate a specification of the "price" at which you would willingly trade off costs of testing for costs of error. This quantification may seem artificial: it may be that there is no obvious common unit in which both kinds of losses might be measured. Too bad: any loss function will define such terms of trade, either explicitly or implicitly. (In terms of intermediate microeconomics: a level set of the loss function defines an indifference curve in the space of arguments; the price is the slope of this indifference curve.) You are better off making this choice consciously, rather than as the logical consequence of an un-examined ad hoc assumption.
In my opinion (and I am right about this), a problem in statistical decision theory (which this is) is handled most naturally in a Bayesian framework, in which we treat unobserved parameters as random variables. Formally, let $\theta$ denote the true value of the variable whose value you're trying to estimate, e.g., the argument at which your search algorithm would find its true minimum. For any other value $\hat{\theta}$ in your parameter space, let $C(\hat{\theta}, \theta)$ denote the cost of error when $\hat{\theta}$ is used as an estimated value and $\theta$ is the true value. It could be that the cost of error depends only on the size of the difference (somehow defined) between the estimated and true values, e.g. $C(\hat{\theta}, \theta) = (\hat{\theta}- \theta)^2$, where the minus sign designates some measure of distance in parameter space.
Let $f(\theta)$ describe your beliefs about the likelihood that $\theta$ might take on various different values, expressed as a probability distribution over the parameter space. Let $f(\theta \mid x_{1:n})$ describe the posterior probability over $\theta$ that would obtain if your sample consisted of the values $x_{1:n}$. After you've done your sampling, you can compute your expected cost of error as a function of the sample you drew: $E[C \mid x_{1:n}] = \int \int C(\hat{\theta}, \theta) f(\hat{\theta} \mid x_{1:n}) \, d\hat{\theta} \, f(\theta \mid x_{1:n}) \, d\theta$, where both integrals are taken over the parameter space. Your total expected loss is then $E[C \mid x_{1:n}] + cn$.
If you are allowed to choose $n$ using a stopping rule, rather than specifying a choice in advance, you can employ dynamic programming to choose $n$ optimally. This answer to a previous question should offer useful guidance.
If you have to specify $n$ in advance, then you would employ preposterior analysis. Intuitively, you choose $n$ so that in expectation, the marginal benefit of the last test just offsets the cost of the last test. Formally, Let $f(\theta)$ describe your prior beliefs about the distribution of $\theta$, before any sampling has been done. Let $l(x_{1:n} \mid \theta)$ denote the likelihood of drawing a specific sample $x_{1:n}$, if $\theta$ were the true value of the parameter. Then before you've taken your first sample, you can already formulate an expression for your expected cost of error:
$\int \int E[C \mid \mathbf{x}] \, l(\mathbf{x} \mid \theta) \, f(\theta) \, d\mathbf{x} \, d\theta.$
Here, the inner integral is taken over the sample space, the outer integral is taken over the parameter space. You would choose $n$ to minimize the above expression plus $cn$, the cost of testing.
Choosing $n$ dynamically is better, if possible, because you can equate the marginal benefit of the last test to its cost in every realization, rather than merely in expectation. | Minimizing variance of an estimator under sampling cost penalty
You are asking a question about the expected value of sample information.
First, I suggest that you not define your loss function in terms of variance, at least not without further examination. Instea |
55,366 | Minimizing variance of an estimator under sampling cost penalty | The cost vs. better precision/smaller variance trade-off has been studied in the sampling literature to some extent. The first historical result, dating back to the 1930s, is the Neyman-Chuprow optimal allocation in stratified sampling, which is the problem of allocating the sample between strata with different variances and unit costs. The solution has the form of
$$ n\propto \sqrt{ \mbox{population variance}/\mbox{cost} }$$
I imagine that you will end up with a similar expression, as well.
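As an illustration of the textbook cost-adjusted Neyman allocation behind this formula, with made-up stratum sizes, standard deviations and unit costs (all of the numbers below are assumptions, not from the question), the allocation to stratum h is proportional to N_h * S_h / sqrt(c_h):
N <- c(500, 300, 200)      # stratum sizes (assumed)
S <- c(4, 10, 2)           # stratum standard deviations (assumed)
cost <- c(1, 4, 1)         # per-unit sampling cost in each stratum (assumed)
n_total <- 100
alloc <- N * S / sqrt(cost)
round(n_total * alloc / sum(alloc))   # sample size allocated to each stratum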
To improve your precision without having to increase $n$ much, you would want to consider balanced sampling in your space. See Chapter 5 of Asmussen and Glynn (2000), for instance. | Minimizing variance of an estimator under sampling cost penalty | The cost vs. better precision/smaller variance trade-off has been studied in sampling literature to some extent. The first historical result dating back to 1930s is Neymann-Chuprow optimal allocation | Minimizing variance of an estimator under sampling cost penalty
The cost vs. better precision/smaller variance trade-off has been studied in the sampling literature to some extent. The first historical result, dating back to the 1930s, is the Neyman-Chuprow optimal allocation in stratified sampling, which is the problem of allocating the sample between strata with different variances and unit costs. The solution has the form of
$$ n\propto \sqrt{ \mbox{population variance}/\mbox{cost} }$$
I imagine that you will end up with a similar expression, as well.
To improve your precision without having to increase $n$ much, you would want to consider balanced sampling in your space. See Chapter 5 of Asmussen and Glynn (2000), for instance. | Minimizing variance of an estimator under sampling cost penalty
The cost vs. better precision/smaller variance trade-off has been studied in sampling literature to some extent. The first historical result dating back to 1930s is Neymann-Chuprow optimal allocation |
55,367 | Basic Bayesian MCMC to estimate two parameters from binomial distributions given unknown number of trials | Model and Pseudocode
So I did some analysis in Python, though I used the pyMC library which hides all the MCMC mathy stuff. I'll show you how I modeled it in semi-pseudocode, and the results.
I set my observed data as $X=5, Y=10$.
X = 5
Y = 10
I assumed that $N$ has a Poisson prior, with the Poisson's rate itself given an $EXP(1)$ prior. This is a pretty fair prior, though I could have chosen some uniform distribution on some interval:
rate = Exponential( mu = 1 )
N = Poisson( rate = rate)
You mention beta priors on $pX$ and $pY$, so I coded:
pX = Beta(1,1) #equivalent to a uniform
pY = Beta(1,1)
And I combine it all:
observed = Binomial(n = N, p = [pX, pY], value = [X, Y] )
Then I perform the MCMC over 50000 samples, discarding about half of them as burn-in. Below are the plots I generated after MCMC.
Interpretation:
Let's examine the first graph for $N$. The N Trace graph shows the samples, in order, that I generated from the posterior distribution. The N acorr graph is the auto-correlation between samples. Perhaps there is still too much auto-correlation, and I should burn in more. Finally, N-hist is the histogram of posterior samples. It looks like the mean is 13. Notice too that no samples were drawn from below 10. This is a good sign, as such values would be impossible given that the observed data were 5 and 10.
Similar observations can be made for the $pX$ and $pY$ graphs.
Different Prior on $N$
If we restrict $N$ to be a Poisson( 20 ) random variable (and remove the Exponential hierarchy), we get different results. This is an important consideration, and reveals that the prior can make a large difference. See the plots below. Note that the time to convergence was much longer here too.
On the other hand, using a Poisson( 10 ) prior produced similar results to the Exp. rate prior. | Basic Bayesian MCMC to estimate two parameters from binomial distributions given unknown number of t | Model and Pseudocode
So I did some analysis in Python, though I used the pyMC library which hides all the MCMC mathy stuff. I'll show you how I modeled it in semi-pseudocode, and the results.
I set m | Basic Bayesian MCMC to estimate two parameters from binomial distributions given unknown number of trials
Model and Pseudocode
So I did some analysis in Python, though I used the pyMC library which hides all the MCMC mathy stuff. I'll show you how I modeled it in semi-pseudocode, and the results.
I set my observed data as $X=5, Y=10$.
X = 5
Y = 10
I assumed that $N$ has a Poisson prior, with the Poisson's rate itself given an $EXP(1)$ prior. This is a pretty fair prior, though I could have chosen some uniform distribution on some interval:
rate = Exponential( mu = 1 )
N = Poisson( rate = rate)
You mention beta priors on $pX$ and $pY$, so I coded:
pX = Beta(1,1) #equivalent to a uniform
pY = Beta(1,1)
And I combine it all:
observed = Binomial(n = N, p = [pX, pY], value = [X, Y] )
Then I perform the MCMC over 50000 samples, discarding about half of them as burn-in. Below are the plots I generated after MCMC.
Interpretation:
Let's examine the first graph for $N$. The N Trace graph shows the samples, in order, that I generated from the posterior distribution. The N acorr graph is the auto-correlation between samples. Perhaps there is still too much auto-correlation, and I should burn in more. Finally, N-hist is the histogram of posterior samples. It looks like the mean is 13. Notice too that no samples were drawn from below 10. This is a good sign, as such values would be impossible given that the observed data were 5 and 10.
Similar observations can be made for the $pX$ and $pY$ graphs.
Different Prior on $N$
If we restrict $N$ to be a Poisson( 20 ) random variable (and remove the Exponential hierarchy), we get different results. This is an important consideration, and reveals that the prior can make a large difference. See the plots below. Note that the time to convergence was much longer here too.
On the other hand, using a Poisson( 10 ) prior produced similar results to the Exp. rate prior. | Basic Bayesian MCMC to estimate two parameters from binomial distributions given unknown number of t
Model and Pseudocode
So I did some analysis in Python, though I used the pyMC library which hides all the MCMC mathy stuff. I'll show you how I modeled it in semi-pseudocode, and the results.
I set m |
55,368 | Increasing the sample size does not help the classification performance | Increasing the training size does not necessarily help the classifier; it may instead lead to a degradation in generalization ability.
Regarding your own experiment, the factor behind such an unexpected degradation in performance despite the increase in training size could be one of the following:
1- Randomness:
Simply, if you run the experiment again, you may see a different result from the one you have. This is only if the classifier is using any random approach in training.
2- Parameter Optimization:
For example, in SVM, while increasing the training size, if the data is not linearly separable, you may need to increase the values of the slack variables (@Douglas). This parameter optimization helps in accounting for any new training point that violates the linear separability of the space.
3- Overfitting:
Training some classifiers for a longer time, or using extra training points, may lead to good performance on the training data but worse performance on the testing part. This is because your classifier may fit the training points so closely that it becomes difficult for it to predict new points with different characteristics.
4- Experiment Design:
It would be more indicative to run your experiment more than once, on different parts of the data (cross-validation), and report the scores. In this case, we will have a mean accuracy and a standard deviation, which are more realistic indicators of the behaviour you observe.
My advice is to run the same experiment again in the same setting. If you get a different result, check the random part of your code. Then, even if you get the same result, use cross-validation. Finally, you may tune some of the parameters of the SVM. | Increasing the sample size does not help the classification performance | Increasing the training size does not necessarily help the classifier; it may instead lead to a degradation in generalization ability.
Regarding your own experiment, the factor of such unexpected deg | Increasing the sample size does not help the classification performance
Increasing the training size does not necessarily help the classifier; it may instead lead to a degradation in generalization ability.
Regarding your own experiment, the factor behind such an unexpected degradation in performance despite the increase in training size could be one of the following:
1- Randomness:
Simply, if you run the experiment again, you may see a different result from the one you have. This is only if the classifier is using any random approach in training.
2- Parameter Optimization:
For example, in SVM, while increasing the training size, if the data is not linearly separable, you may need to increase the values of the slack variables (@Douglas). This parameter optimization helps in accounting for any new training point that violates the linear separability of the space.
3- Overfitting:
Training some classifiers for a longer time, or using extra training points, may lead to good performance on the training data but worse performance on the testing part. This is because your classifier may fit the training points so closely that it becomes difficult for it to predict new points with different characteristics.
4- Experiment Design:
It would be more indicative to run your experiment more than once, on different parts of the data (cross-validation), and report the scores. In this case, we will have a mean accuracy and a standard deviation, which are more realistic indicators of the behaviour you observe.
My advice is to run the same experiment again in the same setting. If you get a different result, check the random part of your code. Then, even if you get the same result, use cross-validation. Finally, you may tune some of the parameters of the SVM. | Increasing the sample size does not help the classification performance
Increasing training size does not neccessarily help the classifier and rather, may lead to a degradation in the generalization ability.
Regarding your own experiment, the factor of such unexpected deg |
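To make point 4 above concrete, here is a minimal R sketch of cross-validated accuracy reported with a mean and standard deviation, using e1071's svm on the built-in iris data purely as a stand-in for your own data and classifier:
library(e1071)
set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(iris)))   # random 5-fold assignment
acc <- sapply(1:5, function(k) {
  fit <- svm(Species ~ ., data = iris[folds != k, ])                        # train on 4 folds
  mean(predict(fit, iris[folds == k, ]) == iris$Species[folds == k])        # accuracy on the held-out fold
})
c(mean = mean(acc), sd = sd(acc))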
55,369 | Increasing the sample size does not help the classification performance | One possibility is that the data is not linearly separable, or the best linear separation doesn't give the best classifier. So, a common approach is to use a soft margin. The amount of slack should be increased with the size of the training set. If you aren't doing this, then you may get worse results with more data. | Increasing the sample size does not help the classification performance | One possibility is that the data is not linearly separable, or the best linear separation doesn't give the best classifier. So, a common approach is to use a soft margin. The amount of slack should be | Increasing the sample size does not help the classification performance
One possibility is that the data is not linearly separable, or the best linear separation doesn't give the best classifier. So, a common approach is to use a soft margin. The amount of slack should be increased with the size of the training set. If you aren't doing this, then you may get worse results with more data. | Increasing the sample size does not help the classification performance
One possibility is that the data is not linearly separable, or the best linear separation doesn't give the best classifier. So, a common approach is to use a soft margin. The amount of slack should be |
55,370 | Entropy-based methods in R | Well there is a package which implements entropy-based methods and it is called .... entropy.
More information: http://cran.r-project.org/web/packages/entropy/ | Entropy-based methods in R | Well there is a package which implements entropy-based methods and it is called .... entropy.
More information: http://cran.r-project.org/web/packages/entropy/ | Entropy-based methods in R
Well there is a package which implements entropy-based methods and it is called .... entropy.
More information: http://cran.r-project.org/web/packages/entropy/ | Entropy-based methods in R
Well there is a package which implements entropy-based methods and it is called .... entropy.
More information: http://cran.r-project.org/web/packages/entropy/ |
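A quick illustration of the package's main function, with made-up bin counts: entropy() estimates the Shannon entropy of a discrete distribution from a vector of counts, and "ML" is the plug-in maximum-likelihood estimator (the package also offers other estimators via the method argument).
library(entropy)
counts <- c(4, 2, 3, 0, 2, 4, 0, 0, 2, 1)   # observed counts in 10 bins (made up)
entropy(counts, method = "ML")              # plug-in estimate of the Shannon entropy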
55,371 | Entropy-based methods in R | I haven't done this personally, but a very good reference for R packages is the R Graphical Manual
A search on that site of "empirical likelihood" gave several results, including EEF.profile from the boot package. There are also some packages that claim entropy methods (FNN), although I don't know precisely what you are looking for. I would check that site. Be as specific as you can with the search bar. | Entropy-based methods in R | I haven't done this personally, but a very good reference for R packages is the R Graphical Manual
A search on that site of "empirical likelihood" gave several results, including EEF.profile from the | Entropy-based methods in R
I haven't done this personally, but a very good reference for R packages is the R Graphical Manual
A search on that site of "empirical likelihood" gave several results, including EEF.profile from the boot package. There are also some packages that claim entropy methods (FNN), although I don't know precisely what you are looking for. I would check that site. Be as specific as you can with the search bar. | Entropy-based methods in R
I haven't done this personally, but a very good reference for R packages is the R Graphical Manual
A search on that site of "empirical likelihood" gave several results, including EEF.profile from the |
55,372 | Fisher overall p-value vs. pairwise comparisons | You are correct to be suspicious and you are correct that problems arise from some of the low cell counts in this case. However, there is nothing wrong with Fisher's test itself. We just need to be careful in interpreting its results.
Let's review the data:
0 1 Total
Site 1 7 2 | 9
Site 2 95 9 | 104
Site 3 0 1 | 1
--------------+-----
Totals 102 12 | 114
Fisher's test sums the probabilities of all configurations of the data that are (a) consistent with the row and column totals and (b) have probabilities no greater than that of the observed table (under the null hypothesis of no row-column association).
Suppose the one result for Site 3 were not included. Fisher's test, applied to the first two rows only, gives a p-value of $0.2123$--far from "significant" evidence of any association within the first two sites. Consider now the effect of including that single value from Site 3. There are only two ways to maintain the value of $1$ for that row total: either the $1$ appears in the left column or in the right and a $0$ appears in the other entry. Because the column totals are 102 and 12, the null hypothesis suggests that the $1$ should appear in the left column with a frequency of $102/114$ and in the right column with a frequency of only $12/114$. The former case actually weakens the evidence of a row-column association and so would tend to elevate the p-value, whereas the latter case--which is what actually is observed--strengthens the evidence of an association and decreases the p-value.
At this point I will make an incorrect but suggestive observation: if the p-value for the test of the first two rows actually were a probability (of the null hypothesis being true), we could update this probability (in a Bayesian sense) by multiplying the odds. The odds of the data for Site 3 are 12:102, whence
$$0.2123 / (1 - 0.2123) \times 12 / 102 = 0.0317.$$
This corresponds to a new probability or "p-value" of $0.0307$--remarkably close to the two-sided p-value of $0.0287$ obtained for the full table.
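For reference, both p-values quoted in this answer can be checked directly in R; the following should reproduce approximately 0.029 for the full table and approximately 0.212 for Sites 1 and 2 alone:
tab <- matrix(c(7, 95, 0,
                2, 9, 1),
              nrow = 3, dimnames = list(paste("Site", 1:3), c("0", "1")))
fisher.test(tab)$p.value          # full 3 x 2 table
fisher.test(tab[1:2, ])$p.value   # Sites 1 and 2 only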
Whether we believe this intuition or not, the discrepancy in p-values is telling us that the apparently significant result for the full table is due almost entirely to the single observation obtained at Site 3.
Do you really want to draw a conclusion about the first two sites based on a single result from a third, different site? It is difficult to imagine a setting in which this would be wise. Instead, you might conclude something like this:
Almost all the data were obtained at Sites 1 and 2. Most of the observations (102 out of 114) were "zeros" (the left column's attribute). They do not show significant evidence of an association with the columns (Fisher's Exact Test, p = 0.212). A single value obtained at a third Site was one of the relatively rare "ones" (the right column's attribute). Including this observation creates the appearance of an association in the entire table (Fisher's Exact Test, p = 0.029). This may be taken as a (very) weak initial suggestion that Site 3 might differ from Sites 1 and 2 in having a greater tendency to exhibit "ones." | Fisher overall p-value vs. pairwise comparisons | You are correct to be suspicious and you are correct that problems arise from some of the low cell counts in this case. However, there is nothing wrong with Fisher's test itself. We just need to be | Fisher overall p-value vs. pairwise comparisons
You are correct to be suspicious and you are correct that problems arise from some of the low cell counts in this case. However, there is nothing wrong with Fisher's test itself. We just need to be careful in interpreting its results.
Let's review the data:
0 1 Total
Site 1 7 2 | 9
Site 2 95 9 | 104
Site 3 0 1 | 1
--------------+-----
Totals 102 12 | 114
Fisher's test sums the probabilities of all configurations of the data that are (a) consistent with the row and column totals and (b) have probabilities no greater than that of the observed table (under the null hypothesis of no row-column association).
Suppose the one result for Site 3 were not included. Fisher's test, applied to the first two rows only, gives a p-value of $0.2123$--far from "significant" evidence of any association within the first two sites. Consider now the effect of including that single value from Site 3. There are only two ways to maintain the value of $1$ for that row total: either the $1$ appears in the left column or in the right and a $0$ appears in the other entry. Because the column totals are 102 and 12, the null hypothesis suggests that the $1$ should appear in the left column with a frequency of $102/114$ and in the right column with a frequency of only $12/114$. The former case actually weakens the evidence of a row-column association and so would tend to elevate the p-value, whereas the latter case--which is what actually is observed--strengthens the evidence of an association and decreases the p-value.
At this point I will make an incorrect but suggestive observation: if the p-value for the test of the first two rows actually were a probability (of the null hypothesis being true), we could update this probability (in a Bayesian sense) by multiplying the odds. The odds of the data for Site 3 are 12:102, whence
$$0.2123 / (1 - 0.2123) \times 12 / 102 = 0.0317.$$
This corresponds to a new probability or "p-value" of $0.0307$--remarkably close to the two-sided p-value of $0.0287$ obtained for the full table.
Whether we believe this intuition or not, the discrepancy in p-values is telling us that the apparently significant result for the full table is due almost entirely to the single observation obtained at Site 3.
Do you really want to draw a conclusion about the first two sites based on a single result from a third, different site? It is difficult to imagine a setting in which this would be wise. Instead, you might conclude something like this:
Almost all the data were obtained at Sites 1 and 2. Most of the observations (102 out of 114) were "zeros" (the left column's attribute). They do not show significant evidence of an association with the columns (Fisher's Exact Test, p = 0.212). A single value obtained at a third Site was one of the relatively rare "ones" (the right column's attribute). Including this observation creates the appearance of an association in the entire table (Fisher's Exact Test, p = 0.029). This may be taken as a (very) weak initial suggestion that Site 3 might differ from Sites 1 and 2 in having a greater tendency to exhibit "ones." | Fisher overall p-value vs. pairwise comparisons
You are correct to be suspicious and you are correct that problems arise from some of the low cell counts in this case. However, there is nothing wrong with Fisher's test itself. We just need to be |
55,373 | Software for median polishing | Well R has medpolish built in, and it can deal with some level of missingness:
> a # some data
[,1] [,2] [,3] [,4]
[1,] 32.45884 29.50403 38.54330 30.06207
[2,] 27.92059 25.00838 NA 13.93309
[3,] 37.91911 23.98091 36.00139 27.73731
[4,] 29.20283 29.68059 18.41809 29.92471
[5,] NA 30.98312 23.55309 22.63105
[6,] 24.96472 33.52443 24.85243 37.43364
The medpolish command is simple:
> medpolish(a,na.rm=TRUE) # Pretty easy to use
1 : 86.06071
Final: 85.59585
Median Polish Results (Dataset: "a")
Overall: 29.01548
Row Effects:
[1] 2.2356134 -4.0668144 3.4436953 -0.1729532 -5.2644925 0.1729532
Column Effects:
[1] 1.2077470 0.4488938 -0.1978902 -1.1544723
Residuals:
[,1] [,2] [,3] [,4]
[1,] 0.00000 -2.19595 7.4901 -0.034543
[2,] 1.76418 -0.38917 NA -9.861103
[3,] 4.25219 -8.92715 3.7401 -3.567392
[4,] -0.84743 0.38917 -10.2265 2.236662
[5,] NA 6.78324 0.0000 0.034543
[6,] -5.43146 3.88711 -4.1381 9.399689
This is not particularly hard to do in a spreadsheet by the way (but note that you would normally iterate it; nevertheless it's quite doable).
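For anyone curious what that iteration looks like, here is a simplified sketch of the alternating median sweeps that medpolish() automates (it omits the bookkeeping medpolish() does to centre the row and column effects around an overall term, so the effects are parameterised a little differently, but the idea is the same):
manual_polish <- function(a, iter = 10) {
  row_eff <- rep(0, nrow(a)); col_eff <- rep(0, ncol(a))
  for (i in 1:iter) {
    r_med <- apply(a, 1, median, na.rm = TRUE)   # sweep out row medians
    a <- sweep(a, 1, r_med); row_eff <- row_eff + r_med
    c_med <- apply(a, 2, median, na.rm = TRUE)   # sweep out column medians
    a <- sweep(a, 2, c_med); col_eff <- col_eff + c_med
  }
  list(row = row_eff, col = col_eff, residuals = a)
}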
However, if you have a really large amount of missingness, you may not be able to estimate effects for all rows and columns (if one row or column is all-missing, for example).
Edit: as whuber notes below, a lot of missingness may result in problems of bias or nonconvergence | Software for median polishing | Well R has medpolish built in, and it can deal with some level of missingness:
> a # some data
[,1] [,2] [,3] [,4]
[1,] 32.45884 29.50403 38.54330 30.06207
[2,] 27.92059 25.0 | Software for median polishing
Well R has medpolish built in, and it can deal with some level of missingness:
> a # some data
[,1] [,2] [,3] [,4]
[1,] 32.45884 29.50403 38.54330 30.06207
[2,] 27.92059 25.00838 NA 13.93309
[3,] 37.91911 23.98091 36.00139 27.73731
[4,] 29.20283 29.68059 18.41809 29.92471
[5,] NA 30.98312 23.55309 22.63105
[6,] 24.96472 33.52443 24.85243 37.43364
The medpolish command is simple:
> medpolish(a,na.rm=TRUE) # Pretty easy to use
1 : 86.06071
Final: 85.59585
Median Polish Results (Dataset: "a")
Overall: 29.01548
Row Effects:
[1] 2.2356134 -4.0668144 3.4436953 -0.1729532 -5.2644925 0.1729532
Column Effects:
[1] 1.2077470 0.4488938 -0.1978902 -1.1544723
Residuals:
[,1] [,2] [,3] [,4]
[1,] 0.00000 -2.19595 7.4901 -0.034543
[2,] 1.76418 -0.38917 NA -9.861103
[3,] 4.25219 -8.92715 3.7401 -3.567392
[4,] -0.84743 0.38917 -10.2265 2.236662
[5,] NA 6.78324 0.0000 0.034543
[6,] -5.43146 3.88711 -4.1381 9.399689
This is not particularly hard to do in a spreadsheet by the way (but note that you would normally iterate it; nevertheless it's quite doable).
However, if you have a really large amount of missingness, you may not be able to estimate effects for all rows and columns (if one row or column is all-missing, for example).
Edit: as whuber notes below, a lot of missingness may result in problems of bias or nonconvergence | Software for median polishing
Well R has medpolish built in, and it can deal with some level of missingness:
> a # some data
[,1] [,2] [,3] [,4]
[1,] 32.45884 29.50403 38.54330 30.06207
[2,] 27.92059 25.0 |
55,374 | How to specify in r spatial covariance structure similar to SAS sp(pow) in a marginal model? | The spatial power covariance structure is a generalization of the first-order autoregressive covariance structure. Where the first-order autoregressive structure assumes the time points are equally spaced, the spatial power structure can accommodate continuous, unequally spaced time points. In reality, we could just forget the first-order autoregressive structure entirely, because if we fit the spatial power structure when the data are equally spaced we'll get the same answer as when using the first-order autoregressive structure.
All that aside, the correlation function you're looking for is corCAR1(), which is the continuous first-order autoregressive structure. If you're looking to duplicate what you fit in SAS, then the code you're looking for is:
gls(CD4t~T, data=df, na.action = (na.omit), method = "REML",
corr=corCAR1(form=~T|NUM_PAT))
Of course, you don't need to specify method = "REML", since, as in SAS, the default method in gls() is already restricted maximum likelihood. | How to specify in r spatial covariance structure similar to SAS sp(pow) in a marginal model? | The spatial power covariance structure is a generalization of the first-order autoregressive covariance structure. Where the first-order autoregressive structure assumes the time points are equally s | How to specify in r spatial covariance structure similar to SAS sp(pow) in a marginal model?
The spatial power covariance structure is a generalization of the first-order autoregressive covariance structure. Where the first-order autoregressive structure assumes the time points are equally spaced, the spatial power structure can accommodate continuous, unequally spaced time points. In reality, we could just forget the first-order autoregressive structure entirely, because if we fit the spatial power structure when the data are equally spaced we'll get the same answer as when using the first-order autoregressive structure.
All that aside, the correlation function you're looking for is corCAR1(), which is the continuous first-order autoregressive structure. If you're looking to duplicate what you fit in SAS, then the code you're looking for is:
gls(CD4t~T, data=df, na.action = (na.omit), method = "REML",
corr=corCAR1(form=~T|NUM_PAT))
Of course, you don't need to specify method = "REML", since, as in SAS, the default method in gls() is already restricted maximum likelihood. | How to specify in r spatial covariance structure similar to SAS sp(pow) in a marginal model?
The spatial power covariance structure is a generalization of the first-order autoregressive covariance structure. Where the first-order autoregressive structure assumes the time points are equally s |
55,375 | Basic reproduction number | I'm just guessing here but...
The basic reproduction number is the expected number of secondary infections over the lifetime of the initial infection. Let $S$ be the number of secondary infections over the lifetime of the initial infection and $L$ be the lifetime of the initial infection.
$S|L$ can be modeled as a Poisson random variable with parameter $\beta L$ and $L$ can be modeled as an Exponential random variable with parameter $\gamma$.
To find the expected value of $S$, recall the law of total expectation.
$$\textrm{E}\left[S\right]=\textrm{E}\left[\textrm{E}\left[S|L\right]\right]=\textrm{E}\left[\beta L\right]=\frac{\beta}{\gamma}$$ | Basic reproduction number | I'm just guessing here but...
The basic reproduction number is the expected number of secondary infections over the lifetime of the initial infection. Let $S$ be the number of secondary infections ove | Basic reproduction number
I'm just guessing here but...
The basic reproduction number is the expected number of secondary infections over the lifetime of the initial infection. Let $S$ be the number of secondary infections over the lifetime of the initial infection and $L$ be the lifetime of the initial infection.
$S|L$ can be modeled as a Poisson random variable with parameter $\beta L$ and $L$ can be modeled as an Exponential random variable with parameter $\gamma$.
To find the expected value of $S$, recall the law of total expectation.
$$\textrm{E}\left[S\right]=\textrm{E}\left[\textrm{E}\left[S|L\right]\right]=\textrm{E}\left[\beta L\right]=\beta\,\textrm{E}\left[L\right]=\frac{\beta}{\gamma},$$ since $\textrm{E}\left[L\right]=1/\gamma$ for an Exponential random variable with rate $\gamma$.
I'm just guessing here but...
The basic reproduction number is the expected number of secondary infections over the lifetime of the initial infection. Let $S$ be the number of secondary infections ove |
55,376 | Basic reproduction number | The answer to this doesn't necessarily rely on the distribution, it can be thought of as a simple problem of incoming infections vs. outgoing infections.
If you are trying to fill a bathtub, the water level will only rise if the rate of incoming water outpaces the rate of outgoing water. The same principle is true of an epidemic. An infection must infect people faster than people recover from infection in order for there to be a sustained increase in cases.
Thus: beta > gamma in order for there to be a sustained epidemic, and therefore: beta/gamma must be greater than 1.
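(Added sketch, mine rather than the answerer's: a toy calculation in which early-epidemic growth follows I(t+1) = I(t) + (beta - gamma) I(t), so cases grow only when beta/gamma > 1.)
grow <- function(beta, gamma, I0 = 1, steps = 30) {
  I <- numeric(steps); I[1] <- I0
  for (t in 2:steps) I[t] <- I[t - 1] + (beta - gamma) * I[t - 1]
  I[steps]
}
grow(beta = 0.3, gamma = 0.1)   # R0 = 3: infections grow
grow(beta = 0.1, gamma = 0.3)   # R0 = 1/3: the outbreak fizzles out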
This is, for example, why some of the hemorrhagic fevers aren't capable of sustaining epidemics - while their beta is quite high, the rate of recovery (or in this case, death) is so fast that the epidemic runs out of infective individuals to propagate new infections faster than new people are infected. | Basic reproduction number | The answer to this doesn't necessarily rely on the distribution, it can be thought of as a simple problem of incoming infections vs. outgoing infections.
If you are trying to fill a bathtub, the water | Basic reproduction number
The answer to this doesn't necessarily rely on the distribution, it can be thought of as a simple problem of incoming infections vs. outgoing infections.
If you are trying to fill a bathtub, the water level will only rise if the rate of incoming water outpaces the rate of outgoing water. The same principle is true of an epidemic. An infection must infect people faster than people recover from infection in order for there to be a sustained increase in cases.
Thus: beta > gamma in order for there to be a sustained epidemic, and therefore: beta/gamma must be greater than 1.
This is, for example, why some of the hemorrhagic fevers aren't capable of sustaining epidemics - while their beta is quite high, the rate of recovery (or in this case, death) is so fast that the epidemic runs out of infective individuals to propagate new infections faster than new people are infected. | Basic reproduction number
The answer to this doesn't necessarily rely on the distribution, it can be thought of as a simple problem of incoming infections vs. outgoing infections.
If you are trying to fill a bathtub, the water |
55,377 | Fastest way to compare ROC curves | There is more to k-fold CV than what you are doing. In essence, the idea of using those crazy splits instead of simply making a few random subsamples is that you can reconstruct the full decision and compare it with the original, just like you might have done with predictions on a full train set.
So, sticking to a full k-fold CV mechanism, you just have to merge the predictions from all folds and calculate the ROC for that -- this way you get a single AUROC per model.
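(Added illustration, not from the original answer: one way to implement the pooling in R, using the pROC package on a simulated logistic-regression example; all object names are my own.)
library(pROC)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- rbinom(200, 1, plogis(d$x1 - d$x2))
folds <- sample(rep(1:10, length.out = nrow(d)))
d$pred <- NA
for (k in 1:10) {
  fit <- glm(y ~ x1 + x2, data = d[folds != k, ], family = binomial)
  d$pred[folds == k] <- predict(fit, newdata = d[folds == k, ], type = "response")
}
auc(roc(d$y, d$pred))   # a single pooled AUROC for this model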
However, note that just having two numbers and selecting greater is not a statistically valid way of making comparisons -- without spreads of those two you can't invalidate the hypothesis that both accuracies are roughly the same. So if you are sure you want to do any model selection, you'll need to get those spreads (for instance by bootstrapping the k-fold CV to actually get several AUROC values per classifier) and do some multiple comparison test, probably non-parametric. | Fastest way to compare ROC curves | There is more to k-fold CV than you do. In essence, the idea of using those crazy splits instead of simply making a few random subsamples is that you can reconstruct the full decision and compare it w | Fastest way to compare ROC curves
There is more to k-fold CV than what you are doing. In essence, the idea of using those crazy splits instead of simply making a few random subsamples is that you can reconstruct the full decision and compare it with the original, just like you might have done with predictions on a full train set.
So, sticking to a full k-fold CV mechanism, you just have to merge the predictions from all folds and calculate the ROC for that -- this way you get a single AUROC per model.
However, note that just having two numbers and selecting greater is not a statistically valid way of making comparisons -- without spreads of those two you can't invalidate the hypothesis that both accuracies are roughly the same. So if you are sure you want to do any model selection, you'll need to get those spreads (for instance by bootstrapping the k-fold CV to actually get several AUROC values per classifier) and do some multiple comparison test, probably non-parametric. | Fastest way to compare ROC curves
There is more to k-fold CV than you do. In essence, the idea of using those crazy splits instead of simply making a few random subsamples is that you can reconstruct the full decision and compare it w |
55,378 | Fastest way to compare ROC curves | Just to chime in on @mbq's multiple-testing point: if you want to compare each of 120 models with each other, that is 7140 comparisons!
You may want to reduce the number of models beforehand by your expert knowledge of the problem. Or include a (few) models that give you baseline performance (constant prediction, random prediction) to test where they fall in the range of all those other models.
Also, make sure you have an independent test set left if you want to report the final performance of the chosen model. Data-driven optimization means that information from the test samples enter your final model as you choose a model that performs well for these (CV) test sets.
Update:
Omar, take Frank's hint seriously and read about other performance measures that are better suited.
If you decide to stay with AUROC, make sure you calculate them in a range of sensitivities and specificities that are sensible for your application.
as mbq says, calculate the spread of your AUROC values of one model, and then think whether you have any chance to identify a good model out of 120 models with 100 independent test cases.
In any case: If you want to be able to claim a performance of the final model, you need to test that with a completely independent test set. Samples that were tested for parameter optimization or model selection are not independent any longer. And you should report the uncertainty on this final | Fastest way to compare ROC curves | Just to chime in the @mbq's multiple testing: if you want to compare each of 120 models with each other, that is 7140 comparisons!
You may want to reduce the number of models beforehand by your exper | Fastest way to compare ROC curves
Just to chime in on @mbq's multiple-testing point: if you want to compare each of 120 models with each other, that is 7140 comparisons!
You may want to reduce the number of models beforehand by your expert knowledge of the problem. Or include a (few) models that give you baseline performance (constant prediction, random prediction) to test where they fall in the range of all those other models.
Also, make sure you have an independent test set left if you want to report the final performance of the chosen model. Data-driven optimization means that information from the test samples enter your final model as you choose a model that performs well for these (CV) test sets.
Update:
Omar, take Frank's hint seriously and read about other performance measures that are better suited.
If you decide to stay with AUROC, make sure you calculate them in a range of sensitivities and specificities that are sensible for your application.
as mbq says, calculate the spread of your AUROC values of one model, and then think whether you have any chance to identify a good model out of 120 models with 100 independent test cases.
In any case: If you want to be able to claim a performance of the final model, you need to test that with a completely independent test set. Samples that were tested for parameter optimization or model selection are not independent any longer. And you should report the uncertainty on this final | Fastest way to compare ROC curves
Just to chime in the @mbq's multiple testing: if you want to compare each of 120 models with each other, that is 7140 comparisons!
You may want to reduce the number of models beforehand by your exper |
55,379 | Can two or more splits in a binary decision tree be made on the same variable? | Yes, this is possible and happens frequently. Consider the tree on page 4 of this tutorial: you'll see that multiple splits are made on both variables, longitude and latitude. At each step of the CART algorithm, all predictors are tried and the best one (the one selected for splitting) is the one that maximizes the decrease in partition impurity (or some other metric), that's it. Then you take your child nodes and you split them again. And you iterate. There is absolutely nothing precluding repeated splits on the same predictor.
full link for the tutorial: http://www.stat.cmu.edu/~cshalizi/350/lectures/22/lecture-22.pdf | Can two or more splits in a binary decision tree be made on the same variable? | Yes, this is possible and happens frequently. Consider the tree page 4 of this tutorial, you'll see that multiple splits are made on both variables longitude and latitude. At each step of the CART alg | Can two or more splits in a binary decision tree be made on the same variable?
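(Added example, mine rather than the answerer's: a tiny rpart fit in which the relationship is non-monotone, so the tree naturally splits twice on the same predictor.)
library(rpart)
set.seed(7)
x <- runif(500, -3, 3)
y <- factor(abs(x) > 1.5)        # class depends on |x|, so one cut is not enough
rpart(y ~ x, method = "class")   # the printed tree splits on x at two different values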
Yes, this is possible and happens frequently. Consider the tree on page 4 of this tutorial: you'll see that multiple splits are made on both variables, longitude and latitude. At each step of the CART algorithm, all predictors are tried and the best one (the one selected for splitting) is the one that maximizes the decrease in partition impurity (or some other metric), that's it. Then you take your child nodes and you split them again. And you iterate. There is absolutely nothing precluding repeated splits on the same predictor.
full link for the tutorial: http://www.stat.cmu.edu/~cshalizi/350/lectures/22/lecture-22.pdf | Can two or more splits in a binary decision tree be made on the same variable?
Yes, this is possible and happens frequently. Consider the tree page 4 of this tutorial, you'll see that multiple splits are made on both variables longitude and latitude. At each step of the CART alg |
55,380 | Why do sampling distributions provide a major simplification on the route statistical inference? | Suppose that you want to know how many likely voters plan to vote for the incumbent in your city's race for mayor this year so you take a simple random sample of likely voters and ask them if they plan to vote for the incumbent or the challenger. The sampling distribution tells us the relationship between the proportion in our sample and the true proportion from the entire city. Because of the sampling distribution we can make inference based only on the information about the proportion who said "incumbent" in our sample and the sample size, inference like hypothesis tests or confidence intervals. If we used the joint distribution then we would have to use the information on how each individual answered the question instead of just the summary information (which is a lot simpler). | Why do sampling distributions provide a major simplification on the route statistical inference? | Suppose that you want to know how many likely voters plan to vote for the incumbent in your city's race for mayor this year so you take a simple random sample of likely voters and ask them if they pla | Why do sampling distributions provide a major simplification on the route statistical inference?
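(Added illustration, not part of the original answer: with the sampling distribution of a sample proportion, the summary (p-hat, n) alone supports the usual inference; the numbers below are made up.)
p_hat <- 0.54; n <- 1000                  # hypothetical poll summary
se <- sqrt(p_hat * (1 - p_hat) / n)       # from the sampling distribution of a proportion
p_hat + c(-1, 1) * 1.96 * se              # 95% CI, no need for the 1000 individual answers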
Suppose that you want to know how many likely voters plan to vote for the incumbent in your city's race for mayor this year so you take a simple random sample of likely voters and ask them if they plan to vote for the incumbent or the challenger. The sampling distribution tells us the relationship between the proportion in our sample and the true proportion from the entire city. Because of the sampling distribution we can make inference based only on the information about the proportion who said "incumbent" in our sample and the sample size, inference like hypothesis tests or confidence intervals. If we used the joint distribution then we would have to use the information on how each individual answered the question instead of just the summary information (which is a lot simpler). | Why do sampling distributions provide a major simplification on the route statistical inference?
Suppose that you want to know how many likely voters plan to vote for the incumbent in your city's race for mayor this year so you take a simple random sample of likely voters and ask them if they pla
55,381 | Why do sampling distributions provide a major simplification on the route statistical inference? | In parametric statistics, you usually start with a sample, let us say an iid sample, $X_1,\ldots,X_n$, distributed as
$$
\prod_{i=1}^n f_\theta(x_i),
$$
and you have to draw inference on $\theta$ using this distribution, which may be troublesome.
If, instead, for one reason or another, you decide to use only a specific transform of the sample, $\Psi(X_1,\ldots,X_n)$, for instance of the same dimension as $\theta$, and if this new random variable has a closed-form/analytic distribution,
$$
\Psi(X_1,\ldots,X_n) \sim g_{n,\theta}(\psi)
$$
then it is much easier to draw inference using this known distribution.
Of course, this is hiding under the carpet the fact that the transform $\Psi$ has to be chosen in the first place, so I am not so convinced of the relevance of this Wikipedia sentence! | Why do sampling distributions provide a major simplification on the route statistical inference? | In parametric statistics, you usually start with a sample, let us say an iid sample, $X_1,\ldots,X_n$, distributed as
$$
\prod_{i=1}^n f_\theta(x_i),
$$
and you have to draw inference on $\theta$ usin | Why do sampling distributions provide a major simplification on the route statistical inference?
In parametric statistics, you usually start with a sample, let us say an iid sample, $X_1,\ldots,X_n$, distributed as
$$
\prod_{i=1}^n f_\theta(x_i),
$$
and you have to draw inference on $\theta$ using this distribution, which may be troublesome.
If, instead, for one reason or another, you decide to use only a specific transform of the sample, $\Psi(X_1,\ldots,X_n)$, for instance of the same dimension as $\theta$, and if this new random variable has a closed-form/analytic distribution,
$$
\Psi(X_1,\ldots,X_n) \sim g_{n,\theta}(\psi)
$$
then it is much easier to draw inference using this known distribution.
Of course, this is hiding under the carpet the fact that the transform $\Psi$ has to be chosen in the first place, so I am not so convinced of the relevance of this Wikipedia sentence! | Why do sampling distributions provide a major simplification on the route statistical inference?
In parametric statistics, you usually start with a sample, let us say an iid sample, $X_1,\ldots,X_n$, distributed as
$$
\prod_{i=1}^n f_\theta(x_i),
$$
and you have to draw inference on $\theta$ usin |
55,382 | Why do sampling distributions provide a major simplification on the route statistical inference? | It's because all of the information from the data given an assumed model is picked up by a multiple of the likelihood and that is all you need or many would argue should use in inference (tentatively taking the model as given). When that is just driven by some summary statistics, there is tremendous simplification.
This is perhaps most easily seen from the Bayesian perspective by noting that
posterior = prior * data model and so data model = posterior/prior.
(posterior/prior is actually a relative probability - after data/before the data - that is called the relative belief ratio and is a multiple of the likelihood function)
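(Added numeric check, mine: in a Beta-Binomial example the ratio posterior/prior is proportional to the likelihood, i.e. their quotient does not depend on the parameter value.)
theta <- c(0.3, 0.6)
prior <- dbeta(theta, 2, 2)               # Beta(2, 2) prior
post  <- dbeta(theta, 2 + 7, 2 + 3)       # posterior after 7 successes in 10 trials
(post / prior) / dbinom(7, 10, theta)     # the same constant at both theta values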
Approximate Bayesian Computation (ABC) could be a convenient way to visualize these things.
A technical paper for any who might be interested http://www.utstat.utoronto.ca/mikevans/papers/surprise.pdf | Why do sampling distributions provide a major simplification on the route statistical inference? | It's because all of the information from the data given an assumed model is picked up by a multiple of the likelihood and that is all you need or many would argue should use in inference (tentatively | Why do sampling distributions provide a major simplification on the route statistical inference?
It's because all of the information from the data given an assumed model is picked up by a multiple of the likelihood and that is all you need or many would argue should use in inference (tentatively taking the model as given). When that is just driven by some summary statistics, there is tremendous simplification.
This is perhaps most easily seen from the Bayesian perspective by noting that
posterior = prior * data model and so data model = posterior/prior.
(posterior/prior is actually a relative probability - after data/before the data - that is called the relative belief ratio and is a multiple of the likelihood function)
Approximate Bayesian Computation (ABC) could be a convenient way to visualize these things.
A technical paper for any who might be interested http://www.utstat.utoronto.ca/mikevans/papers/surprise.pdf | Why do sampling distributions provide a major simplification on the route statistical inference?
It's because all of the information from the data given an assumed model is picked up by a multiple of the likelihood and that is all you need or many would argue should use in inference (tentatively |
55,383 | Completely different results from lmer() and lme() | lmer uses Laplace approximation, when the whole normal distribution of the random effect is approximated at its mode. This approximation is known to produce the estimates of the variance components that are biased down. lme uses a more thorough approximation via Gaussian quadrature approximation, but I neither know the default number of integration points nor the way to manipulate this number.
Note also that lmer produced a correlation of the random effects (the intercept vs. year) of -1. That's very bad, and is indicative of numeric problems. It hit some sort of a ridge in its approximation of the likelihood that it could not overcome (no wonder, given that this approximation is quite poor). lme did a somewhat better job, and came up with a correlation of 0.619. If you really have a year, as in, 2010, 2011, 2012, that's a very bad idea for mixed models, where high multicollinearity between factors may mean poor numeric stability and long convergence. You would have been much better off converting it to time centered near zero, rather than near 2010. | Completely different results from lmer() and lme() | lmer uses Laplace approximation, when the whole normal distribution of the random effect is approximated at its mode. This approximation is known to produce the estimates of the variance components th | Completely different results from lmer() and lme()
lmer uses Laplace approximation, when the whole normal distribution of the random effect is approximated at its mode. This approximation is known to produce the estimates of the variance components that are biased down. lme uses a more thorough approximation via Gaussian quadrature approximation, but I neither know the default number of integration points nor the way to manipulate this number.
Note also that lmer produced a correlation of the random effects (the intercept vs. year) of -1. That's very bad, and is indicative of numeric problems. It hit some sort of a ridge in its approximation of the likelihood that it could not overcome (no wonder, given that this approximation is quite poor). lme did a somewhat better job, and came up with a correlation of 0.619. If you really have a year, as in, 2010, 2011, 2012, that's a very bad idea for mixed models, where high multicollinearity between factors may mean poor numeric stability and long convergence. You would have been much better off converting it to time centered near zero, rather than near 2010. | Completely different results from lmer() and lme()
lmer uses Laplace approximation, when the whole normal distribution of the random effect is approximated at its mode. This approximation is known to produce the estimates of the variance components th |
55,384 | Completely different results from lmer() and lme() | The log-likelihood is substantially higher for the lmer() result, which would suggest it's closer to the true optimum.
Since the lmer() fit is on the boundary of the parameter space, which lme() reparametrises off to infinity (the 'Log-Cholesky' parametrisation), it's plausible that lme just didn't find the better solution.
That doesn't entirely answer the question of which answer is more useful, but it does argue against the lme one. | Completely different results from lmer() and lme() | The log-likelihood is substantially higher for the lmer() result, which would suggest it's closer to the true optimum.
Since the lmer() fit is on the boundary of the parameter space, which lme() repar | Completely different results from lmer() and lme()
The log-likelihood is substantially higher for the lmer() result, which would suggest it's closer to the true optimum.
Since the lmer() fit is on the boundary of the parameter space, which lme() reparametrises off to infinity (the 'Log-Cholesky' parametrisation), it's plausible that lme just didn't find the better solution.
That doesn't entirely answer the question of which answer is more useful, but it does argue against the lme one. | Completely different results from lmer() and lme()
The log-likelihood is substantially higher for the lmer() result, which would suggest it's closer to the true optimum.
Since the lmer() fit is on the boundary of the parameter space, which lme() repar |
55,385 | How to use triple exponential smoothing to forecast in Excel | This isn't an exact answer to your question, but... you are definitely best off spending a bit of time to learn some R basics and use something like Rob Hyndman's forecast package to do this. This will let you try a number of robust forecasting procedures and choose appropriate parameters, all within a state of the art computing environment with good graphics built in.
To get you started, here is how simple it is to have a go with your data in R. Investing a little time for the understanding you need of data management in R will be worth while because it will let you grapple with the real underlying issues of how to treat your time series, which methods to use, how to treat any seasonality, etc.
install.packages("forecast", dependencies=TRUE)
library(forecast)
x <- ts(c(69088,83400,75735,79526,81005,94013,90567,94568,101687,93540,84249,
91280,78531,89465,83341,87106,65636,79632,89722,87483,99228,113215,96057,
95475,92466,103529,94515,76146,81736,80174,81437,102695,120775,97058,
119921,102311,109498,110318,98103), frequency=13, start=c(2009, 10))
par(mfrow=c(3,1))
plot(ses(x,6), bty="l")
plot(holt(x,6), bty="l")
plot(hw(x,6), bty="l") | How to use triple exponential smoothing to forecast in Excel | This isn't an exact answer to your question, but... you are definitely best off spending a bit of time to do learn some R basics and use something like Rob Hyndman's forecast package to do this. This | How to use triple exponential smoothing to forecast in Excel
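(Added follow-up, my suggestion rather than the answerer's: ets() in the same forecast package picks the exponential-smoothing variant automatically; x is the series defined just above.)
fit <- ets(x)                    # automatic (error, trend, seasonal) selection
summary(fit)
plot(forecast(fit, h = 6), bty = "l")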
This isn't an exact answer to your question, but... you are definitely best off spending a bit of time to learn some R basics and use something like Rob Hyndman's forecast package to do this. This will let you try a number of robust forecasting procedures and choose appropriate parameters, all within a state of the art computing environment with good graphics built in.
To get you started, here is how simple it is to have a go with your data in R. Investing a little time for the understanding you need of data management in R will be worth while because it will let you grapple with the real underlying issues of how to treat your time series, which methods to use, how to treat any seasonality, etc.
install.packages("forecast", dependencies=TRUE)
library(forecast)
x <- ts(c(69088,83400,75735,79526,81005,94013,90567,94568,101687,93540,84249,
91280,78531,89465,83341,87106,65636,79632,89722,87483,99228,113215,96057,
95475,92466,103529,94515,76146,81736,80174,81437,102695,120775,97058,
119921,102311,109498,110318,98103), frequency=13, start=c(2009, 10))
par(mfrow=c(3,1))
plot(ses(x,6), bty="l")
plot(holt(x,6), bty="l")
plot(hw(x,6), bty="l") | How to use triple exponential smoothing to forecast in Excel
This isn't an exact answer to your question, but... you are definitely best off spending a bit of time to do learn some R basics and use something like Rob Hyndman's forecast package to do this. This |
55,386 | How to use triple exponential smoothing to forecast in Excel | Your data can be easily modeled using a seasonal model of the form
Y(T) = 168.16
+[X1(T)][(+ 28.8257)] :PULSE 2012/ 3
+[X2(T)][(- 14.3322)] :PULSE 2010/ 13
+[X3(T)][(+ 15.0558)] :PULSE 2011/ 9
+[X4(T)][(+ 13.6610)] :PULSE 2012/ 8
+ [(1- .945B** 13)]**-1 [A(T)]
Note that this is simply an equation which uses .945 * the value 13 periods ago and can be restated as y = 9.3 + .945*y(t-13). Analysis suggested 4 unusual points which you might want to focus on to identify any omitted "information/cause series" like promotion/price activity.
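(Added sketch, mine and not AUTOBOX output: a rough R analogue of this structure is a seasonal AR(1) at lag 13, here ignoring the four pulse dummies; x is the ts object built in the previous answer.)
fit <- arima(x, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 13))
fit
predict(fit, n.ahead = 7)$pred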
A plot of the actual fit and forecasts. In my opinion, the reason that Peter's Holt-Winters additive seasonal model didn't capture the seasonality is that his model was deterministic in nature, not adaptive. Sometimes a deterministic model is appropriate, sometimes it is not. The data will tell you which model is appropriate. In addition, his model/procedure believed the 4 questionable data points rather than challenging them for "consistency with expectations".
The forecasts for the next 7 periods are
98.59
81.24
86.52
85.05
86.24
106.32
96.17
The r-square for the model is .754 with an MSE of 56.7. This automatic analysis was obtained using AUTOBOX, a program that I have helped develop. Improved forecasting accuracy can save money. Hope this helps. | How to use triple exponential smoothing to forecast in Excel | Your data can be easily modeled using a seasonal model of the form
Y(T) = 168.16
+[X1(T)][(+ 28.8257)] : | How to use triple exponential smoothing to forecast in Excel
Your data can be easily modeled using a seasonal model of the form
Y(T) = 168.16
+[X1(T)][(+ 28.8257)] :PULSE 2012/ 3
+[X2(T)][(- 14.3322)] :PULSE 2010/ 13
+[X3(T)][(+ 15.0558)] :PULSE 2011/ 9
+[X4(T)][(+ 13.6610)] :PULSE 2012/ 8
+ [(1- .945B** 13)]**-1 [A(T)]
Note that this is simply an equation which uses .945 * the value 13 periods ago and can be restated as y = 9.3 + .945*y(t-13). Analysis suggested 4 unusual points which you might want to focus on to identify any omitted "information/cause series" like promotion/price activity.
A plot of the actual fit and forecasts. In my opinion, the reason that Peter's Holt-Winters additive seasonal model didn't capture the seasonality is that his model was deterministic in nature, not adaptive. Sometimes a deterministic model is appropriate, sometimes it is not. The data will tell you which model is appropriate. In addition, his model/procedure believed the 4 questionable data points rather than challenging them for "consistency with expectations".
The forecasts for the next 7 periods are
98.59
81.24
86.52
85.05
86.24
106.32
96.17
The r-square for the model is .754 with an MSE of 56.7. This automatic analysis was obtained using AUTOBOX, a program that I have helped develop. Improved forecasting accuracy can save money. Hope this helps. | How to use triple exponential smoothing to forecast in Excel
Your data can be easily modeled using a seasonal model of the form
Y(T) = 168.16
+[X1(T)][(+ 28.8257)] : |
55,387 | How to use triple exponential smoothing to forecast in Excel | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
http://www.calstatela.edu/faculty/hwarren/a503/forecast%20time%20series%20within%20Excel.htm
also get seasonality 13
I hope you enjoy | How to use triple exponential smoothing to forecast in Excel | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| How to use triple exponential smoothing to forecast in Excel
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
http://www.calstatela.edu/faculty/hwarren/a503/forecast%20time%20series%20within%20Excel.htm
also get seasonality 13
I hope you enjoy | How to use triple exponential smoothing to forecast in Excel
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
55,388 | Are discrimination parameters in two-parameter IRT models only specific to items? | @KH Kim, I believe there is some difficulty when any mathematical model meets the real world. In the IRT model, items and item parameters are invariant to the pool of individuals who answer those items - that's the theoretical building block of the model. This is quite a deep conversation in which you are immersing yourself, but I would suggest it is a building block issue - do you accept the mathematical model or not...
The issue might be sample size. A large response group is usually required for stable item estimates with 2-parameter models (300-500). It is also assumed that the sample used to achieve stable estimates reflect the population and are not a subset. If you have discrimination parameters that appear to be different for your sample, then the problem might not be with the model, but applying the exam/survey to an inappropriate group. | Are discrimination parameters in two-parameter IRT models only specific to items? | @KH Kim, I believe there is some difficulty when any mathematical model meets the real world. In the IRT model, items and item parameters are invariant to the pool of individuals who answer those ite | Are discrimination parameters in two-parameter IRT models only specific to items?
@KH Kim, I believe there is some difficulty when any mathematical model meets the real world. In the IRT model, items and item parameters are invariant to the pool of individuals who answer those items - that's the theoretical building block of the model. This is quite a deep conversation in which you are immersing yourself, but I would suggest it is a building block issue - do you accept the mathematical model or not...
The issue might be sample size. A large response group is usually required for stable item estimates with 2-parameter models (300-500). It is also assumed that the sample used to achieve stable estimates reflect the population and are not a subset. If you have discrimination parameters that appear to be different for your sample, then the problem might not be with the model, but applying the exam/survey to an inappropriate group. | Are discrimination parameters in two-parameter IRT models only specific to items?
@KH Kim, I believe there is some difficulty when any mathematical model meets the real world. In the IRT model, items and item parameters are invariant to the pool of individuals who answer those ite |
55,389 | Are discrimination parameters in two-parameter IRT models only specific to items? | The discrimination parameter is an item parameter because of how it is specified in the model
$resp_{ip} \sim \alpha_i ( \theta_p - \beta_i)$
where i is for item and p is for person. In this model, the person (represented only by $\theta$) has nothing to do with the discrimination parameter.
IRT models the interaction between persons and items, and the item parameters attempt to describe their influence on that interaction.
IRT models can be extended to include the types of questions you are asking. For example if you had a randomized treatment and a control group, you could specify something like
$resp_{ip} \sim (\alpha_1 + \alpha_2 I_{p \in control} )\times ( \theta_p - \beta_i)$
where $\alpha_2$ could be interpreted as the difference in discrimination between the treatment and control group. You could extend the model in many other ways that are interesting for your situation. These models are easy to fit in a flexible modeling software like Jags or Stan. | Are discrimination parameters in two-parameter IRT models only specific to items? | The discrimination parameter is an item parameter because of how it is specified in the model
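(Added simulation sketch, mine: generating responses from the extended model above with arbitrary parameter values, to make the group-specific discrimination concrete.)
set.seed(3)
n <- 500; items <- 20
theta   <- rnorm(n)                    # person abilities
b       <- rnorm(items)                # item difficulties beta_i
control <- rbinom(n, 1, 0.5)           # group indicator I_{p in control}
a1 <- 1.5; a2 <- -0.5                  # alpha_1 and alpha_2
eta  <- (a1 + a2 * control) * outer(theta, b, "-")   # each row scaled by its group's discrimination
resp <- matrix(rbinom(n * items, 1, plogis(eta)), n, items)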
$resp_{ip} \sim \alpha_i ( \theta_p - \beta_i)$
where i is for item and p is for person. In this model, th | Are discrimination parameters in two-parameter IRT models only specific to items?
The discrimination parameter is an item parameter because of how it is specified in the model
$resp_{ip} \sim \alpha_i ( \theta_p - \beta_i)$
where i is for item and p is for person. In this model, the person (represented only by $\theta$) has nothing to do with the discrimination parameter.
IRT models the interaction between persons and items, and the item parameters attempt to describe their influence on that interaction.
IRT models can be extended to include the types of questions you are asking. For example if you had a randomized treatment and a control group, you could specify something like
$resp_{ip} \sim (\alpha_1 + \alpha_2 I_{p \in control} )\times ( \theta_p - \beta_i)$
where $\alpha_2$ could be interpreted as the difference in discrimination between the treatment and control group. You could extend the model in many other ways that are interesting for your situation. These models are easy to fit in a flexible modeling software like Jags or Stan. | Are discrimination parameters in two-parameter IRT models only specific to items?
The discrimination parameter is an item parameter because of how it is specified in the model
$resp_{ip} \sim \alpha_i ( \theta_p - \beta_i)$
where i is for item and p is for person. In this model, th |
55,390 | Are discrimination parameters in two-parameter IRT models only specific to items? | This is more a comment that grew too long and had to be edited than an answer, but anyway…
I don't know these models very well but it might help to remember that they were developed with personal attributes (traits like abilities, attitudes or personality) in mind. The model then describes responses to a particular item as a function of an individual's score on these latent traits. Of course, each person responds differently to an item but these individual differences are captured by the person's level on the relevant traits, not by the other parameters. That the functional form and item parameters are the same for everybody is a necessary assumption to make it all tractable.
Also, situational variables or transient states (emotion, fatigue…) are nuisances. By definition, traits are stable properties of a person and everything should be done to minimize the impact of other variables on the measurement.
Extending this to psychophysics, it could be some detection threshold (and not luminance itself) that would be interpreted as a trait and that's what discrimination and the item characteristic curve would be applied to. Does it make sense? | Are discrimination parameters in two-parameter IRT models only specific to items? | This is more a comment that grew too long and had to be edited than an answer buy anyway…
I don't know these models very well but it might help to remember that they were developed with personal attr | Are discrimination parameters in two-parameter IRT models only specific to items?
This is more a comment that grew too long and had to be edited than an answer, but anyway…
I don't know these models very well but it might help to remember that they were developed with personal attributes (traits like abilities, attitudes or personality) in mind. The model then describes responses to a particular item as a function of an individual's score on these latent traits. Of course, each person responds differently to an item but these individual differences are captured by the person's level on the relevant traits, not by the other parameters. That the functional form and item parameters are the same for everybody is a necessary assumption to make it all tractable.
Also, situational variables or transient states (emotion, fatigue…) are nuisances. By definition, traits are stable properties of a person and everything should be done to minimize the impact of other variables on the measurement.
Extending this to psychophysics, it could be some detection threshold (and not luminance itself) that would be interpreted as a trait and that's what discrimination and the item characteristic curve would be applied to. Does it make sense? | Are discrimination parameters in two-parameter IRT models only specific to items?
This is more a comment that grew too long and had to be edited than an answer, but anyway…
I don't know these models very well but it might help to remember that they were developed with personal attr |
55,391 | Are discrimination parameters in two-parameter IRT models only specific to items? | I found this article which tries a model with person specific discrimination parameter
An IRT Modeling Approach for Assessing Item and Person Discrimination in Binary Personality Responses
I should have tried first!
Related to this,
for polytomous data,
Wolfe and Firth(2002), Modelling subjective use of an ordinal response
scale in a many period crossover experiment | Are discrimination parameters in two-parameter IRT models only specific to items? | I found this article which tries a model with person specific discrimination parameter
An IRT Modeling Approach for Assessing Item and Person Discrimination in Binary Personality Responses
I should ha | Are discrimination parameters in two-parameter IRT models only specific to items?
I found this article which tries a model with person specific discrimination parameter
An IRT Modeling Approach for Assessing Item and Person Discrimination in Binary Personality Responses
I should have tried first!
Related to this,
for polytomous data,
Wolfe and Firth(2002), Modelling subjective use of an ordinal response
scale in a many period crossover experiment | Are discrimination parameters in two-parameter IRT models only specific to items?
I found this article which tries a model with person specific discrimination parameter
An IRT Modeling Approach for Assessing Item and Person Discrimination in Binary Personality Responses
I should ha |
55,392 | How to compare two related ordinal variables? | Welcome to the site, ellen.
You can conduct contingency table analysis, and that can be done in two ways.
First, you can tabulate the category against the occasion, which would give you a 2x4 table with 3 degrees of freedom, and test for independence of the counts. The test will tell you whether the marginal distributions of the response have changed between the two occasions.
Second, you can tabulate the responses one against the other and perform the independence test that way -- although we can be pretty sure that the null of independence will be rejected. Based on this tabulation, you can also compute the polychoric correlation that would demonstrate how strongly the two measurements are related. | How to compare two related ordinal variables? | Welcome to the site, ellen.
You can conduct contingency table analysis, and that can be done in two ways.
First, you can tabulate the category against the occasion, which would give you a 2x4 table wi | How to compare two related ordinal variables?
Welcome to the site, ellen.
You can conduct contingency table analysis, and that can be done in two ways.
First, you can tabulate the category against the occasion, which would give you a 2x4 table with 3 degrees of freedom, and test for independence of the counts. The test will tell you whether the marginal distributions of the response have changed between the two occasions.
Second, you can tabulate the responses one against the other and perform the independence test that way -- although we can be pretty sure that the null of independence will be rejected. Based on this tabulation, you can also compute the polychoric correlation that would demonstrate how strongly the two measurements are related. | How to compare two related ordinal variables?
Welcome to the site, ellen.
You can conduct contingency table analysis, and that can be done in two ways.
First, you can tabulate the category against the occasion, which would give you a 2x4 table wi |
55,393 | How to compare two related ordinal variables? | Since the data are not continuous and certainly not close to being normally distributed a nonparametric paired test seems to be the answer. My suggestion would be the Wilcoxon signed rank test. | How to compare two related ordinal variables? | Since the data are not continuous and certainly not close to being normally distributed a nonparametric paired test seems to be the answer. My suggestion would be the Wilcoxon signed rank test. | How to compare two related ordinal variables?
Since the data are not continuous and certainly not close to being normally distributed a nonparametric paired test seems to be the answer. My suggestion would be the Wilcoxon signed rank test. | How to compare two related ordinal variables?
Since the data are not continuous and certainly not close to being normally distributed a nonparametric paired test seems to be the answer. My suggestion would be the Wilcoxon signed rank test. |
55,394 | How to compare two related ordinal variables? | To @StasK, I think the contingency table doesn't work here since the samples are paired. The contingency table cannot account for the dependence between the paired samples.
The Wilcoxon signed-rank test compares the difference between two paired samples when the response variable is on ordinal scale, and thus fits your case the best. Note that the Wilcoxon signed-rank test does assume that the distribution of the difference between the two paired samples is symmetric. This assumption needs to be justified. If violated, the Sign test then needs to be used.
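(Added sketch, not from the original answer: what those tests look like in R for paired 4-point ordinal scores; the data here are simulated.)
set.seed(9)
before <- sample(1:4, 40, replace = TRUE)
after  <- pmin(4, before + sample(0:1, 40, replace = TRUE))   # mild upward shift
wilcox.test(after, before, paired = TRUE)         # signed-rank test (ties trigger the normal approximation)
binom.test(sum(after > before), sum(after != before))   # sign test if symmetry is doubtful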
I answered a similar question here, in which there are links about the tests, and tutorials of how you could use SPSS for this purpose. | How to compare two related ordinal variables? | To @StasK, I think the contingency table doesn't work here since the samples are paired. The contingency table cannot account for the dependence between the paired samples.
The Wilcoxon signed-rank t | How to compare two related ordinal variables?
To @StasK, I think the contingency table doesn't work here since the samples are paired. The contingency table cannot account for the dependence between the paired samples.
The Wilcoxon signed-rank test compares the difference between two paired samples when the response variable is on ordinal scale, and thus fits your case the best. Note that the Wilcoxon signed-rank test does assume that the distribution of the difference between the two paired samples is symmetric. This assumption needs to be justified. If violated, the Sign test then needs to be used.
I answered a similar question here, in which there are links about the tests, and tutorials of how you could use SPSS for this purpose. | How to compare two related ordinal variables?
To @StasK, I think the contingency table doesn't work here since the samples are paired. The contingency table cannot account for the dependence between the paired samples.
The Wilcoxon signed-rank t |
55,395 | Is there any criterion about choosing reference factor in multinomial logistic regression? | You are free to choose any of the categories as the reference. From the viewpoint of overall statistical quality of prediction by the model, the choice is arbitrary. In terms of interpretation of individual IV's effects, it makes a difference. The multinomial logistic model is:
$log(\frac{Prob(category_i)}{Prob(category_{ref})})=B_{i0}+B_{i1}X_1+B_{i2}X_2...+B_{ip}X_p$
So you interpret effects (regression coefficients) of independent variables for each category $i$ vis-a-vis your reference category $ref$. Namely, $exp(B_{i1})$, for example, is this odds ratio: by how many times the estimated odds $\frac{Prob(category_i)}{Prob(category_{ref})}$ increase in response to increasing $X_1$ by one unit.
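(Added illustration, mine: changing the reference category with nnet::multinom and reading the coefficients as odds ratios; the built-in iris data are just a convenient stand-in.)
library(nnet)
iris$Species <- relevel(iris$Species, ref = "versicolor")   # choose the reference category
m <- multinom(Species ~ Sepal.Length + Sepal.Width, data = iris, trace = FALSE)
exp(coef(m))    # odds ratios of each remaining category vs. "versicolor"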
This also implies that if you want to interpret the coefficients, you should not just look at whether they are significant or not. It matters if the independent variable $X$ is continuous or categorical. $exp(B)$ for a continuous predictor with wide scale (big variance) can be close to 1 even if the predictor is highly significant. So it is generally preferable to categorize continuous predictors into a small number of meaningful categories prior to doing the regression whenever you are going to interpret the coefficients. Also, categorization of a continuous predictor into equal subranges will allow you to check the linearity assumption. | Is there any criterion about choosing reference factor in multinomial logistic regression? | You are free to choose any of the categories as the reference. From the viewpoint of overall statistical quality of prediction by the model, the choice is arbitrary. In terms of interpretation of indi | Is there any criterion about choosing reference factor in multinomial logistic regression?
You are free to choose any of the categories as the reference. From the viewpoint of overall statistical quality of prediction by the model, the choice is arbitrary. In terms of interpretation of individual IV's effects, it makes a difference. The multinomial logistic model is:
$log(\frac{Prob(category_i)}{Prob(category_{ref})})=B_{i0}+B_{i1}X_1+B_{i2}X_2...+B_{ip}X_p$
So you interpret effects (regression coefficients) of independent variables for each category $i$ vis-a-vis your reference category $ref$. Namely, $exp(B_{i1})$, for example, is this odds ratio: by how many times the estimated odds $\frac{Prob(category_i)}{Prob(category_{ref})}$ increase in response to increasing $X_1$ by one unit.
This also implies that if you want to interpret the coefficients, you should not just look at whether they are significant or not. It matters if the independent variable $X$ is continuous or categorical. $exp(B)$ for a continuous predictor with wide scale (big variance) can be close to 1 even if the predictor is highly significant. So it is generally preferable to categorize continuous predictors into a small number of meaningful categories prior to doing the regression whenever you are going to interpret the coefficients. Also, categorization of a continuous predictor into equal subranges will allow you to check the linearity assumption. | Is there any criterion about choosing reference factor in multinomial logistic regression?
You are free to choose any of the categories as the reference. From the viewpoint of overall statistical quality of prediction by the model, the choice is arbitrary. In terms of interpretation of indi |
55,396 | An elementary question on binomial test: why should I take a sum? | Max's comment answers your question: The definition of the $p$-value is the probability that you get a value at least as extreme, and this includes all outcomes more lopsided than the one you observed. It's your choice whether to consider $10-40$ more lopsided than $31-19$, whether to use a two-tailed test or one-tailed, but you must include $40-10$.
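(Added numeric check, mine: the tail-sum definition applied to the 31-19 example in R.)
sum(dbinom(31:50, 50, 0.5))                               # one-sided: outcomes at least as extreme for A
binom.test(31, 50, 0.5, alternative = "greater")$p.value  # the same tail sum
binom.test(31, 50, 0.5)$p.value                           # two-sided version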
If you forget to include the more lopsided terms, then you will compute a small probability and automatically reject the null hypothesis when you use a large number of trials. If you play $1$ million games between equal opponents, the most likely outcome is an even split, and the probability of that is still quite small. ${1,000,000 \choose 500,000}/2^{1,000,000} \approx 1/(500\sqrt{2\pi}) \approx 0.000798 \lt 0.1\%.$ Every score has less than a $0.1\%$ chance to occur! So, if you don't add more extreme outcomes, you would simply confirm that some unlikely event had occurred just because there are a lot of possibilities when you have $1,000,000$ games. If you observe a score of $500,300-499,700$, the actual chance to see a score at least as lopsided in favor of $A$ when $A$ and $B$ are equal is $27.46\%$, and the probability of a score at least as lopsided in favor of either player is twice that, over $50\%$.
It is reasonable to ask why the $p$-value is defined this way. Whuber hinted that the Neyman-Pearson lemma is relevant. Another way to think about it is that we only want to have a chance $\alpha$ to reject the null hypothesis if the null hypothesis is true. If we have a linear ordering on how extreme outcomes are, and we define the $p$-value to be the probability of getting an outcome at least as extreme, then the event that we get an outcome with $p$-value lower than $\alpha$ has probability smaller than $\alpha$.
In different statistical procedures, there are times when you calculate just the probabilities of particular outcomes, such as a Bayesian update of a prior distribution. It's about $4$ times as likely to have a $31-19$ outcome if $A$ is actually a $60-40$ favorite rather than even, so you would strengthen your estimate of the probability that $A$ is a $60-40$ favorite by a factor of $4$ relative to the probability that $A$ and $B$ are even, not the ratio between the probabilities of observing events at least as extreme. | An elementary question on binomial test: why should I take a sum? | Max's comment answers your question: The definition of the $p$-value is the probability that you get a value at least as extreme, and this includes all outcomes more lopsided than the one you observed | An elementary question on binomial test: why should I take a sum?
Max's comment answers your question: The definition of the $p$-value is the probability that you get a value at least as extreme, and this includes all outcomes more lopsided than the one you observed. It's your choice whether to consider $10-40$ more lopsided than $31-19$, whether to use a two-tailed test or one-tailed, but you must include $40-10$.
If you forget to include the more lopsided terms, then you will compute a small probability and automatically reject the null hypothesis when you use a large number of trials. If you play $1$ million games between equal opponents, the most likely outcome is an even split, and the probability of that is still quite small. ${1,000,000 \choose 500,000}/2^{1,000,000} \approx 1/(500\sqrt{2\pi}) \approx 0.000798 \lt 0.1\%.$ Every score has less than a $0.1\%$ chance to occur! So, if you don't add more extreme outcomes, you would simply confirm that some unlikely event had occurred just because there are a lot of possibilities when you have $1,000,000$ games. If you observe a score of $500,300-499,700$, the actual chance to see a score at least as lopsided in favor of $A$ when $A$ and $B$ are equal is $27.46\%$, and the probability of a score at least as lopsided in favor of either player is twice that, over $50\%$.
It is reasonable to ask why the $p$-value is defined this way. Whuber hinted that the Neyman-Pearson lemma is relevant. Another way to think about it is that we only want to have a chance $\alpha$ to reject the null hypothesis if the null hypothesis is true. If we have a linear ordering on how extreme outcomes are, and we define the $p$-value to be the probability of getting an outcome at least as extreme, then the event that we get an outcome with $p$-value lower than $\alpha$ has probability smaller than $\alpha$.
In different statistical procedures, there are times when you calculate just the probabilities of particular outcomes, such as a Bayesian update of a prior distribution. It's about $4$ times as likely to have a $31-19$ outcome if $A$ is actually a $60-40$ favorite rather than even, so you would strengthen your estimate of the probability that $A$ is a $60-40$ favorite by a factor of $4$ relative to the probability that $A$ and $B$ are even, not the ratio between the probabilities of observing events at least as extreme. | An elementary question on binomial test: why should I take a sum?
Max's comment answers your question: The definition of the $p$-value is the probability that you get a value at least as extreme, and this includes all outcomes more lopsided than the one you observed |
55,397 | Data cleansing in regression analysis | Use a robust fit, such as lmrob in the robustbase package. This particular one can automatically detect and downweight up to 50% of the data if they appear to be outlying.
To see what can be accomplished, let's simulate a nasty dataset with plenty of outliers in both the $x$ and $y$ variables:
library(robustbase)
set.seed(17)
n.points <- 17520
n.x.outliers <- 500
n.y.outliers <- 500
beta <- c(50, .3, -.05)
x <- rnorm(n.points)
y <- beta[1] + beta[2]*x + beta[3]*x^2 + rnorm(n.points, sd=0.5)
y[1:n.y.outliers] <- rnorm(n.y.outliers, sd=5) + y[1:n.y.outliers]
x[sample(1:n.points, n.x.outliers)] <- rnorm(n.x.outliers, sd=10)
Most of the $x$ values should lie between $-4$ and $4$, but there are some extreme outliers:
Let's compare ordinary least squares (lm) to the robust coefficients:
summary(fit<-lm(y ~ 1 + x + I(x^2)))
summary(fit.rob<-lmrob(y ~ 1 + x + I(x^2)))
lm reports fitted coefficients of $49.94$, $0.00805$, and $0.000479$, compared to the expected values of $50$, $0.3$, and $-0.05$. lmrob reports $49.97$, $0.274$, and $-0.0229$, respectively. Neither of them estimates the quadratic term accurately (because it makes a small contribution and is swamped by the noise), but lmrob comes up with a reasonable estimate of the linear term while lm doesn't even come close.
Let's take a closer look:
i <- abs(x) < 10 # Window the data from x = -10 to 10
w <- fit.rob$weights[i] # Extract the robust weights (each between 0 and 1)
plot(x[i], y[i], pch=".", cex=4, col=hsv((w + 1/4)*4/5, w/3+2/3, 0.8*(1-w/2)),
main="Least-squares and robust fits", xlab="x", ylab="y")
lmrob reports weights for the data. Here, in this zoomed-in plot, the weights are shown by color: light greens for highly downweighted values, dark maroons for values with full weights. Clearly the lm fit is poor: the $x$ outliers have too much influence. Although its quadratic term is a poor estimate, the lmrob fit nevertheless closely follows the correct curve throughout the range of the good data ($x$ between $-4$ and $4$). | Data cleansing in regression analysis | Use a robust fit, such as lmrob in the robustbase package. This particular one can automatically detect and downweight up to 50% of the data if they appear to be outlying.
55,398 | Central moments of a gaussian mixture density? | It is simple because of the linearity of integration: you can exchange the order of integration and expectation. In particular, the mean of the mixture is the weighted average of the component means:
$\mu=\omega_1 \mu_1 +\omega_2\mu_2 +\dots+\omega_k \mu_k$
The same linearity holds for every raw (non-central) moment: each raw moment of the mixture is the weighted sum of the corresponding component moments. Because each component is Gaussian, its moments are completely determined by $\mu_i$ and $C_i$, so the third and fourth moments of the mixture, for example, can all be written as functions of the $\omega_i$, $\mu_i$, and $C_i$. The central moments then follow from the raw moments; note that they are not simple weighted sums of the component central moments. The mixture variance, for instance, picks up an extra term reflecting the spread of the component means, as the next answer shows.
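As a quick sanity check of the mean formula, here is a minimal Monte-Carlo sketch in R with made-up weights and component parameters (the specific numbers are illustrative, not from the question):
set.seed(1)
w  <- c(0.3, 0.7)                                  # mixture weights
mu <- c(-1, 2)                                     # component means
s  <- c(1, 0.5)                                    # component standard deviations
z  <- sample(1:2, 1e6, replace = TRUE, prob = w)   # latent component labels
x  <- rnorm(1e6, mean = mu[z], sd = s[z])          # draws from the two-component mixture
c(theory = sum(w * mu), simulation = mean(x))      # both should be close to 1.1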
Anything else you want to know about finite mixtures can be found in these books (I include the EM Algorithm book because the EM algorithm is the method most often used to obtain the maximum-likelihood estimates of the parameters):
Finite Mixture Models
The EM Algorithm and Extensions
Nonparametric Statistics and Mixture Models: A Festschrift in Honor of Thomas P Hettmansperger
Medical Applications of Finite Mixture Models
Mixture Models
55,399 | Central moments of a gaussian mixture density? | I illustrate the calculation with 2 components. The other cases are similar. The key results to use are:
(a) $E(E(X|Y)) = E(X)$ and (b) $V(X) = V(E(X|Y)) + E(V(X|Y))$.
Here $Y$ denotes the component, so $Y$ takes the values 1 and 2 with probabilities $p$ and $1-p$.
Let $E(X|Y=i) = \mu_i$ and $V(X|Y=i) = \sigma_i^2$.
Now $E(X) = p\mu_1 + (1-p)\mu_2$.
$V(E(X|Y)) = p\,(\mu_1 - (p\mu_1 + (1-p)\mu_2))^2 + (1-p)\,(\mu_2 - (p\mu_1 + (1-p)\mu_2))^2$, which on simplification yields $p(1-p)(\mu_1-\mu_2)^2$.
$E(V(X|Y)) = p\sigma_1^2 + (1-p)\sigma_2^2$.
Thus, $V(X) = p\sigma_1^2 + (1-p)\sigma_2^2 + p(1-p)(\mu_1-\mu_2)^2$.
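A numerical check of this decomposition in R, with illustrative parameter values that are not taken from the answer (the two reported numbers should essentially agree):
set.seed(2)
p <- 0.3; mu1 <- -1; mu2 <- 2; s1 <- 1; s2 <- 0.5             # made-up mixture parameters
comp <- rbinom(1e6, 1, p)                                     # 1 = component 1, 0 = component 2
x <- rnorm(1e6, mean = ifelse(comp == 1, mu1, mu2),
           sd = ifelse(comp == 1, s1, s2))                    # draw from the chosen component
c(theory = p*s1^2 + (1-p)*s2^2 + p*(1-p)*(mu1 - mu2)^2, simulation = var(x))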
55,400 | How to handle both text and numbers for PCA in R? | Check out the dudi.mix() function in the ade4 package: Ordination of Tables mixing quantitative variables and factors. Example:
library(ade4)                                 # provides dudi.mix for tables mixing numeric and factor columns
scatter.dudi(dudi.mix(iris, scannf = FALSE))  # iris mixes four numeric measurements with the Species factor
There are a couple other packages that do mixed correspondence analysis.
You can go ahead and fully dummy code your categorical variables too. It's not as theoretically sound, but it does get the job done.
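If you take the dummy-coding route, a minimal sketch with base R might look like the following (the use of model.matrix and the decision to scale are illustrative choices, not prescribed by the answer):
X  <- model.matrix(~ . - 1, data = iris)   # expands the Species factor into 0/1 indicator columns
pc <- prcomp(X, scale. = TRUE)             # scale, since indicators and raw measurements live on different scales
summary(pc)                                # proportion of variance explained by each component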