54,201
Will removing outliers improve my predictive model?
Turning my comments above into an answer: If the extreme values are not errors in the data, then you are not helping prediction by removing them. You are instead ignoring data that your model does not explain well. I strongly suggest you retain them in your dataset. Even if you cannot explain them with your model, your estimates of predictive power will be more accurate than if you exclude them. If you think they are biasing your predictions/results, you could use a robust approach (which you seem to have attempted, with the use of MAE instead of MSE). By robust methods, I mean methods that weight extreme values to a lower degree. E.g. MSE will square your residuals, meaning that extreme values will have a large influence on the outcome. Those values will have a much lower influence if you use MAE because of the lack of squaring. There is an entire subfield of 'robust statistics' that deals with problems such as yours, without removing data. Take a look at the robust tag.
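For a concrete sense of how much more heavily MSE weights an extreme value than MAE, here is a minimal Python sketch; the residual values are made up for the example, not taken from the question:

import numpy as np

# Hypothetical residuals: nine small errors plus one extreme value.
residuals = np.array([0.5, -0.3, 0.8, -0.6, 0.2, -0.4, 0.7, -0.1, 0.3, 12.0])

mse = np.mean(residuals ** 2)      # squaring lets the single extreme residual dominate
mae = np.mean(np.abs(residuals))   # the absolute loss weights it far less heavily

print(f"MSE = {mse:.2f}")   # about 14.6, almost all of it from the one extreme residual
print(f"MAE = {mae:.2f}")   # about 1.6

Here the extreme residual accounts for roughly 99% of the total squared error but only about three quarters of the total absolute error, which is the sense in which squared-error criteria are less robust.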
54,202
How to make correct predictions of probabilities and their uncertainty in Bayesian logistic regression?
I'm going to start by cleaning up a little notational lack of precision in the first part of the question, then go on to the meat of it. First, if $y \in \{0,1\}$, $y$ is, more precisely than stated in the question, distributed Bernoulli$(\theta)$, not Binomial$(n,\theta)$ for some arbitrary $n$. The two distributions are of course the same when $n=1$, but best to get rid of $n$ altogether if you don't use it subsequently. On to the question! What the posterior distribution $p(\beta|\cdot)$ allows us to do is calculate an MCMC-based posterior for $\theta$ by sampling from the posterior distribution of $\beta$, then multiplying each sample $\beta$ by $X_{\text{new}}$ and calculating the inverse logit of the result (as in your second equation above.) This gives us a vector of samples of $\theta$ from the posterior distribution of $\theta$. Thanks to the Bernoulli assumption, these samples are samples from the posterior distribution of the expected value of $y_{\text{new}}$ as well as from the posterior distribution of the probability that $y_{\text{new}} = 1 | X_{\text{new}}, y, X$. Note that we are not using the posterior predictive distribution to do this, just the posterior distribution itself. Your confusion stems from the fact that you tried to work with the posterior predictive distribution of $y_{\text{new}}$, which, for the problem you faced, was a step too far. The samples of $\theta$ thus generated can be used in the usual MCMC ways in place of the full posterior distribution for $\theta$ or as inputs to a smoothing function of some sort (for calculating a smooth version of the posterior.) Either way, the sample-based posterior distribution contains the information you need to find (estimates of) the expected value of $p(y_{\text{new}} = 1| X_{\text{new}}, y, X)$, (estimates of) quantiles of same, etc. For example, the posterior mean of $p(y_{\text{new}} = 1| X_{\text{new}}, y, X)$ is just the mean of the sampled $\theta$ values, the 5th percentile is just the 5th percentile of the sampled $\theta$ values, etc.
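A minimal Python sketch of this recipe, with simulated stand-ins for the posterior draws of $\beta$ and the new design row (in practice both would come from your sampler and your data):

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for what the question assumes you already have: posterior draws of
# beta (one row per MCMC draw) and a design row X_new for the new observation.
beta_samples = rng.normal(loc=[0.5, -1.2], scale=0.3, size=(4000, 2))
X_new = np.array([1.0, 0.8])                 # intercept plus one covariate value

# Posterior samples of theta = P(y_new = 1 | X_new, y, X): the inverse logit of
# X_new . beta, evaluated at every posterior draw of beta.
eta = beta_samples @ X_new
theta_samples = 1.0 / (1.0 + np.exp(-eta))

print("posterior mean of theta:", theta_samples.mean())
print("5th and 95th percentiles:", np.percentile(theta_samples, [5, 95]))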
54,203
Usefulness of convexity of linear regression when there is no closed form solution
When $(X^TX)$ is not invertible there is not one solution but infinitely many: an affine subspace of them. But they are still closed-form solutions in a way: they are the solutions of the linear system $(X^TX)\beta=X^Ty$. Solving this system is, I think, not fundamentally more complicated than inverting a matrix. It is also possible to solve it with a convex optimization algorithm. But then there is not a single minimum, since the function reaches a constant minimum on an affine subspace: imagine a straight line as the bottom of a valley (a straight river). The function is not strictly convex, just convex. I think most convex optimization algorithms will converge to one of the solutions, i.e. finish at some point of the affine subspace. So the case where the matrix is not invertible is not all that special in terms of matrix methods versus convex optimization algorithms. It is just that linear regression admits a simple special solution via matrices or linear systems, while general problems do not, and there you have to find the minimum with an iterative method: an optimization algorithm. Note that there are cases where, in spite of the apparent simplicity, inverting the matrix has a much higher algorithmic complexity than a reasonably precise convex optimization algorithm. This typically happens when there are lots of features (hundreds to millions). That is why people use convex optimization methods even for linear regression. Finally, when the matrix is not invertible, it is usually because there are not enough data to estimate $\beta$ with realistic precision: the solution is extremely over-fitted. People then use ridge regularization. The solution becomes $\beta=(X^TX+\lambda I)^{-1}X^Ty$, and the matrix $(X^TX+\lambda I)$ is always invertible for $\lambda>0$.
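A small numpy sketch of both points, using a deliberately rank-deficient design (all numbers simulated): np.linalg.lstsq returns one point of the affine subspace of solutions (the minimum-norm one), and the ridge formula gives a unique answer for any $\lambda>0$:

import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient design: the third column duplicates the second,
# so X^T X is singular and the least-squares solution is not unique.
X = rng.normal(size=(20, 2))
X = np.column_stack([X, X[:, 1]])
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=20)

# lstsq still returns a solution of the linear system (X^T X) beta = X^T y
# (it picks the minimum-norm point of the affine subspace of solutions).
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regularization restores a unique closed form, since
# X^T X + lambda*I is invertible for any lambda > 0.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("least squares (minimum norm):", beta_ls)
print("ridge:", beta_ridge)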
54,204
How to make predictions with time-dependent covariates with Cox regression
Calculating predicted probabilities using a Cox model

There is a way of obtaining predictions from a Cox model, since the survival probability at time $t$, $S(t)$, depends on your Cox model like so: $S(t) = e^{-H_0(t)\exp(LP)}$. In this formula $H_0(t)$ is the baseline (cumulative) hazard at time $t$, and $LP$ is the linear predictor. If $X_1, X_2, \dots, X_i$ are the predictor variables in the Cox model and $\beta_1, \beta_2, \dots, \beta_i$ are the corresponding coefficients, then the linear predictor is calculated like so: $LP = X_1\beta_1 + X_2\beta_2 + \dots + X_i\beta_i$. You might already be familiar with what a linear predictor is, but I added this for clarity's sake. The baseline hazard is a little harder to obtain. Basically it is a function of time which gives the (accumulated) hazard of an event for an individual whose $LP$ is 0. Due to the way Cox regression works this quantity is not estimated directly (look at this CV question or others, which shed more light on the baseline hazard). In R, however (which I can see the OP uses), there is a function called basehaz() in the survival package, which lets you extract the baseline hazard at a specific timepoint from your model's fit. If you extract this baseline hazard you can complete the formula above for any individual in your data, and for unseen data, as long as you are able to calculate the linear predictor for these individuals. Remember that this formula gives the probability of survival, i.e. of not having had the event by time $t$. If you want the probability of an event by time $t$, simply subtract it from 1: $P(\text{event} \mid LP) = 1 - S(t) = 1 - e^{-H_0(t)\exp(LP)}$. Final remark: note that the basehaz function gives you a baseline hazard at time $t$ based on your specific data. So, as is the case for the coefficients, extrapolating this to new cases might not result in a good fit/prediction, due to overfitting, bias, etc.

Calibration & discrimination

With the predicted probabilities and the event status at one year you can calculate various calibration statistics. For the discrimination statistic (the c-index) there is a specific kind of rank correlation for a censored response variable; in R, the rcorr.cens function from the Hmisc package can provide it. Whether these statistics are the most suitable for your research I cannot say, as that depends mostly on the specifics (e.g. are you building a new and first model for this setting, or are you comparing to previously developed ones?)
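As a minimal sketch of the formula itself, here it is in Python; the numbers are placeholders, and in practice the baseline cumulative hazard and the coefficients would be extracted from your fitted model (e.g. via basehaz() in R):

import numpy as np

# Placeholder values standing in for quantities extracted from a fitted Cox model:
# the baseline cumulative hazard at the time of interest (e.g. from basehaz() in
# R's survival package) and the estimated coefficients.
H0_t = 0.15                              # baseline cumulative hazard at time t
beta = np.array([0.8, -0.5, 0.3])        # Cox coefficients beta_1..beta_3
x_new = np.array([1.0, 2.5, 0.0])        # covariate values of a new individual

lp = x_new @ beta                        # linear predictor LP
surv_t = np.exp(-H0_t * np.exp(lp))      # S(t) = exp(-H0(t) * exp(LP))
event_t = 1.0 - surv_t                   # probability of an event by time t

print(f"S(t) = {surv_t:.3f}, P(event by t) = {event_t:.3f}")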
54,205
How to make predictions with time-dependent covariates with Cox regression
How do I know which time-value to use in my calculations? This isn't a question that your model can answer for you. How did you choose the cutoff points? The result of a Cox-PH model is a survival distribution over time for each patient. Without time dependence, one can look at the overall hazard of each patient to compare relative risks between patients. With time dependence, one could use the cumulative hazard instead. If you are interested in time dependence in your data, I would suggest other approaches (such as random survival forests) rather than a modified Cox-PH, which is ad hoc.
54,206
ARMA - coefficient interpretation
Here Graeme Walsh explains how to better look at the fit than just the coefficients: ARIMA model interpretation. Here David J Harris describes the differences between the model selection criteria AIC, BIC, etc.: AIC, BIC, CIC, DIC, EIC, FIC, GIC, HIC, IIC --- Can I use them interchangeably? And here is an earlier question regarding ARMA models: How do I choose which parameters to estimate in an ARMA model in python statsmodel?

Your question: "I would like to tell about all those numbers as much as possible."

1) The first part is descriptive (like the name and the selected model, which is straightforward) plus some measures such as the AIC, BIC and HQIC, which mix the likelihood with the number of parameters and data points. Various texts explain how they relate to selecting an ARMA model. One example: page 50 of the 9th lecture of this course, http://halweb.uc3m.es/esp/Personal/personas/amalonso/esp/tsa.htm, explains that AIC tends to overfit whereas BIC is consistent because it penalizes extra parameters more, yet AIC is better for modelling a process that is potentially of infinite order (I am not sure exactly what that means, but imagine an order that grows with the sample size).

2) The second part gives the values of the model parameters plus their estimated uncertainty. The standard error relates to an estimate of the error of the estimated value (the difference between the estimate and the underlying "true" model value). This is a frequentist concept: if the experiment were (hypothetically) repeated many times, with the same distribution of the residuals, how would the error be distributed? In simple words: how strong is the influence of the residual error terms on my estimate of the parameters; if I ran another experiment with a similar distribution of the error terms, how different might the parameters I find be, just because of a different set of residual error terms? The error is called a 'standard error' because the estimate of the error is expressed in terms of the standard deviation, i.e. an estimate of the second moment. This error may be calculated by an exact or approximate formula, or it can be estimated computationally by simply simulating a very large number of random data sets.

z is the standardized value of the coefficient, $z=\text{coef}/\text{std.error}$. It is just another way to express the same information and not really new.

$P(>\vert z \vert)$ seems to me to be the p-value of a t-test of the coefficient against zero.

The 95% confidence interval is also no new information: it is the coefficient value +/- a certain multiple of the standard error. The percentage refers to the probability/frequency with which the procedure would include the correct value in this interval.

The typical statistical-interpretation considerations apply to these three quantities. But note the comment by Graeme Walsh in the linked reference that it is not so good to look at all these parameters on an individual basis.

3) The third part, with the complex numbers, relates to the roots of the characteristic equation of the model. This is explained for instance here: https://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model#Specification_in_terms_of_lag_operator. Several theoretical results can be expressed using the characteristic equation; see also the answer by Graeme Walsh in ARIMA model interpretation. In your case you see, for instance, that the modulus is always > 1, and therefore the process is stationary.
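A small Python sketch that reproduces these summary quantities from an assumed coefficient, standard error and AR coefficients (all numbers hypothetical): the z value, its two-sided p-value, the approximate 95% interval, and the root moduli used for the stationarity check:

import numpy as np
from scipy import stats

# Hypothetical values standing in for one row of the summary table.
coef, std_err = 0.75, 0.12

z = coef / std_err                              # the reported z value
p_value = 2 * (1 - stats.norm.cdf(abs(z)))      # two-sided p-value for H0: coefficient = 0
ci = coef + np.array([-1, 1]) * 1.96 * std_err  # approximate 95% confidence interval
print(f"z = {z:.2f}, p = {p_value:.4f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")

# Roots of the AR characteristic polynomial 1 - phi_1*z - phi_2*z^2 = 0 for
# hypothetical AR coefficients; modulus > 1 for every root means stationarity.
phi = [0.75, -0.25]
ar_poly = np.array([1.0] + [-c for c in phi])   # coefficients in increasing powers of z
roots = np.roots(ar_poly[::-1])                 # np.roots expects decreasing powers
print("root moduli:", np.abs(roots))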
54,207
ARMA - coefficient interpretation
You can use the pi weights to help you interpret/decipher/explain the model. The pi weights are obtained by dividing the AR polynomial by the MA polynomial. Presented in this way (i.e. as a pure AR model), the model says the current value is simply a weighted sum of past values plus noise. By the way, your model is in my experience way, and I mean "way", over-parameterized, as a result of a poor identification strategy, i.e. a list-based selection strategy rather than an iterative, self-checking, multi-stage approach. Unusual values need to be dealt with by incorporating pulse indicators, not by torturing the AR and MA coefficients to explain the anomalies. Oftentimes you can also have a non-constant error variance, which can be dealt with via GLS or a suitable power transformation. Finally, there may be break-points in time where the model parameters change. Trying to fit one set of coefficients to heterogeneous data is ill-advised.
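A small Python sketch of that division, computing the pi weights of a hypothetical ARMA(1,1) by expanding $\phi(B)/\theta(B)$ as a power series (the coefficients are made up for the example):

import numpy as np

def series_divide(num, den, n_terms):
    # Coefficients of num(B)/den(B) expanded as a power series in B.
    out = np.zeros(n_terms)
    for j in range(n_terms):
        acc = num[j] if j < len(num) else 0.0
        for k in range(1, min(j, len(den) - 1) + 1):
            acc -= den[k] * out[j - k]
        out[j] = acc / den[0]
    return out

# Hypothetical ARMA(1,1): phi(B) = 1 - 0.7B and theta(B) = 1 + 0.4B.
ar_poly = np.array([1.0, -0.7])
ma_poly = np.array([1.0, 0.4])

# pi(B) = phi(B) / theta(B): the coefficients of the pure AR representation.
pi_weights = series_divide(ar_poly, ma_poly, n_terms=10)
print(np.round(pi_weights, 4))   # 1, -1.1, 0.44, -0.176, ...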
54,208
Choosing a model for my unsupervised machine learning problem
Here are a couple of suggestions, given that Gaussian mixture models work well for you in the absence of outliers.

To increase robustness to outliers, you could use a trimmed estimator for Gaussian mixture models instead of fitting with the standard EM algorithm (a crude illustration of the trimming idea is sketched below). Some relevant papers: Neykov et al. (2007), Robust fitting of mixtures using the trimmed likelihood estimator; Gallegos and Ritter (2009), Trimmed ML Estimation of Contaminated Mixtures.

Instead of Gaussian mixture models, you could also consider Student-t mixture models. These give the same properties you want (e.g. the ability to compute cluster centroids and membership probabilities), and Student-t distributions have heavier tails than Gaussians, which increases robustness to outliers. Some relevant papers: Peel and McLachlan (2000), Robust mixture modelling using the t distribution; Svensen and Bishop (2005), Robust Bayesian Mixture Modelling; Archambeau and Verleysen (2007), Robust Bayesian clustering.
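The following Python sketch is only a crude illustration of the trimming idea (fit, drop the lowest-density fraction of points, refit), not the estimators of the papers above; the data are simulated and the trimming fraction is an arbitrary choice:

import numpy as np
from sklearn.mixture import GaussianMixture

def trimmed_gmm(X, n_components=4, trim_frac=0.1, n_iter=5, seed=0):
    # Crude trimming loop: fit a GMM, drop the lowest-density points, refit.
    keep = np.ones(len(X), dtype=bool)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    for _ in range(n_iter):
        gmm.fit(X[keep])
        log_density = gmm.score_samples(X)            # per-point log-likelihood
        keep = log_density >= np.quantile(log_density, trim_frac)
    return gmm, keep

# Simulated data: four Gaussian blobs plus scattered uniform outliers.
rng = np.random.default_rng(0)
centers = rng.normal(scale=5, size=(4, 4))
blobs = np.vstack([c + rng.normal(size=(200, 4)) for c in centers])
outliers = rng.uniform(-15, 15, size=(80, 4))
X = np.vstack([blobs, outliers])

gmm, keep = trimmed_gmm(X)
print("points kept:", int(keep.sum()), "of", len(X))
print("estimated cluster means:\n", np.round(gmm.means_, 2))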
54,209
Choosing a model for my unsupervised machine learning problem
Look, this might not be the best idea, but I'll suggest it anyway. You say that you are dealing with an unsupervised ML problem, but you also said that:

- you can assume that in this 4-dimensional space, your data is multivariate Gaussian;
- your model works nicely when there aren't many outliers.

If you can afford not assigning a class to an outlier observation, go with DBSCAN as mentioned, or HDBSCAN, where you can even work without the Gaussian assumption.

Now, my idea: based on the fact that your approach works well when there aren't outliers, I suggest you turn this problem into a supervised one, provided that:

- you get a consistent number of clusters every time you run the clustering algorithm;
- the points are consistently assigned to the same clusters.

Once you have a labeled data set (after clustering), you can use the most appropriate classification algorithm to assign new points to those clusters, or work from the predicted probabilities (hint: KNN works wonders in low-dimensional spaces such as yours); see the sketch below. If everything is working, outliers should be classified with a low probability of belonging to any cluster, and you can program your online application to handle these cases.
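Here is a rough Python sketch of that pipeline under the stated conditions; the data, the number of neighbors and the confidence threshold are all made-up choices:

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Simulated 4-D training data standing in for your real features.
X_train = np.vstack([c + rng.normal(size=(300, 4))
                     for c in rng.normal(scale=6, size=(4, 4))])

# Step 1: unsupervised clustering, run once offline, provides pseudo-labels.
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X_train)

# Step 2: treat those labels as a supervised target for a simple classifier.
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, labels)

# Step 3: assign new points online, but flag low-confidence cases as outliers.
X_new = rng.uniform(-20, 20, size=(5, 4))
proba = knn.predict_proba(X_new)
assigned = proba.argmax(axis=1)
is_outlier = proba.max(axis=1) < 0.6   # the 0.6 threshold is an arbitrary choice
print(assigned, is_outlier)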
54,210
Choosing a model for my unsupervised machine learning problem
I am assuming that the required four clusters will be of somewhat similar density. If that is the case, then you can use a density-based clustering approach; DBSCAN has worked well for me. You can find all the clusters you can at training time and put a threshold on the cluster size to keep the outliers from becoming part of your desired clusters. Using this, and some knowledge about what constitutes an outlier, you can look into the clusters and see whether the outliers (~30% seems a bit on the high end) themselves form a cluster. Scikit-learn in Python has a built-in implementation for this: sklearn.cluster.DBSCAN.
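A minimal usage sketch; the data are simulated, and eps and min_samples would need tuning on real data:

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Simulated data: two dense blobs plus scattered noise points.
X = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 3)),
    rng.normal(loc=5.0, scale=0.3, size=(100, 3)),
    rng.uniform(-3, 8, size=(30, 3)),
])

# eps and min_samples are placeholders and need tuning on real data.
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)

# DBSCAN labels the points it considers noise with -1.
print("clusters found:", sorted(set(labels) - {-1}))
print("points flagged as noise:", int(np.sum(labels == -1)))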
54,211
Choosing a model for my unsupervised machine learning problem
Red flags go off in my head when someone says (paraphrasing heavily) "I can assume property X for my algorithm, but this property X need not be present in the data." That immediately points to possible performance issues and/or grounds to re-evaluate the model selection step. Having said that, I haven't worked with GMMs for clustering beyond the base EM algorithm for finding the density parameters of points. But to the best of my understanding, clustering using a GMM is fuzzy, i.e. depending on the implementation you can vary the cluster assignment based on some criterion of interest. Working off the above, I recommend self-organizing maps, which semantically group together similar items; as a second step, clustering can be applied on top of the map. (Reference) Also, when working with mixture models, exploring kernel methods can be quite fruitful, as your decision boundaries can then be more expressive in a higher-dimensional space. Here's a paper that discusses, theoretically, how to make common clustering methods more robust using kernels; it covers data with a mixture of Gaussian densities as well.
54,212
Choosing a model for my unsupervised machine learning problem
Only 3 dimensions and 30% of your data are outliers? That does not seem to fit with what I normally think of as an outlier. Perhaps you can simply transform your variables to log scale or "cap" the outliers so that they are no more than 3 or 4 standard deviations from the mean. This may create clusters of points with no variance, though.
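A short Python sketch of the capping suggestion on simulated data; the 3-SD cutoff is the arbitrary choice being illustrated:

import numpy as np

rng = np.random.default_rng(3)
# Simulated feature matrix with a few extreme values injected.
X = rng.normal(size=(1000, 3))
X[::50] *= 12

mu, sigma = X.mean(axis=0), X.std(axis=0)
X_capped = np.clip(X, mu - 3 * sigma, mu + 3 * sigma)   # cap at +/- 3 SD per column

print("largest |value| before:", float(np.abs(X).max()))
print("largest |value| after: ", float(np.abs(X_capped).max()))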
54,213
Why do independent priors for two random variables not result in an independent joint posterior distribution?
When you have an independent prior on $X$ and $Y$, the posterior can still fail to factor into $X$ and $Y$ pieces, simply because the likelihood doesn't factor into $X$ and $Y$ pieces. It's easy to see that $$ p(x,y \mid D) \propto p(D \mid x,y)\,p_X(x)\,p_Y(y). $$ So in your situation the posterior factors if and only if the likelihood factors.
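To spell out the "if" direction of that claim: if the likelihood factors as $p(D \mid x, y) = g(x)\,h(y)$ for some functions $g$ and $h$, then $$ p(x,y \mid D) \propto \bigl[g(x)\,p_X(x)\bigr]\,\bigl[h(y)\,p_Y(y)\bigr], $$ a product of an $x$-only term and a $y$-only term, so $X$ and $Y$ remain independent a posteriori; when the likelihood does not factor, no such split of the right-hand side is available.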
54,214
Why should the variance equal the mean in Poisson regression?
Because it is a consequence of the functional form of the Poisson distribution that the mean and variance are equal. If this condition is not met (a variance larger than the mean is called overdispersion), the model is inadequate and alternatives such as negative binomial regression may be considered. See: https://en.wikipedia.org/wiki/Poisson_distribution
54,215
Why should the variance equal the mean in Poisson regression?
To see this, let's consider the number of crashes for a specific road characteristic. Let's say this number follows a Poisson distribution with mean $\mu$. This mean is for a certain number of km driven, so let's introduce the rate $\lambda$, say 1 crash per km, and the total number of km driven, $T$. One assumption of the Poisson distribution is that the rate remains constant over the total distance driven; consequently, $\mu=T \times \lambda.$ We divide the total distance driven into $N$ tiny sub-intervals of size $h$, so short that each sub-interval contains at most one crash. Now, the probability of seeing a crash in such a tiny interval is like flipping a coin. We will denote this probability by $p$. This is known as a Bernoulli distribution, and we will take for granted that its variance is $p \times (1 - p)$. On the other hand, we learnt earlier that the rate $\lambda$ is constant, so we expect to see $\lambda \times h$ events in this sub-interval, that is, $p=\lambda \times h$. Now, if we assume that the probability of seeing a crash in this tiny sub-interval is extremely low, then $1 - p$ approaches 1 (e.g. consider $h=\text{1 meter}$). We learnt earlier that the variance of a Bernoulli distribution is $p \times (1-p)$, and if $p$ is extremely low, then $p \times (1-p) \simeq p=\lambda\times h.$ This is quite interesting, because we have just demonstrated that both the mean and the variance are equal to $\lambda \times h$ in this tiny sub-interval. If you extend this approach to $N$ consecutive intervals (like flipping a coin $N$ times), you'll get something called a binomial distribution, and in this case the mean is $Np$ and the variance $Np(1-p)\simeq Np$ when $p$ is small. To get to the point: for $N$ consecutive intervals of size $h$ with extremely low $p$, the mean and the variance are (approximately) equal. Now, in practice, this is usually not the case in observational studies. The reason is that we cannot take into account all the factors driving heterogeneity in the study. For example, the mean number of accidents may differ between daytime and nighttime. Yet, if we were to aggregate both without accounting for the different factors, the marginal variance may become larger than what we expect. This is called overdispersion.
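A quick simulation of this argument in Python (the interval count, crash probability and number of replications are arbitrary): the total count over many tiny low-probability intervals has a sample mean and variance that nearly coincide, both close to $Np$:

import numpy as np

rng = np.random.default_rng(4)

# N tiny sub-intervals, each with a small crash probability p, replicated many
# times; the total count per replication plays the role of the crash count.
N, p, n_reps = 10_000, 2e-4, 50_000
counts = rng.binomial(n=N, p=p, size=n_reps)

print("N * p       :", N * p)
print("sample mean :", round(counts.mean(), 3))
print("sample var  :", round(counts.var(), 3))   # nearly equal to the mean when p is small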
54,216
Why should the variance equal the mean in Poisson regression?
Poisson regression allows for inference of regression parameters - generally a parameter vector, $\mathbf{\theta}$ - under the assumption that the errors in the model are distributed according to a Poisson distribution: $\epsilon \sim \operatorname{Poisson}(\lambda)$. This is appropriate for modelling certain data, a good example being heteroscedastic count data. Summarising an example from German Rodriguez's notes on Poisson Models for Count Data, we can consider a scenario wherein we are regressing the number of children a woman has, $\mathbf{y}$, onto predictors like her age, education level, region of residence, duration of marriage and so on, $\mathbf{X}$. In this situation, where the units of analysis are individual women, we can plot the relationship between the mean and the variance of the outcome/target variable $\mathbf{y}$. Clearly, the assumption of constant variance is not valid. Although the variance is not exactly equal to the mean, it is not far from being proportional to it. Thus, we conclude that we can do far more justice to the data by fitting Poisson regression models than by clinging to ordinary linear models [where we regress assuming errors distributed according to a singly-parametrised Gaussian]. The question then becomes why the mean, or expectation, of the random error, $\epsilon$, is equal to its variance, i.e. why $\mathbb{E}[\epsilon] = \mathbb{Var}[\epsilon]$ for $\epsilon \sim \operatorname{Poisson}(\lambda)$ in your regression model. To see why these are equal, we can derive them directly.

Expectation of the Poisson errors

From the definition of expectation and the definition of the Poisson distribution: $$\mathrm{E}(X)=\sum_{k \geq 0} k \frac{1}{k !} \lambda^{k} e^{-\lambda}$$ Then: $$ \begin{aligned} \mathrm{E}(X) &=\lambda e^{-\lambda} \sum_{k \geq 1} \frac{1}{(k-1) !} \lambda^{k-1} && \text{as the } k=0 \text{ term vanishes} \\ &=\lambda e^{-\lambda} \sum_{j \geq 0} \frac{\lambda^{j}}{j !} && \text{putting } j=k-1 \\ &=\lambda e^{-\lambda} e^{\lambda} && \text{Taylor series expansion of the exponential function} \\ &=\lambda && \end{aligned} $$

Variance of the Poisson errors

For this we exploit the identity $$\operatorname{var}(X) =\mathrm{E}\left(X^{2}\right)-(\mathrm{E}(X))^{2}.$$ Starting with the computation of $\mathrm{E}\left(X^{2}\right)$ we have: $$ \begin{aligned} \mathrm{E}\left(X^{2}\right) & =\sum_{k \geq 0} k^{2} \frac{1}{k !} \lambda^{k} e^{-\lambda} && \text{definition of the Poisson distribution} \\ & =\lambda e^{-\lambda} \sum_{k \geq 1} k \frac{1}{(k-1) !} \lambda^{k-1} && \text{change of limit: the term is zero when } k=0 \\ & =\lambda e^{-\lambda}\left(\sum_{k \geq 1}(k-1) \frac{1}{(k-1) !} \lambda^{k-1}+\sum_{k \geq 1} \frac{1}{(k-1) !} \lambda^{k-1}\right) && \\ & =\lambda e^{-\lambda}\left(\lambda \sum_{k \geq 2} \frac{1}{(k-2) !} \lambda^{k-2}+\sum_{k \geq 1} \frac{1}{(k-1) !} \lambda^{k-1}\right) && \text{change of limit: the term is zero when } k-1=0 \\ & =\lambda e^{-\lambda}\left(\lambda \sum_{i \geq 0} \frac{1}{i !} \lambda^{i}+\sum_{j \geq 0} \frac{1}{j !} \lambda^{j}\right) && \text{putting } i=k-2,\ j=k-1 \\ & =\lambda e^{-\lambda}\left(\lambda e^{\lambda}+e^{\lambda}\right) && \text{Taylor series expansion of the exponential function} \\ & =\lambda(\lambda+1) && \\ & =\lambda^{2}+\lambda && \end{aligned} $$ Then, putting this together: $$\begin{aligned} \operatorname{var}(X) &=\mathrm{E}\left(X^{2}\right)-(\mathrm{E}(X))^{2} \\ &=\lambda^{2}+\lambda-\lambda^{2} \\ &=\lambda \end{aligned}$$ You can see that the equality of the expectation (or mean) and the variance emerges from the definition of the Poisson distribution (which, by the way, arises as a limiting case of the binomial distribution when $n \rightarrow \infty$ with $np$ held fixed, so that $p \rightarrow 0$). Both of the above proofs, for the expectation and the variance, are available on ProofWiki.
54,217
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0
The Wald statistic for testing the null hypothesis is: $$W = \frac{\hat{\theta} - \theta_0}{se(\hat{\theta})}$$ Here $\hat{\theta}$ is the point estimate $b$: 0.20483, and $se(\hat{\theta})$ is the SE: 0.06723, which describes the approximate sampling distribution of the test statistic when the null hypothesis is true. In this case the null hypothesis says $\theta_0 = 0.069$. So, putting all the pieces together, a p-value is obtained with: pt(abs({0.20483-0.069}/0.06723), lower.tail=F, df=9)*2 which gives the same result as Matthew Drury's answer of: > pt(abs({0.20483-0.069}/0.06723), lower.tail=F, df=9)*2 [1] 0.0740768 Note that the degrees of freedom are nrow(d)-2 because you estimate two parameters.
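To make the recipe reusable, here is a small helper (my own addition; the function and argument names are not from the original answer) that wraps the same calculation for any estimate, standard error, null value and degrees of freedom.

# Two-sided Wald-type p-value using a t reference distribution
wald_p_value <- function(estimate, se, null_value = 0, df) {
  t_stat <- (estimate - null_value) / se
  2 * pt(abs(t_stat), df = df, lower.tail = FALSE)
}
wald_p_value(0.20483, 0.06723, null_value = 0.069, df = 9)
# [1] 0.0740768  (the same computation as above)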
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0
The Wald statistic for testing the null hypothesis is: $$W = \frac{\hat{\theta} - \theta_0}{se(\hat{\theta})}$$ your $\hat{\theta}$ is the point estimate $b$: 0.20483 and $se(\hat{\theta})$ is the S
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0 The Wald statistic for testing the null hypothesis is: $$W = \frac{\hat{\theta} - \theta_0}{se(\hat{\theta})}$$ your $\hat{\theta}$ is the point estimate $b$: 0.20483 and $se(\hat{\theta})$ is the SE : 0.06723 used to describe the approximate normal sampling distribution of the test statistic when the null hypothesis is true. In this case, the null says: the $\theta_0 = 0.069$. So putting all the pieces together a p-value is obtained with: pt(abs({0.20483-0.069}/0.06723), lower.tail=F, df=9)*2 which gives the same result as Matthew Drury's answer of: > pt(abs({0.20483-0.069}/0.06723), lower.tail=F, df=9)*2 [1] 0.0740768 Note the degrees of freedom is nrow(d)-2 because you estimate two parameters.
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0 The Wald statistic for testing the null hypothesis is: $$W = \frac{\hat{\theta} - \theta_0}{se(\hat{\theta})}$$ your $\hat{\theta}$ is the point estimate $b$: 0.20483 and $se(\hat{\theta})$ is the S
54,218
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0
How about estimating this model: M = nls(y ~ a * exp((b + 0.069) * x), data = d, start = list(a = 1, b = 0)) So b == 0 in this model if and only if b == 0.069 in the original model. The summary for the new model is: Formula: y ~ a * exp((b + 0.069) * x) Parameters: Estimate Std. Error t value Pr(>|t|) a 0.02980 0.05046 0.59 0.5694 b 0.13583 0.06723 2.02 0.0741 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.314 on 9 degrees of freedom Number of iterations to convergence: 20 Achieved convergence tolerance: 3.168e-06 Which gives you the p-values you want.
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0
How about estimating this model: M = a = nls(y ~ a * exp((b + 0.069) * x), data = d, start = list(a = 1, b = 0)) So b == 0 in this model if and only if b == 0.069 in the original model. The summary f
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0 How about estimating this model: M = nls(y ~ a * exp((b + 0.069) * x), data = d, start = list(a = 1, b = 0)) So b == 0 in this model if and only if b == 0.069 in the original model. The summary for the new model is: Formula: y ~ a * exp((b + 0.069) * x) Parameters: Estimate Std. Error t value Pr(>|t|) a 0.02980 0.05046 0.59 0.5694 b 0.13583 0.06723 2.02 0.0741 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.314 on 9 degrees of freedom Number of iterations to convergence: 20 Achieved convergence tolerance: 3.168e-06 Which gives you the p-values you want.
How to calculate p-value for a parameter given confidence interval when null hypothesis != 0 How about estimating this model: M = a = nls(y ~ a * exp((b + 0.069) * x), data = d, start = list(a = 1, b = 0)) So b == 0 in this model if and only if b == 0.069 in the original model. The summary f
54,219
Why does batch normalization use mini-batch statistics instead of the moving averages during training?
There is a follow-up paper by Sergey Ioffe (i.e. Batch Renormalization) which discusses this issue: https://arxiv.org/abs/1702.03275 A quote from that paper regarding regular batch normalization: It is natural to ask whether we could simply use the moving averages $\mu, \sigma$ to perform the normalization during training, since this would remove the dependence of the normalized activations on the other example in the mini-batch. This, however, has been observed to lead to the model blowing up. As argued in [6, the original batch norm paper], such use of moving averages would cause the gradient optimization and the normalization to counteract each other. In that paper this issue is fixed by introducing an additional affine transformation from the batch statistics to the moving average statistics, the coefficients of which are treated as constants by the optimisation. With this change the moving average can be used both during training and at test time.
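For intuition, here is a tiny numerical sketch (my own illustration, not code from the paper) of the correction Batch Renormalization applies: the batch-normalized activation is rescaled by r = sd_batch / sd_moving and shifted by d = (mu_batch - mu_moving) / sd_moving, with r and d treated as constants in the backward pass. The moving-average values below are made-up numbers, and epsilon, the clipping of r and d, and the learnable gamma/beta are omitted.

# Minimal sketch of the Batch Renormalization correction for one scalar activation
x <- rnorm(32, mean = 2, sd = 3)    # activations in one mini-batch
mu_b  <- mean(x); sd_b  <- sd(x)    # mini-batch statistics
mu_ma <- 1.8;     sd_ma <- 2.5      # running (moving-average) statistics, assumed values

r <- sd_b / sd_ma                   # treated as a constant during backpropagation
d <- (mu_b - mu_ma) / sd_ma         # treated as a constant during backpropagation

x_hat <- (x - mu_b) / sd_b * r + d  # numerically equals (x - mu_ma) / sd_ma
range(x_hat - (x - mu_ma) / sd_ma)  # ~0: the normalization now matches the moving statistics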
Why does batch normalization use mini-batch statistics instead of the moving averages during trainin
There is a follow-up paper by Sergej Ioffe (i.e. Batch Renormalization) which discusses this issue: https://arxiv.org/abs/1702.03275 A quote from that paper regarding regular batch normalization: It
Why does batch normalization use mini-batch statistics instead of the moving averages during training? There is a follow-up paper by Sergey Ioffe (i.e. Batch Renormalization) which discusses this issue: https://arxiv.org/abs/1702.03275 A quote from that paper regarding regular batch normalization: It is natural to ask whether we could simply use the moving averages $\mu, \sigma$ to perform the normalization during training, since this would remove the dependence of the normalized activations on the other example in the mini-batch. This, however, has been observed to lead to the model blowing up. As argued in [6, the original batch norm paper], such use of moving averages would cause the gradient optimization and the normalization to counteract each other. In that paper this issue is fixed by introducing an additional affine transformation from the batch statistics to the moving average statistics, the coefficients of which are treated as constants by the optimisation. With this change the moving average can be used both during training and at test time.
Why does batch normalization use mini-batch statistics instead of the moving averages during trainin There is a follow-up paper by Sergej Ioffe (i.e. Batch Renormalization) which discusses this issue: https://arxiv.org/abs/1702.03275 A quote from that paper regarding regular batch normalization: It
54,220
How can I test if my observed PDF follows a binomial distribution?
Please note that this question is duplicated, and there is a better answer by Glen_b over here. Here is an example using the R package fitdistrplus. The simulated data is strictly binomial, and hence we should fail to reject the null hypothesis. require("fitdistrplus") set.seed(10) n = 25 size = 27 prob = .4 data = rbinom(n, size = size, prob = prob) fit = fitdist(data = data, dist="binom", fix.arg=list(size = size), start=list(prob = 0.1)) summary(fit) Fitting of the distribution ' binom ' by maximum likelihood Parameters : estimate Std. Error prob 0.3822225 0.01870338 Fixed parameters: value size 27 Loglikelihood: -52.24948 AIC: 106.499 BIC: 107.7178 plot(fit) This estimation of the probability of success ($\Pr(S) = 0.38$) is very close to $\Pr(S) =0.4$ set up when the data was generated. However, you are not just estimating the parameter, but also the distribution, as Glen_b explains on the parallel post: ... not only are you estimating parameters (which may be able to be dealt with - some tests adapt to that more or less easily) but frequently you're also selecting between several choices of distributional model. You may want, therefore, to contrast the result with other distributions. For example here is the output selecting a Poisson distribution: fit2 = fitdist(data, dist = "pois") summary(fit2) plot(fit2) Fitting of the distribution ' pois ' by maximum likelihood Parameters : estimate Std. Error lambda 10.32 0.6424951 Loglikelihood: -55.95267 AIC: 113.9053 BIC: 115.1242 Predictably, the AIC increases: we have set up the data as binomial, so it would be expected that the better fitting distribution (lower AIC) is binomial, and not Poisson. Here are the corresponding plots: Notice that we have to supply the probability of success to then estimate the goodness of fit. In the simulation above, we asked fitdist() to start the MLE search from a value of $\small \Pr(\text{success})=0.1$. If we were to run a chi square goodness of fit test we could try several values. Here is a simulation with the same dataset as above: par(mfrow = c(1,3)) obs.counts = table(factor(data, levels = 0:27)) # Tabulated results of dataset plot(obs.counts,"h", col = "turquoise", main = "Obs'd counts", cex.main = .9) values = seq(0.3, 0.7, 0.1) # P(S) chosen (.3,.4,.5,.6,.7) targ.freq = matrix(0, length(values), size + 1) # Starting an empty matrix # PMF for different p (Prob) values: for(i in 1:length(values)) targ.freq[i,] = dbinom(0:size, size = size, prob = values[i]) plot(targ.freq[1,], type = "l", col = "darkblue", main = "Exp'd freq", cex.main = .9, ylab = 'Freq', xlab="") for(i in 1:nrow(targ.freq)) lines(targ.freq[i,], col = "darkblue") for(i in 1:nrow(targ.freq)) text(which.max(targ.freq[i,]), max(targ.freq[i,] + .005), values[i], col= "darkblue") exp.counts = matrix(0, length(values), size + 1) # Starting empty matrix # Calculating expected counts... for(i in 1:nrow(targ.freq)) exp.counts[i,] = round(targ.freq[i,] * length(data)) colnames(exp.counts) = 0:size vec = c(1,3,5) matplot(t(exp.counts)[ ,vec],type="h",col=c(2:nrow(exp.counts)+1),lty=1, ylab ="Exp.
counts", main = "Exp counts 3 Pr(S)") lab = values[c(1,3,5)] x.positions = c(7,15,22) for(i in 1:nrow(targ.freq)) text(x.positions[i], max(exp.counts[i,] + .1), lab[i], col= "darkblue") The expected frequencies and counts (and consequentially the fit to the observed data) will change with the different probabilities of success (on the right plot only three of them were plotted): Likewise, the p values of the GOF chi square will show a peak around the probability we set up in the generation of the data: p.values = NULL for(i in 1:nrow(targ.freq)) p.values[i] = chisq.test(obs.counts , p = targ.freq[i,])$p.value par(mfrow = c(1,1)) plot(values, p.values, type = "h", main = "Chi square p values", cex.main = .8)
How can I test if my observed PDF follows a binomial distribution?
Please note that this question is duplicated, and there is a better answer by Glen_b over here. Here is an example using the the R package fitdistrplus. The simulated data is strictly binomial, and h
How can I test if my observed PDF follows a binomial distribution? Please note that this question is duplicated, and there is a better answer by Glen_b over here. Here is an example using the the R package fitdistrplus. The simulated data is strictly binomial, and hence we should fail to reject the null hypothesis. require("fitdistrplus") set.seed(10) n = 25 size = 27 prob = .4 data = rbinom(n, size = size, prob = prob) fit = fitdist(data = data, dist="binom", fix.arg=list(size = size), start=list(prob = 0.1)) summary(fit) Fitting of the distribution ' binom ' by maximum likelihood Parameters : estimate Std. Error prob 0.3822225 0.01870338 Fixed parameters: value size 27 Loglikelihood: -52.24948 AIC: 106.499 BIC: 107.7178 plot(fit) This estimation of the probability of success ($\Pr(S) = 0.38$) is very close to $\Pr(S) =0.4$ set up when the data was generated. However, you are not just estimating the parameter, but also the distribution, as Glen_b explains on the parallel post: ... not only are you estimating parameters (which may be able to be dealt with - some tests adapt to that more or less easily) but frequently you're also selecting between several choices of distributional model. You may want, therefore, to contrast the result with other distributions. For example here is the output selecting a Poisson distribution: fit2 = fitdist(data, dist = "pois") summary(fit2) plot(fit2) Fitting of the distribution ' pois ' by maximum likelihood Parameters : estimate Std. Error lambda 10.32 0.6424951 Loglikelihood: -55.95267 AIC: 113.9053 BIC: 115.1242 Predictably, the AIC increases: we have set up the data as binomial, so it would be expected that the better fitting distribution (lower AIC) is binomial, and not Poisson. Here are the corresponding plots: Notice that we have to supply the probability of success to then estimate the goodness of fit. In the simulation above, we asked fitdistrplus() to start estimating MLE from a value of $\small \Pr(\text{success})=0.1$. If we were to run a chi square goodness of fit test we could try several values. Here is a simulation with the same dataset as above: par(mfrow = c(1,3)) obs.counts = table(factor(data, levels = 0:27)) # Tabulated results of dataset plot(obs.counts,"h", col = "turquoise", main = "Obs'd counts", cex.main = .9) values = seq(0.3, 0.7, 0.1) # P(S) chosen (.3,.4,.5,.6,.7) targ.freq = matrix(0, length(values), size + 1) # Starting an empty matrix # PMF for different p (Prob) values: for(i in 1:length(values)) targ.freq[i,] = dbinom(0:size, size = size, prob = values[i]) plot(targ.freq[1,], type = "l", col = "darkblue", main = "Exp'd freq", cex.main = .9, ylab = 'Freq', xlab="") for(i in 1:nrow(targ.freq)) lines(targ.freq[i,], col = "darkblue") for(i in 1:nrow(targ.freq)) text(which.max(targ.freq[i,]), max(targ.freq[i,] + .005), values[i], col= "darkblue") exp.counts = matrix(0, length(values), size + 1) # Starting empty matrix # Calculating expected counts... for(i in 1:nrow(targ.freq)) exp.counts[i,] = round(targ.freq[i,] * length(data)) colnames(exp.counts) = 0:size vec = c(1,3,5) matplot(t(exp.counts)[ ,vec],type="h",col=c(2:nrow(exp.counts)+1),lty=1, ylab ="Exp. 
counts", main = "Exp counts 3 Pr(S)") lab = values[c(1,3,5)] x.positions = c(7,15,22) for(i in 1:nrow(targ.freq)) text(x.positions[i], max(exp.counts[i,] + .1), lab[i], col= "darkblue") The expected frequencies and counts (and consequentially the fit to the observed data) will change with the different probabilities of success (on the right plot only three of them were plotted): Likewise, the p values of the GOF chi square will show a peak around the probability we set up in the generation of the data: p.values = NULL for(i in 1:nrow(targ.freq)) p.values[i] = chisq.test(obs.counts , p = targ.freq[i,])$p.value par(mfrow = c(1,1)) plot(values, p.values, type = "h", main = "Chi square p values", cex.main = .8)
How can I test if my observed PDF follows a binomial distribution? Please note that this question is duplicated, and there is a better answer by Glen_b over here. Here is an example using the the R package fitdistrplus. The simulated data is strictly binomial, and h
54,221
Is Weibull distribution a exponential family?
The answer is NO. The logarithm of the Weibull density is given by $$ \log f(x) = \log(k/\lambda) + (k-1)\log(x/\lambda) - (x/\lambda)^k $$ where $x>0$, $k>0$ (shape parameter), $\lambda>0$ (scale parameter). The problem is the last term. If $k$ were known (prespecified), this would be a one-parameter exponential family. So one could say (maybe) that the two-parameter Weibull family is an (infinite) union of one-parameter exponential families, but whether that is of any help I do not know. To see this, note that the general form of the multi-parameter exponential family is $$ f(x;\theta) = h(x) \exp\left( \sum_{i=1}^s \eta_i(\theta) T_i(x) - A(\theta) \right) $$ and we can see that the parameter functions $\eta_i(\theta)$ and the data functions $T_i(x)$ always combine multiplicatively, which does not happen for the last term in the Weibull formula above.
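To make the fixed-$k$ case explicit (my own completion of the argument, not part of the original answer), the density can be rearranged into canonical one-parameter exponential-family form: $$ f(x;\lambda) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^k} = \underbrace{k\,x^{k-1}}_{h(x)} \exp\Big( \underbrace{-\lambda^{-k}}_{\eta(\lambda)}\,\underbrace{x^{k}}_{T(x)} - \underbrace{k\log\lambda}_{A(\lambda)} \Big), $$ so with $k$ fixed we obtain a one-parameter exponential family with sufficient statistic $x^k$ (consistent with the other answer to this question). When $k$ is itself a parameter, the term $-\lambda^{-k}x^k$ cannot be split into a fixed, finite sum of products of a parameter function and a data function, which is exactly the obstruction described above.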
Is Weibull distribution a exponential family?
The answer is NO. The logarithm of the weibull density is given by $$ \log f(x) = \log(k/\lambda) + (k-1)\log(x/\lambda) - (x/\lambda)^k $$ where $x>0$, $k>0$ (shape parameter), $\lambda>0$ (scale
Is Weibull distribution a exponential family? The answer is NO. The logarithm of the Weibull density is given by $$ \log f(x) = \log(k/\lambda) + (k-1)\log(x/\lambda) - (x/\lambda)^k $$ where $x>0$, $k>0$ (shape parameter), $\lambda>0$ (scale parameter). The problem is the last term. If $k$ were known (prespecified), this would be a one-parameter exponential family. So one could say (maybe) that the two-parameter Weibull family is an (infinite) union of one-parameter exponential families, but whether that is of any help I do not know. To see this, note that the general form of the multi-parameter exponential family is $$ f(x;\theta) = h(x) \exp\left( \sum_{i=1}^s \eta_i(\theta) T_i(x) - A(\theta) \right) $$ and we can see that the parameter functions $\eta_i(\theta)$ and the data functions $T_i(x)$ always combine multiplicatively, which does not happen for the last term in the Weibull formula above.
Is Weibull distribution a exponential family? The answer is NO. The logarithm of the weibull density is given by $$ \log f(x) = \log(k/\lambda) + (k-1)\log(x/\lambda) - (x/\lambda)^k $$ where $x>0$, $k>0$ (shape parameter), $\lambda>0$ (scale
54,222
Is Weibull distribution a exponential family?
The two parameter Weibull distribution (with $k$ and $\lambda$ as described on wikipedia) is not an exponential family. However, if you fix $k$ to anything, then it is an exponential family having sufficient statistics $x^k$ on support $[0, \infty)$.
Is Weibull distribution a exponential family?
The two parameter Weibull distribution (with $k$ and $\lambda$ as described on wikipedia) is not an exponential family. However, if you fix $k$ to anything, then it is an exponential family having su
Is Weibull distribution a exponential family? The two parameter Weibull distribution (with $k$ and $\lambda$ as described on wikipedia) is not an exponential family. However, if you fix $k$ to anything, then it is an exponential family having sufficient statistics $x^k$ on support $[0, \infty)$.
Is Weibull distribution a exponential family? The two parameter Weibull distribution (with $k$ and $\lambda$ as described on wikipedia) is not an exponential family. However, if you fix $k$ to anything, then it is an exponential family having su
54,223
Scaling step in Baum-Welch algorithm
Background and Notation Let $x_1, \ldots, x_T$ be the observations, and $z_1, \ldots, z_T$ be the hidden states. The forward-backward recursions are written in terms of $\alpha(z_n)$ and $\beta(z_n)$. $\alpha(z_n) = p(x_1,\ldots,x_n,z_n)$ is the joint distribution of all the heretofore observed data and the most recent state. As $n$ gets large, this gets very small. This is just because of simple properties of probabilities. Probabilities of subsets get smaller. This is axiomatic. $\beta(z_n) = p(x_{n+1},\ldots,x_T|z_n)$ is the probability of all the future data given the current state. This gets small as $n$ goes down, for the same reasons as above. Forward Algorithm You can see that both of these things become very small, and hence cause numerical issues, whenever you have a non-tiny dataset. So instead of using the forward recursion $$ \alpha(z_n) = p(x_n|z_n) \sum_{z_{n-1}}\alpha(z_{n-1})p(z_n|z_{n-1}) $$ use the filtering recursion: $$ p(z_n|x_1,\ldots,x_n) = \frac{p(x_n|z_n)\sum_{z_{n-1}} p(z_n|z_{n-1})p(z_{n-1}|x_1,\ldots,x_{n-1})}{p(x_n|x_1,\ldots,x_{n-1})}. $$ You can get this algebraically by dividing both sides of the first recursion by $p(x_1,\ldots,x_n)$. But I would program it by keeping track of the filtering distributions, not the $\alpha$ quantities. You also don't need to worry about the normalizing constant usually--the last step you can normalize your probability vector if your state space isn't too large. I am more familiar with the non-HMM state space model literature, and the filtering recursions are probably the most common set of recursions. I have found a few results on the internet where HMM people call this procedure "scaling." So it's nice to find this connection and figure out what all this $\alpha$ stuff is. Backward Algorithm Next, the traditional HMM backward algorithm is stated as $$ \beta(z_n) = \sum_{z_{n+1}}\beta(z_{n+1})p(x_{n+1}|z_{n+1})p(z_{n+1}|z_n). $$ This one is trickier though. I have found a few websites that say to divide both sides by $p(x_1,\ldots,x_n)$ again, and I have also found resources that say to divide both sides by $p(x_{t+1},\ldots,x_T|x_1,\ldots,x_t)$. I'm still working this out, though. Also, I suspect there's another connection between this and the backward recursions in other areas of state space model literature (for marginal and joint smoothing distributions). I'll keep you posted. Edit: It turns out if you multiply both sides of the HMM backward equation above by $p(x_1, \ldots, x_n,z_n)/p(x_1,\ldots,x_T)$ you get the backward equation for the marginal smoothing distribution, which are more common in non-HMM state space models. $$ p(z_n|x_1,\ldots,x_T) = p(z_n|x_1,\ldots,x_n) \sum_{z_{n+1}}\frac{p(z_{n+1}|z_n)}{p(z_{n+1}|x_1,\ldots,x_n)}p(z_{n+1}|x_1,\ldots,x_T). $$ I don't know if this is standard for using Baum-Welch, but it's pretty standard with other state space models. Cool. Edit 2: https://www.springer.com/us/book/9780387402642 define on page 63 the normalized backward function as $$ \hat{\beta}(z_n) = \beta(z_n)\frac{p(x_{1:n})}{p(x_{1:T})} = \frac{p(z_n \mid x_{1:T}) }{p(z_n \mid x_{1:n})}, $$ which, as they mention on page 65, has the interpretation of the ratio of two marginal smoothing distributions.
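As a concrete illustration of the filtering ("scaled forward") recursion described above, here is a minimal R sketch of my own (not code from any particular HMM package). Pi is the initial state distribution, A the transition matrix with A[i, j] = p(z_t = j | z_{t-1} = i), and emission_probs[t, ] holds the local evidence p(x_t | z_t = j) for each state j; all of these are assumed to be supplied by the user.

scaled_forward <- function(Pi, A, emission_probs) {
  Tn <- nrow(emission_probs); K <- length(Pi)
  filt <- matrix(0, Tn, K)                  # filt[t, ] = p(z_t | x_1:t)
  alpha <- Pi * emission_probs[1, ]         # unnormalized first step
  c1 <- sum(alpha)
  filt[1, ] <- alpha / c1
  loglik <- log(c1)
  for (t in 2:Tn) {
    alpha <- (filt[t - 1, ] %*% A) * emission_probs[t, ]  # predict, then update
    ct <- sum(alpha)                                      # = p(x_t | x_1:(t-1))
    filt[t, ] <- alpha / ct
    loglik <- loglik + log(ct)
  }
  list(filtering = filt, loglik = loglik)
}

The normalizing constants ct are exactly the factors that the HMM literature calls the scaling factors, and the sum of their logs gives the log-likelihood of the observed sequence.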
Scaling step in Baum-Welch algorithm
Background and Notation Let $x_1, \ldots, x_T$ be the observations, and $z_1, \ldots, z_T$ be the hidden states. The forward-backward recursions are written in terms of $\alpha(z_n)$ and $\beta(z_n)$.
Scaling step in Baum-Welch algorithm Background and Notation Let $x_1, \ldots, x_T$ be the observations, and $z_1, \ldots, z_T$ be the hidden states. The forward-backward recursions are written in terms of $\alpha(z_n)$ and $\beta(z_n)$. $\alpha(z_n) = p(x_1,\ldots,x_n,z_n)$ is the joint distribution of all the heretofore observed data and the most recent state. As $n$ gets large, this gets very small. This is just because of simple properties of probabilities. Probabilities of subsets get smaller. This is axiomatic. $\beta(z_n) = p(x_{n+1},\ldots,x_T|z_n)$ is the probability of all the future data given the current state. This gets small as $n$ goes down, for the same reasons as above. Forward Algorithm You can see that both of these things become very small, and hence cause numerical issues, whenever you have a non-tiny dataset. So instead of using the forward recursion $$ \alpha(z_n) = p(x_n|z_n) \sum_{z_{n-1}}\alpha(z_{n-1})p(z_n|z_{n-1}) $$ use the filtering recursion: $$ p(z_n|x_1,\ldots,x_n) = \frac{p(x_n|z_n)\sum_{z_{n-1}} p(z_n|z_{n-1})p(z_{n-1}|x_1,\ldots,x_{n-1})}{p(x_n|x_1,\ldots,x_{n-1})}. $$ You can get this algebraically by dividing both sides of the first recursion by $p(x_1,\ldots,x_n)$. But I would program it by keeping track of the filtering distributions, not the $\alpha$ quantities. You also don't need to worry about the normalizing constant usually--the last step you can normalize your probability vector if your state space isn't too large. I am more familiar with the non-HMM state space model literature, and the filtering recursions are probably the most common set of recursions. I have found a few results on the internet where HMM people call this procedure "scaling." So it's nice to find this connection and figure out what all this $\alpha$ stuff is. Backward Algorithm Next, the traditional HMM backward algorithm is stated as $$ \beta(z_n) = \sum_{z_{n+1}}\beta(z_{n+1})p(x_{n+1}|z_{n+1})p(z_{n+1}|z_n). $$ This one is trickier though. I have found a few websites that say to divide both sides by $p(x_1,\ldots,x_n)$ again, and I have also found resources that say to divide both sides by $p(x_{t+1},\ldots,x_T|x_1,\ldots,x_t)$. I'm still working this out, though. Also, I suspect there's another connection between this and the backward recursions in other areas of state space model literature (for marginal and joint smoothing distributions). I'll keep you posted. Edit: It turns out if you multiply both sides of the HMM backward equation above by $p(x_1, \ldots, x_n,z_n)/p(x_1,\ldots,x_T)$ you get the backward equation for the marginal smoothing distribution, which are more common in non-HMM state space models. $$ p(z_n|x_1,\ldots,x_T) = p(z_n|x_1,\ldots,x_n) \sum_{z_{n+1}}\frac{p(z_{n+1}|z_n)}{p(z_{n+1}|x_1,\ldots,x_n)}p(z_{n+1}|x_1,\ldots,x_T). $$ I don't know if this is standard for using Baum-Welch, but it's pretty standard with other state space models. Cool. Edit 2: https://www.springer.com/us/book/9780387402642 define on page 63 the normalized backward function as $$ \hat{\beta}(z_n) = \beta(z_n)\frac{p(x_{1:n})}{p(x_{1:T})} = \frac{p(z_n \mid x_{1:T}) }{p(z_n \mid x_{1:n})}, $$ which, as they mention on page 65, has the interpretation of the ratio of two marginal smoothing distributions.
Scaling step in Baum-Welch algorithm Background and Notation Let $x_1, \ldots, x_T$ be the observations, and $z_1, \ldots, z_T$ be the hidden states. The forward-backward recursions are written in terms of $\alpha(z_n)$ and $\beta(z_n)$.
54,224
Scaling step in Baum-Welch algorithm
I'm not an expert in HMM, but happened to go through the same process of learning and implementing HMM while encountering the same normalization issues. Below is the answer based on my understanding. - Forward algorithm: The forward algorithm calculates $\alpha_t(j) \equiv P(z_t = j | x_{1:t})$, i.e., the probability that the hidden state $z_t$ at time $t$ is $j$, given the observed data points up to time $t$. In vector form: $\alpha_1 \propto \phi_1 \odot \pi$ $\alpha_t \propto \phi_t \odot (A^T \alpha_{t-1})$, for $t=2,\dots,T$ where $\phi_t(j) \equiv P(x_t | z_t = j)$ is the local evidence, $\pi$ the prior distribution over states, and $A$ the transition matrix whose entry $A[i, j]$ corresponds to the transition probability from state i to state j, i.e., $A[i, j] = P(z_{t+1} = j | z_{t} = i)$. Since the above $\alpha_t$'s are not normalized, let's write a (Python) procedure to normalize a vector: import numpy as np def normalize(v): norm = np.sum(v) v = v/norm return (v, norm) With the normalization procedure defined above, we can normalize the $\alpha_t$'s: $\hat{\alpha}_t, \text{norm}_t$ = normalize($\alpha_t$), for $t=1,\dots,T$ - Backward algorithm: The backward algorithm calculates $\beta_t(j) \equiv P(x_{t+1:T} | z_t = j)$, i.e., the likelihood of future evidence given that the hidden state $z_t$ at time $t$ is $j$. In vector form: $\beta_T = 1_{\mathbf{K}}$ $\beta_{t} = A (\phi_{t+1} \odot \beta_{t+1})$, for $t = T-1, \dots, 1$ where $K$ is the number of hidden states. Note that $\beta_t(j)$ is not a probability distribution over states, hence $\sum_j \beta_t(j) \neq 1$. The catch here is that we can normalize $\beta_t$ with the normalization factor we previously used to normalize $\alpha_t$, $\text{norm}_t$ calculated above. $\hat{\beta}_t = \beta_t / \text{norm}_t$, for $t=1,\dots,T$ The rationale is described in the next bullet. - Forward-Backward: In this step, we calculate $\gamma_t(j) \equiv P(z_t=j | x_{1:T})$, i.e., the smoothed marginal given the fully observed sequence of data points. One can show that $\gamma_t(j) \propto \alpha_t(j) \beta_t(j)$. In vector form: $\gamma_t \propto \alpha_t \odot \beta_t \propto \hat{\alpha}_t \odot \hat{\beta}_t$, for $t=1,\dots,T$ Since $\hat{\alpha}_t$ carries the normalization factors $\text{norm}_1, \text{norm}_2, \dots, \text{norm}_t$, and $\hat{\beta}_t$ carries the normalization factors $\text{norm}_T, \text{norm}_{T-1}, \dots, \text{norm}_{t+1}$, all the $\gamma_t$'s share the same overall factor: $\prod_{t=1}^T \text{norm}_t$.
Scaling step in Baum-Welch algorithm
I'm not an expert in HMM, but happened to go through the same process of learning and implementing HMM while encountering the same normalization issues. Below is the answer based on my understanding.
Scaling step in Baum-Welch algorithm I'm not an expert in HMM, but happened to go through the same process of learning and implementing HMM while encountering the same normalization issues. Below is the answer based on my understanding. - Forward algorithm: The forward algorithm calculates $\alpha_t(j) \equiv P(z_t = j | x_{1:t})$, i.e., the probability that the hidden state $z_t$ at time $t$ is $j$, given the observed data points up to time $t$. In vector form: $\alpha_1 \propto \phi_t \odot \pi$ $\alpha_t \propto \phi_t \odot (A^T \alpha_{t-1})$, for $t=2,\dots,T$ Where $\phi_t(j) \equiv P(x_t | z_t = j)$ is the local evidence, $\pi$ the prior distribution over states, and $A$ the transition matrix whose entry $A[i, j]$ corresponds to the transition probability from state i to state j, i.e., $A[i, j] = P(z_{t+1} = j | z_{t} = i)$. Since the above $\alpha_t$'s are not normalized, let's write a (Python) procedure to normalize a vector: def normalize(v): norm = np.sum(v) v = v/norm return (v, norm) With the normalization procedure defined above, we can normalize the $\alpha_t$'s: $\hat{\alpha}_t, \text{norm}_t$ = normalize($\alpha$), for $t=1,\dots,T$ - Backward algorithm: The backward algorithm calculates $\beta_t(j) \equiv P(x_{t+1:T} | z_t = j)$, i.e., the likelihood of future evidence given that the hidden state $z_t$ at time $t$ is $j$. In vector form: $\beta_T = 1_{\mathbf{K}}$ $\beta_{t} = A (\phi_{t+1} \odot \beta_{t+1})$, for $t = T-1, \dots, 1$ Where $K$ is the number of hidden states. Note that $\beta_t(j)$ is not a probability distribution over states, hence $\sum_j \beta_t(j) \neq 1$. The catch here is that we can normalize $\beta_t$ with the normalization factor we previously use to normalize $\alpha_t$, $\text{norm}_t$ calculated above. $\hat{\beta}_t = \beta_t / \text{norm}_t$, for $t=1,\dots,T$ The rationale is described in the next bullet. - Forward-Backward: In this step, we calculate $\gamma_t(j) \equiv P(z_t=j | x_{1:T})$, i.e., the smoothed marginal given the fully observed sequence of data points. One can show that $\gamma_t(j) \propto \alpha_t(j) \beta_t(j)$. In vector form: $\gamma_t \propto \alpha_t \odot \beta_t \propto \hat{\alpha}_t \odot \hat{\beta}_t$, for $t=1,\dots,T$ Since $\hat{\alpha}_t$ carries the normalization factors: $\text{norm}_1, \text{norm}_2, \dots, \text{norm}_t$, and $\hat{\beta}_t$ carries the normalization factors $\text{norm}_T, \text{norm}_{T-1}, \dots, \text{norm}_{t+1}$, therefore for all the $\gamma_t$'s, they all have the same factor: $\prod_{t=1}^T \text{norm}_t$.
Scaling step in Baum-Welch algorithm I'm not an expert in HMM, but happened to go through the same process of learning and implementing HMM while encountering the same normalization issues. Below is the answer based on my understanding.
54,225
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0.99$?
This seems to be discussing some symmetric distribution - so that the 0.01 quantile ($q_{0.01}$) is as far from $19.1$ as the 0.99 quantile ($q_{0.99}$). (I've used a normal density in my image but this argument applies to symmetric distributions more generally.) In the image below we have $P(|X-19.1|\leq a) = 0.98$ (i.e. $a=q_{0.99}-19.1$). We can see that if we drop the absolute-value part in $|X-19.1|\leq a$ (giving $X-19.1\leq a$), we will include everything below $q_{0.01}$. This adds the 1% probability that lies below that first percentile, taking us from $0.98$ to $0.99$. So we have $P(X-19.1\leq a) = 0.99$. Now we can scale the left and right side of that inequality by the same constant without changing the probability (it's like going from saying "The probability that a randomly chosen basketball player's height is below $2\,\text{m}$ is $p$" to saying "The probability that half a randomly chosen basketball player's height is below $1\,\text{m}$ is $p$"). That is we can go from $P(X-19.1\leq a) = 0.99$ to $P(\frac{X-19.1}{17}\leq \frac{a}{17}) = 0.99$; scaling both halves of the inequality by a positive multiplier ($1/17$ in this case) doesn't change anything.
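A quick numerical check of this argument (my own addition), using a standard normal as the symmetric distribution: take a to be the distance from the centre to the 0.99 quantile, and the two probabilities come out as 0.98 and 0.99.

a <- qnorm(0.99)        # distance from the centre (0) to the 0.99 quantile
pnorm(a) - pnorm(-a)    # P(|X| <= a)  -> 0.98
pnorm(a)                # P(X <= a)    -> 0.99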
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0
This seems to be discussing some symmetric distribution - so that the 0.01 quantile ($q_{0.01}$) is as far from $19.1$ as the 0.99 quantile ($q_{0.99}$). (I've used a normal density in my image but th
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0.99$? This seems to be discussing some symmetric distribution - so that the 0.01 quantile ($q_{0.01}$) is as far from $19.1$ as the 0.99 quantile ($q_{0.99}$). (I've used a normal density in my image but this argument applies to symmetric distributions more generally.) In the image below we have $P(|X-19.1|\leq a) = 0.98$ (i.e. $a=q_{0.99}-19.1$). We can see that if we drop the absolute-value part in $|X-19.1|\leq a$ (giving $X-19.1\leq a$), we will include everything below $q_{0.01}$. This adds the 1% probability that lies below that first percentile, taking us from $0.98$ to $0.99$. So we have $P(X-19.1\leq a) = 0.99$. Now we can scale the left and right side of that inequality by the same constant without changing the probability (it's like going from saying "The probability that a randomly chosen basketball player's height is below $2\,\text{m}$ is $p$" to saying "The probability that half a randomly chosen basketball player's height is below $1\,\text{m}$ is $p$"). That is we can go from $P(X-19.1\leq a) = 0.99$ to $P(\frac{X-19.1}{17}\leq \frac{a}{17}) = 0.99$; scaling both halves of the inequality by a positive multiplier ($1/17$ in this case) doesn't change anything.
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0 This seems to be discussing some symmetric distribution - so that the 0.01 quantile ($q_{0.01}$) is as far from $19.1$ as the 0.99 quantile ($q_{0.99}$). (I've used a normal density in my image but th
54,226
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0.99$?
This assumes a normal distribution, and the first relationship amounts to transforming to the standard normal. You can then use 0.99 in the table of the cumulative standard normal distribution to find $a$. The change to 0.99 occurs because the left tail (all values below $-a/1.7$) is left out.
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0
This assumes a normal distribution and the first relationship amounts to transform to the standard normal. You can then use 0.99 in the table of the cumulative standard normal distribution to find $a$
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0.99$? This assumes a normal distribution, and the first relationship amounts to transforming to the standard normal. You can then use 0.99 in the table of the cumulative standard normal distribution to find $a$. The change to 0.99 occurs because the left tail (all values below $-a/1.7$) is left out.
If $P(|X-19.1| \leq a) = 0.98$ then why is it necessarily $P(\frac{X-19.1}{17} \leq \frac{a}{17})= 0 This assumes a normal distribution and the first relationship amounts to transform to the standard normal. You can then use 0.99 in the table of the cumulative standard normal distribution to find $a$
54,227
Can the likelihood function in MLE be equal to zero?
If you have a very inadequate model such that at least one discretely (or continuously) distributed observation has zero probability (or probability density) for any parameter value $\theta\in\Theta$, that is, you essentially observe something that is impossible under that model, then yes, your maximum likelihood would be zero.
Can the likelihood function in MLE be equal to zero?
If you have a very inadequate model such that at least one discretely (or continuously) distributed observation has zero probability (or probability density) for any parameter value $\theta\in\Theta$,
Can the likelihood function in MLE be equal to zero? If you have a very inadequate model such that at least one discretely (or continuously) distributed observation has zero probability (or probability density) for any parameter value $\theta\in\Theta$, that is, you essentially observe something that is impossible under that model, then yes, your maximum likelihood would be zero.
Can the likelihood function in MLE be equal to zero? If you have a very inadequate model such that at least one discretely (or continuously) distributed observation has zero probability (or probability density) for any parameter value $\theta\in\Theta$,
54,228
Can the likelihood function in MLE be equal to zero?
If you observe a sample that has zero probability density under every possible parameter value then the likelihood function is zero over the parameter space. In this case every parameter value is a legitimate value of the MLE (i.e., the MLE is the whole parameter space) and the likelihood is zero at every point that is an MLE. That situation is unusual (and usually entails a misspecified model). The more usual situation is when the likelihood function is positive for at least one parameter value in the parameter space, in which case the likelihood at any MLE point must be positive (proof below). Theorem: Consider a likelihood function $L_\mathbf{x}: \Theta \rightarrow \mathbb{R}$ where we have: $$L_\mathbf{x}(\theta) > 0 \quad \text{for some } \theta \in \Theta.$$ Any point $\hat{\theta} \in \Theta$ that is an MLE of this likelihood function must satisfy $L_\mathbf{x}(\hat{\theta}) > 0$. Proof: We proceed using a proof by contradiction. Suppose, contrary to the theorem that there is a point $\hat{\theta}$ that is an MLE and has $L_\mathbf{x}(\hat{\theta}) \leqslant 0$. Since this point is an MLE it must satisfy: $$L_\mathbf{x}(\hat{\theta}) \geqslant L_\mathbf{x}(\theta) \quad \text{for all } \theta \in \Theta.$$ This implies that $L_\mathbf{x}(\theta) \leqslant L_\mathbf{x}(\hat{\theta}) \leqslant 0$ for all $\theta \in \Theta$ which contradicts the condition on the likelihood in the theorem. By contradiction, this completes the proof. $\blacksquare$
Can the likelihood function in MLE be equal to zero?
If you observe a sample that has zero probability density under every possible parameter value then the likelihood function is zero over the parameter space. In this case every parameter value is a l
Can the likelihood function in MLE be equal to zero? If you observe a sample that has zero probability density under every possible parameter value then the likelihood function is zero over the parameter space. In this case every parameter value is a legitimate value of the MLE (i.e., the MLE is the whole parameter space) and the likelihood is zero at every point that is an MLE. That situation is unusual (and usually entails a misspecified model). The more usual situation is when the likelihood function is positive for at least one parameter value in the parameter space, in which case the likelihood at any MLE point must be positive (proof below). Theorem: Consider a likelihood function $L_\mathbf{x}: \Theta \rightarrow \mathbb{R}$ where we have: $$L_\mathbf{x}(\theta) > 0 \quad \text{for some } \theta \in \Theta.$$ Any point $\hat{\theta} \in \Theta$ that is an MLE of this likelihood function must satisfy $L_\mathbf{x}(\hat{\theta}) > 0$. Proof: We proceed using a proof by contradiction. Suppose, contrary to the theorem that there is a point $\hat{\theta}$ that is an MLE and has $L_\mathbf{x}(\hat{\theta}) \leqslant 0$. Since this point is an MLE it must satisfy: $$L_\mathbf{x}(\hat{\theta}) \geqslant L_\mathbf{x}(\theta) \quad \text{for all } \theta \in \Theta.$$ This implies that $L_\mathbf{x}(\theta) \leqslant L_\mathbf{x}(\hat{\theta}) \leqslant 0$ for all $\theta \in \Theta$ which contradicts the condition on the likelihood in the theorem. By contradiction, this completes the proof. $\blacksquare$
Can the likelihood function in MLE be equal to zero? If you observe a sample that has zero probability density under every possible parameter value then the likelihood function is zero over the parameter space. In this case every parameter value is a l
54,229
Can the likelihood function in MLE be equal to zero?
If you define the MLE as the unique point where the likelihood function attains its maximum, and the parameter space contains more than one point, then the likelihood at the MLE must be non-zero. The reason is that the likelihood is non-negative, so zero is the smallest value it can possibly take. If the MLE had zero likelihood this would lead to a contradiction: if the maximum equals the minimum possible value, then every point attains that same value, so there cannot be a unique maximizing point.
Can the likelihood function in MLE be equal to zero?
When you define the MLE as the unique point where the likelihood function has a maximum, and if the parameter space consists of multiple points then the likelihood in the MLE must be non-zero. The rea
Can the likelihood function in MLE be equal to zero? When you define the MLE as the unique point where the likelihood function has a maximum, and if the parameter space consists of multiple points then the likelihood in the MLE must be non-zero. The reason is that the likelihood must be non-negative and thus zero is the minimum value of the entire range. If the MLE has zero likelihood then this leads to a contradiction. This is because: if the extremum value equals the minimum value of the entire range then there can not be a unique extremum point. Every point will have the same value.
Can the likelihood function in MLE be equal to zero? When you define the MLE as the unique point where the likelihood function has a maximum, and if the parameter space consists of multiple points then the likelihood in the MLE must be non-zero. The rea
54,230
Can the likelihood function in MLE be equal to zero?
To add a concrete example to the answer by @Jarle Tufto: If $X_1, X_2, \dotsc, X_n$ are iid $\mathcal{U}(0, \theta)$, then any $\theta$ less than the maximum observed value gives a zero likelihood, since under this model all observations must be less than (or equal to) $\theta$.
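To see this numerically (my own illustration, not part of the original answer), the $\mathcal{U}(0,\theta)$ likelihood is $\theta^{-n}$ when $\theta \ge \max_i x_i$ and exactly zero otherwise:

unif_lik <- function(theta, x) ifelse(theta >= max(x), theta^(-length(x)), 0)
x <- c(0.3, 1.2, 0.7)
unif_lik(1.0, x)   # 0: theta is below max(x) = 1.2
unif_lik(1.2, x)   # 1.2^(-3), the maximum of the likelihood (the MLE is max(x))
unif_lik(2.0, x)   # positive but smaller than the value at theta = 1.2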
Can the likelihood function in MLE be equal to zero?
To add a concrete example to the answer by @Jarle Tufto: If $X_1, X_2, \dotsc, X_n$ are iid $\mathcal{U}(0, \theta)$, then any $\theta$ lesser than the maximum observed value give a zero likelihood, s
Can the likelihood function in MLE be equal to zero? To add a concrete example to the answer by @Jarle Tufto: If $X_1, X_2, \dotsc, X_n$ are iid $\mathcal{U}(0, \theta)$, then any $\theta$ less than the maximum observed value gives a zero likelihood, since under this model all observations must be less than (or equal to) $\theta$.
Can the likelihood function in MLE be equal to zero? To add a concrete example to the answer by @Jarle Tufto: If $X_1, X_2, \dotsc, X_n$ are iid $\mathcal{U}(0, \theta)$, then any $\theta$ lesser than the maximum observed value give a zero likelihood, s
54,231
Updating q-values in q-learning
That's a good idea. What you are thinking of is called eligibility traces. They often improve convergence speed, but this comes at the cost of significantly greater computation and memory, as well as increased complexity. The Sutton and Barto book describes them in detail.
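For a flavour of what an eligibility-trace update looks like, here is a minimal sketch of my own (not Sutton and Barto's pseudocode): a single Q(lambda)-style step with accumulating traces, where Q and E are state-by-action matrices and all arguments are assumed to be supplied by the caller.

q_lambda_step <- function(Q, E, s, a, r, s_next, alpha, gamma, lambda) {
  delta   <- r + gamma * max(Q[s_next, ]) - Q[s, a]  # one-step TD error
  E[s, a] <- E[s, a] + 1                             # accumulate the trace of the current pair
  Q <- Q + alpha * delta * E                         # spread the error over all traced pairs
  E <- gamma * lambda * E                            # decay all traces
  list(Q = Q, E = E)  # Watkins's variant would also reset E after exploratory actions
}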
Updating q-values in q-learning
That's a good idea. What you are thinking of is called eligibility traces. They often improve convergence speed, but this comes at the cost of significantly greater computation and memory cost, as wel
Updating q-values in q-learning That's a good idea. What you are thinking of is called eligibility traces. They often improve convergence speed, but this comes at the cost of significantly greater computation and memory cost, as well as increased complexity. The Sutton and Barto book describes them in detail.
Updating q-values in q-learning That's a good idea. What you are thinking of is called eligibility traces. They often improve convergence speed, but this comes at the cost of significantly greater computation and memory cost, as wel
54,232
Updating q-values in q-learning
You are right, in some cases it makes sense to do that. What you suggest is Monte Carlo, which is the same as TD($\lambda$) with $\lambda = 1$.
Updating q-values in q-learning
You are right, in some cases it makes sense to do that. What you suggest is Monte Carlo, which is the same as TD($\lambda$) with $\lambda = 1$.
Updating q-values in q-learning You are right, in some cases it makes sense to do that. What you suggest is Monte Carlo, which is the same as TD($\lambda$) with $\lambda = 1$.
Updating q-values in q-learning You are right, in some cases it makes sense to do that. What you suggest is Monte Carlo, which is the same as TD($\lambda$) with $\lambda = 1$.
54,233
Prove that $\Pr\left(A_n\right)=1 \implies \Pr\left(\bigcap_n A_n\right)=1$
This is easier than you think. We have $\Pr(A_n)=1$, which means that $\Pr(A_n^c)=1-\Pr(A_n)=0$, for all $n \geq 1$. Recall that for probability measures we have countable sub-additivity (also referred to as Boole's inequality), i.e. $$\Pr\left( \bigcup_{n=1}^{\infty} B_n \right) \leq \sum_{n=1}^{\infty} \Pr(B_n)$$ for any (measurable) events $B_1, B_2, \ldots $. Therefore, by De Morgan's law, we have $$\Pr\left( \bigcap_{n=1}^{\infty} A_n \right) = 1-\Pr\left( \bigcup_{n=1}^{\infty} A_n^{c} \right) \geq 1- \sum_{n=1}^{\infty} \Pr \left( A_n^c\right) = 1$$ and that does it.
Prove that $\Pr\left(A_n\right)=1 \implies \Pr\left(\bigcap_n A_n\right)=1$
This is easier than you think. We have $\Pr(A_n)=1$ which means that $\Pr(A_n^c)=1-\Pr(A_n)=0$, for all $n \geq 1$. Recall that for probability measures we have countable sub-additivity (also referred
Prove that $\Pr\left(A_n\right)=1 \implies \Pr\left(\bigcap_n A_n\right)=1$ This is easier than you think. We have $\Pr(A_n)=1$, which means that $\Pr(A_n^c)=1-\Pr(A_n)=0$, for all $n \geq 1$. Recall that for probability measures we have countable sub-additivity (also referred to as Boole's inequality), i.e. $$\Pr\left( \bigcup_{n=1}^{\infty} B_n \right) \leq \sum_{n=1}^{\infty} \Pr(B_n)$$ for any (measurable) events $B_1, B_2, \ldots $. Therefore, by De Morgan's law, we have $$\Pr\left( \bigcap_{n=1}^{\infty} A_n \right) = 1-\Pr\left( \bigcup_{n=1}^{\infty} A_n^{c} \right) \geq 1- \sum_{n=1}^{\infty} \Pr \left( A_n^c\right) = 1$$ and that does it.
Prove that $\Pr\left(A_n\right)=1 \implies \Pr\left(\bigcap_n A_n\right)=1$ This is easier than you think. We have $\Pr(A_n)=1$ which means that $\Pr(A_n^c)=1-\Pr(A_n)=0$, for all $n \geq 1$. Recall that for probability measures we have countable sub-additivity (also referred
54,234
Is there a way to determine the important features (weight) for an SVM that uses an RBF kernel?
Unfortunately not. Although SVMs are often interpreted as transforming your features into a high-dimensional space and fitting a linear classifier in the new space, the transformation is implicit and cannot be easily retrieved. In fact, SVMs with the RBF kernel behave more like soft nearest neighbours. To see this, denote by $\{x_i,y_i\}_{i=1}^N$ the training data, and $K(.,.)$ the kernel of choice: in this case $K(x,x')=\exp(-\gamma\|x-x'\|^2)$. Then the SVM prediction for an example $x$ takes the form $$ \mathrm{sign}\left( \sum_{i=1}^N \alpha_i y_i K(x_i, x) + \rho \right), $$ where $\rho$ is the intercept_ and $\alpha_i$ are the dual_coef_ in sklearn (see here). As you can see, the decision function is just a linear combination of training labels $y_i$, where the influence of each training example $x_i$ is determined by its overall importance $\alpha_i$ and its distance from $x$, as given by $K$. Closer points have an exponentially larger effect on the prediction, hence the nearest neighbour analogy. Coming back to your question: maybe it doesn't make so much sense to think in terms of features for SVMs with RBFs (would you ask about features for nearest neighbours?). You may want to have a look at the most influential data points, though, to get a "template" interpretation of your model.
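To make the formula concrete, here is a small sketch of my own (written in R rather than with scikit-learn; all object names are illustrative) showing how the decision function is assembled from the dual coefficients, the support vectors and the intercept. In scikit-learn, dual_coef_ already stores the products $\alpha_i y_i$ for the support vectors.

rbf_kernel <- function(x, z, gamma) exp(-gamma * sum((x - z)^2))

# sv:      matrix of support vectors (one per row)
# alpha_y: vector of alpha_i * y_i (what sklearn calls dual_coef_)
# rho:     the intercept
svm_decision <- function(x, sv, alpha_y, rho, gamma) {
  k <- apply(sv, 1, function(z) rbf_kernel(x, z, gamma))
  sum(alpha_y * k) + rho            # sign() of this is the predicted class
}

Ranking the training points by the size of their contribution alpha_y * k at a given x is one way to see which examples dominate a particular prediction, in line with the "template" interpretation mentioned above.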
Is there a way to determine the important features (weight) for an SVM that uses an RBF kernel?
Unfortunately not. Although SVMs are often interpreted as transforming your features into a high-dimensional space and fitting a linear classifier in the new space, the transformation is implicit and
Is there a way to determine the important features (weight) for an SVM that uses an RBF kernel? Unfortunately not. Although SVMs are often interpreted as transforming your features into a high-dimensional space and fitting a linear classifier in the new space, the transformation is implicit and cannot be easily retrieved. In fact, SVMs with the RBF kernel behave more like soft nearest neighbours. To see this, denote by $\{x_i,y_i\}_{i=1}^N$ the training data, and $K(.,.)$ the kernel of choice: in this case $K(x,x')=\exp(-\gamma\|x-x'\|^2)$. Then the SVM prediction for an example $x$ takes the form $$ \mathrm{sign}\left( \sum_{i=1}^N \alpha_i y_i K(x_i, x) + \rho \right), $$ where $\rho$ is the intercept_ and $\alpha_i$ are the dual_coef_ in sklearn (see here). As you can see, the decision function is just a linear combination of training labels $y_i$, where the influence of each training example $x_i$ is determined by its overall importance $\alpha_i$ and its distance from $x$, as given by $K$. Closer points have an exponentially larger effect on the prediction, hence the nearest neighbour analogy. Coming back to your question: maybe it doesn't make so much sense to think in terms of features for SVMs with RBFs (would you ask about features for nearest neighbours?). You may want to have a look at the most influential data points, though, to get a "template" interpretation of your model.
Is there a way to determine the important features (weight) for an SVM that uses an RBF kernel? Unfortunately not. Although SVMs are often interpreted as transforming your features into a high-dimensional space and fitting a linear classifier in the new space, the transformation is implicit and
54,235
theta parameter in R's LTM package
Theta estimation according to the expected response pattern: Theta <- factor.scores(two_pl) Theta estimation according to the real data response pattern: Theta <- factor.scores(two_pl, method = "EAP", resp.patterns = data)
theta parameter in R's LTM package
Theta estimation according to the expected response pattern: Theta <- factor.scores(two_pl) Theta estimation according to the real data response pattern: Theta <- factor.scores(two_pl, method = "EAP"
theta parameter in R's LTM package Theta estimation according to the expected response pattern: Theta <- factor.scores(two_pl) Theta estimation according to the real data response pattern: Theta <- factor.scores(two_pl, method = "EAP", resp.patterns = data)
theta parameter in R's LTM package Theta estimation according to the expected response pattern: Theta <- factor.scores(two_pl) Theta estimation according to the real data response pattern: Theta <- factor.scores(two_pl, method = "EAP"
54,236
theta parameter in R's LTM package
It's the factor.scores function: WIRStheta <- ltm::factor.scores(two_pl)
theta parameter in R's LTM package
It's the factor.scores function: WIRStheta <- ltm::factor.scores(two_pl)
theta parameter in R's LTM package It's the factor.scores function: WIRStheta <- ltm::factor.scores(two_pl)
theta parameter in R's LTM package It's the factor.scores function: WIRStheta <- ltm::factor.scores(two_pl)
54,237
Random Effects for Mantel Haenszel
A proper random-effects model extension to the standard Mantel-Haenszel procedure is described by van Houwelingen, Zwinderman, and Stijnen (1993). In essence, one can think of the M-H procedure as a model based on the (non-central) hypergeometric distribution (Mantel & Haenszel, 1959). So, using this as the starting point, van Houwelingen and colleagues extend the method by adding a random effect to the model where the 2x2 tables are modeled by non-central hypergeometric distributions. See also Stijnen, Hamza, and Ozdemir (2010). The resulting model can also be thought of as a conditional mixed-effects logistic regression model. The equations get a bit messy, but we can write things pretty compactly if we let $L(\theta_i|a_i, b_i, c_i, d_i)$ denote the likelihood function of a non-central hypergeometric distribution for the $i$th study, where $a_i, b_i, c_i, d_i$ are the 2x2 table counts and $\theta_i$ is the true log odds ratio (see wikipedia for the pmf of the non-central hypergeometric distribution). Now let $f(\theta_i)$ denote the density of a normal distribution with mean $\mu$ and variance $\tau^2$. So the log-likelihood for the random-effects model is given by $$ll = \sum_{i=1}^k \ln \left[\int_{-\infty}^\infty L(\theta_i|a_i, b_i, c_i, d_i) f(\theta_i)d\theta_i\right].$$ There is no closed-form solution to the values of $\mu$ and $\tau^2$ that maximize $ll$, so those values must be obtained numerically. References van Houwelingen, H. C., Zwinderman, K. H., & Stijnen, T. (1993). A bivariate approach to meta-analysis. Statistics in Medicine, 12(24), 2273-2284. Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22(4), 719-748. Stijnen, T., Hamza, T. H., & Ozdemir, P. (2010). Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data. Statistics in Medicine, 29(29), 3046-3067.
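In case a ready-made implementation is helpful: if I recall the interface correctly, the metafor package in R fits exactly this conditional (non-central hypergeometric) random-effects model via rma.glmm with model = "CM.EL"; treat the snippet below as a sketch to be checked against the package documentation, and note that the data frame and column names are placeholders.

library(metafor)
# ai, bi, ci, di are the 2x2 cell counts of each study in the data frame 'dat' (placeholder names)
res <- rma.glmm(ai = ai, bi = bi, ci = ci, di = di, data = dat,
                measure = "OR", model = "CM.EL")  # conditional model, exact likelihood
summary(res)  # the pooled log odds ratio (mu) and tau^2 are estimated by numerical ML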
Random Effects for Mantel Haenszel
A proper random-effects model extension to the standard Mantel-Haenszel procedure is described by van Houwelingen, Zwinderman, and Stijnen (1993). In essence, one can think of the M-H procedure as a m
Random Effects for Mantel Haenszel A proper random-effects model extension to the standard Mantel-Haenszel procedure is described by van Houwelingen, Zwinderman, and Stijnen (1993). In essence, one can think of the M-H procedure as a model based on the (non-central) hypergeometric distribution (Mantel & Haenszel, 1959). So, using this as the starting point, van Houwelingen and colleagues extend the method by adding a random effect to the model where the 2x2 tables are modeled by non-central hypergeometric distributions. See also Stijnen, Hamza, and Ozdemir (2010). The resulting model can also be thought of as a conditional mixed-effects logistic regression model. The equations get a bit messy, but we can write things pretty compactly if we let $L(\theta_i|a_i, b_i, c_i, d_i)$ denote the likelihood function of a non-central hypergeometric distribution for the $i$th study, where $a_i, b_i, c_i, d_i$ are the 2x2 table counts and $\theta_i$ is the true log odds ratio (see wikipedia for the pmf of the non-central hypergeometric distribution). Now let $f(\theta_i)$ denote the density of a normal distribution with mean $\mu$ and variance $\tau^2$. So the log-likelihood for the random-effects model is given by $$ll = \sum_{i=1}^k \ln \left[\int_{-\infty}^\infty L(\theta_i|a_i, b_i, c_i, d_i) f(\theta_i)d\theta_i\right].$$ There is no closed-form solution to the values of $\mu$ and $\tau^2$ that maximize $ll$, so those values must be obtained numerically. References van Houwelingen, H. C., Zwinderman, K. H., & Stijnen, T. (1993). A bivariate approach to meta-analysis. Statistics in Medicine, 12(24), 2273-2284. Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22(4), 719-748. Stijnen, T., Hamza, T. H., & Ozdemir, P. (2010). Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data. Statistics in Medicine, 29(29), 3046-3067.
Random Effects for Mantel Haenszel A proper random-effects model extension to the standard Mantel-Haenszel procedure is described by van Houwelingen, Zwinderman, and Stijnen (1993). In essence, one can think of the M-H procedure as a m
54,238
Random Effects for Mantel Haenszel
Jonathan Deeks and Julian Higgins have a nice document showing all the calculations used in Review Manager. Scroll down to page 8 for DerSimonian and Laird random-effects models that can be used with the Mantel-Haenszel summary models.
54,239
How to Compute Bivariate Empirical Distribution?
By definition, the ECDF $F$ at any location $(x,y)$ counts the data points that lie to the left and beneath $(x,y)$. Specifically, writing $(x_i,y_i), i=1, 2, \ldots, n$ for the data points (which may include duplicates), $$F(x,y) = \frac{1}{n}\times \#\{(x_i, y_i)\mid x_i \le x, \ y_i \le y\}.\tag{1}$$

Equivalently, each data point $(x_i,y_i)$ contributes $1/n$ towards the count at all points lying above it and to its right. Such points form a bi-infinite rectangle in the plane with its lower left corner at $(x_i,y_i)$. Imagine, then, overlaying $n$ such translucent rectangles: the number overlaid at any point $(x,y)$ is the count in $(1)$.

This is better illustrated by showing various configurations of points in the plane, rather than a single configuration. I therefore took all three $x$ values $2,4,8$ and all three $y$ values $12, 32, 36$ and re-matched them using all six possible permutations to produce six datasets. They show all six possible qualitative configurations of the ECDF. Here they are as contour plots, with the data points overlaid in red. The panel at the upper left depicts the data given in the question. The colors graduate from darkest (for a value of $0$) in discrete steps of $1/n=1/3$ up to lightest (a value of $1$). Think of the bi-infinite rectangles as being light blue: where two of them overlap they look light green and where all three overlap they are gold.

These descriptions extend to more than two dimensions (and down to one dimension) with the obvious modifications in the numbers of indexes.
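A small R sketch of definition (1), assuming for illustration the pairing (2,12), (4,32), (8,36) of those x and y values (the actual pairing from the original question is not reproduced here):

x <- c(2, 4, 8); y <- c(12, 32, 36)               # hypothetical pairing of the values
Fhat <- function(xq, yq) mean(x <= xq & y <= yq)  # definition (1)
Fhat(4, 32)          # proportion of points with x_i <= 4 and y_i <= 32

# evaluate on a grid, e.g. to draw contour plots like those described above
xs <- seq(0, 10, by = 0.5); ys <- seq(10, 40, by = 0.5)
z  <- outer(xs, ys, Vectorize(Fhat))
contour(xs, ys, z, levels = (1:3)/3)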
54,240
How to choose the order for polynomial regression?
This question can be generalized to selecting any machine learning hyper-parameter: for example, the number of clusters in K-means, the number of hidden units in a neural network, etc.

At a very high level, there are two approaches (not mutually exclusive; in fact, combining the two would be ideal): data driven and knowledge driven.

Data driven means using data to figure out which setting is best. We usually have a training set and a testing set (there are variations, such as adding a separate validation set or running repeated cross-validation), but the overall idea is to fit the candidate models on the training set, pick the one that does best on the testing set, and make sure the testing set is very close to the production data.

Knowledge driven means using "domain knowledge" to make the decision on parameter tuning. For example, if we are fitting trajectory data and we know from physics that the data should follow a parabola rather than a 5th-order polynomial curve, we would pick a 2nd-order polynomial. Likewise, if we know our data are periodic, we may choose a Fourier expansion instead of polynomials; see this post: What's wrong to fit periodic data with polynomials?

In sum, if we have a lot of data and can make sure the testing set is a fair representation of the production data, the data-driven approach works well. On the other hand, if we have a lot of domain knowledge about the relationship between input and output, the knowledge-driven approach is good. The ideal case combines the two: use what we know about the relationship, and test it carefully on a good testing set.
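As an illustration of the data-driven route, here is a small R sketch that picks the polynomial degree by 10-fold cross-validation; the variable names, the candidate degrees 1 to 8, and the number of folds are arbitrary choices.

set.seed(1)
d <- data.frame(x = x, y = y)          # hypothetical numeric predictor and response
k <- 10
fold <- sample(rep(1:k, length.out = nrow(d)))
cv_mse <- sapply(1:8, function(deg) {
  mean(sapply(1:k, function(j) {
    fit <- lm(y ~ poly(x, deg), data = d[fold != j, ])
    mean((d$y[fold == j] - predict(fit, newdata = d[fold == j, ]))^2)
  }))
})
which.min(cv_mse)    # degree with the smallest estimated out-of-sample error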
54,241
How to choose the order for polynomial regression?
In polynomial regression, the fitting process (for example least squares or gradient descent) finds the coefficients that minimize the cost function for a given order. To pick the order itself, we choose the degree of polynomial for which the variance, as computed by $\frac{S_r(m)}{n-m-1}$, is a minimum, or for which there is no significant decrease in its value as the degree of polynomial is increased.

In the above formula:

$S_r(m)$ = sum of the squares of the residuals for the $m$th-order polynomial

$n$ = number of data points

$m$ = order of polynomial (so $m+1$ is the number of constants of the model)

Refer: https://autarkaw.org/2008/07/05/finding-the-optimum-polynomial-order-to-use-for-regression/
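A short R sketch of this criterion, assuming a hypothetical data frame d with columns x and y and candidate orders 1 to 6:

n <- nrow(d)
crit <- sapply(1:6, function(m) {
  fit <- lm(y ~ poly(x, m), data = d)
  sum(resid(fit)^2) / (n - m - 1)      # S_r(m) / (n - m - 1)
})
crit    # pick the order where this is smallest, or stops decreasing appreciably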
54,242
How to perform Validation on Unsupervised learning?
I realize this comes very late, but perhaps it is still useful for anyone looking into the same subject and coming across this question. I don't believe there is a standard method, as you ask. However, I worked on this about two years ago for my MSc thesis in Statistical Science: https://www.universiteitleiden.nl/binaries/content/assets/science/mi/scripties/statscience/2017-2018/2018_08_27_masterthesis_debakker.pdf. I think Chapter 2.4 (pages 18-30) might be of interest with regard to your question; the following is a summary of that chapter.

I worked out a v-fold cross-validation scheme to optimize a generic value for k, the number of clusters to look for in a data set. I reviewed and used/adapted several existing validation indices to measure "goodness of fit" of a clustering; many exist since, as you pointed out, there is no ground truth in unsupervised learning, so there is no standard way to measure how well a clustering looking for a certain number k of clusters is doing. See also the literature study in my thesis if you want an overview (note it is two years old by now and I have not followed the literature since). A personal favourite is Prediction Strength by Tibshirani and Walther (https://doi.org/10.1198/106186005X59243). In principle, any such cluster-number validation index could be implemented in the framework I designed (see the diagram on page 30 of the thesis). Subsequently I applied this method to a data set I had at hand back then, but that will be of less interest for you, I assume.
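For a flavour of the idea (and emphatically not the method from the thesis), here is a toy R sketch of a split-half stability check in the spirit of prediction strength; the use of kmeans, the adjusted Rand index from the mclust package, the range of k, and the variable names are all arbitrary choices. I believe the fpc package also provides a prediction.strength() function implementing Tibshirani and Walther's actual measure.

set.seed(1)
# d is a hypothetical numeric data matrix
half <- sample(nrow(d), floor(nrow(d) / 2))
near <- function(x, centres)            # index of the nearest centre for each row of x
  apply(x, 1, function(r) which.min(colSums((t(centres) - r)^2)))
stability <- sapply(2:8, function(k) {
  cl1 <- kmeans(d[half, ],  centers = k, nstart = 20)
  cl2 <- kmeans(d[-half, ], centers = k, nstart = 20)
  pred <- near(d[-half, ], cl1$centers) # classify the held-out half with the other fit
  mclust::adjustedRandIndex(pred, cl2$cluster)   # agreement, up to relabelling
})
stability    # larger values suggest values of k that replicate across halves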
54,243
How to perform Validation on Unsupervised learning?
I'm not sure if this will be considered an answer, as it is really a pointer to a possible answer, but at the same time I don't have enough reputation to add it as a comment. So it will go here; maybe someone with more rights can move it to a comment.

I'm struggling with this theme too, and today I found the PhD thesis "Cross-validation for unsupervised learning" by Patrick O. Perry (September 2009, Stanford University). In the abstract the author states:

This thesis discusses some extensions of cross-validation to unsupervised learning, specifically focusing on the problem of choosing how many principal components to keep. We introduce the latent factor model, define an objective criterion, and show how CV can be used to estimate the intrinsic dimensionality of a data set. Through both simulation and theory, we demonstrate that cross-validation is a valuable tool for unsupervised learning.

http://ptrckprry.com/reports/
54,244
Architecture of autoencoders
Are there any examples of papers which used architectures consisting of multiple hidden layers?

Yes. Look for "deep autoencoders", a.k.a. "stacked autoencoders", such as {1}. Hugo Larochelle has a video on them: Neural networks [7.6] : Deep learning - deep autoencoder. Geoffrey Hinton also has one: Lecture 15.2 - Deep autoencoders [Neural Networks for Machine Learning]. Examples of deep autoencoders which don't make use of pretraining: http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders. A good way to obtain good parameters for a stacked autoencoder is to use greedy layer-wise training; e.g., {2} uses a stacked autoencoder with greedy layer-wise training. Note that one can use autoencoders fancier than feedforward fully connected neural networks, e.g. {3}.

References:

{1} Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313, no. 5786 (2006): 504-507. https://scholar.google.com/scholar?hl=en&q=Reducing+the+Dimensionality+of+Data+with+Neural+Networks&btnG=&as_sdt=1%2C22&as_sdtp= ; https://www.cs.toronto.edu/~hinton/science.pdf (~5k citations)

{2} Heydarzadeh, Mehrdad, Mehrdad Nourani, and Sarah Ostadabbas. "In-bed posture classification using deep autoencoders." In Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the, pp. 3839-3842. IEEE, 2016. https://scholar.google.com/scholar?cluster=16153787462804186587&hl=en&as_sdt=0,22

{3} Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu. Conditional Image Generation with PixelCNN Decoders. NIPS 2016. https://arxiv.org/abs/1606.05328 ; http://papers.nips.cc/paper/6527-tree-structured-reinforcement-learning-for-sequential-object-localization.pdf
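For what it's worth, here is a sketch of a deep (multi-hidden-layer) autoencoder using the keras package for R. The 784-dimensional input, the layer sizes, the activations, and the training settings are arbitrary placeholders, and this version trains all layers jointly rather than with the greedy layer-wise pretraining discussed above; it is only meant to show what an architecture with several hidden layers looks like, not to reproduce any of the cited papers.

library(keras)
# x_train: a hypothetical numeric matrix with 784 columns, scaled to [0, 1]
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = 784) %>%  # encoder
  layer_dense(units = 32,  activation = "relu") %>%                     # bottleneck
  layer_dense(units = 128, activation = "relu") %>%                     # decoder
  layer_dense(units = 784, activation = "sigmoid")                      # reconstruction
model %>% compile(optimizer = "adam", loss = "binary_crossentropy")
model %>% fit(x_train, x_train, epochs = 20, batch_size = 256, validation_split = 0.1)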
54,245
What to do if you suspect combinations of two distributions?
Adding to Maarten's answer, there are also the zero-inflated Poisson and zero-inflated negative binomial models. All these have been discussed here in the past.
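A minimal R sketch with the pscl package, assuming a hypothetical data frame df with a count outcome y, a count-model predictor x, and a zero-inflation predictor z:

library(pscl)
zip  <- zeroinfl(y ~ x | z, data = df)                   # zero-inflated Poisson
zinb <- zeroinfl(y ~ x | z, data = df, dist = "negbin")  # zero-inflated negative binomial
summary(zinb)   # count-model and zero-inflation coefficients are reported separately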
54,246
What to do if you suspect combinations of two distributions?
Let's say your variable is money spent on meat. In that case your spike at 0 is the vegetarians. You seem to be fitting a regression model, and not taking the presence of vegetarians into account would be problematic. One possibility for that kind of data is a Heckman selection model.
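A sketch of what that might look like with the sampleSelection package in R, assuming hypothetical variables: buys_meat (the 0/1 selection indicator), spend (observed only for buyers), and covariates income and household_size, with at least one variable appearing only in the selection equation to help identification. I believe the two-formula interface below is correct, but do check the package documentation.

library(sampleSelection)
fit <- selection(buys_meat ~ income + household_size + veg_leaflet,  # selection equation
                 spend ~ income + household_size,                    # outcome equation
                 data = df, method = "2step")
summary(fit)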
54,247
What to do if you suspect combinations of two distributions?
Observing a distribution that actually arises from two or more distinct distributions is very common. In general, you can be observing the same phenomenon for a single population but with several causes contributing to each observation (a convolution of distributions), or you might be observing two different subpopulations in the same graph (a mixture of distributions).

As an example of the first, you might be measuring height for a male population with a ruler while having Parkinson's disease; your observed distribution will then be the convolution of the males' height distribution and your own tremor, both Gaussians centered at the true values. Or you could be perfectly healthy and measure people's height using a sample of both males and females; you will then observe a mixture of two partially overlapping Gaussian distributions.

Whatever your case is, there is no reason to ignore the subsample around zero, unless you think it's noise, and noise you don't want to model.
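If the mixture story is plausible, one option is to model it explicitly instead of discarding the spike. A minimal R sketch with the mixtools package, where x is the hypothetical observed variable:

library(mixtools)
fit <- normalmixEM(x, k = 2)   # two-component normal mixture fitted by EM
fit$lambda   # estimated mixing proportions
fit$mu       # component means
fit$sigma    # component standard deviations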
54,248
What does this Q-Q plot indicate about my data?
The shape of the plot is consistent with a left-skew, possibly bimodal distribution (with a small mode on the left). It is possible that there are two groups with similar spread (such as a mixture of two normals with about the same standard deviation, the smaller subpopulation having a lower mean than the rest). This would suggest the possibility of a missing predictor, which would correspond to the two groups.

However, the following discussion relies on the regression assumption that the conditional mean and spread of the errors are zero and constant respectively, so that we can interpret the QQ plot of residuals as conveying information about the conditional distribution of errors. [Note that interpreting the marginal distribution of the residuals this way makes little sense if the residuals actually come from several different distributions. Other diagnostics, including those relating to other possible predictors, must be considered first.]

Note that there's a "steep part" between the two less steep sections at the left and right, but on either side of that steep part the slope is similar. This suggests a distribution that looks reasonably normal-ish in the center and on the right, and also in the left tail, but with a "gap" of fewer points in between (in the ballpark of -1.3). So the distribution is probably bimodal, with the second peak being a pretty small bump on the left.

You can get a similar appearance by generating data from a normal distribution and leaving out a substantial proportion of points in an interval near -1.3. Simulating ten sets of (originally) 400 values each from a standard normal, with points near -1.3 then having some chance of being omitted, gives on average about 349 points with a somewhat bimodal appearance, and their QQ plots typically look something like yours: points at the left and at the center-and-right seeming to lie near roughly parallel lines, with a steeper section in between (indicating the lower density).
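Here is a small R sketch of that kind of simulation; the exact thinning rule (the width of the interval around -1.3 and the drop probability) is an arbitrary illustration, not the precise recipe behind the plots described above.

set.seed(1)
z <- rnorm(400)
drop <- abs(z + 1.3) < 0.35 & runif(400) < 0.85   # remove most points near -1.3
z <- z[!drop]
qqnorm(z); qqline(z)   # typically two roughly parallel runs with a steeper part between them
hist(z, breaks = 30)   # a dip near -1.3, with a small separated bump in the left tail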
54,249
What does this Q-Q plot indicate about my data?
There are many ways to formally or informally take a sample and check whether it is approximately normal. The pp or qq plots are usually used as exploratory tools; if that is your intent, I would not worry much about error bars. The graph should look roughly like a straight line for normality to be considered a reasonable model. From the circles on the plot, it looks like you have a reasonably large sample size; it would help to tell us what that sample size is. Regarding your data, you should at least believe that it behaves like a random sample from a population, preferably a continuous one.

Departures from the straight line at the extreme ends of the plot can indicate skewness (asymmetry) or kurtosis (heavy tails). The eye test suggests that there is a big departure from normality in the lower tail. In the body and the right tail, the behavior appears to be close to what you would expect from a normal distribution.

You should check out the CV post "How to interpret a QQ plot"; Glen_b has a nice answer there with several plots and their interpretation. I also like the University of Virginia Library article with the title "How to interpret a QQ plot", which you can find with a Google search.
54,250
How to interpret the classification boundary?
The reason is that you are NOT asking the model to provide "a desired boundary"; you are simply asking the model to correctly classify your data. Infinitely many decision boundaries exist that achieve the same classification task with the same accuracy. When we use a neural network, the model can choose whichever one it wants.

In addition, the model does not know the shape of the data (the ground truth / generative model / spiral shape in your example). The model will just select one "working" decision boundary, not one that is "really optimal for the true distribution" (as indicated in your figure 3).

If you want to do something with the decision boundary, please check the support vector machine. In fact, even if you use an SVM, the decision boundary may not be what you expected, because it will maximize the "margin" but still has no idea about the true distribution (spiral or other shape).

As mentioned in the comment, different types of model have different decision boundaries. For example, logistic regression and linear discriminant analysis (LDA) will give a line (or hyperplane in high-dimensional space), and quadratic discriminant analysis (QDA) will give a quadratic curve as the decision boundary.

Finally, my answer to another question gives some examples of different models' decision boundaries: Do all machine learning algorithms separate data linearly?
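A small R illustration of this point: the same two-class data, one linear model and one flexible model, two quite different boundaries. The simulated class rule, the grid, and the use of the e1071 package are arbitrary choices for the sketch.

library(e1071)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- factor(d$x2 > sin(2 * d$x1) + rnorm(200, sd = 0.3))  # hypothetical nonlinear classes

glm_fit <- glm(y ~ x1 + x2, data = d, family = binomial)    # linear boundary
svm_fit <- svm(y ~ x1 + x2, data = d, kernel = "radial")    # flexible boundary

gx <- seq(-3, 3, by = 0.05)
grid <- expand.grid(x1 = gx, x2 = gx)
p_glm <- predict(glm_fit, grid, type = "response")
p_svm <- as.numeric(predict(svm_fit, grid))

plot(d$x1, d$x2, col = d$y, pch = 19)
contour(gx, gx, matrix(p_glm, nrow = length(gx)), levels = 0.5, add = TRUE, lty = 2)  # a line
contour(gx, gx, matrix(p_svm, nrow = length(gx)), levels = 1.5, add = TRUE)           # curved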
54,251
What is central tendency?
You may find it more useful to think of "central tendency" as giving a sense of the distribution's location. This is in contrast to measures of spread (variance, range, etc.), which don't communicate location. From the wikipedia entry for central tendency:

In statistics, a central tendency (or, more commonly, a measure of central tendency) is a central or typical value for a probability distribution. It may also be called a center or location of the distribution.

For many examples, the mean, median or mode will communicate the location of the distribution well. If you want to know how much it costs to buy a houseboat in Amsterdam, for example, knowing the average price, the 50th percentile price, or the most common price will all give you a sense of what a houseboat there costs. Of course, there will be plenty of variability in that distribution, and knowing the mean (or median or mode) won't tell you how much any individual houseboat would cost. But it does give you an idea of the location of the distribution of prices of houseboats on the scale from \$0 upward (e.g. you'll have the sense that it costs more than a cup of coffee and less than buying a restaurant in Tokyo). Even for many discrete variables, the mean can be interesting and useful (for example, you may be curious about the average number of rooms per houseboat even though it's nonsensical to talk about a fraction of a room).

Because the mean is so useful and so widely applicable, I think many people conflate "central tendency" with "average value", but there's no real reason to do that. As you point out, there are plenty of situations where the mean (or median) is an unusual value (multimodal or nonsymmetric distributions) or even an impossible value (in the case of discrete distributions). Happily, there are plenty of ways to communicate location / central tendency. Just pick one that makes sense for your data.
54,252
In what cases is it OK to use categorical predictors with many levels in regression?
Nothing is "always ok", as there are always exceptions. For example, logit and probit models get into trouble when one or more categories of your predictor perfectly predict the outcome. This can easily happen regardless of how large your sample size is. Another case where your model would be somewhat problematic occurs when n is large but the number of observations in one or more categories is very small. This would be problematic when your interest focuses on these small categories.
54,253
In what cases is it OK to use categorical predictors with many levels in regression?
I don't think there is a definite answer. If there are no purely statistical issues (see Maarten Buis' answer), then this is a more theoretical issue. The way I see it is that while many properties are naturally multi-categorical, there is not always a logical reason to make use of all that data. It can make a model cumbersome, and it might be self-defeating.

Let's say we have a variable $x_1$ with $d$ levels. If $x_1$ is a control variable, it might not make a big difference to use it as is (besides being an eyesore). If, however, $x_1$ is an effect that is theoretically interesting, some reduction might be in order. I'll elaborate.

Using $x_1$ as an explanatory variable means that we have $d-1$ categories, each with a coefficient which is the difference between it and the reference category. If we are determined to understand differences between world countries and Japan, then fine, but this conveys little information on the relationships among the other $d-1$ categories themselves. When we are interested in measuring interactions with $x_1$, having many categories makes the results very annoying to interpret.

So oftentimes it would be prudent to think about whether there is logic behind merging categories. Perhaps East Asian countries can go together, maybe EU countries (maybe not). Maybe what's interesting is comparing new customers to non-new ones, rather than to various categories of seniority. Many times clumping categories together will sacrifice specificity but gain clarity, and that's not a bad thing.
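In R, merging levels can be as simple as a named lookup; the country-to-region grouping below is purely hypothetical, as are the data frame and column names.

region_of <- c(Japan = "East Asia", China = "East Asia", Korea = "East Asia",
               France = "EU", Germany = "EU", Spain = "EU",
               Brazil = "Other", Egypt = "Other")
d$region <- factor(region_of[as.character(d$country)])
# then use region instead of country in the regression (far fewer dummy variables)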
54,254
Var self-normalised sampling estimator
This is some work showing the Delta Method for approximating the variance of a ratio.

Let $X_1, \ldots, X_n \overset{iid}{\sim} q(\cdot)$ be samples from your normalized instrumental density $q(\cdot)$. Let $p(\cdot) = C^{-1}p_u(\cdot)$ be your target density, and assume you can only evaluate $p_u$. Call $w_i = w_i(x_i) = p_u(x_i)/q(x_i)$.

The Delta Method is justified with Taylor approximations. Call $A = \frac{1}{n}\sum_i w_i x_i$ and $B=\frac{1}{n}\sum_j w_j$, the numerator and denominator of your expression, and call $\mu_A$ and $\mu_B$ their expected values. That is, $$ \sum_{i=1}^n {w_i x_i \over \sum _{i=1}^n {w_i}} = \frac{A}{B}. $$ The Delta Method takes the Taylor approximation $$ f(A,B) \approx f(\mu_A,\mu_B) + f_{A}(\mu_A,\mu_B)(A-\mu_A) + f_B(\mu_A,\mu_B)(B-\mu_B) $$ and takes the variance on both sides: $$ \text{Var}\left[\frac{A}{B}\right] \approx [f_{A}(\mu_A,\mu_B)]^2\text{Var}[A] + [f_B(\mu_A,\mu_B)]^2\text{Var}[B] + 2f_{A}(\mu_A,\mu_B)f_B(\mu_A,\mu_B)\text{Cov}(A,B). $$ In your case this becomes \begin{align*} &\frac{1}{\mu_B^2}\frac{1}{n}E[(WX - \mu_A)^2] + \frac{\mu_A^2}{\mu_B^4}\frac{1}{n}E[(W - \mu_B)^2] - 2\frac{1}{\mu_B}\frac{\mu_A}{\mu_B^2}\frac{1}{n}E[W^2X] + 2\frac{1}{\mu_B}\frac{\mu_A}{\mu_B^2}\frac{1}{n}E[WX]E[W] \\ &= \frac{1}{\mu_B^2}\frac{1}{n}\left\{E[W^2X^2] + \frac{\mu_A^2}{\mu_B^2}E[W^2] - 2 \frac{\mu_A}{\mu_B}E[W^2X] \right\} \\ &= \frac{1}{\mu_B^2}\frac{1}{n}E\left[\left(XW - \frac{\mu_A}{\mu_B}W\right)^2\right]\\ &= \frac{1}{\mu_B^2}\frac{1}{n}E\left[W^2\left(X - \frac{\mu_A}{\mu_B}\right)^2\right], \end{align*} where an uppercase $W$ (or $X$) denotes a single generic unnormalized weight (or sample). If you plug in the sample estimates for all of the above quantities, you get $$ \frac{1}{n}\frac{\frac{1}{n}\sum_i w_i^2(x_i - A/B)^2 }{B^2 } = \sum_{i=1}^n \left[\frac{w_i}{\sum_j w_j}\right]^2(x_i - A/B)^2. $$ I used this as a reference: http://statweb.stanford.edu/~owen/mc/Ch-var-is.pdf
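In R, the final plug-in formula is a one-liner; here x holds the draws from q and w the unnormalised weights (names are hypothetical).

w_norm  <- w / sum(w)
mu_hat  <- sum(w_norm * x)                     # self-normalised estimate A/B
var_hat <- sum(w_norm^2 * (x - mu_hat)^2)      # delta-method variance estimate
sqrt(var_hat)                                  # its standard error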
54,255
Var self-normalised sampling estimator
A very crude approximation to the variance of the self-normalised importance sampling estimator $$ \hat{\mu}_n = \sum_{i=1}^n {\omega_i x_i \over \sum _{i=1}^n {\omega_i}} $$ is $$ \textrm{Var}_q(\hat{\mu}_n) \approx \textrm{Var}_p(\hat{\mu}_n) (1 + \textrm{Var}_q(W)). $$ where $p$ is the target distribution and $q$ the importance distribution.
54,256
Var self-normalised sampling estimator
To get a more accurate estimate than the Delta Method (shown in answer by @Taylor), use bootstrapping https://en.wikipedia.org/wiki/Bootstrapping_(statistics) and https://www.crcpress.com/An-Introduction-to-the-Bootstrap/Efron-Tibshirani/p/book/9780412042317. As a bonus, that will give you an estimate of the entire distribution. However if you want to get an estimate of variance without (prior to) having any data, then use the Delta Method.
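A minimal R sketch of the bootstrap for this estimator, with x the draws from q and w the unnormalised weights (names and the number of resamples are arbitrary):

snis <- function(x, w) sum(w * x) / sum(w)
boot_est <- replicate(2000, {
  i <- sample(length(x), replace = TRUE)   # resample (x_i, w_i) pairs together
  snis(x[i], w[i])
})
var(boot_est)                          # bootstrap variance estimate
quantile(boot_est, c(0.025, 0.975))    # percentile interval from the whole distribution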
54,257
How do I avoid computationally singular matrices in R?
Removing highly correlated (or identical) variables by hand can work, but:

It can become unfeasible as the number of variables grows too large.

Selecting the variables by hand is purely arbitrary.

With factor variables, it becomes slightly harder to detect correlated variables (unless you look at the predictors with dummy variables).

Singularity can also arise because a variable is a linear combination of other variables, which needs some further preprocessing to detect.

I would instead recommend ridge regression / Tikhonov regularization:

It makes the matrix always invertible by introducing a penalty.

If some of the variables are identical, they will receive the same weight.

It is easily usable (and fast) via the R package glmnet.

The penalization parameter can be selected by cross-validation.
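A minimal glmnet sketch (X must be a numeric matrix and y a numeric vector; both are hypothetical here):

library(glmnet)
cvfit <- cv.glmnet(X, y, alpha = 0)   # alpha = 0 gives the ridge (Tikhonov) penalty
coef(cvfit, s = "lambda.min")         # coefficients at the cross-validated lambda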
54,258
How do I avoid computationally singular matrices in R?
RUser4512 has a good answer (+1). I just want to add some comments on the matrix condition number, which we can use to check for numerical stability issues. In R the function is kappa.

Here is an example in R. We create data with two highly correlated columns, x1 and x2; note they are not identical, just really close. In experiment 1, although the cross-product matrix has a large condition number, R can still solve the system.

> set.seed(0)
> x1=runif(1e3)*1e3
> x2=x1+runif(1e3)*1e-3
> x=cbind(x1,x2)
> y=runif(1e3)
> kappa(t(x) %*% x)
[1] 8.855766e+12
> solve(t(x) %*% x, t(x) %*%y)
         [,1]
x1  -399.9371
x2   399.9375

In experiment 2, we rescale the second column by a factor of 1e-3; the condition number of t(x) %*% x jumps to about 2e18, beyond what double precision can handle, and R produces the error you described.

> x[,2]=x[,2]*1e-3
> kappa(t(x) %*% x)
[1] 2.220277e+18
> solve(t(x) %*% x, t(x) %*%y)
Error in solve.default(t(x) %*% x, t(x) %*% y) :
  system is computationally singular: reciprocal condition number = 4.49945e-19
54,259
Does the proportional hazards assumption still matter if the covariate is time-dependent?
You are still assuming that the effect of a given covariate/factor value is the same at every timepoint; you simply allow the covariate to change its value over time (the change in the log-hazard rate associated with a particular value is still exactly the same across all timepoints). Thus, it does not change the assumption. Or was the presenter perhaps talking about also putting a covariate-by-time (or log(time)) interaction in the model as a time-dependent covariate? If you do that (for all covariates), then you have a model that may approximately mimic a model without such an assumption (a linear interaction cannot fully capture the possibly more complex time-variation in any one dataset, but may be okay for approximately capturing it).
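If the presenter did mean a covariate-by-log(time) interaction, a minimal sketch with R's survival package (assuming a hypothetical data frame dat with columns time, status and a covariate x) would be:
    library(survival)
    fit <- coxph(Surv(time, status) ~ x + tt(x), data = dat,
                 tt = function(x, t, ...) x * log(t))   # tt() adds the x-by-log(time) interaction
    summary(fit)   # a clearly non-zero tt(x) coefficient suggests the effect of x changes over time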
Does the proportional hazards assumption still matter if the covariate is time-dependent?
You are still assuming that the effect of the value at each covariates/factor at each timepoint is the same, you simply allow the covariate to vary its value over time (but the change in the log-hazar
Does the proportional hazards assumption still matter if the covariate is time-dependent? You are still assuming that the effect of the value at each covariates/factor at each timepoint is the same, you simply allow the covariate to vary its value over time (but the change in the log-hazard rate associated with a particular value is still exactly the same across all timepoints). Thus, it does not change the assumption. Or was the presenter perhaps talking about also putting the covariate by time (or log(time)) interaction in the model as a time-dependent covariate? If you do that (for all covariates), then you have a model that might possibly approximate (a linear interaction cannot fully capture the possibly more complex things that may be going on in any one dataset, but may be okay for approximately capturing it) a model that does not make such an assumption.
Does the proportional hazards assumption still matter if the covariate is time-dependent? You are still assuming that the effect of the value at each covariates/factor at each timepoint is the same, you simply allow the covariate to vary its value over time (but the change in the log-hazar
54,260
Does the proportional hazards assumption still matter if the covariate is time-dependent?
I may be wrong but I believe that Björn's answer is not completely correct. The proportional hazards assumption means that the ratio of the hazard for a particular group of observations (determined by the values of the covariates) to the baseline hazard (when all covariates are zero) is constant over time. If there are time-varying covariates this is not true, and therefore the Cox model no longer assumes proportional hazards. Here is a quote I have recently come across from David Collett's book, Modelling Survival Data in Medical Research (2nd ed., 2003, p. 253), that may be helpful: It is important to note that in the model given in equation $h_i(t) = \exp \left\{ \sum_{j=1}^p \beta_j x_{ji}(t) \right\} h_0(t)$, the values of the variables $x_{ji}(t)$ depend on the time $t$, and so the relative hazard $h_i(t)/h_0(t)$ is also time-dependent. This means that the hazard of death at time $t$ is no longer proportional to the baseline hazard, and the model is no longer a proportional hazards model. The accepted answer to this question on CV may also be relevant.
Does the proportional hazards assumption still matter if the covariate is time-dependent?
I may be wrong but I believe that BjΓΆrn's answer is not completely correct. The proportional hazards assumption means that the ratio of the hazard for a particular group of observations (determined by
Does the proportional hazards assumption still matter if the covariate is time-dependent? I may be wrong but I believe that BjΓΆrn's answer is not completely correct. The proportional hazards assumption means that the ratio of the hazard for a particular group of observations (determined by the values of the covariates) to the baseline hazard (when all covariates are zero) is constant over time. If there are time-varying covariates this is not true, and therefore the Cox model no longer assumes proportional hazards. Here is a quote I have recently come across from David Collett's book, Modelling Survival Data in Medical Research (2nd ed., 2003, p. 253), that may be helpful: It is important to note that in the model given in equation $h_i(t) = \exp \left\{ \sum_{j=1}^p \beta_j x_{ji}(t) \right\} h_o(t)$, the values of the variables $x_{ji}(t)$ depend on the time $t$, and so the relative hazard $h_i(t)/h_0(t)$ is also time-dependent. This means that the hazard of death at time $t$ is no longer proportional to the baseline hazard, and the model is no longer a proportional hazards model. The accepted answer to this question on CV may also be relevant.
Does the proportional hazards assumption still matter if the covariate is time-dependent? I may be wrong but I believe that BjΓΆrn's answer is not completely correct. The proportional hazards assumption means that the ratio of the hazard for a particular group of observations (determined by
54,261
Is adjusted R squared score still appropriate when number of regressors is larger than the sample size?
The adjusted $R^2$ value is specifically for linear regression, where it's easy to know the effect of adding many predictors. If you were doing linear regression with more predictors than samples, a linear regression would give $R^2=1$, so you must not be using a linear regression model. That means you can't adjust the $R^2$ figure, regardless of how large your sample size is. If you tried to adjust the $R^2$ value with your figures, you'd notice that you get a value greater than $1$, and this is statistically meaningless. But your question is still relevant if you had used linear regression with more predictors than samples and got $R^2=1$. You'll notice that when $p=n-1$ the adjusted $R^2$ is undefined, and in fact the adjustment isn't valid when $p\geq n-1$.
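A quick numeric check of the "greater than 1" claim, with made-up figures ($R^2 = 0.9$, $n = 50$, $p = 120$):
    R2 <- 0.9; n <- 50; p <- 120
    1 - (1 - R2) * (n - 1) / (n - p - 1)   # about 1.069, i.e. an "adjusted R^2" above 1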
Is adjusted R squared score still appropriate when number of regressors is larger than the sample si
The adjusted $R^2$ value is specifically for linear regression where it's easy to know the effect of adding many predictors. If you were doing linear regression with more predictors than samples a lin
Is adjusted R squared score still appropriate when number of regressors is larger than the sample size? The adjusted $R^2$ value is specifically for linear regression where it's easy to know the effect of adding many predictors. If you were doing linear regression with more predictors than samples a linear regression would give $R^2=1$ so you must not be using a linear regression model. That means you can't adjust the $R^2$ figure regardless of how large your sample size is. If you tried to adjust the $R^2$ value with your figures you'll notice that you get a value greater than $1$ and this is statistically meaningless. But your question is still relevant if you had used linear regression with more predictors than samples and got $R^2=1$. You'll notice that when $p=n-1$ the adjusted $R^2$ is undefined, and in fact the adjustment isn't valid when $p\geq n-1$
Is adjusted R squared score still appropriate when number of regressors is larger than the sample si The adjusted $R^2$ value is specifically for linear regression where it's easy to know the effect of adding many predictors. If you were doing linear regression with more predictors than samples a lin
54,262
Is adjusted R squared score still appropriate when number of regressors is larger than the sample size?
To state notation, let $y$ be the $n$-vector of responses, let $X$ be the ($n\times p$) design matrix and let $\beta$ be the $p$-vector of unknown regression coefficients, with $n$ being the sample size. The well known least squares estimate of $\beta$ is $\hat\beta = (X^TX)^{-1} X^Ty$. The coefficient of determination is $R^2 = 1-\frac{SS_{res}}{SS_{tot}}$, where $SS_{tot}$ is the total sum of squares and $SS_{res}$ is the residual sum of squares. The adjusted $R^2$ is as you wrote. Coming to your question, when $n<p$, $\hat\beta$ is no longer uniquely defined because the inverse of $X^TX$ does not exist. Hence, as long as $n<p$, no matter what algorithm you use to find $\hat\beta$, the latter will always be undefined and arbitrary. Essentially, in this case, the least-squares objective attains its minimum on a whole flat region of $\beta$ values, so infinitely many coefficient vectors fit the data equally well. Consequently, $R^2$ is also arbitrary and therefore meaningless. For this reason, adjusted $R^2$ will be meaningless as well. That's why you obtain such a strange value for the adjusted $R^2$.
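You can see this directly in R with simulated data (my own sketch, using assumed sizes n = 10 observations and p = 20 predictors):
    set.seed(1)
    n <- 10; p <- 20
    X <- matrix(rnorm(n * p), n, p)
    y <- rnorm(n)
    fit <- lm(y ~ X)
    summary(fit)$r.squared        # 1: a perfect but meaningless fit
    sum(is.na(coef(fit)))         # many coefficients cannot even be estimated (NA)
    summary(fit)$adj.r.squared    # NaN (or nonsense): the adjustment breaks down with 0 residual df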
Is adjusted R squared score still appropriate when number of regressors is larger than the sample si
To state notation, let $y$ be the $n$-vector of responses, let $X$ be the ($n\times p$) design matrix and let $\beta$ be the $p$-vector of unknown regression coefficients, with $n$ being the sample si
Is adjusted R squared score still appropriate when number of regressors is larger than the sample size? To state notation, let $y$ be the $n$-vector of responses, let $X$ be the ($n\times p$) design matrix and let $\beta$ be the $p$-vector of unknown regression coefficients, with $n$ being the sample size. The well known least squares estimate of $\beta$ is $\hat\beta = (X^TX)^{-1} X^Ty$. The coefficient of determination is $R^2 = 1-\frac{SS_{res}}{SS_{tot}}$, where $SS_{tot}$ is the total sum of squares and $SS_{res}$ is the residuals sum of squares. The adjusted $R^2$ is as you wrote. Coming to you question, when $n<p$, $\hat\beta$ is not anymore uniquely defined because the inverse of $X^TX$ is not defined. Hence, as far as $n<p$, no matter what algorithm you use to find $\hat\beta$, the latter will always be undefined and arbitrary. Essentially, in this case, the objective function of $\beta$ is a flat surface. Consequently, $R^2$ is also arbitrary and therefore meaningless. For this reason, adjusted $R^2$ will be meaningless as well. That's why you obtain such a strange value for the adjusted $R^2$.
Is adjusted R squared score still appropriate when number of regressors is larger than the sample si To state notation, let $y$ be the $n$-vector of responses, let $X$ be the ($n\times p$) design matrix and let $\beta$ be the $p$-vector of unknown regression coefficients, with $n$ being the sample si
54,263
Is adjusted R squared score still appropriate when number of regressors is larger than the sample size?
The adjusted R-square value is always less than the R-square value when n > p, that is, when the number of observations is greater than the number of parameters.
Is adjusted R squared score still appropriate when number of regressors is larger than the sample si
The adjusted R-square value is always less than R-square when n>p that means number of observation is greater than the number of parameters.
Is adjusted R squared score still appropriate when number of regressors is larger than the sample size? The adjusted R-square value is always less than R-square when n>p that means number of observation is greater than the number of parameters.
Is adjusted R squared score still appropriate when number of regressors is larger than the sample si The adjusted R-square value is always less than R-square when n>p that means number of observation is greater than the number of parameters.
54,264
What is a practical explanation of affine equivariance and why does it matter for a covariance estimator?
I will first recall the property formally: Given an $n$ by $p$, $n>p$ data matrix $X$, an affine equivariant estimator of location and scatter $(m(X), S(X))$ is one for which: $$(0)\quad m(A X)=A m(X)$$ $$(1)\quad S(A X)=A^\top S(X)A$$ for any $p$ by $p$ non-singular matrix $A$. Consider a situation where one would use $(m(X),S(X))$ to compute the statistical distance between a point $x$ and $m(X)$, the center of $X$, in the metric $S(X)$ (assuming $S(X)$ is invertible): $$d(x,m(X), S(X))=\sqrt{(x-m(X))^{\top}S^{-1}(X)(x-m(X))}$$ Affine equivariance of $(m(X),S(X))$ is equivalent ($\Leftrightarrow$) to affine invariance of $d(x,m(X), S(X))$. Affine invariance of $d(x,m(X), S(X))$ means that this measure (of outlyingness of $x$ w.r.t. $X$) will not be affected by the scale and orientation (correlation structure) of the columns of $X$. In many applications equi-/invariance is extremely helpful. The alternative is that one has to run the analysis obtained using the non-equivariant procedure (in this case the OGK) on many transformed versions of $X$: $\{X',X'',\ldots\}$ --each obtained by applying random matrices $\{A', A'',\ldots\}$ to the original data matrix $X$-- in the hope of assessing the sensitivity of the analysis (in your case the observations flagged as outliers) to the coordinate system in which you measure the data $X$. I stress that this sort of sensitivity check is not restricted to robust statistics. For example, PCA analysis is not scale equivariant. When performing PCA, it is prudent to run the PCA analysis on various rescalings of the data to assess the sensitivity of whatever results are found to the original scaling of the data. Likewise, Deep Neural Nets are not rotation equivariant and here too (at least in images and character recognition) it is common to re-run the DNN on rotated copies of the inputs to assess the sensitivity of the results to the orientation of the training data. With equivariant procedures, these particular sensitivity checks are not necessary (for example, the statistical distances w.r.t. the FMCD estimates of location and scatter at $\{X', X'',\ldots\}$ would always be identical).
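A small illustration in R (my own sketch, not from the original answer): the classical mean and covariance are affine equivariant, so the statistical distances above are unchanged by an arbitrary non-singular $A$:
    set.seed(1)
    X <- matrix(rnorm(200), ncol = 2)
    A <- matrix(c(2, 0.5, -1, 3), 2, 2)                  # an arbitrary non-singular p x p matrix
    XA <- X %*% t(A)                                     # transform every row (point) of X by A
    d  <- mahalanobis(X,  colMeans(X),  cov(X))          # squared statistical distances
    dA <- mahalanobis(XA, colMeans(XA), cov(XA))
    all.equal(d, dA)                                     # TRUE: the distances are affine invariant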
What is a practical explanation of affine equivariance and why does it matter for a covariance estim
I will first recall the property formally: Given an $n$ by $p$, $n>p$ data matrix $X$, an affine equivariant estimator of location and scatter $(m(X), S(X))$ is one for which: $$(0)\quad m(A X)=A m(X)
What is a practical explanation of affine equivariance and why does it matter for a covariance estimator? I will first recall the property formally: Given an $n$ by $p$, $n>p$ data matrix $X$, an affine equivariant estimator of location and scatter $(m(X), S(X))$ is one for which: $$(0)\quad m(A X)=A m(X)$$ $$(1)\quad S(A X)=A^\top S(X)A$$ for any $p$ by $p$ non singular matrix $A$. Consider a situation where one would use $(m(X),S(X))$ to compute the statistical distance between a point $x$ and $m(X)$, the center of $X$, in the metric $S(X)$ (assuming $S(X)$ is invertible): $$d(x,m(X), S(X))=\sqrt{(x-m(X))^{\top}S^{-1}(X)(x-m(X))}$$ Affine equivariance of $(m(X),S(X))$ is equivalent ($\Leftrightarrow$) to affine invariance of $d(x,m(X), S(X))$. Affine invariance of $d(x,m(X), S(X))$ means that this measure (of outlyingness of $x$ wrt to $X$) will not be affected by the scale and orientation (correlation structure) of the columns of $X$. I many application equi/invariance is extremely helpful. The alternative is that one has to run the analysis obtained using the non equivariant procedure (in this case the OGK) on many transformed versions of $X$: $\{X',X'',\ldots\}$ --each obtained by applying random matrices $\{A', A'',\ldots\}$ to the original data matrix $X$-- in the hope of assessing the sensitivity of the analysis (in your case the observations flagged as outliers) to the coordinate system in which you measure the data $X$. I stress that this sort of sensitivity check is not restricted to robust statistics. For example, PCA analysis is not scale equivariant. When performing PCA, it is prudent to run the PCA analysis on various rescaling of the data to assess the sensitivity of whatever results are found to the original scaling of the data. Likewise, Deep Neural Nets are not rotation equivariant and here too (at least in images and character recognition) it is common to re-run the DNN of rotated copies of the inputs to assess the sensitivity of the results to the orientation of the training data. With equivariant procedures, these particular sensitivity checks are not necessary (for example, the statistical distances wrt to the FMCD estimates of location and scatter at $\{X', X'',\ldots\}$ would all always be identical).
What is a practical explanation of affine equivariance and why does it matter for a covariance estim I will first recall the property formally: Given an $n$ by $p$, $n>p$ data matrix $X$, an affine equivariant estimator of location and scatter $(m(X), S(X))$ is one for which: $$(0)\quad m(A X)=A m(X)
54,265
R and SAS produce the same test-statistics but different p values for normality tests
The actual Kolmogorov-Smirnov, Anderson-Darling and Cramer-von Mises tests are for completely specified distributions. You're estimating the mean and variance of the residuals in your code so you don't have completely specified distributions, which will make your p-values larger than they should be. There's another test based on estimating parameters and using a Kolmogorov-Smirnov type statistic -- properly called a Lilliefors test; it's no longer distribution free and you need a different distribution for the test statistic depending on which distribution you start with and which parameters you estimate. Lilliefors did the normal and exponential cases. The normal with both parameters estimated case can be done in R using lillie.test in the nortest package. For the other two tests the same comments apply (though approximate adjustments are a little simpler); the versions you're using in goftest are again for completely specified distributions. In the same package I mentioned earlier (nortest) there are versions of the Cramer-von Mises and Anderson-Darling tests for the case of testing normality. If you check the help on those functions, they specify that they're for the composite hypothesis of normality, which is what you seek here. That won't necessarily make the p-values identical across SAS and R (they may not use the same approximations, for example) but if you use the corresponding tests they should be much closer. There's an additional issue in your case -- it appears you're testing residuals (perhaps in an AR, but it doesn't matter for the present point). Even the versions in nortest don't account for the dependence between residuals. They're for independent, identically distributed values from a normal distribution with unspecified mean and variance. Even if you had normal errors, you wouldn't have independence of residuals, and you don't usually have exactly identical distributions. So even if you account for the estimation issue, the tests still won't be exactly right. I don't know what SAS is doing, but my guess is it's probably not accounting for this non-i.i.d. issue either. As a general rule, if you want to test normality I wouldn't use multiple tests (pick one that best identifies the kinds of deviations from normality you most want to pick up), and indeed, I wouldn't use those tests (though the Anderson-Darling is often a pretty decent choice) -- I'd use Shapiro-Wilk or one of the tests related to it. On the other hand if I am trying to assess the suitability of a normality assumption for some model, I wouldn't use a formal hypothesis test at all. The problem is not "are the errors really normal?" (outside of simulated data are they ever actually normal? I seriously doubt it), it's "how much difference does it make?". That's an effect-size question, not a hypothesis testing question.
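A short sketch of the composite-normality versions mentioned above (assuming fit is your fitted model object; names are illustrative):
    # install.packages("nortest")
    library(nortest)
    res <- residuals(fit)
    lillie.test(res)     # Lilliefors: KS-type test with estimated mean and variance
    ad.test(res)         # Anderson-Darling for the composite hypothesis of normality
    cvm.test(res)        # Cramer-von Mises for the composite hypothesis of normality
    shapiro.test(res)    # Shapiro-Wilk, in base R, often a better default choice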
R and SAS produce the same test-statistics but different p values for normality tests
The actual Kolmogorov-Smirnov, Anderson-Darling and Cramer-von Mises tests are for completely specified distributions. You're estimating the mean and variance of the residuals in your code so you don'
R and SAS produce the same test-statistics but different p values for normality tests The actual Kolmogorov-Smirnov, Anderson-Darling and Cramer-von Mises tests are for completely specified distributions. You're estimating the mean and variance of the residuals in your code so you don't have completely specified distributions, which will make your p-values larger than they should be. There's another test based on estimating parameters and using a Kolmogorov-Smirnov type statistic -- properly called a Lilliefors test; it's no longer distribution free and you need a different distribution for the test statistic depending on which distribution you start with and which parameters you esitmate. Lilliefors did the normal and exponential cases. The normal with both parameters estimated case can be done in R using lillie.test in the nortest package. For the other two tests the same comments apply (though approximate adjustments are a little simpler); the versions you're using in goftest are again for completely specified distributions. In the same package I mentioned earlier (nortest) there are versions of the Cramer-von Mises and Anderson Darling tests for the case of testing normality. If you check the help on those functions, they specify that they're for the composite hypothesis of normality, which is what you seek here. That won't necessarily make the p-values identical across SAS and R (they may not use the same approximations, for example) but if you use the corresponding tests they should be much closer. There's an additional issue in your case -- it appears you're testing residuals (perhaps in an AR, but it doesn't matter for the present point). Even the versions in nortest don't account for the dependence between residuals. They're for independent, identically distributed values from a normal distribution with unspecified mean and variance. If you had normal errors you don't have independence of residuals and you don't usually have exactly identical distributions. So even if you account for the estimation issue, the tests still won't be exactly right. I don't know what SAS is doing, but my guess is it's probably not accounting for this non-i.i.d. issue either. As a general rule, if you want to test normality I wouldn't use multiple tests, (pick one that best identifies the kinds of deviations from normality you most want to pick up) and indeed, I wouldn't use those tests (though the Anderson Darling is often a pretty decent choice) -- I'd use Shapiro Wilk or one of the related tests to it. On the other hand if I am trying to assess the suitability of a normality assumption for some model, I wouldn't use a formal hypothesis test at all. The problem is not "are the errors really normal?" (outside of simulated data are they ever actually normal? I seriously doubt it), it's "how much difference does it make?". That's an effect-size question, not a hypothesis testing question.
R and SAS produce the same test-statistics but different p values for normality tests The actual Kolmogorov-Smirnov, Anderson-Darling and Cramer-von Mises tests are for completely specified distributions. You're estimating the mean and variance of the residuals in your code so you don'
54,266
With an R function that expects a covariance matrix, can I give it a correlation matrix?
A correlation matrix is a covariance matrix (of standardized variables) so you can do it (a correlation matrix is a valid covariance matrix* after all) -- the question is whether you end up with what you need. You'll get multivariate normals with unit variance, with the population correlation matrix you supplied as a covariance. If you don't mind all your variances being 1, that should be fine. * if you supply a sample correlation matrix, it is possible to get a matrix that's not positive definite -- it's even possible to get a matrix that's not positive semidefinite. This would be a problem for the function. For example if the correlations are calculated "pairwise" (ignoring missingness in other variables than the two you're computing the current correlation of) you can easily get this issue.
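For example, with MASS::mvrnorm (assuming that is the kind of function in question), passing a correlation matrix simply yields unit-variance normals with that correlation:
    library(MASS)
    R <- matrix(c(1, 0.7, 0.7, 1), 2, 2)        # a correlation matrix is a valid covariance matrix
    Z <- mvrnorm(n = 1e4, mu = c(0, 0), Sigma = R)
    round(cov(Z), 2)                             # variances near 1, covariance near 0.7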
With an R function that expects a covariance matrix, can I give it a correlation matrix?
A correlation matrix is a covariance matrix (of standardized variables) so you can do it (a correlation matrix is a valid covariance matrix* after all) -- the question is whether you end up with what
With an R function that expects a covariance matrix, can I give it a correlation matrix? A correlation matrix is a covariance matrix (of standardized variables) so you can do it (a correlation matrix is a valid covariance matrix* after all) -- the question is whether you end up with what you need. You'll get multivariate normals with unit variance, with the population correlation matrix you supplied as a covariance. If you don't mind all your variances being 1, that should be fine. * if you supply a sample correlation matrix, it is possible to get a matrix that's not positive definite -- it's even possible to get a matrix that's not positive semidefinite. This would be a problem for the function. For example if the correlations are calculated "pairwise" (ignoring missingness in other variables than the two you're computing the current correlation of) you can easily get this issue.
With an R function that expects a covariance matrix, can I give it a correlation matrix? A correlation matrix is a covariance matrix (of standardized variables) so you can do it (a correlation matrix is a valid covariance matrix* after all) -- the question is whether you end up with what
54,267
With an R function that expects a covariance matrix, can I give it a correlation matrix?
No, you cannot do this; those two things are not the same. There is no way, in general, you can go from correlation to covariance without knowing the individual variances. As pointed out in the comment below, the correlation will equal the covariance if all the variances are indeed $=1$. The formula for the correlation is $$ Cor(x,y)=\frac{cov(x,y)}{\sigma_x \cdot \sigma_y} $$ So if you wish to specify the correlation rather than the covariance, perhaps because it is easier to talk about correlation (since it is scale independent), you can create the correlations you wish along with whatever variances you wish, form a proper covariance matrix from them, and feed it to the function.
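A sketch of that construction, with made-up standard deviations:
    R     <- matrix(c(1, 0.7, 0.7, 1), 2, 2)     # desired correlation matrix
    sds   <- c(2, 5)                             # chosen standard deviations (hypothetical)
    Sigma <- diag(sds) %*% R %*% diag(sds)       # uses cov(x, y) = cor(x, y) * sd_x * sd_y
    Sigma                                        # a proper covariance matrix to pass to the function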
With an R function that expects a covariance matrix, can I give it a correlation matrix?
No you cannot do this, those two things are not the same thing. There is no way, in general, you can go from correlation to covariance without knowing the individual variances. As pointed out in the c
With an R function that expects a covariance matrix, can I give it a correlation matrix? No you cannot do this, those two things are not the same thing. There is no way, in general, you can go from correlation to covariance without knowing the individual variances. As pointed out in the comment below, the correlation will equal the covariance if all the variances are indeed $=1$. The formula for the correlation is $$ Cor(x,y)=\frac{cov(x,y)}{\sigma_x \cdot \sigma_y} $$ So if you wish to specify the correlation, rather than the covariance, perhaps because it is easier to talk about correlation (since it is scale independent). You can create the correlations you wish, along with whatever variance you wish, from this you form a propper covariance matrix - and feed it to the function.
With an R function that expects a covariance matrix, can I give it a correlation matrix? No you cannot do this, those two things are not the same thing. There is no way, in general, you can go from correlation to covariance without knowing the individual variances. As pointed out in the c
54,268
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed]
Questions that are only about how to use R are off topic here; this will be closed. Regarding the statistical issues involved in this situation, @JeremyMiles has provided a good answer. For an R-specific response to this, it may help you to read the documentation for ?glm: For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes So you need: glm(proportion ~ other variables, family=binomial, weights=totals)
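A small simulated check (hypothetical variable names, my own sketch) that the weighted-proportion form matches the cbind() form:
    set.seed(1)
    totals    <- sample(5:20, 50, replace = TRUE)
    x         <- rnorm(50)
    successes <- rbinom(50, totals, plogis(-0.5 + 1.2 * x))
    fit_cbind <- glm(cbind(successes, totals - successes) ~ x, family = binomial)
    fit_prop  <- glm(successes / totals ~ x, family = binomial, weights = totals)
    all.equal(coef(fit_cbind), coef(fit_prop))   # TRUE: identical coefficient estimates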
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed]
Questions that are only about how to use R are off topic here; this will be closed. Regarding the statistical issues involved in this situation, @JeremyMiles has provided a good answer. For an R-sp
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed] Questions that are only about how to use R are off topic here; this will be closed. Regarding the statistical issues involved in this situation, @JeremyMiles has provided a good answer. For an R-specific response to this, it may help you to read the documentation for ?glm: For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes So you need: glm(proportion ~ other variables, family=binomial, weights=totals)
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed] Questions that are only about how to use R are off topic here; this will be closed. Regarding the statistical issues involved in this situation, @JeremyMiles has provided a good answer. For an R-sp
54,269
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed]
First, you don't get the same answer. (And if you think you do, can you provide a reproducible example?) When you use the proportion alone, you discard information about the level of certainty of the effect: 1 success from 2 trials is different from 100 successes from 200 trials.
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed]
First, you don't get the same answer. (And if you think you do, can you provide a reproducible example). When you use the proportion, you discard information about the level of certainty of the effec
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed] First, you don't get the same answer. (And if you think you do, can you provide a reproducible example). When you use the proportion, you discard information about the level of certainty of the effect. 1 success from two trials is different to 100 successes / 200 trials.
Using proportions directly instead of cbind() in glm() binomial regression is the same [R]? [closed] First, you don't get the same answer. (And if you think you do, can you provide a reproducible example). When you use the proportion, you discard information about the level of certainty of the effec
54,270
Is the joint probability of two sets equal to their intersection?
Short answer: yes. Long answer: probability is just a measure of the likelihood of some set of events happening (e.g. a coin flip landing heads is the event, and the probability of this event for a fair coin is 0.5). So if you're looking at 2 sets of events that are exactly the same, namely A intersect B or A,B (although I find the latter notation a bit ambiguous), then their probabilities have to be the same. In general, it's a lot easier to think about events and how they relate, and then use that to construct your probabilities.
Is the joint probability of two sets equal to their intersection?
Short answer: yes Long answer: probability is just a measure of the likelihood of some set of events happening (e.g. a coin flip landing heads is the event, and the probability of this event for a fai
Is the joint probability of two sets equal to their intersection? Short answer: yes Long answer: probability is just a measure of the likelihood of some set of events happening (e.g. a coin flip landing heads is the event, and the probability of this event for a fair coin is 0.5). So if you're looking at 2 sets that are exactly the same, namely A intersect B or A,B (although I find the latter notation a bit ambiguous), then their probabilities have to be the same. In general, it's a lot easier to think about events and how they relate, then using that to construct your probabilities.
Is the joint probability of two sets equal to their intersection? Short answer: yes Long answer: probability is just a measure of the likelihood of some set of events happening (e.g. a coin flip landing heads is the event, and the probability of this event for a fai
54,271
Is the joint probability of two sets equal to their intersection?
Yes, commas generally are used to denote intersection even by those who are otherwise very careful to avoid the possibility of their writings being misinterpreted. The most common usage is $P(X\leq x, Y \leq y)$ for the more prolix and correct $P\big ( (X \leq x)\cap (Y \leq y)\big)$ to denote the value of the joint CDF of random variables $X$ and $Y$.
Is the joint probability of two sets equal to their intersection?
Yes, commas generally are used to denote intersection even by those who are otherwise very careful to avoid the possibility of their writings being misinterpreted. The most common usage is $P(X\leq x
Is the joint probability of two sets equal to their intersection? Yes, commas generally are used to denote intersection even by those who are otherwise very careful to avoid the possibility of their writings being misinterpreted. The most common usage is $P(X\leq x, Y \leq y)$ for the more prolix and correct $P\big ( (X \leq x)\cap (Y \leq y)\big)$ to denote the value of the joint CDF of random variables $X$ and $Y$.
Is the joint probability of two sets equal to their intersection? Yes, commas generally are used to denote intersection even by those who are otherwise very careful to avoid the possibility of their writings being misinterpreted. The most common usage is $P(X\leq x
54,272
Applying L1, L2 and Tikhonov Regularization to Neural Nets: Possible Misconceptions
If you are interested in pursuing neural networks (or other machine learning), you would be well served to spend some time on the mathematical fundamentals. I started skimming this trendy new treatise, which IMO is not too difficult mathematically (although I have seen some complain otherwise, and I was a math major, so YMMV!). That said, I will try to address your questions as best I can at a high level. Can L1, L2 and Tikhonov calculations be appended as extra terms directly to a cost function, such as MSE? Yes, these are all penalty methods which add terms to the cost function. If #1 is correct, is it possible to apply them in supervised learning situations where there are no cost functions? I am not sure what you are thinking here, but in my mind the supervised case is the most clear in terms of cost functions and regularization. In general you can think of the error conceptually as $$\text{Total Error}=\left(\text{ Data Error }\right) + \left(\text{ Prior Error }\right)$$ where the first term penalizes the misfit of the model predictions vs. the training data, while the second term penalizes overfitting of the model (with the goal of lowering generalization errors). For a problem where features $x$ are used to predict data $y$ via a function $f(x,w)$ parameterized by weights $w$, the above cost function will typically be realized mathematically by something like $$E[w]=\|\,f(x,w)-y\,\|_p^p + \|\Lambda w\|_q^q$$ Here the terms have the same interpretation as above, with the errors measured by "$L_p$ norms": The data misfit usually has $p=2$, which corresponds to MSE, while the regularization term may use $q=2$ or $q=1$. (The $\Lambda$ I will get to below.) The latter corresponds to absolute(-value) errors rather than square errors, and is commonly used to promote sparsity in the solution vector $w$ (i.e. many 0 weights, effectively limiting the connectivity paths). The $L_1$ norm can be used for the data misfit term also, typically to reduce sensitivity to outliers in the data $y$. (Conceptually, predictions will target the median of $y$ rather than the mean of $y$.) Note that in many unsupervised learning scenarios there are effectively two "machines", with each taking turns as "data" vs. "prediction" (e.g. the coder and decoder parts of an autoencoder). I've gathered that L2 is a special case of Tikhonov Regularization ... The two can be used as synonyms. Some communities tend to use "$L_2$" to refer to the special case where the Tikhonov matrix $\Lambda$ is simply a scalar $\lambda$. Terminology varies quite a bit. L2 is differentiable and therefore compatible with gradient descent ... however, it appears that we can substitute other operations like difference and Fourier operators to derive other brands of Tikhonov Regularization that are not equivalent to L2. This is not correct. Tikhonov regularization always uses the $L_2$ norm, so is always a differentiable $L_2$ regularization. The matrix $\Lambda$ does not impact this (it is constant, it does not matter if it is a scalar, a diagonal covariance, a finite difference operator, a Fourier transform, etc.). For differentiability, the comparison is typically between $L_2$ and $L_1$. You can think of it like $$(x^2)'=2x \quad \text{ vs. } \quad |x|'=\mathrm{sgn}(x)$$ i.e. the $L_2$ norm has a continuous derivative while the $L_1$ norm has a discontinuous derivative. This difference is not as big as you might imagine in terms of optimize-ability.
For example the popular ReLU activation function also has a discontinuous derivative, i.e. $$\max(0,x)'=(x>0)$$ More importantly, the $L_1$ regularizer is convex, so it helps suppress local minima in the composite cost function. (Note I am avoiding technical issues around convex optimization & differentiability, such as "subgradients", as these are not essential to the point.) If #4 is true, can someone post an example of a Tikhonov formula that is not equivalent to L2, yet would be useful in neural nets? This was answered above: No, because all Tikhonov regularizers use the $L_2$ norm. (Whether they are useful for NNs or not!) There are arcane references in the literature to a "Tikhonov Matrix," which I can't seem to find any definitions of (although I've run across several unanswered questions scattered across the Internet about its meaning). An explanation of its meaning would be helpful. This is simply the matrix $\Lambda$. There are many forms that it can take, depending on the goals of the regularization: In the simplest case it is simply a constant ($\lambda>0$). This penalizes large weights (i.e. promotes $w_i^2\approx 0$ on average), which can be useful to prevent over-fitting. In the next simplest case, $\Lambda$ is diagonal, which allows per-weight regularization (i.e. $\lambda_iw_i^2\approx 0$). For example the regularization might vary with level in a deep network. Many other forms are possible, so I will end with one example of a sparse but non-diagonal $\Lambda$ that is common: A finite difference operator. This only makes sense when the weights $w$ are arranged in some definite spatial pattern. In this case $\Lambda w\approx \nabla w$, so the regularization promotes smoothness (i.e. $\nabla w\approx 0$). This is common in image processing (e.g. tomography), but could conceivably be applied in some types of neural-network architectures as well (e.g. ConvNets). In the last case, note that this type of "Tikhonov" matrix $\Lambda$ can really be used in $L_1$ regularization as well. A great example I did once was smoothing in an image processing context: I had a least squares ($L_2$) cost function set up to do denoising, and by literally just putting the same matrices into a "robust least squares" solver* I got an edge-preserving smoother "for free"! (*Essentially this just changed the $p$'s and $q$'s in my first equation from 2 to 1.) I hope this answer has helped you to understand regularization better! I would be happy to expand on any part that is still not clear.
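As a rough sketch (my own, not the answer author's) of how these penalties look in code, here are the kinds of terms discussed above written as plain R functions that could be added to any data-misfit cost:
    sq_loss    <- function(w, X, y) sum((X %*% w - y)^2)      # data misfit, p = 2 (MSE up to a constant)
    l2_pen     <- function(w, lambda) lambda * sum(w^2)       # Tikhonov with Lambda = sqrt(lambda) * I ("weight decay")
    l1_pen     <- function(w, lambda) lambda * sum(abs(w))    # L1 penalty, promotes exact zeros in w
    smooth_pen <- function(w, lambda) {                       # Tikhonov with a finite-difference Lambda
      L <- diff(diag(length(w)))                              # (p-1) x p first-difference operator
      lambda * sum((L %*% w)^2)                               # penalizes rough (non-smooth) weight profiles
    }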
Applying L1, L2 and Tikhonov Regularization to Neural Nets: Possible Misconceptions
If you are interested in pursuing neural networks (or other machine learning), you would be well served to spend some time on the mathematical fundamentals. I started skimming this trendy new treatise
Applying L1, L2 and Tikhonov Regularization to Neural Nets: Possible Misconceptions If you are interested in pursuing neural networks (or other machine learning), you would be well served to spend some time on the mathematical fundamentals. I started skimming this trendy new treatise, which IMO is not too difficult mathematically (although I have seen some complain otherwise, and I was a math major, so YMMV!). That said, I will try to address your questions as best I can at a high level. Can L1, L2 and Tikhonov calculations be appended as extra terms directly to a cost function, such as MSE? Yes, these are all penalty methods which add terms to the cost function. If #1 is correct, is is possible to apply them in supervised learning situations where there are no cost functions? I am not sure what you are thinking here, but in my mind the supervised case is the most clear in terms of cost functions and regularization. In general you can think of the error conceptually as $$\text{Total Error}=\left(\text{ Data Error }\right) + \left(\text{ Prior Error }\right)$$ where the first term penalizes the misfit of the model predictions vs. the training data, while the second term penalizes overfitting of the model (with the goal of lowering generalization errors). For a problem where features $x$ are used to predict data $y$ via a function $f(x,w)$ parameterized by weights $w$, the above cost function will typically be realized mathematically by something like $$E[w]=\|\,f(x,w)-y\,\|_p^p + \|\Lambda w\|_q^q$$ Here the terms have the same interpretation as above, with the errors measured by "$L_p$ norms": The data misfit usually has $p=2$, which corresponds to MSE, while the regularization term may use $q=2$ or $q=1$. (The $\Lambda$ I will get to below.) The latter corresponds to absolute(-value) errors rather than square errors, and is commonly used to promote sparsity in the solution vector $w$ (i.e. many 0 weights, effectively limiting the connectivity paths). The $L_1$ norm can be used for the data misfit term also, typically to reduce sensitivity to outliers in the data $y$. (Conceptually, predictions will target the median of $y$ rather than the mean of $y$.) Note that in many unsupervised learning scenarios there are effectively two "machines", with each taking turns as "data" vs. "prediction" (e.g. the coder and decoder parts of an autoencoder). I've gathered that L2 is a special case of Tikhonov Regularization ... The two can be used as synonyms. Some communities tend to use "$L_2$" to refer to the special case where the Tikhonov matrix $\Lambda$ is simply a scalar $\lambda$. Terminology varies quite a bit. L2 is differentiable and therefore compatible with gradient descent ... however, it appears that we can substitute other operations like difference and Fourier operators to derive other brands of Tikhonov Regularization that are not equivalent to L2. This is not correct. Tikhonov regularization always uses the $L_2$ norm, so is always a differentiable $L_2$ regularization. The matrix $\Lambda$ does not impact this (it is constant, it does not matter if it is a scalar, a diagonal covariance, a finite difference operator, a Fourier transform, etc.). For differentiability, the comparison is typically between $L_2$ and $L_1$. You can think of it like $$(x^2)'=2x \quad \text{ vs. } \quad |x|'=\mathrm{sgn}(x)$$ i.e. the $L_2$ norm has a continuous derivative while the $L_1$ norm has a discontinuous derivative. This difference is not as big as you might imagine in terms of optimize-ability. 
For example the popular ReLU activation function also has a discontinuous derivative, i.e. $$\max(0,x)'=(x>0)$$ More importantly, the $L_1$ regularizer is convex, so helps suppres local minima in the composite cost function. (Note I am avoiding technical issues around convex optimization & differentiability, such as "subgradients", as these are not essential to the point.) If #4 is true, can someone post an example of a Tikhonov formula that is not equivalent to L2, yet would be useful in neural nets? This was answered above: No, because all Tikhonov regularizers use the $L_2$ norm. (Wether they are useful for NN or not!) There's are arcane references in the literature to a "Tikhonov Matrix," which I can't seem to find any definitions of (although I've run across several unanswered questions scattered across the Internet about its meaning). An explanation of its meaning would be helpful. This is simply the matrix $\Lambda$. There are many forms that it can take, depending on the goals of the regularization: In the simplest case it is simply a constant ($\lambda>0$). This penalizes large weights (i.e. promotes $w_i^2\approx 0$ on average), which can be useful to prevent over-fitting. In the next simplest case, $\Lambda$ is diagonal, which allows per-weight regularization (i.e. $\lambda_iw_i^2\approx 0$). For example the regularization might vary with level in a deep network. Many other forms are possible, so I will end with one example of a sparse but non-diagonal $\Lambda$ that is common: A finite difference operator. This only makes sense when the weights $w$ are arranged in some definite spatial pattern. In this case $\Lambda w\approx \nabla w$, so the regularization promotes smoothness (i.e. $\nabla w\approx 0$). This is common in image processing (e.g. tomography), but could conceivably be applied in some types of neural-network architectures as well (e.g. ConvNets). In the last case, note that this type of "Tikhonov" matrix $\Lambda$ can really be used in $L_1$ regularization as well. A great example I did once was with the smoothing example in an image processing context: I had a least squares ($L_2$) cost function set up to do denoising, and by literally just putting the same matrices into a "robust least squares" solver* I got an edge-preserving smoother "for free"! (*Essentially this just changed the $p$'s and $q$'s in my first equation from 2 to 1.) I hope this answer has helped you to understand regularization better! I would be happy to expand on any part that is still not clear.
Applying L1, L2 and Tikhonov Regularization to Neural Nets: Possible Misconceptions If you are interested in pursuing neural networks (or other machine learning), you would be well served to spend some time on the mathematical fundamentals. I started skimming this trendy new treatise
54,273
How to classify data which is spiral in shape?
You could use SVM with an RBF kernel. Example: import numpy as np import matplotlib.pyplot as plt import mlpy # sudo pip install mlpy f = np.loadtxt("spiral.data") x, y = f[:, :2], f[:, 2] svm = mlpy.LibSvm(svm_type='c_svc', kernel_type='rbf', gamma=100) svm.learn(x, y) xmin, xmax = x[:,0].min()-0.1, x[:,0].max()+0.1 ymin, ymax = x[:,1].min()-0.1, x[:,1].max()+0.1 xx, yy = np.meshgrid(np.arange(xmin, xmax, 0.01), np.arange(ymin, ymax, 0.01)) xnew = np.c_[xx.ravel(), yy.ravel()] ynew = svm.pred(xnew).reshape(xx.shape) fig = plt.figure(1) plt.set_cmap(plt.cm.Paired) plt.pcolormesh(xx, yy, ynew) plt.scatter(x[:,0], x[:,1], c=y) plt.show() You can also use least squares support vector machine. spiral.data: 1 0 1 -1 0 -1 0.971354 0.209317 1 -0.971354 -0.209317 -1 0.906112 0.406602 1 -0.906112 -0.406602 -1 0.807485 0.584507 1 -0.807485 -0.584507 -1 0.679909 0.736572 1 -0.679909 -0.736572 -1 0.528858 0.857455 1 -0.528858 -0.857455 -1 0.360603 0.943128 1 -0.360603 -0.943128 -1 0.181957 0.991002 1 -0.181957 -0.991002 -1 -3.07692e-06 1 1 3.07692e-06 -1 -1 -0.178211 0.970568 1 0.178211 -0.970568 -1 -0.345891 0.90463 1 0.345891 -0.90463 -1 -0.496812 0.805483 1 0.496812 -0.805483 -1 -0.625522 0.67764 1 0.625522 -0.67764 -1 -0.727538 0.52663 1 0.727538 -0.52663 -1 -0.799514 0.35876 1 0.799514 -0.35876 -1 -0.839328 0.180858 1 0.839328 -0.180858 -1 -0.846154 -6.66667e-06 1 0.846154 6.66667e-06 -1 -0.820463 -0.176808 1 0.820463 0.176808 -1 -0.763975 -0.342827 1 0.763975 0.342827 -1 -0.679563 -0.491918 1 0.679563 0.491918 -1 -0.57112 -0.618723 1 0.57112 0.618723 -1 -0.443382 -0.71888 1 0.443382 0.71888 -1 -0.301723 -0.78915 1 0.301723 0.78915 -1 -0.151937 -0.82754 1 0.151937 0.82754 -1 9.23077e-06 -0.833333 1 -9.23077e-06 0.833333 -1 0.148202 -0.807103 1 -0.148202 0.807103 -1 0.287022 -0.750648 1 -0.287022 0.750648 -1 0.411343 -0.666902 1 -0.411343 0.666902 -1 0.516738 -0.559785 1 -0.516738 0.559785 -1 0.599623 -0.43403 1 -0.599623 0.43403 -1 0.65738 -0.294975 1 -0.65738 0.294975 -1 0.688438 -0.14834 1 -0.688438 0.14834 -1 0.692308 1.16667e-05 1 -0.692308 -1.16667e-05 -1 0.669572 0.144297 1 -0.669572 -0.144297 -1 0.621838 0.27905 1 -0.621838 -0.27905 -1 0.551642 0.399325 1 -0.551642 -0.399325 -1 0.462331 0.500875 1 -0.462331 -0.500875 -1 0.357906 0.580303 1 -0.357906 -0.580303 -1 0.242846 0.635172 1 -0.242846 -0.635172 -1 0.12192 0.664075 1 -0.12192 -0.664075 -1 -1.07692e-05 0.666667 1 1.07692e-05 -0.666667 -1 -0.118191 0.643638 1 0.118191 -0.643638 -1 -0.228149 0.596667 1 0.228149 -0.596667 -1 -0.325872 0.528323 1 0.325872 -0.528323 -1 -0.407954 0.441933 1 0.407954 -0.441933 -1 -0.471706 0.341433 1 0.471706 -0.341433 -1 -0.515245 0.231193 1 0.515245 -0.231193 -1 -0.537548 0.115822 1 0.537548 -0.115822 -1 -0.538462 -1.33333e-05 1 0.538462 1.33333e-05 -1 -0.518682 -0.111783 1 0.518682 0.111783 -1 -0.479702 -0.215272 1 0.479702 0.215272 -1 -0.423723 -0.306732 1 0.423723 0.306732 -1 -0.353545 -0.383025 1 0.353545 0.383025 -1 -0.272434 -0.441725 1 0.272434 0.441725 -1 -0.183971 -0.481192 1 0.183971 0.481192 -1 -0.0919062 -0.500612 1 0.0919062 0.500612 -1 1.23077e-05 -0.5 1 -1.23077e-05 0.5 -1 0.0881769 -0.480173 1 -0.0881769 0.480173 -1 0.169275 -0.442687 1 -0.169275 0.442687 -1 0.2404 -0.389745 1 -0.2404 0.389745 -1 0.299169 -0.324082 1 -0.299169 0.324082 -1 0.343788 -0.248838 1 -0.343788 0.248838 -1 0.373109 -0.167412 1 -0.373109 0.167412 -1 0.386658 -0.0833083 1 -0.386658 0.0833083 -1 0.384615 1.16667e-05 1 -0.384615 -1.16667e-05 -1 0.367792 0.0792667 1 -0.367792 -0.0792667 -1 0.337568 0.15149 1 -0.337568 
-0.15149 -1 0.295805 0.214137 1 -0.295805 -0.214137 -1 0.24476 0.265173 1 -0.24476 -0.265173 -1 0.186962 0.303147 1 -0.186962 -0.303147 -1 0.125098 0.327212 1 -0.125098 -0.327212 -1 0.0618938 0.337147 1 -0.0618938 -0.337147 -1 -1.07692e-05 0.333333 1 1.07692e-05 -0.333333 -1 -0.0581615 0.31671 1 0.0581615 -0.31671 -1 -0.110398 0.288708 1 0.110398 -0.288708 -1 -0.154926 0.251167 1 0.154926 -0.251167 -1 -0.190382 0.206232 1 0.190382 -0.206232 -1 -0.215868 0.156247 1 0.215868 -0.156247 -1 -0.230974 0.103635 1 0.230974 -0.103635 -1 -0.235768 0.050795 1 0.235768 -0.050795 -1 -0.230769 -1e-05 1 0.230769 1e-05 -1 -0.216903 -0.0467483 1 0.216903 0.0467483 -1 -0.195432 -0.0877067 1 0.195432 0.0877067 -1 -0.167889 -0.121538 1 0.167889 0.121538 -1 -0.135977 -0.14732 1 0.135977 0.14732 -1 -0.101492 -0.164567 1 0.101492 0.164567 -1 -0.0662277 -0.17323 1 0.0662277 0.17323 -1 -0.0318831 -0.173682 1 0.0318831 0.173682 -1 6.15385e-06 -0.166667 1 -6.15385e-06 0.166667 -1 0.0281431 -0.153247 1 -0.0281431 0.153247 -1 0.05152 -0.13473 1 -0.05152 0.13473 -1 0.0694508 -0.112592 1 -0.0694508 0.112592 -1 0.0815923 -0.088385 1 -0.0815923 0.088385 -1 0.0879462 -0.063655 1 -0.0879462 0.063655 -1 0.0888369 -0.0398583 1 -0.0888369 0.0398583 -1 0.0848769 -0.018285 1 -0.0848769 0.018285 -1 0.0769231 3.33333e-06 1 -0.0769231 -3.33333e-06 -1
How to classify data which is spiral in shape?
You could use SVM with an RBF kernel. Example: import numpy as np import matplotlib.pyplot as plt import mlpy # sudo pip install mlpy f = np.loadtxt("spiral.data") x, y = f[:, :2], f[:, 2] svm = mlpy.
How to classify data which is spiral in shape? You could use SVM with an RBF kernel. Example: import numpy as np import matplotlib.pyplot as plt import mlpy # sudo pip install mlpy f = np.loadtxt("spiral.data") x, y = f[:, :2], f[:, 2] svm = mlpy.LibSvm(svm_type='c_svc', kernel_type='rbf', gamma=100) svm.learn(x, y) xmin, xmax = x[:,0].min()-0.1, x[:,0].max()+0.1 ymin, ymax = x[:,1].min()-0.1, x[:,1].max()+0.1 xx, yy = np.meshgrid(np.arange(xmin, xmax, 0.01), np.arange(ymin, ymax, 0.01)) xnew = np.c_[xx.ravel(), yy.ravel()] ynew = svm.pred(xnew).reshape(xx.shape) fig = plt.figure(1) plt.set_cmap(plt.cm.Paired) plt.pcolormesh(xx, yy, ynew) plt.scatter(x[:,0], x[:,1], c=y) plt.show() You can also use least squares support vector machine. spiral.data: 1 0 1 -1 0 -1 0.971354 0.209317 1 -0.971354 -0.209317 -1 0.906112 0.406602 1 -0.906112 -0.406602 -1 0.807485 0.584507 1 -0.807485 -0.584507 -1 0.679909 0.736572 1 -0.679909 -0.736572 -1 0.528858 0.857455 1 -0.528858 -0.857455 -1 0.360603 0.943128 1 -0.360603 -0.943128 -1 0.181957 0.991002 1 -0.181957 -0.991002 -1 -3.07692e-06 1 1 3.07692e-06 -1 -1 -0.178211 0.970568 1 0.178211 -0.970568 -1 -0.345891 0.90463 1 0.345891 -0.90463 -1 -0.496812 0.805483 1 0.496812 -0.805483 -1 -0.625522 0.67764 1 0.625522 -0.67764 -1 -0.727538 0.52663 1 0.727538 -0.52663 -1 -0.799514 0.35876 1 0.799514 -0.35876 -1 -0.839328 0.180858 1 0.839328 -0.180858 -1 -0.846154 -6.66667e-06 1 0.846154 6.66667e-06 -1 -0.820463 -0.176808 1 0.820463 0.176808 -1 -0.763975 -0.342827 1 0.763975 0.342827 -1 -0.679563 -0.491918 1 0.679563 0.491918 -1 -0.57112 -0.618723 1 0.57112 0.618723 -1 -0.443382 -0.71888 1 0.443382 0.71888 -1 -0.301723 -0.78915 1 0.301723 0.78915 -1 -0.151937 -0.82754 1 0.151937 0.82754 -1 9.23077e-06 -0.833333 1 -9.23077e-06 0.833333 -1 0.148202 -0.807103 1 -0.148202 0.807103 -1 0.287022 -0.750648 1 -0.287022 0.750648 -1 0.411343 -0.666902 1 -0.411343 0.666902 -1 0.516738 -0.559785 1 -0.516738 0.559785 -1 0.599623 -0.43403 1 -0.599623 0.43403 -1 0.65738 -0.294975 1 -0.65738 0.294975 -1 0.688438 -0.14834 1 -0.688438 0.14834 -1 0.692308 1.16667e-05 1 -0.692308 -1.16667e-05 -1 0.669572 0.144297 1 -0.669572 -0.144297 -1 0.621838 0.27905 1 -0.621838 -0.27905 -1 0.551642 0.399325 1 -0.551642 -0.399325 -1 0.462331 0.500875 1 -0.462331 -0.500875 -1 0.357906 0.580303 1 -0.357906 -0.580303 -1 0.242846 0.635172 1 -0.242846 -0.635172 -1 0.12192 0.664075 1 -0.12192 -0.664075 -1 -1.07692e-05 0.666667 1 1.07692e-05 -0.666667 -1 -0.118191 0.643638 1 0.118191 -0.643638 -1 -0.228149 0.596667 1 0.228149 -0.596667 -1 -0.325872 0.528323 1 0.325872 -0.528323 -1 -0.407954 0.441933 1 0.407954 -0.441933 -1 -0.471706 0.341433 1 0.471706 -0.341433 -1 -0.515245 0.231193 1 0.515245 -0.231193 -1 -0.537548 0.115822 1 0.537548 -0.115822 -1 -0.538462 -1.33333e-05 1 0.538462 1.33333e-05 -1 -0.518682 -0.111783 1 0.518682 0.111783 -1 -0.479702 -0.215272 1 0.479702 0.215272 -1 -0.423723 -0.306732 1 0.423723 0.306732 -1 -0.353545 -0.383025 1 0.353545 0.383025 -1 -0.272434 -0.441725 1 0.272434 0.441725 -1 -0.183971 -0.481192 1 0.183971 0.481192 -1 -0.0919062 -0.500612 1 0.0919062 0.500612 -1 1.23077e-05 -0.5 1 -1.23077e-05 0.5 -1 0.0881769 -0.480173 1 -0.0881769 0.480173 -1 0.169275 -0.442687 1 -0.169275 0.442687 -1 0.2404 -0.389745 1 -0.2404 0.389745 -1 0.299169 -0.324082 1 -0.299169 0.324082 -1 0.343788 -0.248838 1 -0.343788 0.248838 -1 0.373109 -0.167412 1 -0.373109 0.167412 -1 0.386658 -0.0833083 1 -0.386658 0.0833083 -1 0.384615 1.16667e-05 1 -0.384615 -1.16667e-05 -1 0.367792 0.0792667 1 
-0.367792 -0.0792667 -1 0.337568 0.15149 1 -0.337568 -0.15149 -1 0.295805 0.214137 1 -0.295805 -0.214137 -1 0.24476 0.265173 1 -0.24476 -0.265173 -1 0.186962 0.303147 1 -0.186962 -0.303147 -1 0.125098 0.327212 1 -0.125098 -0.327212 -1 0.0618938 0.337147 1 -0.0618938 -0.337147 -1 -1.07692e-05 0.333333 1 1.07692e-05 -0.333333 -1 -0.0581615 0.31671 1 0.0581615 -0.31671 -1 -0.110398 0.288708 1 0.110398 -0.288708 -1 -0.154926 0.251167 1 0.154926 -0.251167 -1 -0.190382 0.206232 1 0.190382 -0.206232 -1 -0.215868 0.156247 1 0.215868 -0.156247 -1 -0.230974 0.103635 1 0.230974 -0.103635 -1 -0.235768 0.050795 1 0.235768 -0.050795 -1 -0.230769 -1e-05 1 0.230769 1e-05 -1 -0.216903 -0.0467483 1 0.216903 0.0467483 -1 -0.195432 -0.0877067 1 0.195432 0.0877067 -1 -0.167889 -0.121538 1 0.167889 0.121538 -1 -0.135977 -0.14732 1 0.135977 0.14732 -1 -0.101492 -0.164567 1 0.101492 0.164567 -1 -0.0662277 -0.17323 1 0.0662277 0.17323 -1 -0.0318831 -0.173682 1 0.0318831 0.173682 -1 6.15385e-06 -0.166667 1 -6.15385e-06 0.166667 -1 0.0281431 -0.153247 1 -0.0281431 0.153247 -1 0.05152 -0.13473 1 -0.05152 0.13473 -1 0.0694508 -0.112592 1 -0.0694508 0.112592 -1 0.0815923 -0.088385 1 -0.0815923 0.088385 -1 0.0879462 -0.063655 1 -0.0879462 0.063655 -1 0.0888369 -0.0398583 1 -0.0888369 0.0398583 -1 0.0848769 -0.018285 1 -0.0848769 0.018285 -1 0.0769231 3.33333e-06 1 -0.0769231 -3.33333e-06 -1
How to classify data which is spiral in shape? You could use SVM with an RBF kernel. Example: import numpy as np import matplotlib.pyplot as plt import mlpy # sudo pip install mlpy f = np.loadtxt("spiral.data") x, y = f[:, :2], f[:, 2] svm = mlpy.
54,274
How to classify data which is spiral in shape?
I had similar experiments comparing to Franck's answer. Please check this post. Do all machine learning algorithms separate data linearly? In the post we use tree, boosting and K nearest neighbor on spiral data. KNN is the most intuitive one: it classifies a point according to its neighbors, so spiral data would not "break the neighbor rule". Tree and boosting models can be understood as "really complicated models that can produce complicated decision boundaries"; that is why you can see they roughly learn the pattern, with some errors. Finally, you may search for spectral clustering or kernel PCA to see how one can deal with "connected components".
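As a hedged illustration of the nearest-neighbor idea described above (this is not code from the linked post), the sketch below fits scikit-learn's KNeighborsClassifier to the spiral.data file listed in the earlier SVM answer; the file name and the choice of k = 5 are assumptions.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Assumed file: the spiral.data listed in the SVM answer above (x1, x2, label per row).
f = np.loadtxt("spiral.data")
x, y = f[:, :2], f[:, 2]

# Classify each point by a majority vote of its 5 nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(x, y)
print("training accuracy:", knn.score(x, y))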
How to classify data which is spiral in shape?
I had similar experiments comparing to Franck's answer. Please check this post. Do all machine learning algorithms separate data linearly? In the post we use tree, boosting and K nearest neighbor o
How to classify data which is spiral in shape? I had similar experiments comparing to Franck's answer. Please check this post. Do all machine learning algorithms separate data linearly? In the post we use tree, boosting and K nearest neighbor on spiral data. KNN is the most intuitive one: it classifies a point according to its neighbors, so spiral data would not "break the neighbor rule". Tree and boosting models can be understood as "really complicated models that can produce complicated decision boundaries"; that is why you can see they roughly learn the pattern, with some errors. Finally, you may search for spectral clustering or kernel PCA to see how one can deal with "connected components".
How to classify data which is spiral in shape? I had similar experiments comparing to Franck's answer. Please check this post. Do all machine learning algorithms separate data linearly? In the post we use tree, boosting and K nearest neighbor o
54,275
How to classify data which is spiral in shape?
For this dummy problem you can increase the number of features. One particular way that I found to work is using extreme learning machines. Basically, you create a random matrix $K$ with columns equal to number of old features, $d$, and rows equal to number of new features $d'$(I had to use $d'=300d$). Also, create a random bias vector $b$ with length equal to $d'$. And you need a non-linear activation function $f$. Relu in particular works well --- $Relu(X) = max(X,0)$. Then perform linear logistic regression on the new data $X'=f(XK+b)$ (sloppy numpy or matlab notation for adding $b$ to every row of $XK$). Here is a small code using the linear logistic regression of scikit-learn in python. import numpy as np import matplotlib.pyplot as plt import sklearn.linear_model f = np.loadtxt("spiral.data") x, y = f[:, :2], f[:, 2] new_feature_ratio = 300; def relu(Y): return np.maximum(Y, 0) cls = sklearn.linear_model.LogisticRegression( penalty='l2', C=1000, max_iter=1000) K = np.random.randn(x.shape[1], x.shape[1]*new_feature_ratio) b = np.random.randn(x.shape[1]*new_feature_ratio) cls.fit( relu(np.matmul(x,K) + b) ,y) xmin, xmax = x[:,0].min()-0.1, x[:,0].max()+0.1 ymin, ymax = x[:,1].min()-0.1, x[:,1].max()+0.1 xx, yy = np.meshgrid(np.arange(xmin, xmax, 0.01), np.arange(ymin, ymax, 0.01)) xnew = np.c_[xx.ravel(), yy.ravel()] ynew = cls.predict(relu(np.matmul(xnew,K) + b)).reshape(xx.shape) fig = plt.figure(1) plt.set_cmap(plt.cm.Paired) plt.pcolormesh(xx, yy, ynew) plt.scatter(x[y>0,0], x[y>0,1], color='r') plt.scatter(x[y<0,0], x[y<0,1], color='g') plt.show() spiral.data is the same as Frank's answer. This strategy is basically a neural network where the first layer is chosen randomly rather than being trained.
How to classify data which is spiral in shape?
For this dummy problem you can increase the number of features. One particular way that I found to work is using extreme learning machines. Basically, you create a random matrix $K$ with columns equal
How to classify data which is spiral in shape? For this dummy problem you can increase the number of features. One particular way that I found to work is using extreme learning machines. Basically, you create a random matrix $K$ with columns equal to number of old features, $d$, and rows equal to number of new features $d'$(I had to use $d'=300d$). Also, create a random bias vector $b$ with length equal to $d'$. And you need a non-linear activation function $f$. Relu in particular works well --- $Relu(X) = max(X,0)$. Then perform linear logistic regression on the new data $X'=f(XK+b)$ (sloppy numpy or matlab notation for adding $b$ to every row of $XK$). Here is a small code using the linear logistic regression of scikit-learn in python. import numpy as np import matplotlib.pyplot as plt import sklearn.linear_model f = np.loadtxt("spiral.data") x, y = f[:, :2], f[:, 2] new_feature_ratio = 300; def relu(Y): return np.maximum(Y, 0) cls = sklearn.linear_model.LogisticRegression( penalty='l2', C=1000, max_iter=1000) K = np.random.randn(x.shape[1], x.shape[1]*new_feature_ratio) b = np.random.randn(x.shape[1]*new_feature_ratio) cls.fit( relu(np.matmul(x,K) + b) ,y) xmin, xmax = x[:,0].min()-0.1, x[:,0].max()+0.1 ymin, ymax = x[:,1].min()-0.1, x[:,1].max()+0.1 xx, yy = np.meshgrid(np.arange(xmin, xmax, 0.01), np.arange(ymin, ymax, 0.01)) xnew = np.c_[xx.ravel(), yy.ravel()] ynew = cls.predict(relu(np.matmul(xnew,K) + b)).reshape(xx.shape) fig = plt.figure(1) plt.set_cmap(plt.cm.Paired) plt.pcolormesh(xx, yy, ynew) plt.scatter(x[y>0,0], x[y>0,1], color='r') plt.scatter(x[y<0,0], x[y<0,1], color='g') plt.show() spiral.data is the same as Frank's answer. This strategy is basically a neural network where the first layer is chosen randomly rather than being trained.
How to classify data which is spiral in shape? For this dummy problem you can increase the number of features. One particular way that I found to work is using extreme learning machines. Basically, you create a random matrix $K$ with columns equal
54,276
How to classify data which is spiral in shape?
You won't experience a "spiral" in the real world. But it is one of the more complex yet easy to visualize non-linear datasets. The playground in your question is for building intuition with neural networks. The other answers gave solutions that work but in my opinion miss the point of what can be learned here. In the real world, most non-linear functions span complex hyperdimensional spaces that are impossible to directly visualize. Some intuition behind that here. While there are standard techniques to attempt to represent higher dimensional spaces such as t-SNE and more new ones like the Grand Tour you'll rarely be so lucky as to know ahead of time what the underlying function is. Even if you were, you likely wouldn't be able to manually engineer a sophisticated enough kernel that is generalizable enough. What you do know is that neural networks are good universal approximators. A spiral dataset is merely a convenient tool to demonstrate just how difficult it can be to turn theory into reality. Some notes on systematic approaches to approximate a function with a neural network: A Recipe for Training Neural Networks and Machine Learning Yearning. Following some of those methods, I was able to get the answer in about 30 minutes. I encourage people reading to try to arrive at it on their own first. It's a good analogy for the pain of training models in the real world. :) Here's a solution that should get you to ~0.02 test loss/training loss by epoch 3000 using just the raw X1/X2 inputs. Solution
How to classify data which is spiral in shape?
You won't experience a "spiral" in the real world. But it is one of the more complex yet easy to visualize non-linear datasets. The playground in your question is for building intuition with neural ne
How to classify data which is spiral in shape? You won't experience a "spiral" in the real world. But it is one of the more complex yet easy to visualize non-linear datasets. The playground in your question is for building intuition with neural networks. The other answers gave solutions that work but in my opinion miss the point of what can be learned here. In the real world, most non-linear functions span complex hyperdimensional spaces that are impossible to directly visualize. Some intuition behind that here. While there are standard techniques to attempt to represent higher dimensional spaces such as t-SNE and more new ones like the Grand Tour you'll rarely be so lucky as to know ahead of time what the underlying function is. Even if you were, you likely wouldn't be able to manually engineer a sophisticated enough kernel that is generalizable enough. What you do know is that neural networks are good universal approximators. A spiral dataset is merely a convenient tool to demonstrate just how difficult it can be to turn theory into reality. Some notes on systematic approaches to approximate a function with a neural network: A Recipe for Training Neural Networks and Machine Learning Yearning. Following some of those methods, I was able to get the answer in about 30 minutes. I encourage people reading to try to arrive at it on their own first. It's a good analogy for the pain of training models in the real world. :) Here's a solution that should get you to ~0.02 test loss/training loss by epoch 3000 using just the raw X1/X2 inputs. Solution
How to classify data which is spiral in shape? You won't experience a "spiral" in the real world. But it is one of the more complex yet easy to visualize non-linear datasets. The playground in your question is for building intuition with neural ne
54,277
How to evaluate effect size from a regression output
Maybe an example will be helpful. This very simple example is from Gelman and Hill (2006, p.31-34). We want to predict cognitive test scores of children (kid.score) given their mothers' education (mom.hs) and IQ (mom.iq): mom.hs is a binary predictor indicating whether mother graduated from high school (1) or not (0), and mom.iq is a continuous predictor. The fitted linear regression model is $$\text{kid.score} = 26+6\cdot \text{mom.hs}+0.6 \cdot \text{mom.iq} + \text{error} $$ Now, the interpretation is rather straightforward. For example, for 1 unit increase in mom.iq, I expect 0.6 points increase in kid.score (keeping the value of mom.hs constant). This relationship (between kid.score and mom.iq) is statistically significant, but is it practically significant? Is the effect size, here unstandardized regression coefficient (0.6), large? Actually, I don't know. I need to have an idea about the distribution of the values for dependent and independent variables. But more importantly, I need to know about the theories explaining the relationship between cognitive test score of children and the maternal IQ. Looking at the table and assuming this is a linear model (and unstandardized regression coefficients are reported), for 1 unit increase in "Indicator for Group 1 (1998 Treatment) School", I expect to see 0.25 points decrease in dependent variable (for model 1 and holding other variables constant). Again, I need to know theories explaining the relationship between variables, which will guide me in my interpretation of the magnitude of the effects. I can't think of a benchmark in this context. But, if one function of having a benchmark is to make comparisons possible, standardized coefficients might be worth considering. Standardized (beta, $\hat\beta*$) coefficients are more easily comparable, well, because the variables are standardized to have a mean of 0 and standard deviation 1. You can compare beta coefficients (in standard deviation units) to assess the relative strength of the predictors, for example "One standard deviation increase/decrease in X would yield a $\hat\beta*$ standard deviation increase/decrease in Y". Acock (2014) also argues that they can be interpreted similar to correlations: $\hat\beta* < 0.2$ is considered a weak, $0.2 < \hat\beta* < 0.5$ moderate, and $\hat\beta* > 0.5$ strong effect (p.272), but I can't verify this information from another source. Moreover, I come up with some (strong) warnings about the use of standardized coefficients (Fox, 2016, p.102; Harrell, 2015, p.103-104), this and this are examples. So, repeating once more, to evaluate the size of an effect (based on this output, unstandardized regression coefficients), you need to have information about the variables (e.g., how they are measured, their distributions, range of values, etc.), and the theories explaining the relationship between them. Acock, A. C. (2014). A Gentle Introduction to Stata (4th ed.). Texas: Stata Press. Fox, J. (2016). Applied Regression Analysis and Generalized Linear Models (3rd ed.). Los Angeles: Sage Publications. Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel Models. Cambridge: Cambridge University Press. Harrell, F. E. (2015). Regression Modeling Strategies (2nd ed.). Cham: Springer.
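To make the standardized-coefficient idea concrete, here is a hedged sketch on synthetic data loosely mimicking the kid.score example; none of the numbers come from the real dataset. One common convention is to z-score the outcome and the continuous predictor before fitting, so the slope is in standard-deviation units; whether to standardize a binary predictor is itself a judgment call.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
mom_hs = rng.binomial(1, 0.7, n)                                    # binary predictor, left unstandardized
mom_iq = rng.normal(100, 15, n)
kid_score = 26 + 6 * mom_hs + 0.6 * mom_iq + rng.normal(0, 15, n)   # made-up data

z = lambda v: (v - v.mean()) / v.std()                              # z-score helper
X = sm.add_constant(np.column_stack([mom_hs, z(mom_iq)]))
fit = sm.OLS(z(kid_score), X).fit()
print(fit.params)   # the coefficient on z(mom_iq) is the standardized (beta) coefficient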
How to evaluate effect size from a regression output
Maybe an example will be helpful. This very simple example is from Gelman and Hill (2006, p.31-34). We want to predict cognitive test scores of children (kid.score) given their mothers' education (mom
How to evaluate effect size from a regression output Maybe an example will be helpful. This very simple example is from Gelman and Hill (2006, p.31-34). We want to predict cognitive test scores of children (kid.score) given their mothers' education (mom.hs) and IQ (mom.iq): mom.hs is a binary predictor indicating whether mother graduated from high school (1) or not (0), and mom.iq is a continuous predictor. The fitted linear regression model is $$\text{kid.score} = 26+6\cdot \text{mom.hs}+0.6 \cdot \text{mom.iq} + \text{error} $$ Now, the interpretation is rather straightforward. For example, for 1 unit increase in mom.iq, I expect 0.6 points increase in kid.score (keeping the value of mom.hs constant). This relationship (between kid.score and mom.iq) is statistically significant, but is it practically significant? Is the effect size, here unstandardized regression coefficient (0.6), large? Actually, I don't know. I need to have an idea about the distribution of the values for dependent and independent variables. But more importantly, I need to know about the theories explaining the relationship between cognitive test score of children and the maternal IQ. Looking at the table and assuming this is a linear model (and unstandardized regression coefficients are reported), for 1 unit increase in "Indicator for Group 1 (1998 Treatment) School", I expect to see 0.25 points decrease in dependent variable (for model 1 and holding other variables constant). Again, I need to know theories explaining the relationship between variables, which will guide me in my interpretation of the magnitude of the effects. I can't think of a benchmark in this context. But, if one function of having a benchmark is to make comparisons possible, standardized coefficients might be worth considering. Standardized (beta, $\hat\beta*$) coefficients are more easily comparable, well, because the variables are standardized to have a mean of 0 and standard deviation 1. You can compare beta coefficients (in standard deviation units) to assess the relative strength of the predictors, for example "One standard deviation increase/decrease in X would yield a $\hat\beta*$ standard deviation increase/decrease in Y". Acock (2014) also argues that they can be interpreted similar to correlations: $\hat\beta* < 0.2$ is considered a weak, $0.2 < \hat\beta* < 0.5$ moderate, and $\hat\beta* > 0.5$ strong effect (p.272), but I can't verify this information from another source. Moreover, I come up with some (strong) warnings about the use of standardized coefficients (Fox, 2016, p.102; Harrell, 2015, p.103-104), this and this are examples. So, repeating once more, to evaluate the size of an effect (based on this output, unstandardized regression coefficients), you need to have information about the variables (e.g., how they are measured, their distributions, range of values, etc.), and the theories explaining the relationship between them. Acock, A. C. (2014). A Gentle Introduction to Stata (4th ed.). Texas: Stata Press. Fox, J. (2016). Applied Regression Analysis and Generalized Linear Models (3rd ed.). Los Angeles: Sage Publications. Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel Models. Cambridge: Cambridge University Press. Harrell, F. E. (2015). Regression Modeling Strategies (2nd ed.). Cham: Springer.
How to evaluate effect size from a regression output Maybe an example will be helpful. This very simple example is from Gelman and Hill (2006, p.31-34). We want to predict cognitive test scores of children (kid.score) given their mothers' education (mom
54,278
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distribution is provided?
You can compute pairwise KL divergence as a function of parameters in closed form for two Gaussian distributions $p$ and $q$. The uni-variate case: $KL(p||q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_{1}^{2} + (\mu_1-\mu_2)^2}{2\sigma_{2}^{2}} - \frac{1}{2}$ and the multi-variate case: $KL(p||q) = \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - d + \text{tr} (\Sigma_2^{-1}\Sigma_1) + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right]$ as derived here and here. Alternatively, you can try visualizing the cluster overlap by plotting the density of the mixture components.
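A small hedged sketch of the univariate formula above, checked against a Monte Carlo estimate of $E_p[\log p - \log q]$; the parameter values are arbitrary.

import numpy as np
from scipy.stats import norm

def kl_gauss(mu1, s1, mu2, s2):
    # closed-form KL(p || q) for univariate Gaussians, as in the formula above
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

mu1, s1, mu2, s2 = 0.0, 1.0, 1.5, 2.0         # arbitrary example parameters
x = norm.rvs(loc=mu1, scale=s1, size=200000, random_state=0)
mc = np.mean(norm.logpdf(x, mu1, s1) - norm.logpdf(x, mu2, s2))
print(kl_gauss(mu1, s1, mu2, s2), mc)         # the two values should agree closely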
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distributio
You can compute pairwise KL divergence as a function of parameters in closed form for two Gaussian distributions $p$ and $q$. The uni-variate case: $KL(p||q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distribution is provided? You can compute pairwise KL divergence as a function of parameters in closed form for two Gaussian distributions $p$ and $q$. The uni-variate case: $KL(p||q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_{1}^{2} + (\mu_1-\mu_2)^2}{2\sigma_{2}^{2}} - \frac{1}{2}$ and the multi-variate case: $KL(p||q) = \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - d + \text{tr} (\Sigma_2^{-1}\Sigma_1) + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right]$ as derived here and here. Alternatively, you can try visualizing the cluster overlap by plotting the density of the mixture components.
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distributio You can compute pairwise KL divergence as a function of parameters in closed form for two Gaussian distributions $p$ and $q$. The uni-variate case: $KL(p||q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\
54,279
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distribution is provided?
To complete the answer given by Vadim, there are also many approximations of the Kullback-Leibler divergence between mixtures of Gaussian distributions. These approximations are surprisingly easy to compute and implement. This paper by Hershey & Olsen proposes 7 or 8 different approximations and advises the use of the variational approximation: https://pdfs.semanticscholar.org/4f8d/eabc58014eae708c3e6ee27114535325067b.pdf (the paper title is "Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models"). It will give you a similarity measure for the global mixtures, so you will not have to compare them component by component.
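The variational approximation from the paper is not reproduced here; instead, as a simple baseline, a plain Monte Carlo estimate already gives a single global number for KL between two mixtures. The sketch below does this for two one-dimensional Gaussian mixtures with made-up parameters.

import numpy as np
from scipy.stats import norm

def mix_logpdf(x, weights, means, sds):
    # log density of a 1-D Gaussian mixture evaluated at x
    comp = np.stack([w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, sds)])
    return np.log(comp.sum(axis=0))

rng = np.random.default_rng(1)
w_p, m_p, s_p = [0.4, 0.6], [0.0, 3.0], [1.0, 0.5]    # mixture p (made up)
w_q, m_q, s_q = [0.5, 0.5], [0.5, 2.5], [1.0, 1.0]    # mixture q (made up)

# sample from p: pick a component, then draw from that component
idx = rng.choice(2, size=100000, p=w_p)
x = rng.normal(np.array(m_p)[idx], np.array(s_p)[idx])

print(np.mean(mix_logpdf(x, w_p, m_p, s_p) - mix_logpdf(x, w_q, m_q, s_q)))   # KL(p || q) estimate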
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distributio
To complete the answer given by Vadim, there are also many approximation of the Kullback-Leibler divergence between mixtures of Gaussian distributions. These approximations are surprisingly easy to co
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distribution is provided? To complete the answer given by Vadim, there are also many approximations of the Kullback-Leibler divergence between mixtures of Gaussian distributions. These approximations are surprisingly easy to compute and implement. This paper by Hershey & Olsen proposes 7 or 8 different approximations and advises the use of the variational approximation: https://pdfs.semanticscholar.org/4f8d/eabc58014eae708c3e6ee27114535325067b.pdf (the paper title is "Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models"). It will give you a similarity measure for the global mixtures, so you will not have to compare them component by component.
How to use Kullback-leibler divergence if mean and standard deviation of of two Gaussian Distributio To complete the answer given by Vadim, there are also many approximation of the Kullback-Leibler divergence between mixtures of Gaussian distributions. These approximations are surprisingly easy to co
54,280
Regression. Interaction term correlated with the variables
Keep it. It's one of those choices between unbiasedness and precision. The negative effect of having correlated independent variables is that they inflate each other's variances (the statistic that quantifies this phenomenon is called the variance inflation factor). The result is an enlarged standard error, which leads to lower t-statistics and higher p-values. If the interaction term is already statistically significant, then this problem does not concern your model as much. On the other hand, taking it out can have drastic consequences, because without that interaction the estimates in the model ($b$ and $c$) can be biased. Once an estimate is biased, there is really not much point in discussing its precision. For that reason it's better to keep the interaction term. Also, I'd suggest careful consideration when checking possible interactions. While we can statistically examine each of them, the importance of having a causal framework behind them cannot be emphasized enough. Lastly, many analyses were not powered for checking interactions (most researchers didn't consider this a formal hypothesis), so be mindful not to convince yourself that there is no interaction just because an under-powered test failed to detect one.
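As a quick, hedged illustration of the inflation mentioned above, the sketch below computes variance inflation factors for a design containing an uncentered x*z product term; the data are synthetic and the numbers only illustrative.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x = rng.normal(5, 1, 300)         # predictors deliberately not centered,
z = rng.normal(3, 1, 300)         # so x and z correlate strongly with x*z
X = sm.add_constant(np.column_stack([x, z, x * z]))

for i, name in enumerate(["const", "x", "z", "x*z"]):
    print(name, variance_inflation_factor(X, i))   # large VIFs for x, z and x*z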
Regression. Interaction term correlated with the variables
Keep it. It's one of those choices between unbiasedness and precision. The negative effect of having correlated independent variables is that they inflate each other's variance (the statistics that qu
Regression. Interaction term correlated with the variables Keep it. It's one of those choices between unbiasedness and precision. The negative effect of having correlated independent variables is that they inflate each other's variances (the statistic that quantifies this phenomenon is called the variance inflation factor). The result is an enlarged standard error, which leads to lower t-statistics and higher p-values. If the interaction term is already statistically significant, then this problem does not concern your model as much. On the other hand, taking it out can have drastic consequences, because without that interaction the estimates in the model ($b$ and $c$) can be biased. Once an estimate is biased, there is really not much point in discussing its precision. For that reason it's better to keep the interaction term. Also, I'd suggest careful consideration when checking possible interactions. While we can statistically examine each of them, the importance of having a causal framework behind them cannot be emphasized enough. Lastly, many analyses were not powered for checking interactions (most researchers didn't consider this a formal hypothesis), so be mindful not to convince yourself that there is no interaction just because an under-powered test failed to detect one.
Regression. Interaction term correlated with the variables Keep it. It's one of those choices between unbiasedness and precision. The negative effect of having correlated independent variables is that they inflate each other's variance (the statistics that qu
54,281
Regression. Interaction term correlated with the variables
Consider (approximately) centreing your $X$ and $Y$ variables which will minimise the correlation between them and their interaction. Although it is true that interpretation of the results of a regression can be easier if the predictors are independent it is not essential. After all the reason we do it is to see the effect of each predictor over and above the effect of the others.
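A tiny numerical illustration of this suggestion on synthetic data (arbitrary means and variances): centering the two variables that enter the product term sharply reduces their correlation with it.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 1000)
z = rng.normal(5, 1, 1000)

print("raw:      corr(x, x*z)    =", np.corrcoef(x, x * z)[0, 1])       # close to 1
xc, zc = x - x.mean(), z - z.mean()
print("centered: corr(xc, xc*zc) =", np.corrcoef(xc, xc * zc)[0, 1])    # close to 0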
Regression. Interaction term correlated with the variables
Consider (approximately) centreing your $X$ and $Y$ variables which will minimise the correlation between them and their interaction. Although it is true that interpretation of the results of a regres
Regression. Interaction term correlated with the variables Consider (approximately) centreing your $X$ and $Y$ variables which will minimise the correlation between them and their interaction. Although it is true that interpretation of the results of a regression can be easier if the predictors are independent it is not essential. After all the reason we do it is to see the effect of each predictor over and above the effect of the others.
Regression. Interaction term correlated with the variables Consider (approximately) centreing your $X$ and $Y$ variables which will minimise the correlation between them and their interaction. Although it is true that interpretation of the results of a regres
54,282
Proof of recurrence between cumulants and central-moments
The equation given by Wikipedia connects cumulants to moments (generally). A proof of a formula connecting cumulants to central moments is found in A Recursive Formulation of the Old Problem of Obtaining Moments from Cumulants and Vice Versa Letting $K(t)$ be the cumulant-generating function, and $M(t)$ the moment-generating function. The relationship between the two is \begin{equation} M(t)=\exp{\left[K(t)\right]} \end{equation} The proof follows by differentiation of this expression and noting that the $n$th derivative can be written as \begin{equation} D^n[M(t)]=\sum_{i=0}^{n-1}\binom{n-1}{i}D^{n-i}[K(t)]D^i[M(t)] \end{equation} Where $D^k$ denotes the $k$th derivative. Now setting $t=0$: \begin{equation} \theta_n=\sum_{i=0}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i\\ \theta_n=\kappa_n+\sum_{i=1}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i\\ \end{equation} Rewriting yields: \begin{equation} \kappa_n = \theta_n-\sum_{i=1}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i \end{equation} In terms of the central moments and cumulants.
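As a sanity check of the recursion (not part of the cited paper), the sketch below applies it to the raw moments of an Exponential(1) distribution, $\theta_n = n!$, whose cumulants are known to be $\kappa_n = (n-1)!$.

from math import comb, factorial

# raw moments about the origin for Exponential(1): theta_n = n!
theta = {n: factorial(n) for n in range(0, 7)}

kappa = {}
for n in range(1, 7):
    # kappa_n = theta_n - sum_{i=1}^{n-1} C(n-1, i) kappa_{n-i} theta_i
    kappa[n] = theta[n] - sum(comb(n - 1, i) * kappa[n - i] * theta[i] for i in range(1, n))
    print(n, kappa[n], factorial(n - 1))   # recursion result vs. known cumulant (n-1)!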
Proof of recurrence between cumulants and central-moments
The equation given by Wikipedia connects cumulants to moments (generally). A proof of a formula connecting cumulants to central moments is found in A Recursive Formulation of the Old Problem of Obtain
Proof of recurrence between cumulants and central-moments The equation given by Wikipedia connects cumulants to moments (generally). A proof of a formula connecting cumulants to central moments is found in A Recursive Formulation of the Old Problem of Obtaining Moments from Cumulants and Vice Versa Letting $K(t)$ be the cumulant-generating function, and $M(t)$ the moment-generating function. The relationship between the two is \begin{equation} M(t)=\exp{\left[K(t)\right]} \end{equation} The proof follows by differentiation of this expression and noting that the $n$th derivative can be written as \begin{equation} D^n[M(t)]=\sum_{i=0}^{n-1}\binom{n-1}{i}D^{n-i}[K(t)]D^i[M(t)] \end{equation} Where $D^k$ denotes the $k$th derivative. Now setting $t=0$: \begin{equation} \theta_n=\sum_{i=0}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i\\ \theta_n=\kappa_n+\sum_{i=1}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i\\ \end{equation} Rewriting yields: \begin{equation} \kappa_n = \theta_n-\sum_{i=1}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i \end{equation} In terms of the central moments and cumulants.
Proof of recurrence between cumulants and central-moments The equation given by Wikipedia connects cumulants to moments (generally). A proof of a formula connecting cumulants to central moments is found in A Recursive Formulation of the Old Problem of Obtain
54,283
Proof of recurrence between cumulants and central-moments
The paper mentioned and the formula cited by Itronneberg still refer to "raw" (non-central, "at the origin") moments. To verify this, take n=3: you get $\kappa_3 = \theta_3 - \kappa_1\theta_2$. (EDIT: fixed according to comment by GΓ’teau-Gallois) Hence, $\theta_n$ clearly denotes raw moments: the cumulant and the central moment should coincide for $n=3$. Indeed, the paper quoted doesn't have any reference to central moments. Another paper, however (which also mentions the former paper), does: Relationships Between Central Moments and Cumulants, with Formulae for the Central Moments of Gamma Distributions. And it provides the following formula (eq. (2.2)): $$ \kappa_r = \mu_r - \sum_{j=1}^{r-2} {r-1 \choose j} \mu_j \kappa_{r-j} \qquad r \geq 2. $$ Indeed, if you set $r=2$, you get that the summation is empty, and $\kappa_2 = \mu_2$.
Proof of recurrence between cumulants and central-moments
The paper mentioned and the formula cited by Itronneberg still refer to "raw" (non-central, "at the origin") moments. To verify this, take n=3: you get $\kappa_3 = \theta_3 - \kappa_1\theta_2$. (EDIT:
Proof of recurrence between cumulants and central-moments The paper mentioned and the formula cited by Itronneberg still refer to "raw" (non-central, "at the origin") moments. To verify this, take n=3: you get $\kappa_3 = \theta_3 - \kappa_1\theta_2$. (EDIT: fixed according to comment by GΓ’teau-Gallois) Hence, $\theta_n$ clearly denotes raw moments: the cumulant and the central moment should coincide for $n=3$. Indeed, the paper quoted doesn't have any reference to central moments. Another paper, however (which also mentions the former paper), does: Relationships Between Central Moments and Cumulants, with Formulae for the Central Moments of Gamma Distributions. And it provides the following formula (eq. (2.2)): $$ \kappa_r = \mu_r - \sum_{j=1}^{r-2} {r-1 \choose j} \mu_j \kappa_{r-j} \qquad r \geq 2. $$ Indeed, if you set $r=2$, you get that the summation is empty, and $\kappa_2 = \mu_2$.
Proof of recurrence between cumulants and central-moments The paper mentioned and the formula cited by Itronneberg still refer to "raw" (non-central, "at the origin") moments. To verify this, take n=3: you get $\kappa_3 = \theta_3 - \kappa_1\theta_2$. (EDIT:
54,284
Multi parameter Metropolis-Hastings
You actually have a single joint prior, which is a function of the parameter vector $\theta = [a_1, ..., a_d]$. If the parameters are treated independently, the prior factorizes into a product of the 'individual priors' that you mentioned. That is: $$p(\theta) = \prod_{i = 1}^d p(a_i)$$ Classic Metropolis-Hastings looks the same whether you have a single parameter or multiple parameters; in the multi-parameter case, you just consider the parameter vector as a single object. Let $\theta_t$ be the current parameter vector (at step $t$), $\theta'$ be a new candidate parameter vector drawn from the proposal distribution, and $D$ be the data. Calculate the ratio: $$R_t = \frac{p(D \mid \theta') p(\theta')}{p(D \mid \theta_t) p(\theta_t)}$$ Accept $\theta'$ if $R_t \ge 1$, otherwise accept it with probability $R_t$. (This form of the ratio assumes a symmetric proposal distribution, as in random-walk Metropolis; for an asymmetric proposal, multiply by the Hastings correction $q(\theta_t \mid \theta') / q(\theta' \mid \theta_t)$.) Classic Metropolis-Hastings can be slow to converge in high dimensions. If this is a problem, more advanced techniques like Hamiltonian Monte Carlo can be used.
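A hedged sketch of a random-walk Metropolis update for a two-parameter vector $\theta = (\mu, \log\sigma)$ with normally distributed data; the data, priors and step size are made up, and everything is computed on the log scale to avoid underflow. With this symmetric proposal, the acceptance ratio is exactly the one above.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.5, 50)               # made-up data

def log_post(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    log_lik = norm.logpdf(data, mu, sigma).sum()
    log_prior = norm.logpdf(mu, 0, 10) + norm.logpdf(log_sigma, 0, 10)   # vague priors
    return log_lik + log_prior

theta = np.array([0.0, 0.0])
samples = []
for t in range(5000):
    prop = theta + rng.normal(0, 0.2, size=2)                       # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):    # accept with prob min(1, R_t)
        theta = prop
    samples.append(theta)

print(np.mean(samples, axis=0))               # rough posterior means of (mu, log_sigma)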
Multi parameter Metropolis-Hastings
You actually have a single joint prior, which is a function of the parameter vector $\theta = [a_1, ..., a_d]$. If the parameters are treated independently, the prior factorizes into a product of the
Multi parameter Metropolis-Hastings You actually have a single joint prior, which is a function of the parameter vector $\theta = [a_1, ..., a_d]$. If the parameters are treated independently, the prior factorizes into a product of the 'individual priors' that you mentioned. That is: $$p(\theta) = \prod_{i = 1}^d p(a_i)$$ Classic Metropolis-Hastings looks the same whether you have a single parameter or multiple parameters; in the multi-parameter case, you just consider the parameter vector as a single object. Let $\theta_t$ be the current parameter vector (at step $t$), $\theta'$ be a new candidate parameter vector drawn from the proposal distribution, and $D$ be the data. Calculate the ratio: $$R_t = \frac{p(D \mid \theta') p(\theta')}{p(D \mid \theta_t) p(\theta_t)}$$ Accept $\theta'$ if $R_t \ge 1$, otherwise accept it with probability $R_t$. (This form of the ratio assumes a symmetric proposal distribution, as in random-walk Metropolis; for an asymmetric proposal, multiply by the Hastings correction $q(\theta_t \mid \theta') / q(\theta' \mid \theta_t)$.) Classic Metropolis-Hastings can be slow to converge in high dimensions. If this is a problem, more advanced techniques like Hamiltonian Monte Carlo can be used.
Multi parameter Metropolis-Hastings You actually have a single joint prior, which is a function of the parameter vector $\theta = [a_1, ..., a_d]$. If the parameters are treated independently, the prior factorizes into a product of the
54,285
Multi parameter Metropolis-Hastings
To add to the existing thread, you could think of doing this using what are called piecewise or blockwise updates. If you're considering the gaussian case and block updates you would have candidate values that are generated using a mean vector and a co-variance matrix and everything is accepted or rejected at once. If you're however considering piecewise updates, you would then sample each parameter of interest independently and apply the acceptance criteria individually. This link might help clarify https://theclevermachine.wordpress.com/2012/11/04/mcmc-multivariate-distributions-block-wise-component-wise-updates/
Multi parameter Metropolis-Hastings
To add to the existing thread, you could think of doing this using what are called piecewise or blockwise updates. If you're considering the gaussian case and block updates you would have candidate v
Multi parameter Metropolis-Hastings To add to the existing thread, you could think of doing this using what are called piecewise or blockwise updates. If you're considering the gaussian case and block updates you would have candidate values that are generated using a mean vector and a co-variance matrix and everything is accepted or rejected at once. If you're however considering piecewise updates, you would then sample each parameter of interest independently and apply the acceptance criteria individually. This link might help clarify https://theclevermachine.wordpress.com/2012/11/04/mcmc-multivariate-distributions-block-wise-component-wise-updates/
Multi parameter Metropolis-Hastings To add to the existing thread, you could think of doing this using what are called piecewise or blockwise updates. If you're considering the gaussian case and block updates you would have candidate v
54,286
Mean Absolute Deviation of Student t distributions?
General solution Let $f$ be the density of a symmetrical distribution with zero expectation (such as any Student $t$ distribution with $\nu \gt 1$ degrees of freedom) and $F$ be its distribution function. Integrating by parts for $\mu \ge 0$, observe that $$\int_\mu^\infty t f(t) dt = (t(1-F(t)))\big|_\mu^\infty + \int_\mu^\infty(1-F(t))dt = \mu(1-F(\mu)) + \int_\mu^\infty(1-F(t))dt.$$ Let's call this function $h(\mu)$. Shift the location of the distribution to $\mu \ge 0$. Let's compute its mean absolute deviation from its new mean: $$\operatorname{MAD}(f, \mu) = \int_{-\infty}^\infty |t|f(t-\mu)dt = \int_{-\infty}^\infty |t-\mu| f(t) dt.$$ Break the latter at $0$ and $\mu$ into three integrals: $$\eqalign{ &=\left(\int_{-\infty}^0 + \int_0^\mu + \int_{\mu}^\infty\right) |t-\mu| f(t)dt \\ &=\int_{-\infty}^0 (\mu-t)f(t)dt + \int_0^\mu (\mu-t) f(t)dt + \int_{\mu}^\infty (t-\mu) f(t)dt. }$$ Substitute $t\to -t$ in the first, expand them all, and cancel as many terms as possible: $$\eqalign{ &=(\mu F(0) + h(0)) + (\mu(F(\mu)-F(0)) -h(0) + h(\mu)) + (h(\mu) - \mu(1-F(\mu))) \\ &= \mu(2F(\mu)-1) + 2h(\mu) \\ &= \mu + 2\int_\mu^\infty(1-F(t))dt. }$$ We will find the middle expression most convenient with the Student $t$ distribution, but the final one is rather pretty: it's the mean plus the integral of the tails of the distribution. It shows that for distributions with rapidly decreasing tails, the MAD quickly approximates the mean itself when the mean is large. The curve plots $F$ shifted to the right by $\mu$. The MAD is the total dark shaded area: above the curve for positive values, below it for negative values. The red region between $\mu$ and $2\mu$ is congruent to the gray region beneath the curve from $0$ to $\mu$; together, they fit into the entire gray shaded rectangle of base $\mu$ and height $1$, thereby accounting for the $\mu$ term in the formula. The remaining portions are the two tail areas $\int_\mu^\infty (1-F(t))dt=\int_{-\infty}^{-\mu}F(t)dt$ which, because $F$ is symmetric around $0$, are congruent. Let's shift the distribution to $\mu/\sigma$ and then scale it by a factor of $\sigma$. The scaling must change $\mathbb{E}(|t|)$ by a factor of $|\sigma| = \sigma$ and it scales the mean $\mu/\sigma$ to $\mu$. The formula for the MAD of this location-scale family defined by $f$, with location $\mu$ and scale $\sigma$, therefore is $$\operatorname{MAD}(f,\mu,\sigma) = \mu(2F(\mu/\sigma) - 1) + 2\sigma h(\mu/\sigma).\tag{1}$$ Finally, because we assumed the distribution $F$ was symmetric with zero expectation, the MAD for $\mu$ must equal the MAD for $-\mu$. MAD for the Student t distribution The density function for the Student t distribution with $\nu$ degrees of freedom is $$f_\nu(t) = \frac{\nu^{\nu/2}}{B\left(\frac{\nu}{2}, \frac{1}{2}\right)} \left(t^2 + \nu\right)^{-(1+\nu)/2} = C(\nu)\left(t^2 + \nu\right)^{-(1+\nu)/2} .$$ The inviting substitution $u = t^2+\nu$ makes short work of computing $h(\mu)$. 
Because $\mu \ge 0$, this is a one-to-one transformation, giving $$\eqalign{ h(\mu) &= C(\nu)\int_\mu^\infty t (t^2 + \nu)^{-(1+\nu)/2}dt =\frac{C(\nu)}{2}\int_{\mu^2 + \nu}^\infty u^{-(1+\nu)/2}du\\ &= \frac{C(\nu)}{(\nu-1)(\mu^2 + \nu)^{(\nu-1)/2}}.}$$ For $\mu=0$ and $\sigma=1$, the symmetry implies $F(\mu/\sigma)=F(0)=1/2$, simplifying $(1)$ to $$\operatorname{MAD}(f_\nu,0,1) = 2h(0) = \frac{2C(\nu)}{(\nu-1)\nu^{(\nu-1)/2}} = \frac{2\sqrt{\nu}}{(\nu-1)B(\nu/2, 1/2)}.$$ Conclusions Neither $(1)$ nor the simplified version for $\mu=0, \sigma=1$ agrees with those on the Wolfram site: The first lacks a factor of $2$. The second has multiple typographical errors. As written, it's clearly wrong because it does not scale correctly with $\sigma$: it needs to be proportional to it. The denominator is correct, but evidently some significant formatting errors occurred in the numerator. Verification We may quickly compare simulated estimates of the MAD with the formula $(1)$. Here is an R implementation. In one million iterations with $\mu=-3, \sigma=4,\nu=2$ it output Simulated Formula 6.390525 6.403124 The two values are not significantly different. This code ought to be readable by non-R users; it suffices to know that pt implements the Student $t$ distribution function $F$ and rt draws random variates from it. C <- function(nu) nu^(nu/2) / beta(nu/2, 1/2) h <- function(nu, mu=0) C(nu) * (1/(nu-1) * (mu^2 + nu)^((1-nu)/2)) MAD <- function(nu, mu=0, sigma=1) { mu <- mu/sigma (mu * (2*pt(mu, nu) - 1) + 2*h(nu, mu))*sigma } set.seed(17) mu <- -3 sigma <- 4 nu <- 2 c(Simulated=mean(abs(mu + sigma*rt(1e6, nu))), Formula=MAD(nu, abs(mu), sigma))
Mean Absolute Deviation of Student t distributions?
General solution Let $f$ be the density of a symmetrical distribution with zero expectation (such as any Student $t$ distribution with $\nu \gt 1$ degrees of freedom) and $F$ be its distribution funct
Mean Absolute Deviation of Student t distributions? General solution Let $f$ be the density of a symmetrical distribution with zero expectation (such as any Student $t$ distribution with $\nu \gt 1$ degrees of freedom) and $F$ be its distribution function. Integrating by parts for $\mu \ge 0$, observe that $$\int_\mu^\infty t f(t) dt = (t(1-F(t)))\big|_\mu^\infty + \int_\mu^\infty(1-F(t))dt = \mu(1-F(\mu)) + \int_\mu^\infty(1-F(t))dt.$$ Let's call this function $h(\mu)$. Shift the location of the distribution to $\mu \ge 0$. Let's compute its mean absolute deviation from its new mean: $$\operatorname{MAD}(f, \mu) = \int_{-\infty}^\infty |t|f(t-\mu)dt = \int_{-\infty}^\infty |t-\mu| f(t) dt.$$ Break the latter at $0$ and $\mu$ into three integrals: $$\eqalign{ &=\left(\int_{-\infty}^0 + \int_0^\mu + \int_{\mu}^\infty\right) |t-\mu| f(t)dt \\ &=\int_{-\infty}^0 (\mu-t)f(t)dt + \int_0^\mu (\mu-t) f(t)dt + \int_{\mu}^\infty (t-\mu) f(t)dt. }$$ Substitute $t\to -t$ in the first, expand them all, and cancel as many terms as possible: $$\eqalign{ &=(\mu F(0) + h(0)) + (\mu(F(\mu)-F(0)) -h(0) + h(\mu)) + (h(\mu) - \mu(1-F(\mu))) \\ &= \mu(2F(\mu)-1) + 2h(\mu) \\ &= \mu + 2\int_\mu^\infty(1-F(t))dt. }$$ We will find the middle expression most convenient with the Student $t$ distribution, but the final one is rather pretty: it's the mean plus the integral of the tails of the distribution. It shows that for distributions with rapidly decreasing tails, the MAD quickly approximates the mean itself when the mean is large. The curve plots $F$ shifted to the right by $\mu$. The MAD is the total dark shaded area: above the curve for positive values, below it for negative values. The red region between $\mu$ and $2\mu$ is congruent to the gray region beneath the curve from $0$ to $\mu$; together, they fit into the entire gray shaded rectangle of base $\mu$ and height $1$, thereby accounting for the $\mu$ term in the formula. The remaining portions are the two tail areas $\int_\mu^\infty (1-F(t))dt=\int_{-\infty}^{-\mu}F(t)dt$ which, because $F$ is symmetric around $0$, are congruent. Let's shift the distribution to $\mu/\sigma$ and then scale it by a factor of $\sigma$. The scaling must change $\mathbb{E}(|t|)$ by a factor of $|\sigma| = \sigma$ and it scales the mean $\mu/\sigma$ to $\mu$. The formula for the MAD of this location-scale family defined by $f$, with location $\mu$ and scale $\sigma$, therefore is $$\operatorname{MAD}(f,\mu,\sigma) = \mu(2F(\mu/\sigma) - 1) + 2\sigma h(\mu/\sigma).\tag{1}$$ Finally, because we assumed the distribution $F$ was symmetric with zero expectation, the MAD for $\mu$ must equal the MAD for $-\mu$. MAD for the Student t distribution The density function for the Student t distribution with $\nu$ degrees of freedom is $$f_\nu(t) = \frac{\nu^{\nu/2}}{B\left(\frac{\nu}{2}, \frac{1}{2}\right)} \left(t^2 + \nu\right)^{-(1+\nu)/2} = C(\nu)\left(t^2 + \nu\right)^{-(1+\nu)/2} .$$ The inviting substitution $u = t^2+\nu$ makes short work of computing $h(\mu)$. 
Because $\mu \ge 0$, this is a one-to-one transformation, giving $$\eqalign{ h(\mu) &= C(\nu)\int_\mu^\infty t (t^2 + \nu)^{-(1+\nu)/2}dt =\frac{C(\nu)}{2}\int_{\mu^2 + \nu}^\infty u^{-(1+\nu)/2}du\\ &= \frac{C(\nu)}{(\nu-1)(\mu^2 + \nu)^{(\nu-1)/2}}.}$$ For $\mu=0$ and $\sigma=1$, the symmetry implies $F(\mu/\sigma)=F(0)=1/2$, simplifying $(1)$ to $$\operatorname{MAD}(f_\nu,0,1) = 2h(0) = \frac{2C(\nu)}{(\nu-1)\nu^{(\nu-1)/2}} = \frac{2\sqrt{\nu}}{(\nu-1)B(\nu/2, 1/2)}.$$ Conclusions Neither $(1)$ nor the simplified version for $\mu=0, \sigma=1$ agrees with those on the Wolfram site: The first lacks a factor of $2$. The second has multiple typographical errors. As written, it's clearly wrong because it does not scale correctly with $\sigma$: it needs to be proportional to it. The denominator is correct, but evidently some significant formatting errors occurred in the numerator. Verification We may quickly compare simulated estimates of the MAD with the formula $(1)$. Here is an R implementation. In one million iterations with $\mu=-3, \sigma=4,\nu=2$ it output Simulated Formula 6.390525 6.403124 The two values are not significantly different. This code ought to be readable by non-R users; it suffices to know that pt implements the Student $t$ distribution function $F$ and rt draws random variates from it. C <- function(nu) nu^(nu/2) / beta(nu/2, 1/2) h <- function(nu, mu=0) C(nu) * (1/(nu-1) * (mu^2 + nu)^((1-nu)/2)) MAD <- function(nu, mu=0, sigma=1) { mu <- mu/sigma (mu * (2*pt(mu, nu) - 1) + 2*h(nu, mu))*sigma } set.seed(17) mu <- -3 sigma <- 4 nu <- 2 c(Simulated=mean(abs(mu + sigma*rt(1e6, nu))), Formula=MAD(nu, abs(mu), sigma))
Mean Absolute Deviation of Student t distributions? General solution Let $f$ be the density of a symmetrical distribution with zero expectation (such as any Student $t$ distribution with $\nu \gt 1$ degrees of freedom) and $F$ be its distribution funct
54,287
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation
It depends on your model and its probabilistic structure. Here are some options: Composite/Quasi/Pseudo likelihood methods: Varin, Cristiano, Nancy Reid, and David Firth. "An overview of composite likelihood methods." Statistica Sinica (2011): 5-42. This paper contains an overview of composite likelihood methods, which essentially consist of weighted products of functions of the data: $$L_C(\theta;Data) = \prod_{j=1}^K L_j(\theta;Data)^{\omega_j},$$ for which the argmax is a consistent estimator of $\theta$. The choice of the functions $L_j$ and the weight is non-trivial (see the reference for more details). Indirect Inference: Gourieroux, Christian, Alain Monfort, and Eric Renault. "Indirect inference." Journal of applied econometrics 8.S1 (1993): S85-S118. This paper introduces an alternative estimation method, based on simulation, which requires the specification of an auxiliary model, as well as the relationship between the parameters of the original model and the auxiliary model. By using these connections, an approximate estimator, which is not necessarily consistent, is proposed based on simulations of the model. The specification of the auxiliary model seems to require a case by case analysis. The introduction of the following paper has a literature review on other, more particular, methods: Rubio, F. J., & Johansen, A. M. (2013). A simple approach to maximum intractable likelihood estimation. Electronic Journal of Statistics, 7, 1632-1654. This paper proposes a method for maximum likelihood estimation based on ABC. The paper contains a literature review on other estimation methods.
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation
It depends on your model and its probabilistic structure. Here are some options: Composite/Quasi/Pseudo likelihood methods: Varin, Cristiano, Nancy Reid, and David Firth. "An overview of composite
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation It depends on your model and its probabilistic structure. Here are some options: Composite/Quasi/Pseudo likelihood methods: Varin, Cristiano, Nancy Reid, and David Firth. "An overview of composite likelihood methods." Statistica Sinica (2011): 5-42. This paper contains an overview of composite likelihood methods, which essentially consist of weighted products of functions of the data: $$L_C(\theta;Data) = \prod_{j=1}^K L_j(\theta;Data)^{\omega_j},$$ for which the argmax is a consistent estimator of $\theta$. The choice of the functions $L_j$ and the weight is non-trivial (see the reference for more details). Indirect Inference: Gourieroux, Christian, Alain Monfort, and Eric Renault. "Indirect inference." Journal of applied econometrics 8.S1 (1993): S85-S118. This paper introduces an alternative estimation method, based on simulation, which requires the specification of an auxiliary model, as well as the relationship between the parameters of the original model and the auxiliary model. By using these connections, an approximate estimator, which is not necessarily consistent, is proposed based on simulations of the model. The specification of the auxiliary model seems to require a case by case analysis. The introduction of the following paper has a literature review on other, more particular, methods: Rubio, F. J., & Johansen, A. M. (2013). A simple approach to maximum intractable likelihood estimation. Electronic Journal of Statistics, 7, 1632-1654. This paper proposes a method for maximum likelihood estimation based on ABC. The paper contains a literature review on other estimation methods.
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation It depends on your model and its probabilistic structure. Here are some options: Composite/Quasi/Pseudo likelihood methods: Varin, Cristiano, Nancy Reid, and David Firth. "An overview of composite
54,288
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation
You could consider the method of simulated moments, see for instance: http://www.stat.columbia.edu/~gelman/research/published/moments.pdf Otherwise, synthetic likelihood might be another option: http://www.nature.com/nature/journal/v466/n7310/full/nature09319.html
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation
You could consider the method of simulated moments, see for instance: http://www.stat.columbia.edu/~gelman/research/published/moments.pdf Otherwise, synthetic likelihood might be another option: http
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation You could consider the method of simulated moments, see for instance: http://www.stat.columbia.edu/~gelman/research/published/moments.pdf Otherwise, synthetic likelihood might be another option: http://www.nature.com/nature/journal/v466/n7310/full/nature09319.html
Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation You could consider the method of simulated moments, see for instance: http://www.stat.columbia.edu/~gelman/research/published/moments.pdf Otherwise, synthetic likelihood might be another option: http
54,289
The distribution of the AUC
The AUC can be viewed as the Wilcoxon-Mann-Whitney test statistic. Here is a demo: in the R code I posted, I first calculate the AUC, then use the Wilcoxon-Mann-Whitney test to calculate the same number, and verify that both equal 0.911332. For a hypothesis test, it is not hard to derive a confidence interval, right? Also, I do not remember the Wilcoxon-Mann-Whitney test requiring a normal distribution.
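The R code mentioned is not reproduced here, but the same check is easy to do in Python (everything below is synthetic and only illustrative): with a recent SciPy, mannwhitneyu returns the U statistic of its first argument, and dividing by the product of the group sizes recovers the AUC.

import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = np.r_[np.zeros(300), np.ones(200)]                        # synthetic labels
scores = np.r_[rng.normal(0, 1, 300), rng.normal(1, 1, 200)]  # synthetic scores

auc = roc_auc_score(y, scores)
u, _ = mannwhitneyu(scores[y == 1], scores[y == 0], alternative="two-sided")
print(auc, u / (200 * 300))   # the two numbers coincide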
The distribution of the AUC
AUC can be viewed as Wilcoxon-Mann-Whitney Test. And here is some demo, where for the R code I posted, I first calculate AUC, then use Wilcoxon-Mann-Whitney Test to calculate the number. Then verify b
The distribution of the AUC The AUC can be viewed as the Wilcoxon-Mann-Whitney test statistic. Here is a demo: in the R code I posted, I first calculate the AUC, then use the Wilcoxon-Mann-Whitney test to calculate the same number, and verify that both equal 0.911332. For a hypothesis test, it is not hard to derive a confidence interval, right? Also, I do not remember the Wilcoxon-Mann-Whitney test requiring a normal distribution.
The distribution of the AUC AUC can be viewed as Wilcoxon-Mann-Whitney Test. And here is some demo, where for the R code I posted, I first calculate AUC, then use Wilcoxon-Mann-Whitney Test to calculate the number. Then verify b
54,290
The distribution of the AUC
As hxd1011 said, the AUC is equivalent to the U statistic calculated for the Mann-Whitney-U aka Wilcoxon-rank-sum test, [normalized to [0;1] by the product of observations in each group]. The original paper by Mann and Whitney (1947) contains a proof for approximate normality and the U statistic is approximately Normal distributed even for small samples (~20). In that sense, the AUC could be assumed to be approximately Normal distributed. To clarify some comments here, note that the test itself does not make any assumption on the distribution of the scores (usually the probability predictions from the model) (i.e. the test is non-parametric). However, the exact variance of the AUC can only be calculated based on the distribution of the scores (Cortes & Mohri 2005), which typically do not follow a known distribution. There seem to be different suggestions on how to calculate a useful confidence interval, see Cortes & Mohri (2005) for a summary and their (long) formula in Corollary 1 ([4]): $$ \sigma^2(AUC) = \frac{(m+n+1)(m+n)(m+nβˆ’1)T((m+nβˆ’2)Z_4βˆ’(2mβˆ’n+3kβˆ’10)Z_3)} {72m^2n^2} + \frac{(m+n+1)(m+n)T(m^2βˆ’nm+3kmβˆ’5m+2k^2βˆ’nk+12βˆ’9k)Z_2}{48m^2n^2} βˆ’ \frac{(m+n+1)^2(mβˆ’n)^4 Z_1^2} { 16m^2n^2} βˆ’ \frac{(m+n+1)Q_1 Z_1}{72m^2n^2} + \frac{kQ_0}{144m^2n^2} $$ with: $$ Z_i = \frac{ \sum^{kβˆ’i}_{x=0} \left(\begin{smallmatrix} m+n+1βˆ’i \\ x \end{smallmatrix}\right) } { \sum^k_{x=0} \left(\begin{smallmatrix} m+n+1 \\ x \end{smallmatrix}\right) } $$ $$ T = 3((m βˆ’ n)^2 + m + n) + 2 $$ $$ Q_0 = (m + n + 1)T k^2 + ((βˆ’3n^2 + 3mn + 3m + 1)T βˆ’ 12(3mn + m + n) βˆ’ 8)k + (βˆ’3m^2 +7m + 10n + 3nm + 10)T βˆ’ 4(3mn + m + n + 1) $$ $$ Q_1 = T k^3 + 3(m βˆ’ 1)T k^2 + ((βˆ’3n^2 + 3mn βˆ’ 3m + 8)T βˆ’ 6(6mn + m + n))k + (βˆ’3m^2 +7(m + n) + 3mn)T βˆ’ 2(6mn + m + n) $$ References Cortes, C., & Mohri, M. (2005). Confidence intervals for the area under the ROC curve. In Advances in neural information processing systems (pp. 305-312). https://cs.nyu.edu/~mohri/pub/area.pdf
The distribution of the AUC
As hxd1011 said, the AUC is equivalent to the U statistic calculated for the Mann-Whitney-U aka Wilcoxon-rank-sum test, [normalized to [0;1] by the product of observations in each group]. The original
The distribution of the AUC As hxd1011 said, the AUC is equivalent to the U statistic calculated for the Mann-Whitney-U aka Wilcoxon-rank-sum test, [normalized to [0;1] by the product of observations in each group]. The original paper by Mann and Whitney (1947) contains a proof for approximate normality and the U statistic is approximately Normal distributed even for small samples (~20). In that sense, the AUC could be assumed to be approximately Normal distributed. To clarify some comments here, note that the test itself does not make any assumption on the distribution of the scores (usually the probability predictions from the model) (i.e. the test is non-parametric). However, the exact variance of the AUC can only be calculated based on the distribution of the scores (Cortes & Mohri 2005), which typically do not follow a known distribution. There seem to be different suggestions on how to calculate a useful confidence interval, see Cortes & Mohri (2005) for a summary and their (long) formula in Corollary 1 ([4]): $$ \sigma^2(AUC) = \frac{(m+n+1)(m+n)(m+nβˆ’1)T((m+nβˆ’2)Z_4βˆ’(2mβˆ’n+3kβˆ’10)Z_3)} {72m^2n^2} + \frac{(m+n+1)(m+n)T(m^2βˆ’nm+3kmβˆ’5m+2k^2βˆ’nk+12βˆ’9k)Z_2}{48m^2n^2} βˆ’ \frac{(m+n+1)^2(mβˆ’n)^4 Z_1^2} { 16m^2n^2} βˆ’ \frac{(m+n+1)Q_1 Z_1}{72m^2n^2} + \frac{kQ_0}{144m^2n^2} $$ with: $$ Z_i = \frac{ \sum^{kβˆ’i}_{x=0} \left(\begin{smallmatrix} m+n+1βˆ’i \\ x \end{smallmatrix}\right) } { \sum^k_{x=0} \left(\begin{smallmatrix} m+n+1 \\ x \end{smallmatrix}\right) } $$ $$ T = 3((m βˆ’ n)^2 + m + n) + 2 $$ $$ Q_0 = (m + n + 1)T k^2 + ((βˆ’3n^2 + 3mn + 3m + 1)T βˆ’ 12(3mn + m + n) βˆ’ 8)k + (βˆ’3m^2 +7m + 10n + 3nm + 10)T βˆ’ 4(3mn + m + n + 1) $$ $$ Q_1 = T k^3 + 3(m βˆ’ 1)T k^2 + ((βˆ’3n^2 + 3mn βˆ’ 3m + 8)T βˆ’ 6(6mn + m + n))k + (βˆ’3m^2 +7(m + n) + 3mn)T βˆ’ 2(6mn + m + n) $$ References Cortes, C., & Mohri, M. (2005). Confidence intervals for the area under the ROC curve. In Advances in neural information processing systems (pp. 305-312). https://cs.nyu.edu/~mohri/pub/area.pdf
The distribution of the AUC As hxd1011 said, the AUC is equivalent to the U statistic calculated for the Mann-Whitney-U aka Wilcoxon-rank-sum test, [normalized to [0;1] by the product of observations in each group]. The original
54,291
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution
Updated with full solution since OP has now solved it. For a complete solution, one needs to first show that $ Y_i:= X_i - \bar{X}$ is a Gaussian random variable, whence it suffices to find its mean and variance to characterize the distribution. Knowing something about Gaussian random vectors makes this straight-forward. This solution is presented first, and then I also provide a more direct argument without use of multivariate techniques. Multivariate solution: If $X_i\overset{iid}{\sim} N(\mu, \sigma^2)$, then $\mathbf{x}: = [X_1, \dots, X_n]' \sim N_n(1_n\mu, \sigma^2I_n)$. We also know that for any conformable matrix $A$, $A'\mathbf{x}$ is Gaussian with mean $A'1_n\mu$ and covariance matrix $\sigma^2A'A$. Consider, without loss of generality, the case $i = 1$. Then we have $$A=[1 - 1/n, -1/n,\dots, -1/n]'.$$ That is, $Y_1$ is Gaussian with mean $A'1_n \mu = 0$ and variance $$\sigma^2 A'A = \sigma^2([1-1/n]^2+(n-1)/n^2) = \sigma^2(n-1)/n.$$ This completes the first solution. Univariate solution: We can write $Y_i = (1-1/n)X_i - \sum_{j\neq i}X_j/n$, where the first term is independent of the second because functions of independent random variables are independent. We also know that sums of independent Gaussian random variables are still Gaussian, and that multiplying a Gaussian random variable by a constant gives another Gaussian random variable. Thus, using standard rules of means and variances, $$(1 - 1/n)X_i\sim N((1-1/n)\mu, (1-1/n)^2 \sigma^2)$$ and $$ -\frac{1}{n}\sum_{j\neq i}X_j \sim N(-(n-1)\mu/n, (n-1)\sigma^2/n^2), $$ which implies that $$Y_i \sim N(0, \sigma^2(n-1)/n),$$ where we have used that the two means cancel and that independence implies zero covariance, so the variances add.
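A quick simulation check of the result (the values of $\mu$, $\sigma$ and $n$ are arbitrary): the sample mean and variance of $Y_1 = X_1 - \bar{X}$ should be close to $0$ and $\sigma^2(n-1)/n$.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 3.0, 2.0, 10
X = rng.normal(mu, sigma, size=(200000, n))
Y1 = X[:, 0] - X.mean(axis=1)

print(Y1.mean(), Y1.var())            # simulated mean and variance of Y_1
print(0.0, sigma**2 * (n - 1) / n)    # theoretical values: 0 and 3.6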
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution
Updated with full solution since OP has now solved it. For a complete solution, one needs to first show that $ Y_i:= X_i - \bar{X}$ is a Gaussian random variable, whence it suffices to find its mean
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution Updated with full solution since OP has now solved it. For a complete solution, one needs to first show that $ Y_i:= X_i - \bar{X}$ is a Gaussian random variable, whence it suffices to find its mean and variance to characterize the distribution. Knowing something about Gaussian random vectors makes this straightforward. This solution is presented first, and then I also provide a more direct argument without use of multivariate techniques. Multivariate solution: If $X_i\overset{iid}{\sim} N(\mu, \sigma^2)$, then $\mathbf{x} := [X_1, \dots, X_n]' \sim N_n(1_n\mu, \sigma^2I_n)$. We also know that for any conformable matrix $A$, $A'\mathbf{x}$ is Gaussian with mean $A'1_n\mu$ and covariance matrix $\sigma^2A'A$. Consider, without loss of generality, the case $i = 1$. Then we have $$A=[1 - 1/n, -1/n,\dots, -1/n]'.$$ That is, $Y_1$ is Gaussian with mean $A'1_n \mu = 0$ and variance $$\sigma^2 A'A = \sigma^2([1-1/n]^2+(n-1)/n^2) = \sigma^2(n-1)/n.$$ This completes the first solution. Univariate solution: We can write $Y_i = (1-1/n)X_i - \sum_{j\neq i}X_j/n$, where the first term is independent of the second because functions of independent random variables are independent. We also know that sums of independent Gaussian random variables are still Gaussian, and that multiplying a Gaussian random variable by a constant gives another Gaussian random variable. Thus, using standard rules for means and variances, $$(1 - 1/n)X_i\sim N\left((1-1/n)\mu,\ (1-1/n)^2 \sigma^2\right)$$ and $$ -\frac{1}{n}\sum_{j\neq i}X_j \sim N\left(-(n-1)\mu/n,\ (n-1)\sigma^2/n^2\right). $$ Adding the two terms, the means cancel and, since independence implies zero covariance, the variances add, which gives $$Y_i \sim N\left(0,\ \sigma^2(n-1)/n\right).$$
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution Updated with full solution since OP has now solved it. For a complete solution, one needs to first show that $ Y_i:= X_i - \bar{X}$ is a Gaussian random variable, whence it suffices to find its mean
54,292
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution
Student001 already gave a solution for the question and it was accepted. Michael M's comments are also very helpful, so I will post another solution following Michael's suggestion, without using matrix notation. $X_1-\bar{X}=X_1-\frac{X_1+X_2+...+X_n}{n}\\=X_1-\frac{X_1}{n}-\frac{X_2+X_3+...+X_n}{n}\\=(1-\frac{1}{n})X_1-\frac{1}{n}(X_2+X_3+...+X_n)$ Next we know that: $(1-\frac{1}{n})X_1\sim N(\frac{n-1}{n}\mu,\frac{(n-1)^2}{n^2}\sigma^2) \tag 1$ $\frac{1}{n}(X_2+X_3+...+X_n)\sim \frac{1}{n}N((n-1)\mu,(n-1)\sigma^2)=N(\frac{n-1}{n}\mu,\frac{n-1}{n^2}\sigma^2) \tag 2$ Since the two terms are independent, $(1)-(2)$ has a $N(0,\frac{(n-1)^2+n-1}{n^2}\sigma^2)=N(0,\frac{n-1}{n}\sigma^2)$ distribution, i.e. $X_1-\bar{X}\sim N(0,\frac{n-1}{n}\sigma^2)$. This is exactly the same as Student001's result. Another method, as suggested by Glen_b: we need the variances of $X_i$ and $\bar{X}$: $Var(X_i-\bar{X})=Var(X_i)+Var(\bar{X})-2Cov(X_i,\bar{X})$ The key is to calculate $Cov(X_i,\bar{X})$: $Cov(X_i,\bar{X})=Cov(X_i, \frac{1}{n}(X_1+X_2+...+X_i+...+X_n))$ We will use the formula $Cov(X,Y+Z)=Cov(X,Y)+Cov(X,Z)$: $Cov(X_i,\frac{1}{n}(X_1+X_2+...+X_i+...+X_n))=Cov(X_i,\frac{1}{n}X_1)+...+Cov(X_i,\frac{1}{n}X_i)+...+Cov(X_i,\frac{1}{n}X_n)$ By independence (i.i.d.), all terms except $Cov(X_i,\frac{1}{n}X_i)$ are zero. $\therefore Cov(X_i,\bar{X})=Cov(X_i,\frac{1}{n}X_i)=\frac{1}{n}Cov(X_i,X_i)=\frac{1}{n}Var(X_i)=\frac{1}{n}\sigma^2$ Finally, $Var(X_i-\bar{X})=Var(X_i)+Var(\bar{X})-2Cov(X_i,\bar{X})=\sigma^2+\frac{\sigma^2}{n}-2\frac{1}{n}\sigma^2=\frac{n-1}{n}\sigma^2$ All methods give the same result.
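A small simulation sketch in R of the key identity $Cov(X_i,\bar{X})=\sigma^2/n$ (arbitrary parameter values, purely illustrative):

# Empirical check of Cov(X_i, Xbar) = sigma^2 / n and Var(X_i - Xbar) = sigma^2 * (n-1)/n
set.seed(7)
n <- 4; mu <- 1; sigma <- 3
x <- matrix(rnorm(2e5 * n, mu, sigma), ncol = n)   # each row is one sample of size n
xbar <- rowMeans(x)
cov(x[, 1], xbar)     # should be near sigma^2 / n = 2.25
var(x[, 1] - xbar)    # should be near sigma^2 * (n - 1) / n = 6.75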
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution
Student001 already gave a solution for the question and accepted. Micheal M's comments are also very helpful, so I will post another solution following Micheal's suggestion without using matrix notati
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution Student001 already gave a solution for the question and it was accepted. Michael M's comments are also very helpful, so I will post another solution following Michael's suggestion, without using matrix notation. $X_1-\bar{X}=X_1-\frac{X_1+X_2+...+X_n}{n}\\=X_1-\frac{X_1}{n}-\frac{X_2+X_3+...+X_n}{n}\\=(1-\frac{1}{n})X_1-\frac{1}{n}(X_2+X_3+...+X_n)$ Next we know that: $(1-\frac{1}{n})X_1\sim N(\frac{n-1}{n}\mu,\frac{(n-1)^2}{n^2}\sigma^2) \tag 1$ $\frac{1}{n}(X_2+X_3+...+X_n)\sim \frac{1}{n}N((n-1)\mu,(n-1)\sigma^2)=N(\frac{n-1}{n}\mu,\frac{n-1}{n^2}\sigma^2) \tag 2$ Since the two terms are independent, $(1)-(2)$ has a $N(0,\frac{(n-1)^2+n-1}{n^2}\sigma^2)=N(0,\frac{n-1}{n}\sigma^2)$ distribution, i.e. $X_1-\bar{X}\sim N(0,\frac{n-1}{n}\sigma^2)$. This is exactly the same as Student001's result. Another method, as suggested by Glen_b: we need the variances of $X_i$ and $\bar{X}$: $Var(X_i-\bar{X})=Var(X_i)+Var(\bar{X})-2Cov(X_i,\bar{X})$ The key is to calculate $Cov(X_i,\bar{X})$: $Cov(X_i,\bar{X})=Cov(X_i, \frac{1}{n}(X_1+X_2+...+X_i+...+X_n))$ We will use the formula $Cov(X,Y+Z)=Cov(X,Y)+Cov(X,Z)$: $Cov(X_i,\frac{1}{n}(X_1+X_2+...+X_i+...+X_n))=Cov(X_i,\frac{1}{n}X_1)+...+Cov(X_i,\frac{1}{n}X_i)+...+Cov(X_i,\frac{1}{n}X_n)$ By independence (i.i.d.), all terms except $Cov(X_i,\frac{1}{n}X_i)$ are zero. $\therefore Cov(X_i,\bar{X})=Cov(X_i,\frac{1}{n}X_i)=\frac{1}{n}Cov(X_i,X_i)=\frac{1}{n}Var(X_i)=\frac{1}{n}\sigma^2$ Finally, $Var(X_i-\bar{X})=Var(X_i)+Var(\bar{X})-2Cov(X_i,\bar{X})=\sigma^2+\frac{\sigma^2}{n}-2\frac{1}{n}\sigma^2=\frac{n-1}{n}\sigma^2$ All methods give the same result.
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution Student001 already gave a solution for the question and accepted. Micheal M's comments are also very helpful, so I will post another solution following Micheal's suggestion without using matrix notati
54,293
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution
Staying in the univariate case, since $X_i,\ i=1,\dots,n$ are i.i.d. normally distributed with mean $\mu$ and variance $\sigma^2$, we have, as you mentioned, $E[X_i -\bar X] = E[X_i- \frac{1}{n}\sum_{j=1}^n X_j] = E[X_i] - \frac{1}{n}\sum_{j=1}^n E[X_j] = \mu - \frac{1}{n}n\mu =0$ For the variance, notice that $Var[X_i-\bar X] = Var[X_i] + Var[\bar{X}] - 2Cov(X_i, \bar{X})$ where $Cov(X_i, \bar{X}) = Cov(X_i, \frac{1}{n}\sum_{j=1}^n X_j) = Cov(X_i, \frac{1}{n}X_i)$ by independence. This should lead you to your answer. Also, you technically have to check that the resulting distribution is itself normal (it is, because $X_i - \bar X$ is a linear combination of independent Gaussian random variables). Then, since the normal distribution is entirely determined by its first two moments, we have that: $X_i - \bar X \sim N \left (E[X_i - \bar X], Var[X_i - \bar X] \right )$
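Since the answer stresses that normality itself should be checked, here is a quick empirical check in R (arbitrary parameter values, only illustrative):

# Q-Q plot check that X_1 - Xbar looks normal with the stated variance
set.seed(123)
n <- 6; mu <- 10; sigma <- 2
d <- replicate(5e4, { x <- rnorm(n, mu, sigma); x[1] - mean(x) })
qqnorm(d); qqline(d)                        # points should fall on the line
c(mean(d), var(d), sigma^2 * (n - 1) / n)   # empirical vs theoretical variance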
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution
Staying in the univarite case, since $X_i, i=1$ ,$\dots ,N$ are iid Normally distributed with mean $\mu$ and variance $\sigma^2$, we have, as you mentioned, $E[X_i -\bar X] = E[X_i- \frac{1}{n}\sum_j^
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution Staying in the univariate case, since $X_i,\ i=1,\dots,n$ are i.i.d. normally distributed with mean $\mu$ and variance $\sigma^2$, we have, as you mentioned, $E[X_i -\bar X] = E[X_i- \frac{1}{n}\sum_{j=1}^n X_j] = E[X_i] - \frac{1}{n}\sum_{j=1}^n E[X_j] = \mu - \frac{1}{n}n\mu =0$ For the variance, notice that $Var[X_i-\bar X] = Var[X_i] + Var[\bar{X}] - 2Cov(X_i, \bar{X})$ where $Cov(X_i, \bar{X}) = Cov(X_i, \frac{1}{n}\sum_{j=1}^n X_j) = Cov(X_i, \frac{1}{n}X_i)$ by independence. This should lead you to your answer. Also, you technically have to check that the resulting distribution is itself normal (it is, because $X_i - \bar X$ is a linear combination of independent Gaussian random variables). Then, since the normal distribution is entirely determined by its first two moments, we have that: $X_i - \bar X \sim N \left (E[X_i - \bar X], Var[X_i - \bar X] \right )$
What is the distribution of $X_i-\bar{X}$ when $X_i$ has $N(\mu,\sigma)$ distribution Staying in the univarite case, since $X_i, i=1$ ,$\dots ,N$ are iid Normally distributed with mean $\mu$ and variance $\sigma^2$, we have, as you mentioned, $E[X_i -\bar X] = E[X_i- \frac{1}{n}\sum_j^
54,294
Categorizing Continuous Random Variable in Logistic Regression
Instead of throwing away data by categorizing, you could consider fitting your continuous predictor as a spline function with a specified number of knots or with the number of knots chosen by cross-validation. That will use up no more degrees of freedom than categorization. If you are willing to envision up to 8 categories, it's not clear that categorization is really simpler than a well-modeled continuous variable, and predictions of new cases with the continuous fit should be better, too. Using spline functions in formulas with the rms package in R, as I recall, does this naturally; check the documentation. Added in response to edited question and comments: Non-statisticians might be better served by a set of illustrative examples drawn from a model based on the continuous predictor. You could choose examples so that they seem like categories ("very high","high", "medium", "low", "very low") even if the model doesn't itself depend on the categorization. One situation where categorization in the model itself might be useful is if there really are distinct underlying classes of cases that your continuous estimator is obfuscating. With some effort such an example and some rationale can be found for a 2-class situation with high errors in measuring their 2 distinct values along a continuous scale, but it's hard to see how that would generalize to more than 2 classes.
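A minimal sketch of the spline alternative in R, using splines::ns with glm rather than the rms package mentioned above; the data frame d with binary outcome y and continuous predictor x is a hypothetical placeholder:

library(splines)
# Spline fit: 4 df is roughly comparable to a 5-category coding (4 dummy variables)
fit_spline <- glm(y ~ ns(x, df = 4), family = binomial, data = d)
fit_cats   <- glm(y ~ cut(x, breaks = 5), family = binomial, data = d)
AIC(fit_spline, fit_cats)    # compare fits that spend similar degrees of freedom
# Smooth predicted probabilities over the observed range of x
newd <- data.frame(x = seq(min(d$x), max(d$x), length.out = 200))
plot(newd$x, predict(fit_spline, newd, type = "response"), type = "l",
     xlab = "x", ylab = "predicted probability")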
Categorizing Continuous Random Variable in Logistic Regression
Instead of throwing away data by categorizing, you could consider fitting your continuous predictor as a spline function with a specified number of knots or with the number of knots chosen by cross-va
Categorizing Continuous Random Variable in Logistic Regression Instead of throwing away data by categorizing, you could consider fitting your continuous predictor as a spline function with a specified number of knots or with the number of knots chosen by cross-validation. That will use up no more degrees of freedom than categorization. If you are willing to envision up to 8 categories, it's not clear that categorization is really simpler than a well-modeled continuous variable, and predictions of new cases with the continuous fit should be better, too. Using spline functions in formulas with the rms package in R, as I recall, does this naturally; check the documentation. Added in response to edited question and comments: Non-statisticians might be better served by a set of illustrative examples drawn from a model based on the continuous predictor. You could choose examples so that they seem like categories ("very high","high", "medium", "low", "very low") even if the model doesn't itself depend on the categorization. One situation where categorization in the model itself might be useful is if there really are distinct underlying classes of cases that your continuous estimator is obfuscating. With some effort such an example and some rationale can be found for a 2-class situation with high errors in measuring their 2 distinct values along a continuous scale, but it's hard to see how that would generalize to more than 2 classes.
Categorizing Continuous Random Variable in Logistic Regression Instead of throwing away data by categorizing, you could consider fitting your continuous predictor as a spline function with a specified number of knots or with the number of knots chosen by cross-va
54,295
Categorizing Continuous Random Variable in Logistic Regression
Since it seems like "ease of interpretation" is important to you, I think you would be interested to learn about nomograms, which are essentially a model represented in a diagrammatical way. Instead of relying on some ad hoc categorization procedure, you can fit ornate trends using statistically principled methods such as regression splines, and then represent the equation in the form of a nomogram. Predictions are made by drawing a line through the values of the predictor variables. More information about both regression splines and nomograms can be found in Regression Modeling Strategies by Frank Harrell.
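A minimal sketch of this rms workflow, assuming a hypothetical data frame d with binary outcome y, continuous predictor x, and a second predictor sex (all names are placeholders):

library(rms)
dd <- datadist(d); options(datadist = "dd")
fit <- lrm(y ~ rcs(x, 4) + sex, data = d)   # restricted cubic spline with 4 knots
# Draw the fitted model as a nomogram on the probability scale
plot(nomogram(fit, fun = plogis, funlabel = "Predicted probability"))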
Categorizing Continuous Random Variable in Logistic Regression
Since it seems like "ease of interpretation" is important to you, I think you would be interested to learn about nomograms, which are essentially a model represented in a diagrammatical way. Instead o
Categorizing Continuous Random Variable in Logistic Regression Since it seems like "ease of interpretation" is important to you, I think you would be interested to learn about nomograms, which are essentially a model represented in a diagrammatical way. Instead of relying on some ad hoc categorization procedure, you can fit ornate trends using statistically principled methods such as regression splines, and then represent the equation in the form of a nomogram. Predictions are made by drawing a line through the values of the predictor variables. More information about both regression splines and nomograms can be found in Regression Modeling Strategies by Frank Harrell.
Categorizing Continuous Random Variable in Logistic Regression Since it seems like "ease of interpretation" is important to you, I think you would be interested to learn about nomograms, which are essentially a model represented in a diagrammatical way. Instead o
54,296
Categorizing Continuous Random Variable in Logistic Regression
I always think you can approach most tasks in two ways: knowledge driven and data driven, and that includes binning your continuous features. By knowledge driven, I mean thinking about what binning makes sense given what the feature actually represents. For example, if you are binning household income, you can find references on basic statistics of US household income and use those to set the bins (e.g., what is a typical value for the middle class, the rich, etc.). By data driven, I mean choosing the binning to improve your model's performance; you are essentially doing feature engineering or basis expansion. If you are willing to sacrifice interpretability, you can even use a neural network to "train the basis expansion", expanding one continuous feature into many "engineered features", which can be continuous or discrete. Using rpart to bin, as you are doing, is similar to this approach. The best work usually comes from combining knowledge driven and data driven: use knowledge to specify a "rough shape of the model" and use data to fill in the details. In your case of binning continuous variables, you can do the same. I am not sure if my answer is too high level, but feel free to ask me to explain any part in detail.
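A short sketch of both approaches in R; the data frame d, outcome y, predictor income, and the income thresholds are all hypothetical placeholders:

library(rpart)
# Knowledge driven: cut points taken from subject-matter knowledge (placeholder thresholds)
d$income_kd <- cut(d$income, breaks = c(-Inf, 30000, 75000, 150000, Inf),
                   labels = c("low", "middle", "upper-middle", "high"))
# Data driven: let a shallow tree pick cut points against the outcome
tree <- rpart(factor(y) ~ income, data = d, method = "class",
              control = rpart.control(maxdepth = 2, cp = 0.001))
print(tree)   # the printed splits show the data-driven thresholds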
Categorizing Continuous Random Variable in Logistic Regression
I always think you can do most task by two approaches: knowledge driven and data driven include binning your continuous features. By knowledge driven, you can think about what binning will make sense
Categorizing Continuous Random Variable in Logistic Regression I always think you can approach most tasks in two ways: knowledge driven and data driven, and that includes binning your continuous features. By knowledge driven, I mean thinking about what binning makes sense given what the feature actually represents. For example, if you are binning household income, you can find references on basic statistics of US household income and use those to set the bins (e.g., what is a typical value for the middle class, the rich, etc.). By data driven, I mean choosing the binning to improve your model's performance; you are essentially doing feature engineering or basis expansion. If you are willing to sacrifice interpretability, you can even use a neural network to "train the basis expansion", expanding one continuous feature into many "engineered features", which can be continuous or discrete. Using rpart to bin, as you are doing, is similar to this approach. The best work usually comes from combining knowledge driven and data driven: use knowledge to specify a "rough shape of the model" and use data to fill in the details. In your case of binning continuous variables, you can do the same. I am not sure if my answer is too high level, but feel free to ask me to explain any part in detail.
Categorizing Continuous Random Variable in Logistic Regression I always think you can do most task by two approaches: knowledge driven and data driven include binning your continuous features. By knowledge driven, you can think about what binning will make sense
54,297
Categorizing Continuous Random Variable in Logistic Regression
Thanks to those who tried to answer it. However, I don't think either of these answers is that helpful to me. In fact, there is a PhD thesis written on this, available here. There are also some R packages, e.g. CatPredi, that can be used.
Categorizing Continuous Random Variable in Logistic Regression
Thanks to those who tried to answer it. However, I don't think either of these answers are that much helpful to me. In fact there is a phd thesis written on this available here. There are also some R
Categorizing Continuous Random Variable in Logistic Regression Thanks to those who tried to answer it. However, I don't think either of these answers is that helpful to me. In fact, there is a PhD thesis written on this, available here. There are also some R packages, e.g. CatPredi, that can be used.
Categorizing Continuous Random Variable in Logistic Regression Thanks to those who tried to answer it. However, I don't think either of these answers are that much helpful to me. In fact there is a phd thesis written on this available here. There are also some R
54,298
A bar graph and its look - should I add titles, where should values go etc
I agree with EdM's point that "bar plots simply have too much ink for the information conveyed." Here's a ggplot2 version of his answer: library(ggplot2) df <- data.frame(years=c(1991, 1993, 1997, 2001, 2005, 2007, 2011, 2015), freq=c(43.20, 52.13, 47.93, 46.29, 40.57, 53.88, 48.92, 50.92)) p <- (ggplot(df, aes(x=years, y=freq)) + geom_line(size=1.25, color="#999999") + geom_point(size=3.5, color="black") + theme_bw() + theme(panel.border=element_blank(), panel.grid.minor=element_blank(), axis.title.y=element_text(vjust=1.25)) + scale_x_continuous("", breaks=seq(1990, 2015, 5), minor_breaks=NULL) + scale_y_continuous("percentage turnout", limits=c(36, 59), breaks=seq(40, 55, 5), minor_breaks=NULL)) p ggsave("percentage_turnout_over_time.png", p, width=10, height=8) Which produces this: Edit: here's a version with numbers on the graph: p <- (ggplot(df, aes(x=years, y=freq, label=freq)) + geom_line(size=1.25, color="#999999") + geom_point(size=3.5, color="black") + geom_text(vjust=c(2, -1, -1.5*sign(diff(diff(df$freq))) + 0.5)) + theme_bw() + theme(panel.border=element_blank(), panel.grid.minor=element_blank(), axis.title.y=element_text(vjust=1.25)) + scale_x_continuous("", breaks=seq(1990, 2015, 5), minor_breaks=NULL) + scale_y_continuous("percentage turnout", limits=c(36, 59), breaks=seq(40, 55, 5), minor_breaks=NULL)) p ggsave("percentage_turnout_over_time_with_text.png", p, width=10, height=8) Nick Cox's comment under the original post is convincing: I see no harm in showing numbers too. People often want to read numbers off graphs just as they (should) want to read numbers off tables. Also, offering graph PLUS table in a paper would often be rejected by reviewers as too much space devoted to the same information, so hybridising graph and table is perfectly defensible.
A bar graph and its look - should I add titles, where should values go etc
I agree with EdM's point that "bar plots simply have too much ink for the information conveyed." Here's a ggplot2 version of his answer: library(ggplot2) df <- data.frame(years=c(1991, 1993, 1997, 2
A bar graph and its look - should I add titles, where should values go etc I agree with EdM's point that "bar plots simply have too much ink for the information conveyed." Here's a ggplot2 version of his answer: library(ggplot2) df <- data.frame(years=c(1991, 1993, 1997, 2001, 2005, 2007, 2011, 2015), freq=c(43.20, 52.13, 47.93, 46.29, 40.57, 53.88, 48.92, 50.92)) p <- (ggplot(df, aes(x=years, y=freq)) + geom_line(size=1.25, color="#999999") + geom_point(size=3.5, color="black") + theme_bw() + theme(panel.border=element_blank(), panel.grid.minor=element_blank(), axis.title.y=element_text(vjust=1.25)) + scale_x_continuous("", breaks=seq(1990, 2015, 5), minor_breaks=NULL) + scale_y_continuous("percentage turnout", limits=c(36, 59), breaks=seq(40, 55, 5), minor_breaks=NULL)) p ggsave("percentage_turnout_over_time.png", p, width=10, height=8) Which produces this: Edit: here's a version with numbers on the graph: p <- (ggplot(df, aes(x=years, y=freq, label=freq)) + geom_line(size=1.25, color="#999999") + geom_point(size=3.5, color="black") + geom_text(vjust=c(2, -1, -1.5*sign(diff(diff(df$freq))) + 0.5)) + theme_bw() + theme(panel.border=element_blank(), panel.grid.minor=element_blank(), axis.title.y=element_text(vjust=1.25)) + scale_x_continuous("", breaks=seq(1990, 2015, 5), minor_breaks=NULL) + scale_y_continuous("percentage turnout", limits=c(36, 59), breaks=seq(40, 55, 5), minor_breaks=NULL)) p ggsave("percentage_turnout_over_time_with_text.png", p, width=10, height=8) Nick Cox's comment under the original post is convincing: I see no harm in showing numbers too. People often want to read numbers off graphs just as they (should) want to read numbers off tables. Also, offering graph PLUS table in a paper would often be rejected by reviewers as too much space devoted to the same information, so hybridising graph and table is perfectly defensible.
A bar graph and its look - should I add titles, where should values go etc I agree with EdM's point that "bar plots simply have too much ink for the information conveyed." Here's a ggplot2 version of his answer: library(ggplot2) df <- data.frame(years=c(1991, 1993, 1997, 2
54,299
A bar graph and its look - should I add titles, where should values go etc
Maybe "Tufte`rize" your plot: ggplot(dat, aes(years, freq)) + geom_bar(stat = "identity", width=0.55, fill="grey")+ scale_y_continuous(breaks = seq(0,50,10)) + geom_hline(yintercept= seq(0,50,10), col="white") + theme_classic(base_size = 16) + theme(axis.ticks=element_blank()) + labs(x=NULL, y=NULL) + ggtitle("freq per year") or ggplot(dat, aes(years, freq)) + geom_bar(stat = "identity", width=0.55, fill="grey64")+ scale_y_continuous(breaks = seq(0,65,10)) + theme_classic(base_size = 18) + theme(axis.text.y=element_blank(), axis.ticks=element_blank(), axis.line.y=element_blank()) + labs(x=NULL, y=NULL) + geom_text(aes(label=format(freq,decimal.mark = ",")), vjust=-.3, size = 4)
A bar graph and its look - should I add titles, where should values go etc
Maybe "Tufte`rize" your plot: ggplot(dat, aes(years, freq)) + geom_bar(stat = "identity", width=0.55, fill="grey")+ scale_y_continuous(breaks = seq(0,50,10)) + geom_hline(yintercept= seq(0,50
A bar graph and its look - should I add titles, where should values go etc Maybe "Tufte`rize" your plot: ggplot(dat, aes(years, freq)) + geom_bar(stat = "identity", width=0.55, fill="grey")+ scale_y_continuous(breaks = seq(0,50,10)) + geom_hline(yintercept= seq(0,50,10), col="white") + theme_classic(base_size = 16) + theme(axis.ticks=element_blank()) + labs(x=NULL, y=NULL) + ggtitle("freq per year") or ggplot(dat, aes(years, freq)) + geom_bar(stat = "identity", width=0.55, fill="grey64")+ scale_y_continuous(breaks = seq(0,65,10)) + theme_classic(base_size = 18) + theme(axis.text.y=element_blank(), axis.ticks=element_blank(), axis.line.y=element_blank()) + labs(x=NULL, y=NULL) + geom_text(aes(label=format(freq,decimal.mark = ",")), vjust=-.3, size = 4)
A bar graph and its look - should I add titles, where should values go etc Maybe "Tufte`rize" your plot: ggplot(dat, aes(years, freq)) + geom_bar(stat = "identity", width=0.55, fill="grey")+ scale_y_continuous(breaks = seq(0,50,10)) + geom_hline(yintercept= seq(0,50
54,300
A bar graph and its look - should I add titles, where should values go etc
This is not a good application of a bar plot. Tufte would not be pleased, even with the allegedly "Tufte`rized" bar plots recommended in another answer. Bar plots simply have too much ink used for the information conveyed. See Tufte's website and books, and if possible attend one of his seminars on how to display data effectively. Displaying these data requires no more than a line plot of turnout versus year. The line plot, with equal spacing of actual years along the x-axis, will also remove the implication contained in the bar plot that the observations are equally spaced in time. Here's an example that emphasizes the changes over the range of observed values: I find that the connecting lines between the points help to keep track of the time relationships, even though there are only 8, unequally spaced, observations. Nick Cox rightly noted in a comment that this might tend to suggest linear changes in between dates. The lines are gray, with black dots at the observations, in an attempt to de-emphasize such a suggestion. If you are more interested in changes with respect to a baseline of 0% turnout you could adjust the y-axis limits accordingly. But have you ever had anything close to 0% turnout for an election? Also, figures are not good places to show results down to 2 decimal places. For that, use a table. Code is all from R base graphics. You can probably do something more elegant with ggplot but I have little experience with that. First, change your "years" into numeric from text: dat$years <- as.numeric(as.character(dat$years)) Then for the plot: plot (freq~years,data=dat,xlim=c(1990,2016),xlab="Year",ylab="Percentage Turnout",type="l",axes=FALSE,col="gray") points (freq~years,data=dat,pch=19) axis(1,at=seq(1990,2020,10)) axis(2,at=seq(42,54,6)) Standard R graphics put a potentially distracting box around the entire plot, which is omitted here by the axes=FALSE specification, in the plot command that draws the gray lines. The points command then places the points. The separate specifications with axis allows control over where the tick marks are placed and labeled; R may tend to over-label a bit to some people's taste.
A bar graph and its look - should I add titles, where should values go etc
This is not a good application of a bar plot. Tufte would not be pleased, even with the allegedly "Tufte`rized" bar plots recommended in another answer. Bar plots simply have too much ink used for the
A bar graph and its look - should I add titles, where should values go etc This is not a good application of a bar plot. Tufte would not be pleased, even with the allegedly "Tufte`rized" bar plots recommended in another answer. Bar plots simply have too much ink used for the information conveyed. See Tufte's website and books, and if possible attend one of his seminars on how to display data effectively. Displaying these data requires no more than a line plot of turnout versus year. The line plot, with equal spacing of actual years along the x-axis, will also remove the implication contained in the bar plot that the observations are equally spaced in time. Here's an example that emphasizes the changes over the range of observed values: I find that the connecting lines between the points help to keep track of the time relationships, even though there are only 8, unequally spaced, observations. Nick Cox rightly noted in a comment that this might tend to suggest linear changes in between dates. The lines are gray, with black dots at the observations, in an attempt to de-emphasize such a suggestion. If you are more interested in changes with respect to a baseline of 0% turnout you could adjust the y-axis limits accordingly. But have you ever had anything close to 0% turnout for an election? Also, figures are not good places to show results down to 2 decimal places. For that, use a table. Code is all from R base graphics. You can probably do something more elegant with ggplot but I have little experience with that. First, change your "years" into numeric from text: dat$years <- as.numeric(as.character(dat$years)) Then for the plot: plot (freq~years,data=dat,xlim=c(1990,2016),xlab="Year",ylab="Percentage Turnout",type="l",axes=FALSE,col="gray") points (freq~years,data=dat,pch=19) axis(1,at=seq(1990,2020,10)) axis(2,at=seq(42,54,6)) Standard R graphics put a potentially distracting box around the entire plot, which is omitted here by the axes=FALSE specification, in the plot command that draws the gray lines. The points command then places the points. The separate specifications with axis allows control over where the tick marks are placed and labeled; R may tend to over-label a bit to some people's taste.
A bar graph and its look - should I add titles, where should values go etc This is not a good application of a bar plot. Tufte would not be pleased, even with the allegedly "Tufte`rized" bar plots recommended in another answer. Bar plots simply have too much ink used for the